Start the course: 20' to 45'

Site: Open training platform of Université Claude Bernard Lyon 1
Course: AI in an academic setting
Book: Start the course: 20' to 45'

1. The several types of AI

There are several types of AI. Among the most popular are generative AIs, which are designed to generate content (text, images, videos, etc.).

Before finding out the characteristics of various AIs, let’s take a test!



 

💡 Did you know? AI is developing to such an extent that some sites maintain near-exhaustive directories of AIs, organized by use case.

 
2. How to prompt

A prompt is a sequence of instructions written in natural language.

A prompt is particularly useful for general-purpose generative AIs. Some academic AIs use keyword prompts (Semantic Scholar, for example), while others use natural-language prompts.

The megaprompt

An effective prompt needs to be precise and specific, so as to provide comprehensive guidance for the AI. In particular, it must contain the following elements:

  • A role (“you are...”, “act like...”) and/or context
  • A recipient and/or an objective
  • A format
  • Constraints
  • A style

Example of a megaprompt:

You are a general practitioner. You work in a clinic that specializes in treating the elderly. You have to write a medical report on the treatment of a geriatric patient who has fractured the neck of his collarbone. The report is to be submitted to the clinic's medical committee. The goal is to receive approval to implement a specific treatment protocol. Write a structured report with an introduction, a methodology, results, a discussion and a conclusion.

Constraints:

  • Use precise and rigorous scientific language.
  • Keep to a maximum length of 10 pages.
  • Include graphs and tables to illustrate data.
  • Cite all sources according to APA format. 

Present results clearly and concisely.
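
To make this structure concrete, here is a minimal Python sketch (the function and parameter names are illustrative, not part of any library) that assembles the five components into a single megaprompt string:

```python
def build_megaprompt(role, context, recipient, objective,
                     output_format, constraints, style):
    """Assemble the five megaprompt components into one prompt string."""
    parts = [
        f"You are {role}. {context}",
        f"The document is intended for {recipient}. The goal is {objective}.",
        f"Format: {output_format}",
        "Constraints:\n" + "\n".join(f"- {c}" for c in constraints),
        f"Style: {style}",
    ]
    return "\n\n".join(parts)

# Rebuilding the medical-report example above:
prompt = build_megaprompt(
    role="a general practitioner",
    context="You work in a clinic that specializes in treating the elderly.",
    recipient="the clinic's medical committee",
    objective="to receive approval to implement a specific treatment protocol",
    output_format="a structured report: introduction, methodology, results, "
                  "discussion, conclusion",
    constraints=[
        "use precise and rigorous scientific language",
        "keep to a maximum length of 10 pages",
        "include graphs and tables to illustrate data",
        "cite all sources according to APA format",
    ],
    style="clear and concise",
)
print(prompt)  # paste the result into any chatbot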

After the first answer, you can refine the AI's proposals by being more specific on certain points. For example, you can ask it to expand on a particular aspect with a new prompt: “expand on this point in your report”, “give an example on this point”, “use a more scientific vocabulary/a more pedagogical tone/...”, “rework the report by integrating such and such data”, and so on.

Multiprompting

Multiprompting consists of obtaining a more precise answer from the AI by querying it several times in a row.

 

Query 1: “What are the biological mechanisms involved in Parkinson's disease?”

Query 2: “What molecules are currently in development for the treatment of Parkinson's disease?”

Query 3: “What are the challenges and opportunities in the development of new drugs for Parkinson's disease?”

Query 4: "What are the results of recent clinical trials on new treatments for Parkinson's disease?

 

Reverse prompting

Reverse prompting is asking the AI for advice on how to write a better prompt.

 

You’re a prompt engineering specialist. Take a critical look at my prompt. How would you write an effective prompt?/What would you improve about this prompt? Here is my example:…

 

📢 Questions or tips about prompting? Join our collaborative forum!


3. AI and documentary research

 
 




👉 To go further:

 
NotebookLM, a tool for writing:

To avoid blank page syndrome when you start writing, you can use NotebookLM. Download the PDFs of your sources, import them into NotebookLM, then start asking questions. NotebookLM lets you add up to 50 sources, which can be:

  • PDFs
  • copied and pasted text
  • website URLs
  • YouTube videos.

The AI generates answers based solely on your sources, which can reduce the risk of hallucinations. Still, always be wary of the quality of the answers: even a deep analysis does not replace actually reading the material!

The video below shows how an AI like Perplexity can complement NotebookLM to find sources and synthesize a corpus defined by you.


 

 

 

 


4. AI and scientific integrity

How to cite an AI?

 

AI cannot be listed as a co-author in a scientific article.

For example, here's what the journal Nature says about it:

"Large Language Models (LLMs), such as ChatGPT, do not currently satisfy our authorship criteria. Notably an attribution of authorship carries with it accountability for the work, which cannot be effectively applied to LLMs. Use of an LLM should be properly documented in the Methods section (and if a Methods section is not available, in a suitable alternative part) of the manuscript. The use of an LLM (or other AI-tool) for 'AI assisted copy editing' purposes does not need to be declared. In this context, we define the term 'AI assisted copy editing' as AI-assisted improvements to human-generated texts for readability and style, and to ensure that the texts are free of errors in grammar, spelling, punctuation and tone. These AI-assisted improvements may include wording and formatting changes to the texts, but do not include generative editorial work and autonomous content creation. In all cases, there must be human accountability for the final version of the text and agreement from the authors that the edits reflect their original work."

Artificial Intelligence (AI) | Nature Portfolio. [accessed on March 6, 2025].

 

👉 Ask your teachers whether you are allowed to use AI. To be transparent, you can cite its use, following the same bibliographic style you would use for the rest of the document (in the methodology section, the text, the footnotes, the bibliography).

For example, you can list in an appended table every use you made of AI, giving all the prompts and specifying which parts of the document are concerned. Since GAIs (generative AIs) don't offer permanent links, you can be more transparent by copying the results of the conversations. Note that if someone reruns a prompt you used, identically, they will get an answer that is not strictly identical: the results of a prompt are not reproducible, because GAIs are designed to generate original content.

👉Here is a list of recommendations to cite AI use:

  • Université de Lorraine, Université de Strasbourg, Université de Haute Alsace, Bibliothèque nationale et universitaire de Strasbourg, & GTFU Alsace. (2024). L'intelligence artificielle et le bibliothécaire. Zenodo. [accessed on February 19, 2025].

AI and plagiarism

An AI produces original content, but it builds its responses from content produced by third parties. This means you are not immune to plagiarism when using a GAI. The best way to avoid this is to cite your sources.

Lyon 1 University uses Compilatio Magister+, software capable of detecting content similar to other sources, as well as AI-generated content. Here's what the company that developed it says about it:

- Does the Compilatio AI detector adapt to updates in AI? Yes, the detector is regularly updated.

- Does Compilatio Magister+ prove the use of generative AI? The detector highlights "suspicious" sections, potentially written by AI.

- How reliable is Compilatio's AI detector?

  • Precision (= its ability to avoid false positives) is estimated at 98.5%: out of 100 passages identified as written by an AI, between 98 and 99 are indeed so, and the remainder were written by humans.
  • The detector's recall (= its ability not to miss any passages written by AI) is 99%: out of 100 portions of text written by AI, only one was not found.
  • Its accuracy (= its ability to correctly classify portions of text as written by humans or by AI) is 99%: out of 100 portions of text, 99 were correctly identified.
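
To keep these three measures straight, here is a small Python sketch that computes them from a detector's confusion matrix. The counts below are invented for illustration (chosen so the rates land near the figures claimed above); they are not Compilatio's data:

```python
# Invented confusion matrix for an AI-text detector, per passage of text.
tp = 990    # AI-written passages correctly flagged as AI
fp = 15     # human-written passages wrongly flagged as AI
fn = 10     # AI-written passages the detector missed
tn = 1493   # human-written passages correctly left unflagged

precision = tp / (tp + fp)                   # of flagged passages, share truly AI
recall = tp / (tp + fn)                      # of AI passages, share found
accuracy = (tp + tn) / (tp + fp + fn + tn)   # share of all passages classified correctly

print(f"precision = {precision:.3f}")  # 0.985
print(f"recall    = {recall:.3f}")     # 0.990
print(f"accuracy  = {accuracy:.3f}")   # 0.990
```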

 

💡 Did you know? In the absence of a unanimous position from national authorities, some French and foreign universities have taken a stand on the use of generative AI. This is the case for the universities of Orléans, Geneva, and Sherbrooke.

AI and personal data

To avoid losing control of your intellectual property, it is best not to provide personal data to an AI, which may use it for training purposes. It is preferable to share only data from the public domain. This is what the University of Orléans recommends in its AI charter.

💡 Did you know? The European Union has adopted a regulation on the use and circulation of AI systems: the EU AI Act, which came into effect in August 2024. Its aim is to "provide a framework for the development, placing on the market and use of artificial intelligence (AI) systems, which may pose risks to health, safety or fundamental rights", according to the CNIL. [accessed on March 11, 2025]

The AI Act applies to companies with their headquarters in the EU and to any company marketing its systems in the EU.

🏆 Try the AI Act Game, by Thomas le Goff (maître de conférences, i.e. lecturer, in digital law and regulation), and get familiar with it.



 
 
 

5. AI: limits and risks

AI and misinformation

 
 

📢 With AI, fake information is becoming cheaper to produce and more realistic, blurring the line between authentic and manipulated content:

  • Deepfakes: fake images, videos, or conversations
  • Fake or inaccurate generated articles
  • Fake or misleading comments generated on social media

👉 To be more mindful, some reading recommendations (in French):
 
To practice recognizing AI-generated images: 

👉 To go further: 


Biases

AI is very different from other tools like a calculator or glasses.

AI is neither neutral nor objective. For more or less commendable reasons, some AIs have been designed to incorporate censorship or deliberate bias. Gemini won't answer political questions. DeepSeek gives answers aligned with the Chinese government's positions. Copilot and ChatGPT have been configured to refuse to explain how to carry out malicious actions or to generate hateful content.

All AIs have biases, since their algorithms are trained on massive amounts of data that already contain human biases. This can have serious consequences, depending on the type of AI: see, for example, the 2019 article (in French) "USA - Des algorithmes creusent des inégalités face aux soins" (USA: Algorithms deepen inequalities in healthcare). Programmers are hopefully working on correcting these biases and improving AIs, but the issue is still relevant today. Biases can be geographical, linguistic, gender-based, ideological, etc. They depend on the training data, filtering choices, and processing methods.

In the future, those biases could intensify: 

« The proliferation of online-generated content is likely to pollute training data that is retrieved through large-scale web harvesting operations. According to some researchers, the proliferation of this content could cause a significant deterioration in the performance of AI models, as it is increasingly integrated into their training data. »

(Translated from: VIGINUM report, February 2025)

General AIs mainly collect data from the general public, while academic AIs focus on scientific corpora. However, even if the corpus is of higher quality, these AIs produce summaries that fail to show how representative they are of the state of research on a given subject. They extract data from a limited selection of abstracts and very little from the full text. And are abstracts even representative of the content of the articles?

Hallucinations

"Any information coming from a chatbot can only be true by accident." Alexei Grinbaum

👉 Did you know? AIs do not aim to give an answer that is true. They are probabilistic, which means that they create answers by predicting the most likely next word based on the statistical distribution of the training data.
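
A toy Python illustration of this principle: a "model" that has only learned word-pair frequencies from a tiny invented corpus produces fluent text by sampling a likely next word, with no notion of truth:

```python
import random

# Toy "language model": for each word, how often each next word followed it
# in an invented training corpus. Real LLMs do this over huge contexts.
bigram_counts = {
    "the":  {"cat": 3, "moon": 1},
    "moon": {"is": 1},
    "cat":  {"sat": 2, "is": 2},
    "sat":  {"on": 4},
    "on":   {"the": 4},
    "is":   {"made": 1, "black": 3},
    "made": {"of": 1},
    "of":   {"cheese": 1},
}

def next_word(word):
    """Sample the next word in proportion to its observed frequency:
    the model optimizes likelihood, not truth."""
    candidates = bigram_counts.get(word)
    if not candidates:
        return None
    return random.choices(list(candidates), weights=list(candidates.values()))[0]

word, sentence = "the", ["the"]
while word is not None and len(sentence) < 8:
    word = next_word(word)
    if word:
        sentence.append(word)
print(" ".join(sentence))  # e.g. "the moon is made of cheese": fluent, not factual
```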

🎲 Try it yourself: AIs are often thought to be more effective in the hard sciences than in the humanities, yet they can fail on very simple mathematical problems. Try asking: "Alice has [X] brothers and [Y] sisters. How many sisters does Alice's brother have?". The AI will often confidently assert a false answer.
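
For reference, the correct answer can be computed directly: Alice's brother has all of Alice's sisters as sisters, plus Alice herself, so the number of brothers is irrelevant. A minimal check in Python:

```python
def sisters_of_alices_brother(brothers: int, sisters: int) -> int:
    # Alice's brother has Alice's Y sisters plus Alice herself;
    # the number of brothers (X) plays no role.
    return sisters + 1

print(sisters_of_alices_brother(brothers=3, sisters=2))  # 3
```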

Opacity of sources and unstable economic models

 
Some AIs are not open about the data that feeds their models, probably because they have little regard for the intellectual property rights of the content used to generate answers. French law allows AI to train its models on works that have fallen into the public domain, or on other works under the text and data mining exception (Article L.122-5, 10° of the Intellectual Property Code), which an author cannot prohibit. However, authors can theoretically prohibit this use (Article L.122-5-3 of the Intellectual Property Code): this is known as the opt-out right. That is, assuming authors can even identify that their works have been used to train AI models...
 
To increase their visibility, improve the reliability of responses, and monetize their content, news outlets such as Lemonde.fr and the AFP have concluded agreements with certain AI providers.
 
👉 To go further (in french): 
Furthermore, AIs do not systematically cite the sources used to answer a prompt. This is the opposite of a search engine, which is an index of sources with URLs that can be sorted by date, type of publication, number of citations, and so on. Search engines already have opaque algorithms, and AI adds a further layer of opacity by obscuring its sources.
 
Moreover, the goal of some platforms is primarily to generate income rather than to pursue scientific objectives: AI companies that were initially non-profit have become for-profit companies.
 
👉 To go further (in french): 
 

Environmental and social impact of AI

Do you need a chainsaw to cut a twig?

AI has a very high environmental impact, which justifies proportionate, responsible, well-informed use. The fact that AI is free and accessible means, as with many technologies, that there are significant "hidden costs", both human and environmental.

👉 Did you know? A ChatGPT request uses 10 times more electricity than a Google search. 

Indeed, as the Ministry for Ecological Transition points out, "generative AI is particularly energy-consuming. The least efficient models consume up to 11 Wh to produce a good-quality image, which is half a phone charge. On average, generating an image consumes 2.9 Wh. The International Energy Agency (IEA) anticipates a tenfold increase in electricity consumption in the AI sector between 2023 and 2026. This increase would contribute to a doubling of the total consumption of data centers, which already account for 4% of global energy consumption." [accessed on March 6, 2025].

AI also consumes large amounts of fresh water, which is used to cool data centers that heat up during use. According to Shaolei Ren, "if 10% of American workers used it once a week for a year to write an email, it would consume 435 million liters of water and 121,517 megawatt hours of electricity. This amount of energy would be enough to power all households in Washington, D.C. for 20 days. [...]" According to a 2023 estimate by the same researcher, training GPT-3 in Microsoft's US data centers consumed up to 700,000 liters of fresh water, a figure that has been little publicized. Global demand for AI could lead to the withdrawal of 4.2 to 6.6 billion cubic meters of water by 2027, equivalent to 4 to 6 times the annual consumption of Denmark, or half that of the United Kingdom. [accessed on March 6, 2025]

On a social level, the development of AI cannot happen without human labor, which is often outsourced and underpaid. See the article (in French) "Les forçats de l'IA" (The Slaves of AI), published by La Presse canadienne in March 2025 [accessed on March 14, 2025].

👉 To go further: 

 
 
 
 