Start the course: 20' to 45'
5. AI: limits and risks
AI and misinformation
📢 With AI, fake information is becoming cheaper to produce and more realistic, blurring the line between authentic and manipulated content:
· Deepfakes: fake images, videos, or conversations
· Generating fake or inaccurate articles
· Generating fake or misleading comments on social media
- Le guide pratique de l'IA proposé par l'Université de Genève (The practical guide to AI by the University of Geneva) [accessed on March 14, 2025]
- Le rapport VIGINUM établi en février 2025 dans le cadre du Sommet pour l'action sur l'IA (VIGINUM report for the AI Action Summit, February 2025) [accessed on March 14, 2025]
- Les recommandations de bonnes pratiques de l'INSERM (INSERM's good practice recommendations)
👉 To go further:
- "Comment reconnaître les images générées par l'intelligence artificielle ?" (How to recognize images generated by artificial intelligence) by the University of Montpellier [accessed on April 14, 2025].
- Google Lens (Google image-based search engine) can help you find the source of an image.
Biases
AI is neither neutral nor objective. For more or less commendable reasons, some AIs have been designed with deliberate censorship or bias: Gemini refuses to answer political questions, DeepSeek's answers align with the Chinese regime's positions, and Copilot and ChatGPT have been configured to refuse to explain how to carry out malicious actions or to generate hateful content.
All AIs have biases, since their algorithms are trained on massive amounts of data that already contain human biases. Depending on the type of AI, this can have serious consequences: see, for example, the French-language article "USA - Des algorithmes creusent des inégalités face aux soins" (USA - Algorithms deepen inequalities in healthcare). The article dates from 2019, and programmers have since been working to correct such biases and improve AIs, but the issue remains relevant today. Biases can be geographical, linguistic, gender-based, ideological, etc. They depend on the training data, filtering choices, and processing methods.
In the future, those biases could intensify:
« The proliferation of online-generated content is likely to pollute training data that is retrieved through large-scale web harvesting operations. According to some researchers, the proliferation of this content could cause a significant deterioration in the performance of AI models, as it is increasingly integrated into their training data. »
(Translated from: Rapport VIGINUM, February 2025)
General-purpose AIs mainly collect data from the general public, while academic AIs focus on scientific corpora. But even when the corpus is of higher quality, these AIs produce summaries without demonstrating how representative they are of the state of research on a given subject. They extract data mostly from a limited selection of abstracts and very little from the full text. And are abstracts even representative of the content of the articles?
Hallucinations
👉 Did you know? AIs do not aim to give a true answer. They are probabilistic: they build answers by predicting the most likely next word based on the statistical distribution of their training data.
🎲 Try for yourself: AIs are often thought to be more effective in the hard sciences than in the humanities, yet they can fail on very simple mathematical problems. Try asking: "Alice has [X] brothers and [Y] sisters. How many sisters does Alice's brother have?". The AI will often confidently assert a false answer.
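To make the "next-word prediction" idea concrete, here is a toy sketch of the principle. The miniature corpus is invented for illustration; real LLMs use neural networks over long contexts, but the underlying idea of predicting the statistically most likely continuation is the same:

```python
from collections import Counter, defaultdict

# Toy "training data" for a miniature language model (illustrative only).
corpus = "the cat sat on the mat the cat ate the fish the dog sat on the rug".split()

# Count which word follows each word (a simple bigram model).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def most_likely_next(word):
    """Return the word most frequently seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

# The model picks "cat" after "the" only because it is the most frequent
# continuation in the training data -- not because the sentence is true.
print(most_likely_next("the"))
```

This is why an AI can "confidently" produce a false answer: the output is the most statistically plausible continuation, which is not the same thing as a verified fact.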
Opacity of sources and unstable economic models
- Intelligence artificielle : un accord de partenariat entre « Le Monde » et OpenAI (Artificial intelligence: a partnership agreement between Le Monde and OpenAI) [accessed in March 2024]. https://www.lemonde.fr/le-monde-et-vous/article/2024/03/13/intelligence-artificielle-un-accord-de-partenariat-entre-le-monde-et-openai_6221836_6065879.html
- Intelligence artificielle : un nouvel accord de partenariat, entre « Le Monde » et Perplexity (Artificial intelligence: a new partnership agreement between Le Monde and Perplexity) [accessed in May 2025]. https://www.lemonde.fr/le-monde-et-vous/article/2025/05/14/intelligence-artificielle-un-nouvel-accord-de-partenariat-entre-le-monde-et-perplexity_6605885_6065879.html
- L’AFP et Mistral AI annoncent un partenariat mondial (AFP and Mistral AI announce a global partnership) | AFP.com (n.d.) [accessed on June 18, 2025]. https://www.afp.com/fr/lagence/notre-actualite/communiques-de-presse/lafp-et-mistral-ai-annoncent-un-partenariat-mondial
- "#WorkInProgress : IA génératives et outils de recherche de littérature académique" (#WorkInProgress: Generative AI and academic literature search tools), 2025 [accessed on March 24, 2025].
- Enjeux juridiques liés à l’intelligence artificielle | Occitanie Livre & Lecture (Legal issues related to artificial intelligence) (n.d.) [accessed on June 18, 2025].
Environmental and social impact of AI
AI has a very high environmental impact, which calls for proportionate, responsible, and well-informed use. As with many technologies, the fact that AI is free and accessible hides significant "hidden costs", both human and environmental.
👉 Did you know? A ChatGPT request uses 10 times more electricity than a Google search.
Indeed, as the Ministry for Ecological Transition points out, "generative AI is particularly energy-consuming. The least efficient models consume up to 11 Wh to produce a good-quality image, which is half a phone charge. On average, generating an image consumes 2.9 Wh. The International Energy Agency (IEA) anticipates a tenfold increase in electricity consumption in the AI sector between 2023 and 2026. This increase would contribute to a doubling of the total consumption of data centers, which already account for 4% of global energy consumption." [accessed on March 6, 2025].
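To give the figures quoted above a sense of scale, here is a quick back-of-the-envelope calculation. The daily request volume is purely hypothetical, chosen for illustration; only the per-image figure comes from the Ministry's quote:

```python
# Scaling the Ministry's average of 2.9 Wh per generated image.
WH_PER_IMAGE = 2.9           # average energy per image, per the quote above
IMAGES_PER_DAY = 1_000_000   # hypothetical service load, for illustration only

daily_kwh = WH_PER_IMAGE * IMAGES_PER_DAY / 1000  # Wh -> kWh
print(f"{daily_kwh:.0f} kWh per day")
# 2900 kWh/day: roughly the daily electricity use of a couple of hundred
# average European households (a rough order-of-magnitude comparison).
```

Small per-request costs add up quickly at the scale at which these services operate, which is why aggregate projections such as the IEA's matter more than any single request.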
AI also consumes large amounts of fresh water, which is used to cool data centers that heat up during use. According to Shaolei Ren, "if 10% of American workers used it once a week for a year to write an email, it would consume 435 million liters of water and 121,517 megawatt hours of electricity. This amount of energy would be enough to power all households in Washington, D.C. for 20 days. [...]" Also according to Shaolei Ren, in 2023, training GPT-3 in Microsoft's data centers in the US consumed up to 700,000 liters of fresh water, a figure that has been little publicized. Global demand for AI could lead to the withdrawal of 4.2 to 6.6 billion cubic meters of water by 2027, equivalent to 4 to 6 times the annual consumption of Denmark or half that of the United Kingdom [accessed on March 6, 2025].
On a social level, the development of AI relies on human labor that is often outsourced and underpaid. See the French-language article “Les forçats de l'IA” (The Slaves of AI), published in La Presse canadienne in March 2025 [accessed on March 14, 2025].
👉 To go further:
- "Les sacrifiés de l'IA" (The victims of AI), a documentary by France TV (2025) [accessed on March 14, 2025]
- "Impacts de l'IA" : préconisations du Conseil économique social et environnemental (Cese, september 2024). (Impacts of AI : recommendations of the Economic, Social, and Environmental Council) [accessed on March 14, 2025]
- "Impacts de l’intelligence artificielle sur le travail et l'emploi", dossier du Labo société numérique (Impacts of artificial intelligence on work and employment) (february 2025). [accessed on March 14, 2025]
- "Travail du clic, sans qualité" (Click work, no quality) (CNRS éd., 2023). [accessed on March 14, 2025]
Table of contents
- 5. AI: limits and risks (You are here)