Disinformation Amplified by AI
Disinformation, the intentional dissemination of false or misleading information to manipulate public opinion, represents a growing challenge in the digital age. With the advent of artificial intelligence (AI), the phenomenon is expanding rapidly: AI enables the creation and spread of fake content on an unprecedented scale. From tools like deepfakes (videos or audio manipulated to simulate real people) to automated bots flooding social media, AI is lowering the technical and financial barriers to producing misleading content. According to a report by the Organization for Security and Co-operation in Europe (OSCE), AI could influence elections in more than 50 countries in 2024 by generating targeted fake content. This article explores the mechanisms of this amplification, its impacts, and strategies to address it, drawing on verified sources from international organizations, academia, and technology experts.
9/1/2025 · 3 min read


What is disinformation?
To fully understand the role of AI, it is essential to distinguish disinformation from related concepts. Disinformation is false information deliberately created and disseminated to deceive, often for political, economic, or ideological purposes. It differs from misinformation, which is false information shared without intent to deceive, and from malinformation, which uses real but decontextualized facts to cause harm. For example, a Canadian government report defines malinformation as truthful information shared maliciously, such as leaked private data amplified by AI to harass or discredit.
Historically, disinformation circulated through rumors or traditional media, but AI is making it far more sophisticated. Generative tools, such as large language models (LLMs) and image or video generators, produce content that closely mimics reality, making detection difficult. A Georgetown University study highlights that AI strengthens disinformation operations by exploiting societal fault lines, such as political divisions. In 2023, researchers observed that false information spreads six times faster than real information on social media, due in part to AI-driven amplification.
How AI amplifies disinformation
AI acts as a force multiplier for disinformation through several mechanisms. First, deepfakes: this synthetic content uses deep learning algorithms to superimpose faces or voices onto existing video. A notable example is the 2022 pro-China campaign in which AI-generated avatars spread fictional news videos on Facebook and Twitter to promote the interests of the Chinese Communist Party. Tools like Synthesia make such videos easy to create, although moderation teams try to limit this kind of abuse.
Next, bots and automated networks: AI drives fake accounts that amplify messages. In 2024, researchers identified Russian campaigns using AI to clone the voices of public figures, such as British public sector employees, to spread disinformation. A study by Meduza reveals that networks like Pravda produced 3.6 million AI-generated articles in 2024 to influence Western chatbots like ChatGPT.
Finally, mass content generation: LLMs like those from Google or OpenAI can produce articles, posts, or images en masse. A study by Google researchers themselves indicates that AI has become the main vector of misinformation and that the problem is severely underestimated. Chatbots have spread false claims, such as supposed links between vaccines and autism, reaching millions of users. The accessibility of these tools, as explained in a report by DW Akademie, allows anyone to create false content on a large scale.
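To make the amplification mechanics described above a little more concrete, here is a minimal, purely illustrative sketch (not drawn from any of the studies cited) of how coordinated, bot-style amplification can be surfaced from posting data: it flags near-identical messages pushed by many distinct accounts within a short time window. The accounts, messages, and thresholds are invented for the example.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Toy post records (account, text, timestamp); all values are invented.
posts = [
    ("acct_01", "Candidate X secretly signed the treaty!", datetime(2024, 5, 1, 9, 0)),
    ("acct_02", "Candidate X secretly signed the treaty!", datetime(2024, 5, 1, 9, 2)),
    ("acct_03", "Candidate X secretly signed the treaty!", datetime(2024, 5, 1, 9, 3)),
    ("acct_04", "Lovely spring weather in Lyon today.",    datetime(2024, 5, 1, 9, 5)),
]

WINDOW = timedelta(minutes=10)  # how tightly clustered in time the copies must be
MIN_ACCOUNTS = 3                # how many distinct accounts trigger a flag

def flag_coordinated(posts, window=WINDOW, min_accounts=MIN_ACCOUNTS):
    """Flag texts posted by many distinct accounts within a short window,
    a crude proxy for bot-driven amplification (not a production detector)."""
    by_text = defaultdict(list)
    for account, text, ts in posts:
        by_text[text.strip().lower()].append((account, ts))

    flagged = []
    for text, items in by_text.items():
        items.sort(key=lambda item: item[1])
        accounts = {account for account, _ in items}
        spread = items[-1][1] - items[0][1]
        if len(accounts) >= min_accounts and spread <= window:
            flagged.append((text, len(accounts), spread))
    return flagged

for text, n_accounts, spread in flag_coordinated(posts):
    print(f"{n_accounts} accounts posted near-identical text within {spread}: {text!r}")
```

Real detection systems combine many more signals (account age, network structure, language patterns), but the underlying idea is the same: look for coordination that organic behavior rarely produces.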
Concrete examples of malicious use
Elections are fertile ground for AI-driven disinformation. In 2024, deepfakes disrupted elections in several countries. In the United States, a robocall imitating Joe Biden's voice discouraged Democratic voters in the New Hampshire primary. In Slovakia, doctored audio influenced election results. In India, AI-generated memes were used to discredit political rivals, often shared by the candidates themselves.
Beyond elections, AI amplifies crises. During the Israel-Iran conflict, a wave of deepfakes flooded social media. Doctored images of an explosion at the Pentagon in 2023 briefly sent stock markets falling. In France, studies such as one by the Observatory of Inequality show that generative AI facilitates disinformation on social media, with impacts on public trust.
Impacts on society and institutions
The consequences are manifold. On the democratic front, AI erodes trust: a report from the World Economic Forum ranks disinformation as the most serious short-term global risk. It polarizes opinion, as seen with conspiracy theories amplified by chatbots. Economically, companies suffer losses: deepfakes have enabled insurance fraud and triggered stock market swings.
Socially, it fosters hatred and violence. Non-consensual intimate deepfakes, such as those of Taylor Swift in 2024, led X to temporarily block related searches. A UNESCO report notes that 73% of women journalists have experienced online violence, abuse that AI tools can amplify. Finally, it affects public health: chatbots have spread falsehoods about vaccines, contributing to outbreaks such as the resurgence of measles in Florida.
Mitigation and prevention strategies
Faced with these challenges, solutions are emerging. Technically, detection tools like those developed at MIT help identify deepfakes by analyzing visual inconsistencies (unnatural eye movements, mismatched shadows). Initiatives like the Coalition for Content Provenance and Authenticity (C2PA) are developing provenance standards, such as signed metadata and watermarks, to certify the origin of media.
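As a purely illustrative companion to the detection tools mentioned above, the following sketch (a toy example of mine, not an MIT or C2PA tool) computes one simple "inconsistency" statistic: the share of an image's spectral energy at high frequencies, which some research associates with artifacts in AI-generated images. It assumes NumPy and a grayscale image array, and on its own it is far too crude to serve as a real deepfake detector.

```python
import numpy as np

def high_frequency_ratio(gray_image: np.ndarray, cutoff: float = 0.25) -> float:
    """Share of spectral energy outside a low-frequency disc.

    Some studies report characteristic high-frequency artifacts in
    AI-generated images; this statistic only illustrates the idea of
    looking for statistical inconsistencies and is not a reliable detector.
    """
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray_image))) ** 2
    h, w = spectrum.shape
    cy, cx = h / 2.0, w / 2.0
    yy, xx = np.ogrid[:h, :w]
    # Normalized distance of each frequency bin from the spectrum center.
    radius = np.sqrt(((yy - cy) / h) ** 2 + ((xx - cx) / w) ** 2)
    high_energy = spectrum[radius > cutoff].sum()
    return float(high_energy / spectrum.sum())

# Usage sketch: compare a suspect image's statistic with values measured on a
# reference set of known-authentic images; here random noise stands in for data.
rng = np.random.default_rng(0)
test_image = rng.normal(size=(256, 256))
print(f"high-frequency energy ratio: {high_frequency_ratio(test_image):.3f}")
```

Production detectors rely instead on trained neural networks and ensembles of many such cues, and increasingly on the provenance signals that C2PA-style standards attach at the moment of capture.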
On the regulatory front, rules are multiplying. In the United States, platforms like Meta require labels on AI-generated content. In Europe, the Digital Services Act imposes obligations on platforms to combat disinformation. Denmark is considering a copyright amendment to protect people's likenesses from deepfakes.
Education is key: campaigns like the UN's #TakeCareBeforeYouShare raise awareness, and techniques such as lateral reading (checking a claim against independent sources) help individuals verify content. Experts like those at Virginia Tech recommend verifying sources and avoiding algorithmic filter bubbles. AI itself can counter disinformation: algorithms analyze patterns to flag content for moderation, as at FactCheck.org.
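To illustrate what "algorithms analyze patterns" can mean in practice, here is a minimal, hypothetical sketch of pattern-based content flagging using scikit-learn. The tiny training set is invented; real moderation pipelines rely on far larger labeled corpora, many more signals, and human review.

```python
# Minimal illustration of pattern-based content flagging (assumes scikit-learn).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented toy examples: 1 = previously flagged as disinformation, 0 = benign.
train_texts = [
    "Vaccines cause autism and doctors are hiding it",
    "The election was stolen by secret voting machines",
    "The city council meets on Tuesday at 6 pm",
    "A new bus line opens next month in the suburbs",
]
train_labels = [1, 1, 0, 0]

# TF-IDF turns each text into word-frequency features; logistic regression then
# learns which word patterns co-occur with the flagged examples.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

new_post = "Doctors finally admit vaccines cause autism"
prob_flagged = model.predict_proba([new_post])[0][1]
print(f"probability of being routed to human review: {prob_flagged:.2f}")
```

A classifier like this only surfaces candidates; deciding what is actually false still requires human judgment.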
Finally, an interdisciplinary approach is necessary. Projects like VERA.ai combine AI and human expertise to verify facts. A Frontiers study highlights that humans remain essential to avoid algorithmic censorship.
Disinformation amplified by AI poses a systemic risk to democracy and society by exploiting human and technological vulnerabilities. However, with advances in detection, regulation, and education, these threats can be mitigated. Organizations like the Brookings Institution call for global collaboration to balance innovation and security. Moving forward, the challenge will be to develop ethical AI that protects rather than deceives, while preserving freedom of expression. Citizens, governments, and businesses must act collectively to preserve the integrity of information in an increasingly connected world.
Jean Bourdin, Founder of Pericology, 2025, © all rights reserved.