Anticipating the systemic peril of disinformation amplified by AI
In a world where artificial intelligence (AI) blurs the line between reality and fiction, disinformation, amplified by deepfakes and automated content, threatens democracy, social cohesion, and economic stability. Faced with this crisis, a proactive approach inspired by Pericology—a systemic and bio-inspired philosophy—proposes anticipating risks through peripheral vigilance and collective cooperation. This book analyzes the dynamics of AI-driven disinformation, explores solutions such as weak indicators and adaptive strategies, and promotes intelligent prevention to strengthen the resilience of information ecosystems: “See before, stop before.”
ARTIFICIAL INTELLIGENCE · DISINFORMATION
Jean Bourdin, Founder of Pericology
9/5/2025 · 15 min read


Introduction
In a world where artificial intelligence (AI) is becoming increasingly integrated into our daily lives, the line between fact and fiction has blurred considerably. The speed at which information circulates today, fueled by sophisticated algorithms, offers unprecedented opportunities for knowledge and cooperation, but also opens the door to major systemic perils. Among these, disinformation amplified by AI is an insidious threat, capable of infiltrating consciousness, manipulating public opinion, and weakening the fundamental institutions of our societies.
This phenomenon, often referred to as “fake news,” is no longer limited to the simple spread of rumors. It has become a complex system in which manipulated content—deepfakes, automated texts, targeted disinformation campaigns—spreads at an exponential rate, rendering traditional responses insufficient or even obsolete. AI's ability to generate credible, viral content raises the risk of false information being deployed at massive scale, with potentially devastating impacts on democracy, social cohesion, and economic stability.
Faced with this threat, it is imperative to adopt a proactive and preventive approach rather than a reactive one. The philosophy guiding this reflection is Pericology, a systemic and bio-inspired approach that aims to anticipate tipping points before they occur by developing peripheral vigilance and cooperative dynamics. Inspired by nature and its self-regulating mechanisms, this approach encourages us to see before in order to stop before.
This book therefore proposes to model and analyze the complex dynamics of disinformation amplified by AI, integrating technological, social, and systemic dimensions. We will explore how the implementation of weak indicators, cooperation between actors, and the adaptation of strategies at the collective level can constitute essential levers to prevent this systemic crisis. Our objective is to offer a clear and strategic vision to anticipate these risks, by strengthening the resilience of our information ecosystems.
Ultimately, this approach is part of a logic of intelligent prevention: “See before, stop before.” It invites us to deploy active vigilance and cooperation inspired by biological models, in order to avert catastrophe before it even occurs.
Chapter 1: The Nature of the Peril – Disinformation Amplified by AI
1.1 Definition and mechanisms
At the heart of the modern information crisis is the ability of artificial intelligence to generate, manipulate, and disseminate fake content at an unprecedented scale and speed. AI-enhanced disinformation is more than just spreading fake news: it is a sophisticated system where manipulation leverages advanced technologies to produce content that is credible, even indistinguishable from reality.
The main mechanisms are:
Deepfakes: images, videos, or audio synthesized by AI that make people, real or fictional, appear to speak or act in moments or contexts that are entirely invented.
Automated texts: written content, articles, or viral messages generated by language models such as GPT, convincing enough that readers often cannot tell they are machine-produced.
Targeted campaigns: the use of algorithms to precisely target social, political, or economic groups with messages designed to influence, polarize, or disorient.
These mechanisms exploit the speed, scale, and apparent credibility of AI to saturate the information space, making verification and detection of false information increasingly difficult.
1.2 Technological dimensions
The rapid evolution of AI technologies has made disinformation increasingly sophisticated:
Content generators: natural language processing models (GPT, BERT, etc.) that create coherent, credible, context-aware text.
Visual and audio creation tools: software capable of producing realistic video and audio, such as deepfakes, which can depict public figures or ordinary citizens performing actions or saying words they never did.
Viral diffusion algorithms: social platforms and search engines that optimize the spread of content, amplifying its visibility and impact.
This technological triptych makes the line between information and disinformation increasingly blurred, and raises immense challenges for verification and accountability.
1.3 Socio-cultural dimensions
The spread of disinformation is not limited to the technological sphere: it is part of a complex social and cultural context.
Loss of trust: faced with the proliferation of false information, trust in traditional media, in institutions, and even in reality itself is eroding.
Social polarization: the targeted dissemination of manipulated content accentuates divisions, encourages radicalization, and weakens the democratic fabric.
Manipulation of opinion: by exploiting emotions, cognitive biases, and social networks, disinformation shapes perceptions, influences votes, and can generate extreme social or political movements.
The feeling of uncertainty and mistrust thus becomes fertile ground for the rise of extreme or destabilizing discourse.
1.4 Systemic dimensions
Beyond the technological and social aspects, disinformation amplified by AI poses major systemic risks:
Impact on institutions: the credibility of governments, the media, and key actors is weakened, which can lead to a crisis of legitimacy.
Weakening of information ecosystems: the proliferation of fake news makes it difficult to circulate reliable information, creating an environment where truth becomes relative.
Risks to social and economic stability: coordinated disinformation campaigns can provoke economic crises, social tensions, and even open conflict.
This peril, if not anticipated and controlled, could lead to a global crisis of confidence, threatening the very functioning of our democracies and our modern societies.
In summary, AI-amplified disinformation is a threat that operates across technological, socio-cultural, and systemic dimensions at once. It is driven by a dynamic in which the speed, credibility, and reach of manipulated content often exceed the capacity of traditional actors to respond effectively.
Chapter 2: Systemic and bio-inspired dynamics
2.1 Understanding the systemic complexity of disinformation
The spread of disinformation amplified by AI cannot be understood solely through a linear or purely technological lens. It is part of a complex system where multiple actors, mechanisms, and feedback loops interact dynamically.
This system is characterized by:
Multiple levels: from AI-generated content, to distribution platforms, to human receivers, each level influences and is influenced by the others.
Feedback: the virality of certain content fuels its further spread, creating reinforcement and amplification loops.
Non-linearity: small events or weak signals can trigger large-scale cascades of disinformation, making control difficult.
To model these dynamics, it is useful to draw inspiration from the principles of ecology and biological systems, where stability, resilience, and self-regulation have been forged by evolution.
2.2 Analogies with biological ecosystems
Natural ecosystems, such as forests or oceans, are complex living systems, capable of self-regulation, resisting disturbances, and returning to balance after shocks.
Some of these bio-inspired mechanisms offer avenues for understanding and regulating the spread of disinformation:
Resilience: the capacity of an ecosystem to absorb a disturbance (e.g., viral misinformation) without fundamentally changing its structure.
Self-organization: in nature, local mechanisms (such as regulation by predators or parasites) allow growth or propagation to be controlled without central intervention.
Symbiosis and mutualism: some organisms live in harmony, sharing resources to enhance their survival. This cooperation can inspire strategies to strengthen trust and veracity in the digital space.
By adopting these principles, it becomes possible to imagine systems of self-regulation and prevention that rely on local cooperation and collective regulation, rather than on a head-on fight against disinformation.
2.3 Collective regulation mechanisms
In nature, the stability of a system often relies on collective regulatory mechanisms. These processes can be transposed to our information ecosystems:
Local control points: for example, communities or platforms that verify, filter, or flag suspicious content.
Surveillance networks: deployment of actors (individuals, organizations, AI systems) forming a monitoring network adapted to the local or global scale.
Adaptive feedback: the system's ability to adjust its reactions in real time, quickly detecting and mitigating the spread of false information.
These participatory mechanisms strengthen collective resilience, just as the natural ecosystem maintains its balance in the face of invasions or disturbances.
2.4 The concept of equilibration and tipping thresholds
A key concept in systems science is that of critical thresholds or tipping points: beyond a certain level of disturbance, the system can tip into an unstable or unbalanced state.
In the context of disinformation, this could correspond to a threshold where the majority of information becomes false or manipulated, leading to a total loss of trust or a crisis of legitimacy.
The goal of the bio-inspired approach is to identify these thresholds early and intervene before they are crossed, strengthening the system's resilience and maintaining its balance.
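To make the threshold idea concrete, here is a minimal sketch under assumed dynamics that are not taken from Pericology itself: the share x of manipulated content grows by contagion at rate beta and is removed by verification at rate gamma, so the system tips exactly when beta/gamma crosses 1. All names and rates are illustrative.

```python
# Minimal, illustrative tipping-point model (assumed dynamics, not the book's):
# x = share of manipulated content; it spreads by contagion (rate beta)
# and is removed by verification/correction (rate gamma).

def simulate(beta: float, gamma: float, x0: float = 0.01,
             dt: float = 0.1, steps: int = 2000) -> float:
    """Integrate dx/dt = beta*x*(1 - x) - gamma*x and return the final x."""
    x = x0
    for _ in range(steps):
        x += dt * (beta * x * (1 - x) - gamma * x)
    return x

for beta in (0.8, 0.95, 1.05, 1.5):
    print(f"beta/gamma = {beta:.2f} -> long-run share of fake content: "
          f"{simulate(beta, gamma=1.0):.3f}")
```

Below the threshold (beta/gamma < 1), any perturbation dies out on its own, which is resilience; above it, misinformation persists no matter how small the initial seed, which is exactly the irreversibility the bio-inspired approach seeks to act on before it occurs.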
2.5 Application to the prevention of disinformation
By integrating these principles into our prevention strategies, we can:
Establish local checkpoints to strengthen early detection.
Promote cooperation between actors —humans and machines—for collective regulation.
Develop weak indicators to anticipate the emergence of a systemic crisis.
Rely on adaptive models that continuously adjust our responses to rapidly evolving manipulation techniques.
In summary, bio-inspired modeling of system dynamics offers a rich and effective vision for understanding the spread of disinformation amplified by AI. It highlights the importance of an integrated approach, where each actor and each local mechanism contributes to global stability, thus making it possible to anticipate and prevent dangers before they become uncontrollable.
Chapter 3: Peripheral Vigilance – “Seeing Ahead”
3.1 The importance of proactive observation
In a context where the speed at which disinformation spreads can lead to rapid and destabilizing crises, it becomes essential to adopt a peripheral vigilance approach. Rather than waiting for a crisis to erupt and the damage to be irreversible, it is strategic to constantly monitor the information environment, looking for weak warning signals.
Proactive observation aims to capture these weak signals—those subtle clues that precede a major crisis—and intervene early. This requires the use of advanced tools, data science, and heightened collective vigilance.
3.2 Weak signals and early indicators
Weak signals are inconspicuous but significant clues that indicate destabilization or dangerous developments in the system. Their early detection makes it possible to anticipate the spread of disinformation.
Examples of weak indicators in the context of AI-amplified disinformation:
Abnormal increase in suspicious content in certain networks or platforms.
Disparities in the spread of certain topics between different regions or communities.
Anomalies in the sharing speed or popularity of certain content.
Overrepresentation of certain keywords or themes related to manipulation or polarization.
Coordinated activities of accounts or bots to amplify certain discourses.
These signals, if detected early, offer a window for action: slowing the spread or mobilizing countermeasures.
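As an illustration of the third indicator, anomalies in sharing speed, a weak-signal detector can be as simple as a rolling z-score over hourly share counts. The sketch below is a hypothetical minimal version; the data and the 3-sigma threshold are assumptions, not recommended settings.

```python
# Hypothetical weak-signal detector: flag hours where a topic's share rate
# deviates strongly from its recent baseline (rolling z-score).
from statistics import mean, stdev

def weak_signal_alerts(share_counts: list[int], window: int = 24,
                       z_threshold: float = 3.0) -> list[int]:
    """Return the indices (hours) where activity is anomalously high."""
    alerts = []
    for t in range(window, len(share_counts)):
        baseline = share_counts[t - window:t]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and (share_counts[t] - mu) / sigma > z_threshold:
            alerts.append(t)
    return alerts

# A quiet baseline, then a sudden coordinated amplification at hour 30.
series = [10, 12, 9, 11, 10, 13, 11, 10, 12, 11] * 3 + [95, 140, 180]
print(weak_signal_alerts(series))  # -> [30, 31, 32]
```

In practice such a detector would feed the alert systems described in the next section, with humans reviewing each flag.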
3.3 Technological detection tools
To identify these weak signals, it is crucial to mobilize a range of technological tools:
AI for monitoring and semantic analysis: systems capable of analyzing circulating content in real time, identifying inconsistencies or detecting artificially generated content.
Social network mapping: visualization of information flows, identification of key nodes, detection of clusters of suspicious activity.
Trend analysis: monitoring of thematic developments, virality, and variations in the distribution of certain content.
Automatic alert systems: notifications triggered when anomalies or weak signals are detected.
These tools must be integrated into a coherent platform, allowing continuous monitoring and rapid reaction.
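As a sketch of what social network mapping can look like in code, the toy example below uses networkx, an existing Python graph library, to perform the two operations just described: ranking the most-amplified accounts and flagging unusually dense amplification clusters. The accounts and the density threshold are invented for illustration.

```python
# Toy social-network mapping with networkx; accounts and edges are made up.
import networkx as nx

# Edge (a, b) means account a amplified (shared/reposted) account b.
edges = [
    ("bot_1", "seed"), ("bot_2", "seed"), ("bot_3", "seed"),
    ("bot_1", "bot_2"), ("bot_2", "bot_3"), ("bot_3", "bot_1"),
    ("alice", "news_org"), ("bob", "news_org"), ("carol", "alice"),
]
g = nx.DiGraph(edges)

# Key nodes: accounts whose content is most amplified (highest in-degree).
print(sorted(g.in_degree(), key=lambda kv: kv[1], reverse=True)[:3])

# Suspicious clusters: tightly interlinked groups of amplifiers.
for component in nx.weakly_connected_components(g):
    sub = g.subgraph(component)
    density = nx.density(sub)
    if density > 0.3:  # illustrative threshold for "unusually dense"
        print("dense cluster:", sorted(component), f"density={density:.2f}")
```

Here the mutually amplifying bot ring stands out by its density, while the ordinary readers sharing a news organization's content do not.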
3.4 The strategic observation approach
Proactive observation is not limited to data collection. It involves a strategic approach:
Clear definition of sensitive areas: the subjects, actors, and networks to monitor as priorities.
Contextual analysis: interpreting weak signals in light of the socio-political, cultural, and technological context.
Continuous adjustment: refining tools and indicators as the threat evolves.
Integration into collaborative governance: bringing together civil, institutional, and technological actors for shared vigilance.
The goal is to be able to "see before" the crisis becomes uncontrollable, by mobilizing collective and technological intelligence.
3.5 Complementarity between technologies and human vigilance
Despite the advancement of automated tools, human vigilance remains essential. Artificial intelligence can process an impressive volume of data and spot anomalies, but detailed interpretation, understanding context, and strategic decision-making require human expertise.
A hybrid approach, where the machine provides initial detection and the human decides or delves deeper, is the most effective.
3.6 Concrete applications
Here are some examples of concrete applications of this peripheral vigilance:
Real-time monitoring platforms: dashboards integrating key indicators to track the spread of false information.
Community monitoring networks: groups of actors engaged in detecting and reporting suspicious content.
Early warning systems: mechanisms to quickly alert decision-makers and deploy countermeasures.
Public-private partnerships: collaboration between authorities, technology companies, and civil society for effective surveillance.
In short, peripheral vigilance is the first line of defense for seeing before: anticipating the crisis as it looms. By combining advanced technological tools with informed human vigilance, it becomes possible to identify weak signals early and deploy preventive measures before disinformation becomes uncontrollable.
Chapter 4: Cooperative Prevention – “Stopping Before”
4.1 The spirit of cooperative prevention
Given the complexity of the phenomenon, an effective response cannot rely solely on detection or repression. It must be based on a proactive, collaborative, and systemic approach, in which each actor participates in regulating the information ecosystem. Cooperative prevention aims to establish a dynamic network of vigilance and collective action to anticipate and interrupt the spread of false information before it gains momentum.
Inspired by the mechanisms of symbiosis, self-organization, and collective regulation present in nature, this approach favors territorial, institutional, technological, and citizen cooperation.
4.2 Nature-inspired regulatory mechanisms
In nature, the stability of an ecosystem often relies on self-organizing and cooperative mechanisms. These principles can inform our prevention strategies:
Local self-organization: local verification and reporting communities or networks that empower themselves to filter and control the spread of questionable content.
Feedback regulation: the ability of certain actors or systems to adjust their actions based on received signals, thereby enhancing overall stability.
Sharing resources and knowledge: promoting the pooling of knowledge, tools, and best practices to strengthen collective resilience.
The key is to create a cooperative fabric where vigilance is not centralized, but distributed and shared, like biological networks.
4.3 The establishment of monitoring networks
To strengthen this cooperative prevention, it is essential to establish active vigilance networks , made up of various actors:
Institutional actors: authorities, regulatory agencies, public media.
Private actors: digital platforms, technology companies, private media.
Civil society: NGOs, associations, engaged citizens.
Experts and researchers: specialists in IT, social sciences, and communication.
These networks can operate at local, national or transnational levels, relying on collaborative tools and shared platforms.
4.4 Solutions inspired by symbiosis and collective regulation
Interdependence and cooperation are key levers for strengthening prevention. Here are some concrete examples:
Verification "clusters" : groups of actors specialized in the detection and correction of suspicious content, who collaborate in real time.
“Commitment contracts” : agreements between platforms, media and institutions to coordinate their detection and correction actions.
"Feedback systems" : mechanisms that allow citizens and stakeholders to quickly report questionable content, active in local regulation.
Like symbioses in nature, these collaborations must be based on trust, reciprocity, and continuous adaptation.
4.5 Community and educational approaches
Beyond technological tools, prevention also relies on education and citizen empowerment:
Awareness programs: training in critical reading, source verification, and detection of manipulated content.
Citizen mobilization: encouraging active participation in information monitoring via participatory platforms.
Culture of verification: promoting the daily practice of verification and questioning.
It is these social dynamics that strengthen the resilience of the system in the face of the spread of false information.
4.6 Synergy between technology and cooperation
The effectiveness of cooperative prevention relies on the synergy between advanced technological tools and human mobilization. Technology facilitates detection, visualization, and communication, while the human factor ensures interpretation, contextualization, and decision-making.
It is a true partnership where each actor, whether human or machine, plays a complementary role in the barricade against disinformation.
In summary, cooperative prevention is an essential strategy for blocking the spread of disinformation amplified by AI. By mobilizing vigilance networks, drawing inspiration from nature's regulatory mechanisms, and strengthening collective responsibility, it is possible to anticipate and intervene upstream, before the crisis becomes unmanageable.
Chapter 5: Integrated Modeling and Adaptive Strategies
5.1 The value of systemic modeling
Faced with the growing complexity of AI-amplified disinformation, it is no longer enough to react once a crisis has broken out. It is becoming essential to develop modeling tools that can represent the dynamics at play at an integrated scale. These models offer a synthetic view, make tipping points easier to understand, and help develop preventive and adaptive strategies.
Integrated modeling combines:
Technological dimensions (algorithms, networks, diffusion).
Social components (behaviors, networks, trust).
Systemic mechanisms (feedback, thresholds, resistance).
The goal is to create a simulator capable of representing the complexity of propagation and regulation, as well as anticipating the effects of different interventions.
5.2 Bio-inspired approach to modeling
Models from biology and ecology provide valuable paradigms:
Self-organizing control networks: simulating how “local control points” can regulate diffusion.
Resilience models: representing the ability of a system to absorb disturbances without tipping over.
Critical thresholds and tipping points: identifying the thresholds to monitor for prevention.
These models make it possible to virtually experiment with different prevention strategies, evaluate their effects, and adapt responses accordingly.
5.3 Adaptive strategies and learning
A key characteristic of biological systems is their adaptive capacity. We must translate this capacity into our strategies:
Continuous learning loops: continuously adjusting models and strategies based on new signals, technological developments, and feedback.
Dynamic threshold adjustment: increasing or decreasing the sensitivity of detection systems depending on the context.
Scenario simulation: testing, in advance, the impacts of different interventions (strengthened verification, education campaigns, technological regulation).
These adaptive strategies ensure a flexible, effective, and resilient response to a constantly evolving threat.
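One possible reading of “dynamic threshold adjustment” is sketched below: an alert threshold that drifts with analyst feedback, becoming more sensitive after confirmed incidents and less sensitive after false positives. The class name, step size, and bounds are assumptions chosen for illustration, not a prescribed design.

```python
# Hypothetical adaptive alert threshold driven by analyst feedback.

class AdaptiveThreshold:
    def __init__(self, threshold: float = 3.0,
                 floor: float = 1.5, ceiling: float = 5.0) -> None:
        self.threshold = threshold
        self.floor, self.ceiling = floor, ceiling

    def is_alert(self, z_score: float) -> bool:
        return z_score > self.threshold

    def feedback(self, was_real_incident: bool) -> None:
        """Learning loop: analysts confirm or reject each alert."""
        step = -0.2 if was_real_incident else 0.2
        self.threshold = min(self.ceiling, max(self.floor, self.threshold + step))

detector = AdaptiveThreshold()
for confirmed in (True, True, False, True):  # analyst verdicts on past alerts
    detector.feedback(confirmed)
print(f"adjusted threshold: {detector.threshold:.1f}")  # 3.0 -> 2.6
```

The floor and ceiling keep the loop stable, mirroring the bounded self-regulation observed in biological systems.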
5.4 Modeling Tools and Technologies
To implement these strategies, several tools can be used:
Network theory-based models: to analyze propagation in social networks.
Multi-agent simulators: to represent the dispersed behaviors of individuals, bots, and platforms.
Nonlinear dynamic models: to represent critical thresholds, exponential growth, or saturation.
Artificial intelligence and machine learning: to continuously refine modeling based on real-time data.
The integration of these tools provides a dynamic systemic vision, essential for developing robust strategies.
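As a toy demonstration of how these tools combine, the sketch below runs a multi-agent cascade on a hub-heavy network (a Barabási–Albert graph from networkx), where fact-checking “immunizes” a random fraction of agents. All parameters are assumptions; this illustrates the modeling style, not a validated model.

```python
# Toy multi-agent diffusion of a fake story on a social graph.
import random
import networkx as nx

def run(p_share: float, p_checked: float, seed: int = 42) -> int:
    """Return how many of 1000 agents end up exposed to the fake content."""
    rng = random.Random(seed)
    g = nx.barabasi_albert_graph(1000, 3, seed=seed)  # hub-heavy network
    immune = {n for n in g if rng.random() < p_checked}  # fact-checked agents
    exposed, frontier = {0}, [0]  # the story starts at agent 0
    while frontier:
        nxt = []
        for node in frontier:
            for nb in g.neighbors(node):
                if nb not in exposed and nb not in immune and rng.random() < p_share:
                    exposed.add(nb)
                    nxt.append(nb)
        frontier = nxt
    return len(exposed)

for p_checked in (0.0, 0.2, 0.4):
    print(f"fact-checked share {p_checked:.0%} -> exposed agents:",
          run(p_share=0.3, p_checked=p_checked))
```

Varying p_checked shows, in miniature, how strengthening verification shifts the system away from its tipping region.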
5.5 Prospective scenario writing
One of the key objectives of the models is the ability to simulate different future scenarios:
Controlled-crisis scenario: rapid intervention stops the spread.
Tipping scenario: gradual loss of trust leads to systemic crisis.
Resilience scenario: the system resists and recovers quickly.
These scenarios fuel strategic thinking, operational preparation, and the prioritization of actions.
5.6 A systemic governance approach
For these models to be truly effective, they must be integrated into systemic governance, involving all the actors in the ecosystem:
Sharing data and models.
Co-construction of strategies.
Implementation of collaborative platforms for monitoring, modeling, and simulation.
Regular review based on feedback and threat developments.
The challenge is to transform these tools into real levers of collective anticipation.
In summary, integrated modeling and adaptive strategies are an essential step toward anticipating, understanding, and effectively responding to the dynamics of disinformation amplified by AI. By relying on bio-inspired paradigms, these approaches allow policies and actions to be tested upstream, ensuring a flexible and resilient response to a constantly evolving threat.
Chapter 6: Case studies and prospective scenarios
6.1 Introduction to the scenario approach
The objective of this chapter is to explore, through concrete examples and simulations, how the strategies discussed in the previous chapters can be applied to anticipate and limit the spread of false information in a complex digital environment.
Prospective scenarios allow us to identify tipping points, assess the effectiveness of preventive measures, and continuously adjust our strategies.
6.2 Scenario 1: The rise of a targeted disinformation campaign
Context:
A social media platform detects an abnormal increase in content related to a sensitive topic, with a high concentration of bots amplifying a destabilizing message. Weak signals indicate a coordinated mobilization.
Simulation:
Systemic modeling predicts that, without intervention, the spread will reach a critical threshold within 48 hours, triggering a major crisis of confidence.
Preventive actions:
Immediate activation of local community networks to report and verify content.
Strengthening automatic detection via AI for massive filtering.
Targeted awareness campaigns to defuse disinformation.
Expected result:
A significant reduction in the spread, avoiding the tipping point into a systemic crisis.
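For readers wondering where an estimate like “critical threshold within 48 hours” might come from, here is a back-of-envelope projection with purely hypothetical numbers: if the campaign's reach doubles every six hours, the time to cross an assumed crisis threshold is a simple logarithm.

```python
# Hypothetical projection behind a "critical within 48 hours" estimate.
import math

reach_now = 40_000             # accounts reached so far (assumed)
doubling_hours = 6.0           # observed doubling time (assumed)
crisis_threshold = 10_000_000  # reach treated as systemic crisis (assumed)

hours = doubling_hours * math.log2(crisis_threshold / reach_now)
print(f"threshold crossed in ~{hours:.0f} hours")  # ~48 hours
```

Real models would add saturation and countermeasures, but even this crude projection shows why early intervention buys disproportionate time.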
6.3 Scenario 2: The breakdown of resilience in the face of a geopolitical crisis
Context:
A manipulated video released by a foreign entity threatens to spark a diplomatic crisis. The speed of propagation is exponential, and trust in traditional media is diminishing.
Simulation:
Bio-inspired modeling shows that if local regulation and cooperation are not strengthened, the system tips into a state of irreversible crisis.
Intervention strategies:
Instant mobilization of a global participatory verification network.
Rapid implementation of an alert system between institutional actors.
Transparent communication to restore trust.
Expected result:
Effective management, stopping the crisis before it becomes a catastrophe.
6.4 Scenario 3: Systemic resilience in the face of a deepfake invasion
Context:
A series of deepfake videos depicting public figures is circulating, sparking widespread distrust. The threat is a wholesale loss of trust in public imagery.
Simulation:
Systemic and adaptive modeling shows that resilience can be maintained if an international coalition establishes a joint observatory using advanced detection AI, combined with large-scale educational communication.
Recommended actions:
Deployment of a global real-time verification network.
Increased education to raise public awareness.
Technological regulation to limit the creation of deepfakes.
Expected result:
Stabilization of social trust, with the ability to quickly detect and defuse deepfakes.
6.5 Lessons learned from these scenarios
Proactivity, through modeling and simulation, makes it possible to predict tipping points.
Cooperation, combined with technological regulation and education, is the cornerstone of prevention.
The ability to adapt in real time, through dynamic strategies and modeling tools, is essential to address rapidly changing threats.
Conclusion
These scenarios illustrate that anticipation, integrated modeling, and bio-inspired cooperation offer powerful levers for addressing the systemic peril of AI-amplified disinformation. By combining proactive monitoring, adaptive strategies, and multi-stakeholder alliances, it becomes possible to transform a dangerous environment into a resilient space capable of preserving social cohesion and truth.
See before, stop before — Towards a resilient collective intelligence
Faced with the rise of an unprecedented threat, where artificial intelligence amplifies the spread of misinformation at an alarming speed and scale, our ability to anticipate and act early becomes a vital necessity. This ebook explores a systemic, bio-inspired, and cooperative approach aimed at transforming the threat into an opportunity to strengthen collective resilience.
We have shown that integrated modeling, combining weak signals, natural regulatory mechanisms, and adaptive strategies, constitutes a powerful tool for predicting tipping points and deploying preventive actions. Peripheral vigilance, by mobilizing technology and human intelligence, makes it possible to "see before" the crisis. Cooperative prevention, relying on local and global vigilance networks, promotes collective "barricading," inspired by the dynamics of nature.
This is not just a technological battle against manipulated content, but a profound transformation of our relationship with information. Collective responsibility, education, and shared governance must become the pillars of a robust information ecosystem capable of resisting waves of manipulation.
Finally, this approach must not remain theoretical. It calls for concrete mobilization of all stakeholders—citizens, institutions, businesses, researchers—to deploy these strategies and tools, and to cultivate trust and truth in our society.
Seeing before, stopping before means choosing prevention over cure. It means trusting collective intelligence, responsible innovation, and our system's capacity to self-regulate. By acting today, we can preserve the vitality of our democracies, the stability of our societies, and trust in our institutions.
The future belongs to those who anticipate, collaborate, and innovate for a healthy, resilient, and informed information environment.
Jean Bourdin, Founder of Pericology, 2025, © all rights reserved.
Appendices
Appendix 1: Key Concepts
Pericology
A systemic and bio-inspired approach aimed at anticipating tipping points in a complex system, by developing proactive rather than reactive vigilance.
Holopraxie
Dimension of Pericology which integrates concrete, technological and systemic action, combining technology, society, and the environment for global regulation.
Ecosynpraxia
Approach inspired by collaborative dynamics in nature, aiming to model and apply mechanisms of collective regulation, self-organization, and resilience.
Weak signals
Early indications or marginal indicators that signal a critical development in a system, requiring careful monitoring.
Tipping thresholds
Critical points or thresholds in a system where a small disturbance can lead to radical and often irreversible change.
Appendix 2: Tools and resources for detection and prevention
Real-time monitoring tools:
Botometer (to detect bots on Twitter)
Google Trends (to track trends in topics)
Loomio (collaborative platform for collective monitoring)
Semantic analysis and deepfake detection software:
Deepware Scanner (to detect deepfakes)
Microsoft Video Authenticator (to authenticate videos)
OpenAI GPT detectors (to identify AI-generated texts)
Collaborative platforms:
Eris (Crowdsourced Verification Network)
NewsGuard (media reliability rating)
Learning resources:
Hoax-Slayer
Media Literacy Project
Appendix 3: Concepts and theories
Network theory
Mathematical model for analyzing the diffusion of information through social networks, allowing the identification of key nodes and points of vulnerability.
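A minimal illustration of this idea, using networkx on a toy graph: betweenness centrality singles out the account that bridges two otherwise separate communities, one natural definition of a “key node.” The graph is invented for the example.

```python
# Key-node identification on a toy graph via betweenness centrality.
import networkx as nx

g = nx.Graph()
g.add_edges_from([("a", "b"), ("b", "c"), ("a", "c"),   # community 1
                  ("x", "y"), ("y", "z"), ("x", "z"),   # community 2
                  ("c", "bridge"), ("bridge", "x")])    # the bridge account
scores = nx.betweenness_centrality(g)
print(max(scores, key=scores.get))  # -> 'bridge'
```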
Systemic resilience models
An approach to measuring and strengthening a system's ability to withstand disturbances, based on absorption and recovery capacity.
Systems theory and feedback
Analysis of feedback loops, positive or negative, which influence the stability or destabilization of a system.
Appendix 4: Bibliography and additional resources
Books:
"La Péricologie" by Jean Bourdin — a fundamental work on systems thinking and prevention.
"Collective Intelligence" by Pierre Lévy — on cooperation and collective knowledge.
"Disinformation and Manipulation" by Paulina B. et al. — on disinformation and its impacts.
Articles and reports:
UNESCO report on combating disinformation.
OECD studies on the resilience of information systems.
IEEE Publications on AI Ethics and Manipulated Content Detection.
Websites:
First Draft (collaborative verification media)
Digital Forensic Research Lab
The Data & Society Research Institute