Analysis of the vulnerabilities of artificial intelligence: a pragmatic and objective view

Artificial intelligence (AI) is now ubiquitous in many sectors, from finance and medicine to cybersecurity and transport. Yet despite its spectacular advances, AI remains exposed to a range of threats and limitations. This article offers an objective and pragmatic analysis of the main weaknesses of AI and their implications.

ARTIFICIAL INTELLIGENCE

Jean Bourdin, Founder of Pericology

2/13/2025 · 3 min read

Artificial intelligence (AI) is now a ubiquitous technology, integrated into diverse fields such as health, finance, transport, security and even entertainment. While its benefits are undeniable, it is essential to recognise and analyse the vulnerabilities inherent in this technology. This article explores these weaknesses in a neutral and factual manner, drawing on academic and professional sources.

1. Bias and algorithmic discrimination

One of the major vulnerabilities of AI lies in the biases that can be introduced when data is collected or models are designed. These biases can lead to discriminatory outcomes for certain social, ethnic or economic groups.

- Problem: Algorithms learn from the historical data they are given. If that data reflects inequalities or stereotypes, the AI risks reproducing or even amplifying them.

- Example: In 2018, Amazon abandoned an automated recruitment system because it favoured men in technical positions due to predominantly male training data (Source: Reuters).

- Potential solution: Develop methods for detecting and correcting bias, such as using more balanced data sets and implementing strict regulations; a minimal example of one such check is sketched below.
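
As an illustration, here is a minimal sketch in Python of one simple fairness check, the demographic parity gap: the spread in positive-outcome rates across groups. The column names ("group", "hired") and the toy data are hypothetical.

```python
# Minimal sketch of a fairness check: the demographic parity gap.
# Column names ("group", "hired") and the toy data are hypothetical.
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Difference between the highest and lowest positive-outcome rates across groups."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())

df = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B"],
    "hired": [1, 1, 0, 1, 0, 0],
})
print(demographic_parity_gap(df, "group", "hired"))  # ~0.33
```

A gap near zero suggests parity on this one metric; a large gap is a signal to inspect the data and model more closely, not proof of discrimination on its own.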

2. Adversarial attacks

AI systems are vulnerable to adversarial attacks, where malicious actors manipulate inputs to induce significant errors in the algorithm's predictions or decisions.

- Problem: A small perturbation that is imperceptible to a human can mislead an AI model. For example, adding carefully crafted noise to an image can convince a neural network that it depicts a completely different object.

- Example: Researchers have shown that it is possible to trick facial recognition systems using specially designed glasses to bypass identification (Source: CVPR 2018).

- Potential solution: Strengthen models with techniques such as adversarial training or the use of multi-level verification mechanisms; the sketch below shows the kind of perturbation involved.
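
To make the attack concrete, here is a minimal sketch of the fast gradient sign method (FGSM), a classic adversarial technique, assuming a PyTorch image classifier; the model and the epsilon value are placeholders rather than a reference to any specific system.

```python
# Minimal FGSM sketch: nudge each pixel slightly in the direction that
# increases the model's loss. Model and epsilon are illustrative placeholders.
import torch
import torch.nn.functional as F

def fgsm_perturb(model: torch.nn.Module, image: torch.Tensor,
                 label: torch.Tensor, epsilon: float = 0.03) -> torch.Tensor:
    """Return an adversarial copy of `image`, perturbed by at most ±epsilon per pixel."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adv = image + epsilon * image.grad.sign()
    return adv.clamp(0.0, 1.0).detach()  # keep pixels in the valid [0, 1] range
```

Adversarial training, mentioned above, consists essentially of feeding such perturbed examples back into the training set so the model learns to resist them.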

3. Lack of transparency

Many AI models, particularly those based on deep neural networks, operate like "black boxes". This means that it is difficult, if not impossible, to understand how they arrive at their conclusions.

- Problem: This lack of transparency poses ethical and practical challenges, particularly when it comes to making critical decisions in sectors such as justice or medicine.

- Example: In the legal field, certain AIs have been criticised for their opacity in decision-making processes concerning sentences or conditional releases (Source: ProPublica).

- Potential solution: Develop interpretable models or explain results using tools such as LIME (Local Interpretable Model-agnostic Explanations) or SHAP (SHapley Additive exPlanations); a short usage sketch follows below.
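
As a sketch of how such a tool is applied in practice, the snippet below assumes a fitted scikit-learn classifier clf and a feature matrix X (both placeholders) and uses the SHAP library's high-level API; the exact calls can vary with the model type.

```python
# Minimal sketch of post-hoc explanation with SHAP. `clf` and `X` are
# placeholders for a fitted model and its feature matrix.
import shap

explainer = shap.Explainer(clf, X)  # picks an explainer suited to the model
shap_values = explainer(X)          # one additive attribution per feature and prediction
shap.plots.bar(shap_values)         # global view: which features drive the outputs
```

Such attributions do not open the black box entirely, but they give auditors a concrete starting point for questioning a model's decisions.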

4. Data security and cybersecurity

AI systems often require massive volumes of data to be trained effectively. This accumulation of sensitive data is an attractive target for cyber attacks.

- Problem: The leakage of personal or confidential data can have serious consequences, ranging from violations of privacy to breaches of national security.

- Example: In 2020, a flaw in a medical AI algorithm enabled researchers to reconstruct original medical images from partially anonymised data (Source: Nature).

- Potential solution: Use techniques such as homomorphic encryption or federated learning to protect data while still allowing it to be used; the aggregation step of federated learning is sketched below.
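
As an illustration of the federated idea, here is a minimal sketch of the FedAvg aggregation step: clients train locally and send only model weights to the server, which averages them weighted by local dataset size, so the raw data never leaves the clients. The weight vectors and sizes below are illustrative.

```python
# Minimal sketch of federated averaging (FedAvg): the server combines client
# model weights without ever seeing the underlying data. Values are illustrative.
import numpy as np

def federated_average(client_weights: list[np.ndarray],
                      client_sizes: list[int]) -> np.ndarray:
    """Average client models, weighting each by its local dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

weights = [np.array([0.2, 0.5]), np.array([0.4, 0.1]), np.array([0.3, 0.3])]
sizes = [100, 300, 600]
print(federated_average(weights, sizes))  # the server's new global weights
```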

5. Fragility in the face of unforeseen conditions

AI systems are generally optimised to operate in controlled environments, or in environments similar to those seen during training. They can therefore fail or produce erroneous outputs when confronted with novel situations.

- Problem: This fragility limits the adaptability of AI models in dynamic or evolving contexts.

- Example: Autonomous cars, despite being extensively tested, still encounter difficulties in unusual scenarios, such as damaged traffic lights or unpredictable behaviour by other road users (Source: IEEE Spectrum).

- Potential solution: Improve the generalisation of models using approaches such as reinforcement learning or continual learning; a simple complementary safeguard is sketched below.
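
Alongside better generalisation, a system can at least detect when it leaves familiar territory and hand control back to a human. Below is a minimal sketch of such a guard: a simple z-score check that flags inputs whose features drift far from the training statistics. The threshold and toy data are illustrative; production systems use more robust detectors.

```python
# Minimal sketch of a distribution-shift guard: flag inputs whose features
# fall far outside the statistics seen during training. Threshold is illustrative.
import numpy as np

class DriftGuard:
    def __init__(self, train_features: np.ndarray, z_threshold: float = 4.0):
        self.mean = train_features.mean(axis=0)
        self.std = train_features.std(axis=0) + 1e-8  # avoid division by zero
        self.z_threshold = z_threshold

    def is_out_of_distribution(self, x: np.ndarray) -> bool:
        """True if any feature deviates by more than z_threshold standard deviations."""
        z = np.abs((x - self.mean) / self.std)
        return bool(z.max() > self.z_threshold)

guard = DriftGuard(np.random.randn(1000, 8))
print(guard.is_out_of_distribution(np.full(8, 10.0)))  # True: far from training data
```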

6. Technological dependence and concentration of resources

The growing complexity of AI systems is leading to heavy dependence on a few large companies and institutions that possess the necessary infrastructure.

- Problem: This concentration can limit access to AI for smaller players, accentuating technological inequalities.

- Example: Tech giants such as Google, Microsoft and Alibaba are investing massively in AI, creating a barrier to entry for start-ups and developing countries (Source: McKinsey).

- Potential solution: Promote open source initiatives and encourage international collaboration to democratise access to AI.

Conclusion

Although artificial intelligence offers many opportunities, its vulnerabilities must be taken into account to ensure its responsible and safe development. Issues relating to bias, adversarial attacks, transparency, cybersecurity, robustness and concentration of resources underline the need for a multidisciplinary approach combining scientific research, regulation and ethics.

By adopting a proactive and collaborative perspective, it will be possible to minimise these vulnerabilities and maximise the benefits of AI for all.

Jean Bourdin, Founder of Pericology, 2025 © all rights reserved

#Pericology #ArtificialIntelligence #AIVulnerabilities #AIEthics #AIRisks #TechnicalAnalysis #PragmaticApproach #AISecurity #AIRiskManagement #TechnicalVulnerabilities #ObjectiveAnalysis #AIResearch #AIRiskAssessment #Cybersecurity #EmergingTechnologies #AIImpact #AIStrategies #RiskManagement #TechTrends