AI is a fascinating field, and a booming one. Recent developments in various models have democratized AI, and that has sparked very diverse reactions: many people are enthusiastic about this technological advance and the possibilities it opens up, while others remain more cautious. And with good reason: AI hasn't always been portrayed kindly. While it's not universal, the world of fiction is full of malicious AIs: Skynet in Terminator, HAL 9000 in 2001: A Space Odyssey, or Agent Smith in The Matrix - all paint a frightening picture of AI.
Other factors must be considered to understand this distrust: our current AIs are black boxes that offer few ways to understand how they work, even with the necessary technical knowledge; the organizations capable of building large-scale AI can be counted on one hand; and so on. All of this makes it difficult for some to see AI in a positive light. To address this, a concept has emerged in the AI field: "Trustworthy AI" - AIs that are worthy of our trust.
A "Trustworthy" AI is one that meets a set of criteria allowing users, and even society as a whole, to have full confidence in how it works. Put that way, it covers everything and nothing at once, but the idea behind this field is to create a world where AIs are fully integrated into society without harming anyone. Although I've never come across this exact term, I like to call them sustainable AIs, in the spirit of sustainable development: AIs must improve our quality of life without ever degrading it. The question then becomes: what are the criteria for building these AIs?
To answer that, I'll draw on two distinct sources: Mozilla's white paper on "Trustworthy AI" and the report from the European Union's working group on the subject. These are very comprehensive documents, with a strong focus on protecting individual rights. This was a deliberate choice of sources. I want to present here an almost utopian, yet in my view achievable, vision of a world where AIs and humans coexist.
The first point to address when talking about sustainable AI is compliance with legislation. This has two aspects: AIs must respect the law, and they must lead their users to respect it as well. This point is very context-dependent, as laws and customs vary by country and evolve over time. Moreover, the laws governing AI are still under construction. The EU recently approved the AI Act (which we'll discuss further on this blog), and member states must now put it into practice. The United States is also in the process of legislating on the subject.
However, even though laws are still being drafted, we can already say that sustainable AIs will need to uphold certain principles. The EU's guidelines are indeed based on a set of shared fundamental values and rights. Among these fundamental rights, we find:
- Respect for human dignity
- Freedom of the individual
- Respect for democracy, justice and the rule of law
- Equality, non-discrimination and solidarity
- Citizens' rights
For each of these rights, the goal is not only to avoid degrading them, but also to improve and push toward consolidating these principles.
An AI that upholds ethical principles is a necessary condition for establishing trust. In the same way as the legal aspect, AI is expected not only to not degrade ethical principles, but to strengthen them. Put simply, not only must AI not harm humans, but it must help create a better world. Among the ethical principles that AI must respect, we find:
- Respect for human autonomy
- Prevention of harm
- Fairness
- Explicability
These ethical principles closely overlap with respect for fundamental rights, especially in Europe. However, even though it may seem like repetition, the foundation of these principles is different. We're not just talking about an AI that stays within legal limits, but about an active commitment from AI creators. That's how the ethical dimension complements the legal one. Proactivity from stakeholders is required.
The ethical dimension also adds an element of transparency to artificial intelligence. Not knowing how an AI works or what data it was trained on makes it impossible to trust it.
One point I haven't mentioned: it may be impossible to respect all of these principles simultaneously. In such cases, trade-offs must be made - internally or with outside input - about which solution is chosen and how the conflict is resolved. This trade-off is a conscious decision, and in the most extreme cases it may lead to the conclusion that the AI simply should not be built.
Another essential characteristic of a sustainable AI is its robustness. There are two aspects to robustness: the technical side and the social side.
The first concerns everything related to the AI's actual functioning: its training data, the underlying infrastructure, etc. First and foremost, a sustainable AI must produce reliable and reproducible results. This doesn't mean it must be perfectly deterministic (same inputs producing the exact same output), but the results must have a similar meaning. These results must also be consistent with the inputs provided.
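To make this distinction concrete, here is a minimal Python sketch of what "reproducible without being strictly deterministic" can mean: individual outputs vary in wording from run to run, but every one of them stays within a fixed set of answers that share the same meaning. The `generate` function and the answer set are toy stand-ins of my own invention, not a real model API.

```python
import random

# Hypothetical stand-in for a generative model: each call may phrase the
# answer differently, but every phrasing carries the same meaning.
CONSISTENT_ANSWERS = {
    "The sky is blue.",
    "Blue is the colour of the sky.",
    "The sky appears blue.",
}

def generate(prompt: str, rng: random.Random) -> str:
    # A real model would condition on the prompt; here we only
    # simulate non-deterministic phrasing of one fixed answer.
    return rng.choice(sorted(CONSISTENT_ANSWERS))

rng = random.Random()
outputs = {generate("What colour is the sky?", rng) for _ in range(100)}

# The outputs are not one identical string (no strict determinism),
# but all of them stay within the set of consistent meanings.
assert outputs <= CONSISTENT_ANSWERS
```

The property being tested is the one the paragraph describes: we don't require byte-for-byte identical outputs, only that the set of possible outputs never drifts outside what the inputs warrant.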
Furthermore, AIs must not disclose personal data. Generally speaking, the data handled by AIs must respect privacy and be secured so that it never leaks. Similarly, accessing an AI must not allow unauthorized access to the underlying infrastructure. These are standard software security principles, but it makes sense that they apply to AI as well.
On the social side, the AI must not be open to misuse by bad actors - another standard software security principle. A further criterion for social robustness is accountability in design: a sustainable AI must be auditable by an independent body. It must be possible to analyze the AI's design and report on whether it was properly built, and the trade-offs made on ethical principles must be documented for audit purposes. Responsibility for each use of the AI cannot fall 100% on its creators, but they must be aware that a significant share of it rests with them.
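As an illustration of what "documented trade-offs" might look like in practice, here is a hypothetical sketch of an audit record. The schema and field names are assumptions of mine, not any standard; the point is simply that each trade-off leaves a machine-readable trace an independent auditor can inspect later.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class TradeOffRecord:
    """One documented design trade-off, kept for auditors (illustrative schema)."""
    principle_affected: str   # which ethical principle was constrained
    decision: str             # what was actually decided
    rationale: str            # why the team accepted the trade-off
    decided_by: str           # who owns the decision
    date: str                 # when it was made

record = TradeOffRecord(
    principle_affected="transparency",
    decision="Ship the model without full training-data disclosure",
    rationale="The dataset contains licensed third-party corpora",
    decided_by="ethics board",
    date="2024-03-01",
)

# Serialized, the record can be archived alongside the model artifacts.
print(json.dumps(asdict(record), indent=2))
```

The design choice here is deliberate: a trade-off that exists only in meeting notes cannot be audited; one stored next to the model artifacts can.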
The previous points focused on the intrinsic qualities of AI, its properties. This one takes into account the business context of AI. Today, the organizations capable of building AI - those with the infrastructure, data, and necessary expertise - are limited to a handful. If a competitor to these organizations wants to emerge, they often need backing from a major tech company like a GAFAM to achieve significant success. We therefore need to work toward tools that make it easier to create AI, to democratize it.
Access to data is also restricted to the big tech companies that collect vast amounts of it. Since the quality of AI models largely depends on the quality of the data, these companies have a competitive advantage. That's why we need to find solutions to diversify the AI market. Mozilla proposes the creation of organizations whose purpose would be to centralize data, anonymize it, and provide access to AI-building companies. It's one possible solution among others.
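As a rough sketch of what such an organization might do before sharing data, here is a minimal pseudonymization pass in Python. One important hedge: salted hashing is pseudonymization, not full anonymization - under the GDPR the two are distinct, and a real data trust would need stronger techniques (aggregation, differential privacy, etc.). All record fields and names below are invented for illustration.

```python
import hashlib

# Hypothetical record as a data steward might receive it.
record = {"name": "Alice Martin", "email": "alice@example.com",
          "age": 34, "purchases": 12}

DIRECT_IDENTIFIERS = {"name", "email"}
SECRET_SALT = b"rotate-me-regularly"  # assumption: held by the data trust, never shared

def pseudonymize(record: dict) -> dict:
    """Replace direct identifiers with a salted hash; keep useful features."""
    out = {}
    for key, value in record.items():
        if key in DIRECT_IDENTIFIERS:
            digest = hashlib.sha256(SECRET_SALT + str(value).encode()).hexdigest()
            # Truncated hash: linkable across datasets, hard to reverse
            # without the salt - but still only pseudonymous, not anonymous.
            out[key] = digest[:12]
        else:
            out[key] = value
    return out

safe = pseudonymize(record)
assert safe["age"] == 34 and safe["name"] != "Alice Martin"
```

Because the salt stays with the steward, the same person maps to the same pseudonym across datasets (useful for model training) while AI-building companies never see the raw identifiers.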
The need for a diversified market also comes from the fact that a market dominated by too few players will foster a climate of distrust toward it.
The need for trustworthy AIs is beyond doubt if we want the field to prosper and address the general public's concerns. The criteria outlined in this article represent a very European perspective, focused on humanist values. Wanting to build sustainable AIs is a challenge for the future, and the ways to achieve it are still being defined. However, these criteria offer a glimpse of a possible future for artificial intelligence, one where AIs would be truly worthy of trust.
The implementation of these principles is still a challenge to be solved, and it's something I'd like to continue addressing on this blog.
A pillar of Lamalo, Yohann combines technical expertise with a gift for teaching. An architect at heart and a talented developer, he brings his energy and skills to the scale-up Lamalo. A natural teacher, he readily shares his knowledge.