On the blog, and in the series of talks we've just given, we've discussed at length the potential benefits and risks of Artificial Intelligence. It's a topic we're passionate about, and personally, I love discussing the possibilities and the future we can build with this wonderful tool that is AI.
However, to realize this ambition of building a better world with AI, we need to establish a robust framework for creating and using AI systems. And that's where ISO and the ISO 42001 standard on AI management systems come in.
For the few people who may not know, ISO is an international organization responsible for standardization. To put it simply, it's the body that defines standards and norms on a wide range of topics, from business management to food safety, information security, and more. Organizations can then take these standards and implement processes to comply with them, then get audited to obtain the corresponding certification. In France, AFNOR is the ISO member body. They participate on behalf of France in developing standards and certify organizations that request it. This audit isn't free, but ISO certification is widely recognized and serves as a mark of quality.
As I wrote above, the standard I want to talk about today is 42001 - "Artificial Intelligence Management System". The goal of this article is to provide an overview of the different elements of this standard and its overall framework.
The purpose of ISO 42001 is to provide organizations with a framework for implementing and integrating artificial intelligence responsibly and ethically. To achieve this, the standard requires organizations to comply with the clauses it defines: ten in total, of which we'll cover seven here. The first three (scope, normative references, and terms and definitions) set the framework for the standard and define the vocabulary used throughout.
The objective of the first of these clauses, on the context of the organization, is to analyze and document the context in which our organization operates. What we call context includes: What is the state of AI adoption among competitors? Who are the stakeholders in our AI systems? What regulations are in force? What ethical framework must the organization operate within? And so on. In short, we analyze the environment in which the company operates.
Next comes leadership, which here refers to the organization's executive management. This clause defines management's role in AI governance: building the company's AI vision and strategy, setting its priorities, and choosing the direction to take. Naturally, this vision and strategy must respect the principles of responsible and ethical AI.
This is also where we define the company's AI policy and how the organization proactively communicates about it.
The purpose of the planning clause, within the standard, is to plan risk management and risk treatment, and to assess the impact of AI in various scenarios. Several types of risk are defined, such as performance, security, and legal risks. The clause also provides the framework to follow for proper risk management planning and change management.
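To make the risk-planning idea concrete, here is a minimal sketch of a risk register covering the risk types mentioned above. This is purely illustrative: ISO 42001 does not prescribe any data format, and the field names and likelihood-times-impact scoring below are my own assumptions, borrowed from common risk-matrix practice.

```python
from dataclasses import dataclass

@dataclass
class AIRisk:
    """One entry in a hypothetical AI risk register (illustrative only)."""
    description: str
    category: str       # e.g. "performance", "security", "legal"
    likelihood: int     # 1 (rare) .. 5 (almost certain)
    impact: int         # 1 (negligible) .. 5 (critical)
    treatment: str = "to be defined"

    @property
    def score(self) -> int:
        # Simple likelihood x impact score, as in a classic risk matrix.
        return self.likelihood * self.impact

def prioritize(register: list[AIRisk]) -> list[AIRisk]:
    """Sort the register so the highest-scoring risks come first."""
    return sorted(register, key=lambda r: r.score, reverse=True)

register = [
    AIRisk("Model accuracy degrades on new data", "performance", 4, 3),
    AIRisk("Prompt injection exposes internal data", "security", 3, 5),
    AIRisk("Training data violates GDPR", "legal", 2, 5),
]

for risk in prioritize(register):
    print(f"[{risk.score:2d}] {risk.category}: {risk.description}")
```

The point is less the code than the discipline it encodes: every identified risk gets a category, an assessment, and a planned treatment, which is exactly the kind of documented process the planning clause asks for.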
The support clause concerns providing the resources needed for the AI management system to function properly. This covers both technical resources, such as the AI infrastructure, and the skills needed to maintain it. It also addresses topics like employee AI awareness, communication, and internal documentation.
As its name suggests, the objective of the operations clause is to manage AI-related operations. One of the questions it aims to answer is: how do we manage the lifecycle of projects that include AI? Whether in design, development, or deployment, the standard requires these phases to comply with its requirements.
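One lightweight way to operationalize lifecycle requirements is a gate check: each phase can only be signed off once its required evidence exists. A minimal sketch follows; the phase names come from the paragraph above, but the evidence required per phase is an assumption for illustration, not something ISO 42001 enumerates.

```python
# Hypothetical evidence required before each lifecycle phase is signed off.
# The phase names (design, development, deployment) follow the article;
# the evidence lists are illustrative assumptions.
REQUIRED_EVIDENCE: dict[str, set[str]] = {
    "design": {"impact_assessment", "data_requirements"},
    "development": {"test_results", "model_documentation"},
    "deployment": {"monitoring_plan", "rollback_procedure"},
}

def missing_evidence(phase: str, provided: set[str]) -> set[str]:
    """Return the evidence still missing before a phase can be approved."""
    return REQUIRED_EVIDENCE[phase] - provided

# Example: the design phase has an impact assessment but no data requirements.
print(sorted(missing_evidence("design", {"impact_assessment"})))
```

A check like this turns "the standard requires compliance for these phases" into something auditable: the gate either passes or names what is missing.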
This clause tackles performance evaluation. Now that the standard has shown how to design and run our management system, the question is whether it's actually effective. The standard requires internal audit processes and regular management system reviews.
Once we're able to evaluate the management system, the standard addresses continual improvement. The objective is to implement corrective and preventive actions that continuously improve the management system, potentially revisiting earlier phases to stay aligned with the company's strategy and with the standard's requirements.
These clauses naturally depend on each other for proper implementation, and they were written to form a coherent approach: analyze the environment, choose a strategy, then plan, execute, and improve. It's worth keeping this pattern in mind; personally, it reminds me a lot of project delivery phases, from ideation all the way to production.
At Reboot, even though we're not (yet!) ISO 42001 certified, this standard is a major source of inspiration as we set up our own processes for managing AI. ISO 42001 is quite demanding and requires a lot of documentation and process to implement. The picture it paints of organizational structure is very hierarchical, which doesn't quite fit our Teal culture, which is more horizontal.
However, in my view, that's not a barrier to certification. It remains a long-term goal for our Squad; we simply need to adapt these clauses and principles to our organization. Even though the ISO 42001 vision of how a company should be organized isn't exactly how we operate, the principles it sets out are universal and still apply. Above all, the standard covers many of the aspects we want in our AI management, and it seems very comprehensive on the subject.
A pillar of Lamalo, Yohann combines technical expertise with a knack for teaching. An architect at heart and a talented developer, he brings his energy and skills to the scale-up Lamalo. A born educator, he never hesitates to share his knowledge.