- Introduction to ISO 42001 Requirements
- How the Standard Is Structured
- Clause 4: Context of the Organization
- Clause 5: Leadership
- Clause 6: Planning
- Clause 7: Support
- Clause 8: Operation
- Clause 9: Performance Evaluation
- Clause 10: Improvement
- Annex A Controls Overview
- Annex B Implementation Guidance
- Getting Started with Compliance
1. Introduction to ISO 42001 Requirements
Organizations adopting artificial intelligence face a fundamental challenge: how do you govern something that evolves as fast as AI does? The answer, increasingly, is a structured management system approach. ISO/IEC 42001:2023 provides exactly that — a comprehensive set of requirements for establishing, implementing, maintaining, and continually improving an Artificial Intelligence Management System (AIMS).
Understanding the specific requirements of ISO 42001 is the first step toward certification. Whether you are a CTO evaluating the effort involved, a compliance officer mapping regulatory obligations, or a consultant guiding clients through implementation, this article provides a detailed, clause-by-clause breakdown of everything the standard requires.
ISO 42001 was published on December 18, 2023, by ISO/IEC JTC 1/SC 42, the international subcommittee dedicated to artificial intelligence standards. It is the world's first certifiable standard for AI management, and it applies to any organization that develops, provides, or uses AI-based products and services — regardless of size, sector, or geography.
ISO 42001 contains seven requirement clauses (Clauses 4 through 10), a normative Annex A with AI-specific controls, and an informative Annex B with implementation guidance. Clauses 1 through 3 cover scope, normative references, and definitions and are not auditable requirements.
2. How the Standard Is Structured
ISO 42001 follows the Annex SL harmonized high-level structure that underpins all modern ISO management system standards, including ISO 27001 (information security), ISO 9001 (quality), and ISO 14001 (environmental management). This shared structure means that organizations already certified to other ISO standards will recognize the framework immediately.
The standard is organized into the following components:
- Clauses 1-3: Scope, normative references, and terms and definitions. These clauses provide context but do not contain auditable requirements.
- Clauses 4-10: The certifiable requirements, covering context, leadership, planning, support, operation, performance evaluation, and improvement.
- Annex A (normative): A catalogue of AI-specific control objectives and controls that organizations select and apply based on their risk assessment.
- Annex B (informative): Detailed implementation guidance for each Annex A control, providing practical advice on how to put controls into practice.
- Annex C (informative): Potential AI-related organizational objectives and risk sources to help organizations identify what matters in their specific context.
- Annex D (informative): Guidance on the use of the AI management system across domains and sectors.
The Annex SL structure is significant for practical reasons. If your organization already operates an ISO 27001 information security management system, the AIMS can be integrated with it. Shared elements like management review, internal audit, document control, and corrective action processes do not need to be duplicated. This makes the path to ISO 42001 certification considerably more efficient for organizations with existing management systems.
3. Clause 4: Context of the Organization
Clause 4 is foundational. It requires the organization to look inward and outward to understand the environment in which its AIMS will operate. Without this contextual understanding, the rest of the management system lacks direction.
4.1 Understanding the Organization and Its Context
The organization must determine the external and internal issues that are relevant to its purpose and that affect its ability to achieve the intended outcomes of the AIMS. External issues might include AI-specific regulations (such as the EU AI Act), industry standards, societal expectations about AI ethics, and the competitive landscape. Internal issues might include the organization's AI maturity, technical capabilities, corporate culture, and existing governance structures.
Practical example: A healthcare organization deploying AI-assisted diagnostics would identify external issues like medical device regulations, patient safety expectations, and national health data laws. Internal issues might include the technical expertise of the radiology team, the organization's risk appetite, and existing clinical governance processes.
4.2 Understanding the Needs and Expectations of Interested Parties
The organization must identify interested parties relevant to the AIMS and determine their requirements. Interested parties are any individuals, groups, or organizations that can affect, be affected by, or perceive themselves to be affected by AI-related decisions and activities.
Common interested parties include customers, end users, employees, regulators, shareholders, AI vendors, data subjects, and affected communities. For each interested party, the organization determines what they need and expect from the AIMS — and which of those requirements will be addressed through the management system.
4.3 Determining the Scope of the AIMS
The organization must define the boundaries and applicability of the AIMS. The scope statement specifies which AI activities, systems, products, and services are covered, and which organizational units are included. The scope must be documented and available to interested parties.
Practical example: A financial technology company might scope its AIMS to cover "AI-based credit decisioning and fraud detection systems developed and operated by the Data Science and Risk Engineering divisions." This clearly defines what is in scope and what is not.
4.4 AI Management System
Finally, the organization must establish, implement, maintain, and continually improve the AIMS, including the processes needed and their interactions. This is the requirement that brings the entire management system to life — it is not enough to document policies; the organization must operate a functioning system.
4. Clause 5: Leadership
Clause 5 places explicit responsibility on top management to drive the AIMS. This responsibility cannot be delegated — the standard demands visible executive engagement.
5.1 Leadership and Commitment
Top management must demonstrate leadership by ensuring the AI policy and AIMS objectives are established and are compatible with the organization's strategic direction. They must ensure that AIMS requirements are integrated into business processes, that adequate resources are provided, and that the AIMS achieves its intended outcomes. Leadership must also promote continual improvement and support relevant management roles in demonstrating their leadership.
Practical example: The CEO of an AI platform company includes AIMS performance as a standing item on the quarterly board agenda. The CTO is formally assigned as the AIMS executive sponsor, with authority over AI governance decisions and a dedicated budget for AI risk management activities.
5.2 AI Policy
Top management must establish an AI policy that is appropriate to the organization's purpose, provides a framework for setting AIMS objectives, includes commitments to satisfy applicable requirements, and includes a commitment to continual improvement. The policy must be documented, communicated within the organization, and available to relevant interested parties.
The AI policy is a strategic document, not a technical procedure. It should articulate the organization's commitment to responsible AI, address ethical principles such as fairness, transparency, and accountability, and set the tone for how AI decisions are governed across the organization. Many organizations publish their AI policy externally to demonstrate commitment to stakeholders.
5.3 Organizational Roles, Responsibilities, and Authorities
Top management must ensure that responsibilities and authorities for relevant roles are assigned, communicated, and understood. This includes ensuring that the AIMS conforms to the standard's requirements and that AIMS performance is reported to top management. AI governance is inherently cross-functional, requiring coordination between technology, legal, compliance, operations, and business leadership.
5. Clause 6: Planning
Clause 6 is where the organization translates its contextual understanding into concrete plans for managing AI risks and pursuing AI-related opportunities.
6.1 Actions to Address Risks and Opportunities
The organization must consider the issues from Clause 4.1 and the requirements from Clause 4.2, and determine risks and opportunities that need to be addressed. The organization then plans actions to address these risks and opportunities, integrates them into AIMS processes, and evaluates their effectiveness.
6.1.2 AI Risk Assessment
This is a critical AI-specific requirement. The organization must define and apply an AI risk assessment process that establishes risk criteria (including acceptance criteria), identifies AI risks, analyzes their likelihood and impact, evaluates whether they are acceptable, and prioritizes them for treatment. The risk assessment must consider risks to individuals, groups, organizations, and society — not just risks to the organization itself.
Practical example: A recruitment platform using AI for resume screening conducts a risk assessment that identifies bias risk (the AI might unfairly disadvantage certain demographic groups), transparency risk (candidates cannot understand why they were rejected), and data quality risk (training data may not represent the applicant population). Each risk is scored for likelihood and impact, and treatment plans are developed.
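A risk assessment like the one above can be sketched as a simple likelihood-times-impact scoring exercise. This is an illustrative sketch only — the risk names, 1-5 scales, and acceptance threshold are assumptions, not values prescribed by ISO 42001:

```python
# Illustrative AI risk scoring sketch. The risks, scales, and acceptance
# threshold below are assumptions, not requirements of the standard.

ACCEPTANCE_THRESHOLD = 8  # scores above this require a treatment plan

risks = [
    # (risk name, likelihood 1-5, impact 1-5)
    ("bias: model may disadvantage demographic groups", 3, 5),
    ("transparency: candidates cannot see rejection rationale", 4, 3),
    ("data quality: training data unrepresentative", 2, 4),
]

def assess(risks, threshold=ACCEPTANCE_THRESHOLD):
    """Score each risk and flag those needing treatment, highest first."""
    scored = [(name, likelihood * impact) for name, likelihood, impact in risks]
    scored.sort(key=lambda r: r[1], reverse=True)
    return [(name, score, score > threshold) for name, score in scored]

for name, score, needs_treatment in assess(risks):
    status = "TREAT" if needs_treatment else "accept"
    print(f"[{status}] {score:>2}  {name}")
```

However the scoring is implemented, the key audit point is the same: each risk carries a documented likelihood, impact, and acceptance decision.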
6.1.3 AI Risk Treatment
For each identified risk, the organization must select appropriate risk treatment options and determine the controls necessary to implement those options. The controls selected must be compared against the Annex A controls to ensure nothing critical has been overlooked. The organization then produces a Statement of Applicability documenting which Annex A controls are applied, which are excluded, and the justification for each decision.
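The Statement of Applicability lends itself to a simple structured record per control. The sketch below is illustrative — the control IDs, statuses, and justifications are assumptions, not taken from the standard's text:

```python
# Minimal Statement of Applicability sketch. Control IDs, statuses, and
# justifications here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class SoAEntry:
    control_id: str      # Annex A control reference, e.g. "A.5.2"
    applied: bool
    justification: str   # required, especially for every exclusion
    implementation_ref: str = ""  # procedure/evidence link when applied

soa = [
    SoAEntry("A.5.2", True, "High-impact AI systems in scope", "IMPACT-PROC-01"),
    SoAEntry("A.7.3", True, "Training data governed in-house", "DATA-PROC-04"),
    SoAEntry("A.10.3", False, "No third-party AI suppliers in scope"),
]

def validate(entries):
    """Flag excluded controls lacking justification and applied controls
    lacking an implementation reference."""
    problems = []
    for e in entries:
        if not e.applied and not e.justification:
            problems.append(f"{e.control_id}: exclusion lacks justification")
        if e.applied and not e.implementation_ref:
            problems.append(f"{e.control_id}: no implementation reference")
    return problems

print(validate(soa))
```

A check like `validate` mirrors what an auditor does manually: every exclusion must be justified, and every applied control must point to evidence.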
6.2 AI Objectives and Planning to Achieve Them
The organization must establish measurable AIMS objectives at relevant functions and levels. Objectives must be consistent with the AI policy, measurable, monitored, communicated, and updated as appropriate. For each objective, the organization determines what will be done, what resources are needed, who is responsible, when it will be completed, and how results will be evaluated.
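The planning elements the clause requires — what, resources, who, when, and how results are evaluated — can be captured in a single objective record. The field values below are illustrative assumptions:

```python
# Sketch of a measurable AIMS objective record. The objective, owner,
# dates, and target values are illustrative assumptions.
from dataclasses import dataclass
from datetime import date

@dataclass
class AIMSObjective:
    what: str                 # what will be done
    resources: str            # what resources are needed
    owner: str                # who is responsible
    due: date                 # when it will be completed
    evaluation_method: str    # how results will be evaluated
    target: float             # makes the objective measurable
    current: float = 0.0

    def achieved(self) -> bool:
        return self.current >= self.target

obj = AIMSObjective(
    what="Complete impact assessments for all in-scope AI systems",
    resources="2 FTE, assessment tooling budget",
    owner="Head of AI Governance",
    due=date(2025, 6, 30),
    evaluation_method="% of in-scope systems with a current assessment",
    target=100.0,
    current=80.0,
)
print(obj.achieved())
```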
6. Clause 7: Support
Clause 7 ensures the AIMS has the necessary infrastructure, people, knowledge, and documentation to function effectively.
7.1 Resources
The organization must determine and provide the resources needed for the establishment, implementation, maintenance, and continual improvement of the AIMS. This includes human resources with appropriate AI expertise, technology infrastructure, budget allocation, and any external services or tools needed.
7.2 Competence
Persons doing work that affects AIMS performance must be competent on the basis of education, training, or experience. The organization must determine the necessary competence, ensure it is achieved, take action to acquire it where gaps exist, and retain documented evidence of competence. For an AI management system, competence requirements extend beyond traditional IT skills to include data science, AI ethics, impact assessment, and domain-specific expertise.
7.3 Awareness
Persons working under the organization's control must be aware of the AI policy, their contribution to the AIMS effectiveness, the benefits of improved AIMS performance, and the implications of not conforming to AIMS requirements. Awareness is not achieved through a single training session — it requires ongoing communication and engagement.
7.4 Communication
The organization must determine the internal and external communications relevant to the AIMS, including what to communicate, when, with whom, and how. This covers everything from internal reporting on AI risks to external communication with regulators and affected stakeholders.
7.5 Documented Information
The AIMS must include documented information required by the standard and any additional documentation the organization determines is necessary. This includes creating, updating, and controlling documents — ensuring they are properly identified, stored, protected, and available where needed. Key documents include the AI policy, risk assessments, Statement of Applicability, AI impact assessments, operational procedures, and records of management decisions.
ISO 42001 does not prescribe a specific documentation format or structure. Organizations can use their existing document management systems and templates. The key is that documentation is complete, current, and accessible. Organizations already certified to ISO 27001 can extend their existing document control framework to cover AIMS documentation.
7. Clause 8: Operation
Clause 8 is the operational core of the standard. It defines how AI governance operates in day-to-day practice, from risk assessments through the AI system lifecycle.
8.1 Operational Planning and Control
The organization must plan, implement, and control the processes needed to meet AIMS requirements and implement the actions determined in Clause 6. This includes establishing criteria for processes, implementing control of processes in accordance with those criteria, and maintaining documented information to demonstrate that processes are carried out as planned.
8.2 AI Risk Assessment
The organization must perform AI risk assessments at planned intervals or when significant changes are proposed or occur. This is not a one-time exercise. As AI systems evolve, new data sources are introduced, or the operating context changes, risk assessments must be updated to reflect the current reality. Results must be documented and retained.
Practical example: A logistics company using AI for route optimization reassesses risks quarterly and whenever a new geographic market is added. The reassessment considers new data sources, different regulatory environments, and the impact of AI-driven decisions on drivers and customers in the new market.
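The "planned intervals or significant changes" rule in this example reduces to a simple trigger check. The quarterly interval and the set of change types below are illustrative assumptions:

```python
# Sketch of a reassessment trigger: due on a fixed cadence or when a
# significant change occurs. The interval and change types are assumptions.
from datetime import date, timedelta

SIGNIFICANT_CHANGES = {"new_data_source", "new_market", "model_retrained",
                       "regulation_change"}

def reassessment_due(last_assessed: date, today: date,
                     recent_changes: set,
                     interval: timedelta = timedelta(days=90)) -> bool:
    """Risk assessment is due if the interval has elapsed or any
    significant change has occurred since the last assessment."""
    overdue = today - last_assessed >= interval
    changed = bool(recent_changes & SIGNIFICANT_CHANGES)
    return overdue or changed

# A new market launch triggers reassessment even mid-cycle:
print(reassessment_due(date(2024, 1, 10), date(2024, 2, 1), {"new_market"}))
```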
8.3 AI Risk Treatment
The organization must implement the AI risk treatment plan and retain documented information on the results. This means the controls selected during planning must actually be deployed, monitored, and maintained. If a control is not working as intended, the risk treatment plan must be revised.
8.4 AI Impact Assessment
This is one of the most distinctive and important requirements in ISO 42001. The organization must conduct AI impact assessments for AI systems within the scope of the AIMS. The impact assessment goes beyond traditional risk assessment by explicitly evaluating the potential effects of AI systems on individuals, groups, and society.
Impact assessments must consider fairness and non-discrimination, transparency and explainability, accountability, privacy and data protection, human autonomy and oversight, environmental impact, and social well-being. The results inform risk treatment decisions and help organizations prioritize their governance efforts.
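The dimensions listed above can be tracked as a simple completeness checklist per AI system. The sketch below is illustrative — the rating values and the completeness rule are assumptions, not prescribed by the standard:

```python
# Sketch of an AI impact assessment completeness check over the dimensions
# named in the text. Ratings and the rule itself are illustrative.

IMPACT_DIMENSIONS = [
    "fairness_non_discrimination", "transparency_explainability",
    "accountability", "privacy_data_protection",
    "human_autonomy_oversight", "environmental_impact", "social_well_being",
]

def missing_dimensions(assessment: dict) -> list:
    """Return the dimensions not yet evaluated for a given AI system."""
    return [d for d in IMPACT_DIMENSIONS if d not in assessment]

assessment = {
    "fairness_non_discrimination": "medium",
    "transparency_explainability": "high",
    "privacy_data_protection": "low",
}
print(missing_dimensions(assessment))
```

An impact assessment with any missing dimension is incomplete — a check like this keeps the gap visible until every dimension has been evaluated.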
The AI impact assessment requirement in ISO 42001 aligns closely with the EU AI Act's fundamental rights impact assessment for high-risk AI systems. Organizations implementing ISO 42001 impact assessments are building the capability and evidence base needed for EU AI Act compliance.
AI System Lifecycle
Although the numbered requirements of Clause 8 end at 8.4, the standard expects AI systems to be managed across their entire lifecycle, from initial conception and design through development, testing, deployment, operation, monitoring, and eventual retirement or decommissioning. The Annex A.6 controls provide the governance mechanisms for each lifecycle stage. This ensures that AI governance is not a one-time event but a continuous process that accompanies the AI system throughout its operational life.
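The lifecycle stages described above can be sketched as an ordered sequence with a gate check, so a system cannot skip a stage and its associated controls. The stage names follow the text; the single-step gating rule is an illustrative assumption:

```python
# Sketch of lifecycle stage-gating: a system may only advance one stage
# at a time, so no stage's controls are skipped. The gating rule is an
# illustrative assumption, not a requirement of the standard.

LIFECYCLE = ["design", "development", "testing", "deployment",
             "operation", "monitoring", "retirement"]

def can_advance(current: str, target: str) -> bool:
    """Allow only a single forward step through the lifecycle."""
    return LIFECYCLE.index(target) == LIFECYCLE.index(current) + 1

print(can_advance("testing", "deployment"))  # forward one step
print(can_advance("testing", "operation"))   # would skip deployment
```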
8. Clause 9: Performance Evaluation
Clause 9 ensures the organization knows whether its AIMS is working as intended and where improvements are needed.
9.1 Monitoring, Measurement, Analysis, and Evaluation
The organization must determine what needs to be monitored and measured, the methods for doing so, when monitoring and measurement will be performed, and when results will be analyzed and evaluated. For an AIMS, this might include monitoring AI system performance metrics, tracking risk treatment effectiveness, measuring compliance with the AI policy, and evaluating the outcomes of AI impact assessments.
Practical example: A customer service organization using AI chatbots monitors false positive rates, customer escalation frequency, bias indicators across demographic groups, and user satisfaction scores. These metrics feed into quarterly AIMS performance reports reviewed by management.
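Feeding metrics like these into a quarterly report can be as simple as comparing each monitored value against an agreed tolerance. The metric names, thresholds, and sample values below are illustrative assumptions:

```python
# Sketch of aggregating AIMS monitoring metrics into a quarterly summary.
# Metric names, thresholds, and sample values are illustrative assumptions.

thresholds = {  # maximum acceptable value per metric
    "false_positive_rate": 0.05,
    "escalation_rate": 0.10,
    "demographic_parity_gap": 0.08,
}

quarter_metrics = {
    "false_positive_rate": 0.04,
    "escalation_rate": 0.12,
    "demographic_parity_gap": 0.03,
}

def quarterly_report(metrics, thresholds):
    """Flag each monitored metric as within tolerance or breached."""
    return {name: ("ok" if value <= thresholds[name] else "breached")
            for name, value in metrics.items()}

report = quarterly_report(quarter_metrics, thresholds)
print(report)
```

A breached metric becomes an input to management review and, where appropriate, a corrective action under Clause 10.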
9.2 Internal Audit
The organization must conduct internal audits at planned intervals to verify that the AIMS conforms both to the organization's own requirements and to the requirements of ISO 42001, and that it is effectively implemented and maintained. Auditors must be objective and impartial — they cannot audit their own work. Audit results must be reported to relevant management, and documented information must be retained as evidence of the audit programme and results.
9.3 Management Review
Top management must review the AIMS at planned intervals to ensure its continuing suitability, adequacy, and effectiveness. The management review must consider the status of actions from previous reviews, changes in external and internal issues, AIMS performance information (including nonconformities, monitoring results, audit outcomes, and objective achievement), and opportunities for improvement. Outputs must include decisions related to continual improvement and any changes needed to the AIMS.
9. Clause 10: Improvement
Clause 10 drives the AIMS forward. It requires the organization not just to fix problems but to actively pursue better AI governance over time.
10.1 Nonconformity and Corrective Action
When a nonconformity occurs (a failure to meet a requirement), the organization must react to it, evaluate the need for action to eliminate the root cause so it does not recur, implement any action needed, review the effectiveness of corrective actions, and make changes to the AIMS if necessary. All nonconformities and corrective actions must be documented.
Practical example: An internal audit reveals that AI impact assessments have not been updated after a major model retraining event. The corrective action addresses the root cause (lack of a trigger mechanism linking model updates to impact assessment reviews) by implementing an automated notification process that flags AI system changes requiring governance review.
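The trigger mechanism described in this corrective action can be sketched as a change log that automatically queues a governance review for certain change types. The change types and queue structure are illustrative assumptions:

```python
# Sketch of a change-to-review trigger: certain AI system changes
# automatically flag the impact assessment for review. Change types and
# the queue structure are illustrative assumptions.

REVIEW_TRIGGERS = {"model_retrained", "new_training_data", "scope_change"}

review_queue = []

def record_change(system: str, change_type: str) -> bool:
    """Log a system change; queue a governance review if it is a trigger."""
    if change_type in REVIEW_TRIGGERS:
        review_queue.append((system, change_type, "impact_assessment_review"))
        return True
    return False

record_change("credit-scoring-v2", "model_retrained")  # queues a review
record_change("credit-scoring-v2", "ui_copy_update")   # no review needed
print(review_queue)
```

The point of the corrective action is the automation: the review is queued by the change itself, not by someone remembering to request it.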
10.2 Continual Improvement
The organization must continually improve the suitability, adequacy, and effectiveness of the AIMS. This goes beyond fixing nonconformities — it means proactively seeking ways to enhance AI governance based on lessons learned, technological developments, evolving stakeholder expectations, and changes in the regulatory landscape.
10. Annex A Controls Overview
Annex A is a normative annex that provides a comprehensive catalogue of AI-specific controls. Unlike the main clauses (which every organization must meet), Annex A controls are selected based on the organization's risk assessment and documented in the Statement of Applicability.
The Annex A controls are organized into the following categories:
- A.2 — Policies for AI: Establishing and communicating AI-specific policies aligned with organizational strategy and applicable requirements.
- A.3 — Internal Organization: Defining roles, responsibilities, and governance structures for AI management, including cross-functional coordination.
- A.4 — Resources for AI Systems: Ensuring adequate human, technical, and financial resources for the development and operation of AI systems.
- A.5 — Assessing Impacts of AI Systems: Conducting systematic assessments of AI system impacts on individuals, groups, organizations, and society.
- A.6 — AI System Lifecycle: Managing AI systems through all lifecycle phases with appropriate governance at each stage.
- A.7 — Data for AI Systems: Governing data acquisition, quality, provenance, preparation, and bias management for AI systems.
- A.8 — Information for Interested Parties of AI Systems: Ensuring transparency and providing appropriate information to stakeholders about AI system capabilities, limitations, and decision-making processes.
- A.9 — Use of AI Systems: Governing the deployment, operation, monitoring, and use of AI systems, including human oversight requirements.
- A.10 — Third-Party and Customer Relationships: Managing AI-related risks in supplier, partner, and customer relationships across the AI value chain.
For a detailed walkthrough of each control objective and its implementation, see our complete Annex A controls reference guide.
The Statement of Applicability (SoA) is a mandatory document that lists all Annex A controls, states whether each one is applied or excluded, provides justification for exclusions, and references the implementation details. The SoA is a key audit artifact — auditors review it to verify that the organization's control selection is appropriate given its risk assessment results.
11. Annex B Implementation Guidance
Annex B is an informative annex that provides detailed implementation guidance for each Annex A control. While Annex A tells you what to control, Annex B tells you how to implement those controls in practice.
Annex B guidance covers topics such as:
- How to structure AI governance committees and reporting lines
- Practical approaches to AI risk assessment methodologies
- Data governance practices including quality metrics, bias detection, and lineage tracking
- Model validation and testing strategies
- Transparency mechanisms such as model cards and datasheets
- Human oversight arrangements appropriate to different AI risk levels
- Incident management processes for AI system failures
- Supply chain due diligence for third-party AI components
Organizations implementing ISO 42001 for the first time will find Annex B invaluable. It bridges the gap between the "what" of the requirements and the "how" of practical implementation, drawing on international best practices in AI governance.
12. Getting Started with Compliance
Understanding the requirements is the first step. Implementing them is the journey. Here is a practical approach to getting started:
- Assess your current state. Take the free AI readiness assessment at baltum.ai to understand your organization's current AI governance maturity and identify the most significant gaps.
- Define your scope. Determine which AI systems, activities, and organizational units will be covered by the AIMS. Start with your highest-risk AI applications.
- Leverage existing systems. If you already have ISO 27001, ISO 9001, or other management system certifications, build the AIMS as an extension of your existing framework rather than creating a parallel system.
- Conduct risk assessments. Apply the AI risk assessment process to your in-scope AI systems, using Annex C as a guide to identify relevant risk sources.
- Select and implement controls. Use Annex A to select appropriate controls and Annex B for implementation guidance. Document your decisions in the Statement of Applicability.
- Operate and measure. Run the AIMS in practice, monitor its effectiveness, conduct internal audits, and hold management reviews.
- Pursue certification. When your AIMS is operational, engage with BALTUM's certification process for independent verification.
ISO 42001 is not about perfection. It is about building a systematic, risk-based approach to AI governance that improves over time. The requirements are demanding but achievable, and the benefits — from stakeholder trust to regulatory readiness — are substantial.
Ready to begin? Start with the free assessment at baltum.ai and get a clear picture of what your organization needs to achieve ISO 42001 certification.