Table of Contents
  1. Understanding Annex A
  2. How Annex A Controls Work
  3. A.2 — Policies for AI
  4. A.3 — Internal Organization
  5. A.4 — Resources for AI Systems
  6. A.5 — Assessing Impacts of AI Systems
  7. A.6 — AI System Lifecycle
  8. A.7 — Data for AI Systems
  9. A.8 — Information for Interested Parties
  10. A.9 — Use of AI Systems
  11. A.10 — Third-Party and Customer Relationships
  12. Building Your Statement of Applicability
  13. Next Steps

1. Understanding Annex A

Annex A of ISO/IEC 42001:2023 is a normative annex that provides a comprehensive catalogue of AI-specific control objectives and controls. It is one of the most distinctive and valuable elements of the standard, giving organizations a structured reference for implementing governance over their AI activities.

If you are familiar with ISO 27001, Annex A serves a similar purpose to the information security controls in that standard. Just as ISO 27001 Annex A provides controls for information security risks, ISO 42001 Annex A provides controls for AI-related risks. The key difference is that these controls are purpose-built for the unique challenges of artificial intelligence — from data governance and model lifecycle management to transparency, fairness, and societal impact.

Annex A does not operate in isolation. It is integrated with the main requirement clauses of the standard through the risk assessment and treatment process defined in Clause 6. Organizations conduct an AI risk assessment, determine which risks need treatment, select appropriate controls from Annex A (and beyond), and document their decisions in the Statement of Applicability.

Normative vs. Informative

Annex A is normative, meaning it is a mandatory part of the standard that auditors assess during certification. However, not every Annex A control must be implemented — the organization selects controls based on its risk assessment. What the organization must do is consider every control, document whether it applies, and justify any exclusions in the Statement of Applicability.

Annex B is informative (non-mandatory) and provides detailed implementation guidance for each Annex A control. Think of Annex A as the "what" and Annex B as the "how."

2. How Annex A Controls Work

Annex A is organized into nine control categories (A.2 through A.10), each containing one or more control objectives and specific controls. The numbering starts at A.2 because A.1 contains the annex's general introductory material rather than controls.

Each control category addresses a specific domain of AI governance. Within each category, control objectives state the high-level goal, and individual controls specify the measures the organization should implement to achieve that objective.

The control selection process follows these steps:

  1. Conduct AI risk assessment (per Clause 6.1.2) to identify AI-related risks
  2. Determine risk treatment options (per Clause 6.1.3) for each identified risk
  3. Select controls from Annex A (and any additional controls the organization determines are necessary) to implement the chosen treatment options
  4. Compare selected controls against the full Annex A list to verify that no necessary controls have been overlooked
  5. Produce a Statement of Applicability documenting all Annex A controls, their applicability status, and justification for any exclusions
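
The five steps above can be sketched in code. This is a minimal illustration only, not part of the standard: the control identifiers, risk names, and the `SoAEntry` structure are hypothetical, chosen to show how risk treatments drive applicability decisions and justified exclusions.

```python
from dataclasses import dataclass, field

@dataclass
class SoAEntry:
    control_id: str          # e.g. "A.5.2" (hypothetical identifier)
    applicable: bool
    justification: str       # why the control is included or excluded
    linked_risks: list = field(default_factory=list)

def build_soa(annex_a_controls, risk_treatments):
    """Walk every Annex A control; mark it applicable if any risk
    treatment selected it, otherwise record a justified exclusion."""
    soa = []
    for control_id in annex_a_controls:
        risks = [r for r, selected in risk_treatments.items()
                 if control_id in selected]
        if risks:
            entry = SoAEntry(control_id, True,
                             "Selected to treat identified AI risks", risks)
        else:
            entry = SoAEntry(control_id, False,
                             "No identified risk requires this control")
        soa.append(entry)
    return soa

# Hypothetical example data
controls = ["A.2.2", "A.5.2", "A.7.3"]
treatments = {"biased-output-risk": ["A.5.2", "A.7.3"]}
for entry in build_soa(controls, treatments):
    print(entry.control_id, entry.applicable, entry.justification)
```

The point of the sketch is step 4: because the loop iterates over the full control list rather than only the selected controls, every control is considered and every exclusion carries a documented justification.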

Let us now examine each control category in detail.

3. A.2 — Policies for AI

Control Objective

To provide management direction and support for AI activities in accordance with organizational requirements, relevant regulations, and ethical principles.

Key Controls

This category requires the organization to establish AI-specific policies that are approved by management, communicated to relevant parties, and reviewed at planned intervals. The AI policy must address the organization's approach to responsible AI, its ethical commitments, regulatory compliance obligations, and risk management principles.

The policies should be specific enough to guide decision-making but flexible enough to accommodate different AI use cases across the organization. They should address topics such as responsible AI development and use, ethical principles, regulatory compliance obligations, and alignment with other organizational policies.

Implementation guidance: Start with a high-level AI policy that sets strategic direction, then develop subordinate policies or procedures for specific domains (e.g., AI data governance policy, AI model validation policy). Ensure policies are living documents that are reviewed when the regulatory landscape, technology, or organizational context changes.

4. A.3 — Internal Organization

Control Objective

To establish a governance framework for AI that defines roles, responsibilities, and accountability within the organization.

Key Controls

This category addresses the organizational structure needed to govern AI effectively. Controls cover clearly defined roles and responsibilities for AI, and mechanisms for reporting concerns about AI systems.

Implementation guidance: Many organizations establish an AI governance committee or board that brings together representatives from technology, legal, compliance, operations, and business leadership. This committee reviews AI risk assessments, approves high-risk AI deployments, and oversees the AIMS. Individual AI systems should have designated risk owners who are accountable for the governance of their specific systems.

Practical Tip

Do not create a governance structure that exists only on paper. The most effective AI governance structures are those where real decisions are made — where the AI governance committee has the authority to approve, modify, or halt AI projects based on risk assessment outcomes. Governance without authority is governance without teeth.

5. A.4 — Resources for AI Systems

Control Objective

To ensure that adequate resources are available for the responsible development, deployment, and operation of AI systems.

Key Controls

This category covers the human, technical, and financial resources needed for effective AI governance, including data resources, tooling and computing resources, and the human competence required to develop and oversee AI systems. Each resource should be identified and documented.

Implementation guidance: Create a competence matrix that maps required skills to roles within the AIMS. Identify gaps and develop training plans. Consider both technical competence (data science, ML engineering) and governance competence (risk assessment, impact assessment, regulatory requirements). Document competence records as evidence for audits.

6. A.5 — Assessing Impacts of AI Systems

Control Objective

To systematically assess the potential impacts of AI systems on individuals, groups, organizations, and society.

Key Controls

This is one of the most important and distinctive control categories in ISO 42001. It requires organizations to conduct AI impact assessments that go beyond traditional risk assessment by evaluating effects on external stakeholders. Controls call for a documented impact assessment process covering potential consequences for individuals, groups of individuals, and society as a whole.

Implementation guidance: Develop an impact assessment template that covers all required dimensions. Integrate impact assessments into the AI system development lifecycle so they are conducted before deployment, not after. Establish clear escalation paths for high-impact findings — if an assessment reveals significant potential harm, governance decision-makers must be informed promptly.
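
The escalation rule described above can be made concrete in a short sketch. The dimension names, the 1-5 severity scale, and the threshold value are illustrative assumptions, not requirements of the standard.

```python
# Hypothetical impact-assessment record with an escalation rule.
# Dimensions mirror the stakeholder groups named in the control
# objective; the scoring scale and threshold are assumptions.
IMPACT_DIMENSIONS = ["individuals", "groups", "organizations", "society"]

def assess_impacts(ratings, escalation_threshold=4):
    """ratings: dict mapping each dimension to a 1-5 severity score.
    Returns the findings and whether governance escalation is needed."""
    missing = [d for d in IMPACT_DIMENSIONS if d not in ratings]
    if missing:
        # An assessment that skips a required dimension is incomplete.
        raise ValueError(f"Incomplete assessment, missing: {missing}")
    high = {d: s for d, s in ratings.items() if s >= escalation_threshold}
    return {"ratings": ratings, "escalate": bool(high), "high_impact": high}

result = assess_impacts({"individuals": 5, "groups": 2,
                         "organizations": 1, "society": 3})
```

Forcing every dimension to be scored before the function returns is one way to ensure assessments are conducted completely before deployment rather than filled in afterwards.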

EU AI Act Connection

The impact assessment controls in ISO 42001 align closely with the EU AI Act's fundamental rights impact assessment requirements for high-risk AI systems. Organizations that implement robust ISO 42001 impact assessments are building the capability and evidence base needed for EU AI Act compliance — a significant advantage as enforcement deadlines approach.

7. A.6 — AI System Lifecycle

Control Objective

To ensure appropriate governance of AI systems throughout their entire lifecycle, from design through retirement.

Key Controls

This category addresses the end-to-end lifecycle governance of AI systems, from defining objectives and requirements through design and development, verification and validation, deployment, operation and monitoring, and eventual retirement, supported by technical documentation and event logging.

Implementation guidance: Create a lifecycle governance framework that defines the governance gates an AI system must pass through at each stage. Use a risk-proportionate approach — high-risk AI systems require more rigorous governance at each gate, while lower-risk systems may follow a streamlined path. Integrate lifecycle governance with existing software development processes (CI/CD, DevOps) to avoid creating parallel processes.
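
The risk-proportionate gate idea can be sketched as a lookup of required governance artifacts per stage and risk level. The gate names, artifact names, and two-tier risk levels here are hypothetical, a sketch of the pattern rather than a prescribed structure.

```python
# Hypothetical lifecycle gate definitions: which governance artifacts
# must exist before a system advances past each stage. High-risk
# systems face stricter gates; lower-risk systems follow a
# streamlined path.
GATES = {
    "design":     {"high": {"impact_assessment", "risk_assessment"},
                   "low":  {"risk_assessment"}},
    "deployment": {"high": {"impact_assessment", "validation_report",
                            "human_oversight_plan"},
                   "low":  {"validation_report"}},
}

def gate_passed(stage, risk_level, completed_artifacts):
    """Return True only if every artifact required at this gate,
    for this risk level, has been completed."""
    required = GATES[stage][risk_level]
    return required.issubset(completed_artifacts)
```

A table like this is easy to embed in an existing CI/CD pipeline as a pre-deployment check, which avoids creating a parallel governance process.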

8. A.7 — Data for AI Systems

Control Objective

To ensure that data used in AI systems is governed appropriately throughout its lifecycle, supporting the responsible development and operation of AI.

Key Controls

Data is the foundation of AI systems, and this category addresses comprehensive data governance, including data acquisition, data quality, data provenance, and data preparation, all managed under a defined data management process.

Implementation guidance: Build data governance into the AI development pipeline. Use data cards or datasheets to document the characteristics, provenance, and intended use of each dataset. Establish data quality metrics and thresholds that must be met before data is used in production AI systems. Implement automated bias detection tools where feasible.
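
The data card and quality-threshold guidance above can be sketched as follows. The field names, metrics, and threshold values are illustrative assumptions; real data cards typically carry many more fields.

```python
from dataclasses import dataclass

# Hypothetical "data card" documenting characteristics, provenance,
# and intended use of a dataset; fields and thresholds are
# assumptions, not prescribed by ISO 42001.
@dataclass
class DataCard:
    name: str
    provenance: str          # where the data came from
    intended_use: str
    completeness: float      # fraction of non-missing values
    label_accuracy: float    # fraction of verified-correct labels

QUALITY_THRESHOLDS = {"completeness": 0.95, "label_accuracy": 0.98}

def ready_for_production(card):
    """A dataset may feed a production AI system only once every
    quality metric meets its threshold. Returns (ok, failed_metrics)."""
    failures = [m for m, t in QUALITY_THRESHOLDS.items()
                if getattr(card, m) < t]
    return (len(failures) == 0, failures)

card = DataCard("claims-2023", "internal CRM export",
                "fraud model training", completeness=0.97,
                label_accuracy=0.90)
```

Returning the list of failed metrics, not just a boolean, gives the data owner an actionable record of what must be remediated before the gate can pass.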

Data Governance Is AI Governance

The quality and governance of data used in AI systems has a direct, measurable impact on AI system outcomes. Poor data governance is one of the most common root causes of AI failures, bias incidents, and performance degradation. Investing in data controls is not just a compliance exercise — it is a direct investment in AI system reliability and trustworthiness.

9. A.8 — Information for Interested Parties

Control Objective

To ensure that appropriate information about AI systems is provided to interested parties, supporting transparency and informed decision-making.

Key Controls

This category addresses transparency and communication about AI systems, including system documentation and information for users, external reporting on AI activities, and communication of incidents to affected parties.

Implementation guidance: Develop tiered transparency approaches — different stakeholders need different types of information. End users may need simple, plain-language explanations. Regulators may need technical documentation. Business clients may need performance reports and audit results. Use model cards or AI system documentation templates to standardize how information is recorded and communicated.

10. A.9 — Use of AI Systems

Control Objective

To ensure that AI systems are used appropriately, with adequate human oversight and in accordance with their intended purpose.

Key Controls

This category governs the operational use of AI systems, including processes for responsible use, objectives for responsible use, and ensuring that systems are operated with appropriate human oversight and in accordance with their intended purpose.

Implementation guidance: Create an AI system registry that documents every AI system in scope, its intended use, risk level, oversight model, and monitoring arrangements. Establish alert thresholds for monitoring metrics so that deviations are detected and escalated promptly. Train operational staff to recognize AI system issues and follow incident response procedures.
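
A registry entry with alert thresholds, as described above, might look like the sketch below. The system name, fields, metrics, and threshold values are all hypothetical.

```python
# Hypothetical AI system registry entry plus a monitoring check that
# compares live metrics against the registered alert thresholds.
registry = {
    "credit-scoring-v2": {
        "intended_use": "consumer credit risk scoring",
        "risk_level": "high",
        "oversight_model": "human-in-the-loop",
        "alert_thresholds": {"min_accuracy": 0.90, "max_drift": 0.15},
    }
}

def check_monitoring(system_id, metrics):
    """Return the deviations that should be escalated for a system,
    based on its registered thresholds."""
    thresholds = registry[system_id]["alert_thresholds"]
    alerts = []
    if metrics["accuracy"] < thresholds["min_accuracy"]:
        alerts.append("accuracy below threshold")
    if metrics["drift_score"] > thresholds["max_drift"]:
        alerts.append("data drift above threshold")
    return alerts
```

Keeping the thresholds in the registry, next to the intended use and oversight model, means monitoring rules and governance records stay in one auditable place.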

11. A.10 — Third-Party and Customer Relationships

Control Objective

To manage AI-related risks in relationships with third parties (suppliers, partners, service providers) and customers across the AI value chain.

Key Controls

AI systems rarely operate in isolation — they depend on third-party components, services, and data. This category addresses supply chain governance, including allocating responsibilities across the AI value chain and managing AI-related risks in supplier and customer relationships.

Implementation guidance: Develop a supplier assessment questionnaire that covers AI-specific governance topics. Include AI governance clauses in procurement templates. Establish a third-party risk register for AI components. For organizations that provide AI to customers, create clear documentation of shared responsibilities for AI governance.

Supply Chain Risk

Third-party AI components are a growing source of risk. When you use a pre-trained model from a vendor, integrate an AI API from a cloud provider, or rely on a third party for training data, you inherit their AI governance risks. Annex A.10 controls ensure that you assess, manage, and monitor these inherited risks — because your certification depends on governing AI across your entire value chain, not just the systems you build yourself.

12. Building Your Statement of Applicability

The Statement of Applicability (SoA) is a mandatory document that connects your risk assessment to your control selection. For every Annex A control, it documents:

  1. Whether the control is applicable to the organization
  2. The justification for including or excluding it
  3. The implementation status of each applicable control

The SoA is a key audit artifact. During certification, auditors review the SoA to verify that:

  1. The organization has considered every Annex A control
  2. Control selections are justified by the risk assessment
  3. Exclusions are reasonable and properly documented
  4. Applied controls are actually implemented and operating

Practical tip: Do not treat the SoA as a checkbox exercise. A well-crafted SoA tells the story of your AI governance — it shows auditors that you understand your AI risks and have thoughtfully selected controls to address them. Keep the SoA as a living document that is updated whenever your risk profile, AI systems, or organizational context changes.

13. Next Steps

Understanding Annex A controls is essential for implementing an effective AIMS and achieving ISO 42001 certification. Here is how to put this knowledge into action:

  1. Assess your readiness. Take the free assessment at baltum.ai to evaluate your current AI governance posture against ISO 42001 requirements.
  2. Review the requirements. Read our complete breakdown of ISO 42001 clauses to understand the full scope of what the standard requires.
  3. Plan your implementation. Use our AIMS implementation guide for a step-by-step approach to building your AI Management System.
  4. Understand the benefits. Review the key benefits of ISO 42001 certification to build the business case for investment.
  5. Start the certification journey. Follow the certification guide to understand the process from gap analysis to certificate issuance.

Annex A controls are the practical backbone of your AI governance. They translate the management system principles of ISO 42001 into specific, auditable measures that make responsible AI real and verifiable. Master them, and you master the operational core of AI governance.

Ready to get started? Complete the free assessment at baltum.ai and discover exactly where your organization stands on Annex A control readiness.