- Understanding Annex A
- How Annex A Controls Work
- A.2 — Policies for AI
- A.3 — Internal Organization
- A.4 — Resources for AI Systems
- A.5 — Assessing Impacts of AI Systems
- A.6 — AI System Lifecycle
- A.7 — Data for AI Systems
- A.8 — Information for Interested Parties
- A.9 — Use of AI Systems
- A.10 — Third-Party and Customer Relationships
- Building Your Statement of Applicability
- Next Steps
1. Understanding Annex A
Annex A of ISO/IEC 42001:2023 is a normative annex that provides a comprehensive catalogue of AI-specific control objectives and controls. It is one of the most distinctive and valuable elements of the standard, giving organizations a structured reference for implementing governance over their AI activities.
If you are familiar with ISO 27001, Annex A serves a similar purpose to the information security controls in that standard. Just as ISO 27001 Annex A provides controls for information security risks, ISO 42001 Annex A provides controls for AI-related risks. The key difference is that these controls are purpose-built for the unique challenges of artificial intelligence — from data governance and model lifecycle management to transparency, fairness, and societal impact.
Annex A does not operate in isolation. It is integrated with the main requirement clauses of the standard through the risk assessment and treatment process defined in Clause 6. Organizations conduct an AI risk assessment, determine which risks need treatment, select appropriate controls from Annex A (and beyond), and document their decisions in the Statement of Applicability.
Annex A is normative, meaning it is a mandatory part of the standard that auditors assess during certification. However, not every Annex A control must be implemented — the organization selects controls based on its risk assessment. What the organization must do is consider every control, document whether it applies, and justify any exclusions in the Statement of Applicability.
Annex B is informative (non-mandatory) and provides detailed implementation guidance for each Annex A control. Think of Annex A as the "what" and Annex B as the "how."
2. How Annex A Controls Work
Annex A is organized into nine control categories (A.2 through A.10), each containing one or more control objectives and specific controls. The numbering starts at A.2 because A.1 is reserved for the introduction to the annex.
Each control category addresses a specific domain of AI governance. Within each category, control objectives state the high-level goal, and individual controls specify the measures the organization should implement to achieve that objective.
The control selection process follows these steps (a code sketch of steps 3 through 5 follows the list):
- Conduct AI risk assessment (per Clause 6.1.2) to identify AI-related risks
- Determine risk treatment options (per Clause 6.1.3) for each identified risk
- Select controls from Annex A (and any additional controls the organization determines are necessary) to implement the chosen treatment options
- Compare selected controls against the full Annex A list to verify that no necessary controls have been overlooked
- Produce a Statement of Applicability documenting all Annex A controls, their applicability status, and justification for any exclusions
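To make the mechanics concrete, here is a minimal Python sketch of steps 3 through 5. The control category IDs are real Annex A references, but the risk names, data structures, and helper logic are hypothetical illustrations, not anything prescribed by the standard:

```python
# Hypothetical sketch: mapping assessed risks to Annex A controls and
# checking coverage before drafting the Statement of Applicability.

# The full Annex A catalogue lists every control; abbreviated to
# category level here for illustration.
ANNEX_A_CONTROLS = {"A.2", "A.3", "A.4", "A.5", "A.6", "A.7", "A.8", "A.9", "A.10"}

# Output of the risk assessment (Clause 6.1.2) and treatment decisions
# (Clause 6.1.3): each risk maps to the controls chosen to treat it.
risk_treatments = {
    "biased training data": {"A.5", "A.7"},
    "opaque automated decisions": {"A.8", "A.9"},
    "vendor model drift": {"A.6", "A.10"},
}

selected = set().union(*risk_treatments.values())

# Step 4: compare against the full catalogue so nothing is overlooked.
for control in sorted(ANNEX_A_CONTROLS - selected):
    # Each unselected control still needs a documented justification
    # for exclusion in the Statement of Applicability.
    print(f"{control}: not selected - record a justification in the SoA")
```

The coverage check in the final loop is the point of step 4: every control that your risk treatment did not select still has to appear in the SoA with a reasoned exclusion.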
Let us now examine each control category in detail.
3. A.2 — Policies for AI
Control Objective
To provide management direction and support for AI activities in accordance with organizational requirements, relevant regulations, and ethical principles.
Key Controls
This category requires the organization to establish AI-specific policies that are approved by management, communicated to relevant parties, and reviewed at planned intervals. The AI policy must address the organization's approach to responsible AI, its ethical commitments, regulatory compliance obligations, and risk management principles.
The policies should be specific enough to guide decision-making but flexible enough to accommodate different AI use cases across the organization. They should address topics such as:
- Acceptable and unacceptable uses of AI within the organization
- Principles for fairness, transparency, and accountability in AI systems
- Requirements for AI risk assessment and impact assessment before deployment
- Data governance principles for AI systems
- Human oversight requirements for AI-driven decisions
Implementation guidance: Start with a high-level AI policy that sets strategic direction, then develop subordinate policies or procedures for specific domains (e.g., AI data governance policy, AI model validation policy). Ensure policies are living documents that are reviewed when the regulatory landscape, technology, or organizational context changes.
4. A.3 — Internal Organization
Control Objective
To establish a governance framework for AI that defines roles, responsibilities, and accountability within the organization.
Key Controls
This category addresses the organizational structure needed to govern AI effectively. Controls cover:
- Roles and responsibilities: Clearly defined and assigned roles for AI governance, including executive sponsorship, AI risk ownership, operational management, and oversight functions.
- Segregation of duties: Ensuring that conflicting duties and areas of responsibility are separated to reduce opportunities for unauthorized or unintentional misuse of AI systems.
- Cross-functional coordination: Establishing mechanisms for coordination between different parts of the organization involved in AI activities (technology, legal, compliance, business units, ethics).
Implementation guidance: Many organizations establish an AI governance committee or board that brings together representatives from technology, legal, compliance, operations, and business leadership. This committee reviews AI risk assessments, approves high-risk AI deployments, and oversees the AIMS. Individual AI systems should have designated risk owners who are accountable for the governance of their specific systems.
Do not create a governance structure that exists only on paper. The most effective AI governance structures are those where real decisions are made — where the AI governance committee has the authority to approve, modify, or halt AI projects based on risk assessment outcomes. Governance without authority is governance without teeth.
5. A.4 — Resources for AI Systems
Control Objective
To ensure that adequate resources are available for the responsible development, deployment, and operation of AI systems.
Key Controls
This category covers the human, technical, and financial resources needed for effective AI governance:
- Competence and skills: Ensuring that personnel involved in AI activities have the necessary technical skills, domain knowledge, and governance awareness. This includes data scientists, ML engineers, risk assessors, auditors, and business stakeholders.
- Training and awareness: Providing ongoing training on AI governance, ethics, and risk management to all personnel involved in AI activities. Awareness programs should extend beyond the AI team to include decision-makers who rely on AI outputs.
- Technical resources: Ensuring adequate computational infrastructure, tools, and platforms for developing, testing, deploying, and monitoring AI systems in accordance with governance requirements.
- Financial resources: Allocating sufficient budget for AI governance activities, including risk assessment, impact assessment, monitoring, auditing, and continuous improvement.
Implementation guidance: Create a competence matrix that maps required skills to roles within the AIMS. Identify gaps and develop training plans. Consider both technical competence (data science, ML engineering) and governance competence (risk assessment, impact assessment, regulatory requirements). Document competence records as evidence for audits.
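A competence matrix can be as simple as a mapping from roles to required and demonstrated skills, with the difference feeding the training plan. The sketch below is a minimal, hypothetical example; the roles and skills are illustrative, not mandated by the standard:

```python
# Hypothetical competence matrix: required vs. demonstrated skills per role.
required = {
    "ML engineer": {"model validation", "robustness testing"},
    "AI risk assessor": {"risk assessment", "impact assessment"},
}
demonstrated = {
    "ML engineer": {"model validation"},
    "AI risk assessor": {"risk assessment", "impact assessment"},
}

# Gaps feed the training plan and become competence records for audit.
for role, skills in required.items():
    gaps = skills - demonstrated.get(role, set())
    if gaps:
        print(f"{role}: training needed for {sorted(gaps)}")
```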
6. A.5 — Assessing Impacts of AI Systems
Control Objective
To systematically assess the potential impacts of AI systems on individuals, groups, organizations, and society.
Key Controls
This is one of the most important and distinctive control categories in ISO 42001. It requires organizations to conduct AI impact assessments that go beyond traditional risk assessment by evaluating effects on external stakeholders:
- Impact assessment process: Establishing a documented process for assessing AI system impacts, including criteria for when assessments are required, what dimensions of impact must be evaluated, and how results inform governance decisions.
- Dimensions of impact: Assessing impacts across multiple dimensions including fairness and non-discrimination, transparency and explainability, privacy and data protection, human autonomy and oversight, safety and security, environmental sustainability, and societal well-being.
- Stakeholder engagement: Engaging with affected stakeholders (where feasible and appropriate) to understand their perspectives on AI system impacts.
- Documentation and review: Documenting impact assessment results, making them available to relevant decision-makers, and reviewing them at planned intervals or when significant changes occur.
Implementation guidance: Develop an impact assessment template that covers all required dimensions. Integrate impact assessments into the AI system development lifecycle so they are conducted before deployment, not after. Establish clear escalation paths for high-impact findings — if an assessment reveals significant potential harm, governance decision-makers must be informed promptly.
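One way to standardize the template is to enumerate the impact dimensions and require a rating and an escalation rule for each. The sketch below is a hypothetical structure, assuming a simple low/medium/high rating scale that is not specified by the standard:

```python
# Hypothetical impact assessment record covering the dimensions named above.
DIMENSIONS = [
    "fairness and non-discrimination",
    "transparency and explainability",
    "privacy and data protection",
    "human autonomy and oversight",
    "safety and security",
    "environmental sustainability",
    "societal well-being",
]

def review_assessment(ratings: dict[str, str]) -> list[str]:
    """Return dimensions that must be escalated to governance decision-makers."""
    missing = [d for d in DIMENSIONS if d not in ratings]
    if missing:
        raise ValueError(f"incomplete assessment, missing: {missing}")
    return [d for d, level in ratings.items() if level == "high"]

ratings = {d: "low" for d in DIMENSIONS}
ratings["fairness and non-discrimination"] = "high"  # triggers escalation
print(review_assessment(ratings))
```

Rejecting incomplete assessments outright, as the `ValueError` does here, is one way to enforce the "all required dimensions" rule before results reach decision-makers.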
The impact assessment controls in ISO 42001 align closely with the EU AI Act's fundamental rights impact assessment requirements for high-risk AI systems. Organizations that implement robust ISO 42001 impact assessments are building the capability and evidence base needed for EU AI Act compliance — a significant advantage as enforcement deadlines approach.
7. A.6 — AI System Lifecycle
Control Objective
To ensure appropriate governance of AI systems throughout their entire lifecycle, from design through retirement.
Key Controls
This category addresses the end-to-end lifecycle governance of AI systems:
- Design and development: Establishing requirements for AI system design that incorporate governance considerations from the outset, including requirements specification, architectural decisions, model selection, and development standards.
- Testing and validation: Implementing systematic testing and validation processes to verify that AI systems perform as intended, meet defined requirements, and do not exhibit unacceptable behaviors. This includes functional testing, performance testing, fairness testing, robustness testing, and security testing.
- Deployment approval: Establishing a formal approval process for deploying AI systems into production, including review of risk assessment results, impact assessment outcomes, and test results by appropriate governance authorities.
- Monitoring and operation: Implementing ongoing monitoring of AI system performance, behavior, and impact during operation. This includes monitoring for model drift, data quality degradation, fairness metrics, and operational anomalies.
- Change management: Establishing processes for managing changes to AI systems (model updates, retraining, data changes, configuration changes) that include appropriate governance review and approval.
- Retirement and decommissioning: Defining processes for safely retiring AI systems, including data retention and disposal, stakeholder notification, and transition planning.
Implementation guidance: Create a lifecycle governance framework that defines the governance gates an AI system must pass through at each stage. Use a risk-proportionate approach — high-risk AI systems require more rigorous governance at each gate, while lower-risk systems may follow a streamlined path. Integrate lifecycle governance with existing software development processes (CI/CD, DevOps) to avoid creating parallel processes.
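Integrated with CI/CD, a lifecycle gate can be a script that blocks promotion until the required governance evidence exists. The sketch below is hypothetical: the gate names, evidence keys, and risk tiers are illustrative, and a real pipeline would pull this evidence from your own systems of record:

```python
# Hypothetical deployment gate: higher-risk systems require more evidence.
GATES_BY_RISK = {
    "high": {"risk_assessment", "impact_assessment", "fairness_tests",
             "robustness_tests", "governance_approval"},
    "medium": {"risk_assessment", "impact_assessment", "governance_approval"},
    "low": {"risk_assessment"},
}

def deployment_allowed(risk_tier: str, evidence: set[str]) -> bool:
    """Return True only if every gate for this risk tier has evidence."""
    missing = GATES_BY_RISK[risk_tier] - evidence
    for gate in sorted(missing):
        print(f"blocked: missing {gate}")
    return not missing

# Example: a high-risk system with incomplete evidence is blocked.
print(deployment_allowed("high", {"risk_assessment", "impact_assessment"}))
```

The table-driven structure is what makes the approach risk-proportionate: low-risk systems pass through a short path while high-risk systems accumulate the full evidence set.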
8. A.7 — Data for AI Systems
Control Objective
To ensure that data used in AI systems is governed appropriately throughout its lifecycle, supporting the responsible development and operation of AI.
Key Controls
Data is the foundation of AI systems, and this category addresses comprehensive data governance:
- Data acquisition and collection: Establishing controls over how data is acquired for AI systems, including consent, legal basis, provenance verification, and contractual rights. Organizations must ensure they have the right to use data for AI purposes.
- Data quality: Implementing processes to assess, maintain, and improve the quality of data used in AI systems. This includes accuracy, completeness, timeliness, consistency, and relevance. Data quality issues directly impact AI system performance and fairness.
- Data preparation and labeling: Governing how data is prepared, transformed, and labeled for use in AI systems. This includes documentation of data processing steps, quality assurance for labeled data, and validation of data preparation pipelines.
- Bias detection and management: Implementing processes to identify, measure, and mitigate bias in data used for AI systems. This includes statistical analysis of training data, testing for representativeness, and monitoring for bias that may emerge or shift over time.
- Data provenance and lineage: Maintaining records of where data comes from, how it has been processed, and how it is used across AI systems. Provenance tracking enables accountability and supports debugging when AI systems produce unexpected results.
- Data privacy and protection: Ensuring that personal data and sensitive data used in AI systems is handled in compliance with applicable privacy regulations and organizational policies.
Implementation guidance: Build data governance into the AI development pipeline. Use data cards or datasheets to document the characteristics, provenance, and intended use of each dataset. Establish data quality metrics and thresholds that must be met before data is used in production AI systems. Implement automated bias detection tools where feasible.
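Quality thresholds can be enforced as automated checks in the data pipeline. This is a minimal sketch assuming two illustrative metrics, completeness and a crude representativeness ratio for one attribute; real metrics and thresholds would come from your own data governance policy:

```python
# Hypothetical data quality gate run before data enters a production AI system.
def completeness(values: list) -> float:
    """Fraction of non-missing values."""
    return sum(v is not None for v in values) / len(values)

def group_share(groups: list[str], group: str) -> float:
    """Share of records in one group, a crude representativeness check."""
    return groups.count(group) / len(groups)

ages = [34, 51, None, 29, 42, 38, None, 45]
regions = ["north", "north", "south", "north", "north", "north", "north", "south"]

checks = {
    "completeness >= 0.9": completeness(ages) >= 0.9,
    "min group share >= 0.2": group_share(regions, "south") >= 0.2,
}
for name, passed in checks.items():
    print(f"{name}: {'pass' if passed else 'fail - investigate before use'}")
```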
The quality and governance of data used in AI systems have a direct, measurable impact on AI system outcomes. Poor data governance is one of the most common root causes of AI failures, bias incidents, and performance degradation. Investing in data controls is not just a compliance exercise — it is a direct investment in AI system reliability and trustworthiness.
9. A.8 — Information for Interested Parties
Control Objective
To ensure that appropriate information about AI systems is provided to interested parties, supporting transparency and informed decision-making.
Key Controls
This category addresses transparency and communication about AI systems:
- AI system disclosure: Informing affected parties when they are interacting with or subject to an AI system. This includes disclosure that AI is being used, what the AI system does, and how it affects the individual.
- Explainability: Providing meaningful explanations of AI system decisions and outputs to affected parties, appropriate to the context and the audience. This does not necessarily mean technical model explanations — it means explanations that help stakeholders understand how a decision was reached and what factors influenced it.
- System documentation: Creating and maintaining documentation that describes AI system capabilities, limitations, intended use, known risks, and performance characteristics. This may take the form of model cards, system cards, or equivalent documentation.
- Communication of changes: Informing relevant interested parties when significant changes are made to AI systems that may affect them.
Implementation guidance: Develop tiered transparency approaches — different stakeholders need different types of information. End users may need simple, plain-language explanations. Regulators may need technical documentation. Business clients may need performance reports and audit results. Use model cards or AI system documentation templates to standardize how information is recorded and communicated.
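A model card is straightforward to standardize as a structured record. The sketch below is a hypothetical minimal schema built from the documentation fields listed above (capabilities, limitations, intended use, known risks, performance); published model card formats typically include more fields:

```python
# Hypothetical minimal model card covering the documentation fields above.
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    system_name: str
    intended_use: str
    capabilities: list[str]
    limitations: list[str]
    known_risks: list[str]
    performance: dict[str, float] = field(default_factory=dict)

card = ModelCard(
    system_name="loan-triage-assistant",
    intended_use="Prioritize loan applications for human review",
    capabilities=["ranks applications by completeness and urgency"],
    limitations=["not validated for business loans"],
    known_risks=["may under-rank applications with sparse credit history"],
    performance={"precision": 0.91, "recall": 0.84},
)
print(card.system_name, card.performance)
```

A structured record like this supports the tiered approach: plain-language fields can be surfaced to end users while the full record goes to regulators or business clients.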
10. A.9 — Use of AI Systems
Control Objective
To ensure that AI systems are used appropriately, with adequate human oversight and in accordance with their intended purpose.
Key Controls
This category governs the operational use of AI systems:
- Intended use and misuse prevention: Defining and documenting the intended use of each AI system and implementing measures to prevent reasonably foreseeable misuse. This includes usage guidelines, access controls, and monitoring for unauthorized or inappropriate use.
- Human oversight: Establishing appropriate levels of human oversight for AI systems based on their risk profile. High-risk systems may require human-in-the-loop (human makes the final decision), while lower-risk systems may use human-on-the-loop (human monitors and can intervene) or human-in-command (human sets parameters and reviews outcomes) approaches.
- AI system monitoring: Implementing continuous monitoring of AI systems during operation to detect performance degradation, drift, anomalies, or unintended behaviors. Monitoring should include technical metrics (accuracy, latency, error rates) and governance metrics (fairness indicators, compliance measures).
- Incident management: Establishing processes for identifying, reporting, investigating, and responding to AI-related incidents. This includes defining what constitutes an AI incident, establishing reporting channels, conducting root cause analysis, and implementing corrective actions.
Implementation guidance: Create an AI system registry that documents every AI system in scope, its intended use, risk level, oversight model, and monitoring arrangements. Establish alert thresholds for monitoring metrics so that deviations are detected and escalated promptly. Train operational staff to recognize AI system issues and follow incident response procedures.
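The registry and the alert thresholds can live in the same structure. The sketch below is hypothetical: the fields mirror the guidance above (intended use, risk level, oversight model, monitoring arrangements), while the system name, metric names, and thresholds are illustrative:

```python
# Hypothetical AI system registry entry with monitoring alert thresholds.
registry = {
    "resume-screening-v2": {
        "intended_use": "shortlist candidates for recruiter review",
        "risk_level": "high",
        "oversight_model": "human-in-the-loop",
        "alert_thresholds": {"accuracy_min": 0.85, "selection_rate_ratio_min": 0.8},
    }
}

def check_metrics(system: str, metrics: dict[str, float]) -> list[str]:
    """Compare live metrics to registry thresholds; return alerts to escalate."""
    t = registry[system]["alert_thresholds"]
    alerts = []
    if metrics["accuracy"] < t["accuracy_min"]:
        alerts.append("accuracy below threshold")
    if metrics["selection_rate_ratio"] < t["selection_rate_ratio_min"]:
        alerts.append("fairness indicator below threshold")
    return alerts

print(check_metrics("resume-screening-v2",
                    {"accuracy": 0.88, "selection_rate_ratio": 0.72}))
```

Note that the check covers both a technical metric (accuracy) and a governance metric (a fairness indicator), matching the two monitoring classes described above.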
11. A.10 — Third-Party and Customer Relationships
Control Objective
To manage AI-related risks in relationships with third parties (suppliers, partners, service providers) and customers across the AI value chain.
Key Controls
AI systems rarely operate in isolation — they depend on third-party components, services, and data. This category addresses supply chain governance:
- Supplier assessment: Evaluating third-party AI component providers (model vendors, data providers, cloud AI services, labeling services) against governance criteria before engagement. This includes assessing their AI governance practices, data handling, and reliability.
- Contractual requirements: Including appropriate AI governance requirements in contracts with suppliers and partners, covering data rights, model performance guarantees, transparency obligations, incident notification, and audit rights.
- Ongoing monitoring: Monitoring third-party AI components and services during the relationship to ensure they continue to meet governance requirements. This includes monitoring for changes in third-party practices, performance degradation, and emerging risks.
- Customer communication: When the organization provides AI systems or services to customers, ensuring that customers receive appropriate information about the AI system's capabilities, limitations, intended use, and governance requirements.
Implementation guidance: Develop a supplier assessment questionnaire that covers AI-specific governance topics. Include AI governance clauses in procurement templates. Establish a third-party risk register for AI components. For organizations that provide AI to customers, create clear documentation of shared responsibilities for AI governance.
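Questionnaire responses can be scored so supplier comparisons stay consistent. A minimal, hypothetical sketch follows; the questions, weights, and pass mark are illustrative and would come from your own procurement and risk criteria:

```python
# Hypothetical supplier assessment scoring for AI-specific governance topics.
WEIGHTS = {
    "documents AI governance practices": 3,
    "discloses training data provenance": 2,
    "commits to incident notification": 3,
    "grants audit rights": 2,
}
PASS_MARK = 0.7  # minimum weighted share of "yes" answers

def assess(answers: dict[str, bool]) -> bool:
    score = sum(w for q, w in WEIGHTS.items() if answers.get(q))
    return score / sum(WEIGHTS.values()) >= PASS_MARK

answers = {
    "documents AI governance practices": True,
    "discloses training data provenance": False,
    "commits to incident notification": True,
    "grants audit rights": True,
}
print("engage" if assess(answers) else "escalate to third-party risk review")
```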
Third-party AI components are a growing source of risk. When you use a pre-trained model from a vendor, integrate an AI API from a cloud provider, or rely on a third party for training data, you inherit their AI governance risks. Annex A.10 controls ensure that you assess, manage, and monitor these inherited risks — because your certification depends on governing AI across your entire value chain, not just the systems you build yourself.
12. Building Your Statement of Applicability
The Statement of Applicability (SoA) is a mandatory document that connects your risk assessment to your control selection. For every Annex A control it documents the following (an example entry is sketched after the list):
- Whether the control is applicable to the organization
- Whether the control is implemented
- The justification for including or excluding the control
- A reference to the implementation details (policy, procedure, or evidence)
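A single SoA row can be captured as a record with exactly these fields. The sketch below shows two hypothetical entries, one included and one excluded; the control names are real Annex A categories, but the justification text and evidence references are illustrative:

```python
# Hypothetical Statement of Applicability entries with the fields above.
soa_included = {
    "control": "A.5 - Assessing impacts of AI systems",
    "applicable": True,
    "implemented": True,
    "justification": "Our AI systems make decisions affecting individuals",
    "reference": "AI Impact Assessment Procedure, AIMS-PROC-005",
}
# An excluded control still appears, with a documented justification.
soa_excluded = {
    "control": "A.10 - Third-party and customer relationships",
    "applicable": False,
    "implemented": False,
    "justification": "No third-party AI components; no AI provided to customers",
    "reference": "Risk assessment RA-2024-01, section 4",
}
for row in (soa_included, soa_excluded):
    print(row["control"], "- applicable:", row["applicable"])
```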
The SoA is a key audit artifact. During certification, auditors review the SoA to verify that:
- The organization has considered every Annex A control
- Control selections are justified by the risk assessment
- Exclusions are reasonable and properly documented
- Controls marked as applicable are actually implemented and operating as intended
Practical tip: Do not treat the SoA as a checkbox exercise. A well-crafted SoA tells the story of your AI governance — it shows auditors that you understand your AI risks and have thoughtfully selected controls to address them. Keep the SoA as a living document that is updated whenever your risk profile, AI systems, or organizational context changes.
13. Next Steps
Understanding Annex A controls is essential for implementing an effective AIMS and achieving ISO 42001 certification. Here is how to put this knowledge into action:
- Assess your readiness. Take the free assessment at baltum.ai to evaluate your current AI governance posture against ISO 42001 requirements.
- Review the requirements. Read our complete breakdown of ISO 42001 clauses to understand the full scope of what the standard requires.
- Plan your implementation. Use our AIMS implementation guide for a step-by-step approach to building your AI Management System.
- Understand the benefits. Review the key benefits of ISO 42001 certification to build the business case for investment.
- Start the certification journey. Follow the certification guide to understand the process from gap analysis to certificate issuance.
Annex A controls are the practical backbone of your AI governance. They translate the management system principles of ISO 42001 into specific, auditable measures that make responsible AI real and verifiable. Master them, and you master the operational core of AI governance.
Ready to get started? Complete the free assessment at baltum.ai and discover exactly where your organization stands on Annex A control readiness.