Artificial intelligence (AI) is transforming industries worldwide. Technologies like generative AI are revolutionizing finance, healthcare, manufacturing and customer service. While AI systems drive efficiency and innovation, they also present challenges in ethics, transparency and security. Organizations must adopt robust AI governance strategies to align with regulatory requirements and stakeholder expectations.

With increasing regulatory scrutiny, businesses need to proactively manage AI risks, including bias, data security and accountability. ISO/IEC 42001:2023 is the latest standard for an artificial intelligence management system (AIMS), offering a structured framework for AI governance. It helps organizations build trust, achieve AI compliance and align with international best practices. The standard supports the responsible development, deployment and operation of AI, a critical factor for successful AI adoption and broader digital transformation.

ISO/IEC 42001 sets the foundation for AI governance and regulatory alignment. It outlines key requirements to help organizations build a trustworthy AI management system. These include risk management, AI system impact assessment, system lifecycle management and third-party supplier oversight. Since its introduction in December 2023, this international standard has provided valuable guidance for responsible AI systems.

For Swiss companies, AI governance is especially vital as the EU AI Act and global regulations demand stricter compliance. Implementing an AI management system under ISO 42001 enables businesses to manage AI risk effectively. With regulations evolving, ISO/IEC 42001 serves as a cornerstone, ensuring organizations foster trust, innovation and compliance in a legally and socially responsible manner.

Reto P. Grubenmann

Director, Head of Certification & Attestation

KPMG Switzerland



ISO/IEC 42001 Certification: The Global Standard for AI Management Systems (AIMS)

How to establish trust, transparency and control in artificial intelligence governance.

Why certification matters

ISO/IEC 42001 certification helps organizations:

  • Build transparent, trustworthy and ethical AI systems
  • Meet compliance obligations such as the EU AI Act
  • Improve risk management and accountability
  • Increase customer and stakeholder confidence
  • Align AI governance with strategic business goals
  • Demonstrate leadership in ethical AI

Accredited certification verifies that a company’s AIMS meets international standards, providing long-term strategic and operational value.


Key components & requirements of ISO/IEC 42001

ISO/IEC 42001 introduces a comprehensive AI framework tailored for organizations developing, deploying, or managing AI systems. The key requirements include:

  • Establishing an AI management system (AIMS)

    A structured framework for governing AI projects, AI models and data governance practices.

  • AI risk management

    Identification, assessment and mitigation of risks associated with AI, including bias, accountability and data protection (see the illustrative sketch after this list).

  • Ethical AI principles

    Encouraging transparency, fairness and accountability in AI development and deployment.

  • Continuous monitoring & improvement

    A process for reviewing AI performance and refining AI governance strategies.

  • Stakeholder engagement

    Promoting responsible AI by involving compliance teams, AI developers and risk management professionals in decision-making processes.


These requirements make ISO 42001 an essential AI certification for companies committed to building trust in AI systems.
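
To make the risk-management requirement more tangible, the following minimal sketch in Python shows one way an organization might structure an AI risk register within an AIMS. The field names, scoring scale and example entry are illustrative assumptions, not terms prescribed by ISO/IEC 42001.

from dataclasses import dataclass
from datetime import date

@dataclass
class AIRisk:
    # One entry in an illustrative AI risk register; fields are assumptions, not prescribed by the standard
    system: str          # AI system or model concerned
    description: str     # e.g. bias, accountability gap, data-protection issue
    likelihood: int      # assumed 1-5 scale
    impact: int          # assumed 1-5 scale
    mitigation: str      # planned or implemented control
    owner: str           # accountable role
    review_date: date    # next scheduled review

    @property
    def score(self) -> int:
        # Simple likelihood x impact scoring, a common convention rather than a requirement of the standard
        return self.likelihood * self.impact

register = [
    AIRisk(
        system="CV screening model",
        description="Potential gender bias in shortlisting",
        likelihood=3,
        impact=4,
        mitigation="Fairness testing before each release; human review of rejections",
        owner="Head of HR Analytics",
        review_date=date(2026, 6, 30),
    ),
]

# Risks above an assumed threshold are escalated for treatment and management review
high_priority = [risk for risk in register if risk.score >= 12]

In practice, a register like this would typically feed the AI system impact assessments and management reviews foreseen by the standard.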

Operational model: the plan-do-check-act approach

ISO/IEC 42001 follows a structured plan-do-check-act (PDCA) approach. This method helps organizations monitor AI systems, make improvements and adapt to new challenges.

      • Plan

        Define the scope of the AI management system, identify applicable controls and evaluate risks and ethical implications.

      • Do

        Implement AI governance policies, ensuring responsible practices like fairness, explainability and data transparency.

      • Check

        Regularly monitor AI performance to ensure compliance with evolving regulations.

      • Act

        Continuously improve AI governance based on performance outcomes and regulatory developments.
         

How ISO/IEC 42001 solves real-world AI challenges

Adopting AI technology requires balancing innovation with governance. Organizations must address challenges such as:

  • Bias & explainability: Ensuring AI systems are fair and transparent to avoid unintended discrimination.
  • Security & intellectual property: Maintaining transparency while protecting proprietary AI models.
  • Third-party AI systems: Managing compliance risks when using external AI solutions.

ISO/IEC 42001 provides solutions through:

Ethics & transparency

The standard helps ensure AI systems are explainable, auditable and actively managed for bias. AI system impact assessments help identify risks before deployment, making it easier for organizations to justify AI-driven decisions to regulators, customers and stakeholders.

Continuous learning & adaptability

AI models continuously evolve, which can lead to unintended consequences. Regular monitoring, audits and performance checks ensure organizations can detect and address issues early, maintaining compliance and reliability.
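
As one concrete illustration of such monitoring, the short Python sketch below compares a production data sample against the reference sample a model was validated on and flags possible data drift. The statistical test, feature values and significance threshold are illustrative assumptions, not requirements of the standard.

import numpy as np
from scipy.stats import ks_2samp

def detect_drift(reference: np.ndarray, production: np.ndarray, alpha: float = 0.05) -> bool:
    # Flag drift if the production distribution differs significantly from the reference;
    # the Kolmogorov-Smirnov test and the alpha threshold are illustrative choices
    result = ks_2samp(reference, production)
    return result.pvalue < alpha

rng = np.random.default_rng(seed=42)
reference_sample = rng.normal(loc=0.0, scale=1.0, size=1000)   # data the model was validated on
production_sample = rng.normal(loc=0.4, scale=1.0, size=1000)  # recent production data (shifted)

if detect_drift(reference_sample, production_sample):
    print("Possible drift detected - trigger a review under the AIMS monitoring process")

In a real AIMS, such checks would typically run on a schedule and feed the "Check" step of the PDCA cycle described above.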

Seamless integration with existing standards

Companies already following ISO/IEC 27001 and 27701 can integrate AI governance into their cybersecurity and data privacy frameworks. This alignment ensures a unified and efficient compliance strategy.

Third-party management

Many companies rely on external AI solutions, introducing additional risk. Vendor assessments, contractual safeguards and independent audits help ensure compliance. These measures hold third-party AI tools to the same rigorous ethical and operational standards as in-house solutions.

The certification process

Achieving ISO/IEC 42001 certification involves several audit phases:

  • Scope definition

    Identify AI systems, services, sites and relevant legal contexts.

  • Risk assessment

    Evaluate technical, ethical and legal risks and define mitigation controls.

  • Documentation review

    Assess internal controls and alignment with ISO 42001.

  • Operational audit

    Confirm implementation, stakeholder roles and system effectiveness.

  • Post-audit measures

    Apply and verify corrective actions as needed.

  • Certification issuance

    Certification is valid for three years, with annual surveillance audits and a re-certification audit in year three.

This process provides a high degree of assurance for AI governance maturity.

How ISO/IEC 42001 aligns with other standards

ISO/IEC 42001 integrates with existing security and compliance frameworks, including:

  • ISO/IEC 27001 – Information security management
  • ISO/IEC 27701 – Privacy information management
  • ISO/SAE 21434 – Cybersecurity engineering for road vehicles (relevant to AI in automotive)

Additional supporting standards include:

  • ISO/IEC 23894 – AI-specific risk management
  • ISO/IEC 5259 – Data quality for analytics and machine learning
  • ISO 31000 – General risk management framework
  • ISO/IEC TR 24027 – Bias assessment and mitigation
  • ISO/IEC TR 24368 – Ethical and societal considerations

This compatibility enables organizations to build AI compliance strategies on top of existing governance structures.


Risk management: ISO, NIST & OECD

OECD Framework: Considers AI risks across socio-technical dimensions such as data, human involvement, deployment context and societal impact.

NIST AI Risk Management Framework (RMF): Focuses on lifecycle stages including data collection, model building, validation and secure deployment.

These frameworks align with ISO/IEC 42001 to ensure AI systems are both technically sound and ethically aligned.

Model validation & Trusted AI

Robust model validation is essential for trustworthy AI. KPMG applies white-box, grey-box and black-box testing to:

  • Assess performance and fairness
  • Detect bias and blind spots
  • Ensure explainability and robustness

This validation is part of KPMG’s Trusted AI Framework, which addresses ethical alignment, oversight and compliance.
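
As a simple illustration of the black-box side of such testing, the Python sketch below computes a demographic parity gap across groups from a set of model decisions. The column names, example data and the 0.10 threshold are illustrative assumptions, not values taken from ISO/IEC 42001 or the Trusted AI Framework.

import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, group_col: str, pred_col: str) -> float:
    # Difference between the highest and lowest positive-decision rates across groups
    rates = df.groupby(group_col)[pred_col].mean()
    return float(rates.max() - rates.min())

# Illustrative data: one model decision per applicant plus a protected attribute
decisions = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B"],
    "approved": [1, 1, 0, 1, 0, 0],  # 1 = positive model decision
})

gap = demographic_parity_gap(decisions, "group", "approved")
if gap > 0.10:  # threshold is an assumption, not a requirement of the standard
    print(f"Fairness gap of {gap:.2f} exceeds the illustrative threshold - investigate for bias")

Equivalent checks could be run for each model release and retained as part of the documented validation evidence an auditor would expect to see.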

How KPMG Switzerland can support your AI compliance journey

Navigating the complexities of AI governance requires expert guidance. KPMG Switzerland offers comprehensive services to help organizations:

  • Assess AI risks and compliance readiness under ISO/IEC 42001.
  • Develop tailored AI governance strategies aligned with ISO AI standards and the EU AI Act.
  • Implement AI management systems to streamline AI certification and regulatory adherence.
  • Support AI projects with responsible AI practices that foster trust and transparency.

KPMG’s AIMS services

To support organizations in building responsible and certifiable AI governance, KPMG offers a comprehensive suite of AIMS-related services. These services are tailored to each organization’s level of maturity and regulatory requirements, and span the full lifecycle of AI management, from design to deployment and monitoring.

Core services include:

  • Pre-Audit on AIMS – Assess readiness for certification and identify governance gaps.
  • AI Risk Management – Evaluate risks such as bias or performance failure across the AI lifecycle.
  • AI Impact Assessment – Align AI systems with societal, environmental and business objectives.
  • Data Governance & Protection – Ensure data privacy, integrity and transparency.
  • Regulatory Compliance – Address Swiss requirements, the EU AI Act and other global legal standards.
  • Ethical Oversight – Build frameworks for fairness, accountability and trust.
  • Performance Monitoring – Continuously validate and optimize AI system behavior.
  • AI Security – Safeguard AI infrastructure and mitigate cyber threats.
  • Control Objectives – Define governance metrics and assurance controls.


KPMG’s tailored AIMS services help organizations not only meet ISO/IEC 42001 certification requirements but also embed lasting trust, transparency and resilience in AI systems. Whether preparing for certification or managing operational risk, these offerings provide the tools needed to lead in responsible AI.

Why businesses should act now

With global AI regulations expanding, implementing ISO/IEC 42001 is a proactive step towards compliance and risk mitigation. Companies that prioritize responsible AI practices today gain a competitive edge, foster trust and prepare for future legal requirements.

Benefits of adopting ISO/IEC 42001 include:

  • Regulatory readiness: Aligns businesses with the EU AI Act and other global frameworks.
  • Enhanced AI security: Strengthens protection against cyber threats and misuse.
  • Competitive differentiation: Demonstrates leadership in ethical AI and fosters trust in AI-driven solutions.

Conclusion: strengthening AI governance in Switzerland

As AI continues to reshape industries, businesses must adopt structured governance frameworks to mitigate risks, ensure responsible AI and maintain regulatory compliance. ISO/IEC 42001 provides a crucial foundation for AI management systems, ensuring ethical and transparent AI development.

Proactively addressing AI compliance and aligning with ISO AI standards help Swiss companies build trust. These efforts also enhance operational resilience and establish them as leaders in responsible AI innovation.

Investing in AI governance today helps organizations stay ahead of regulatory changes while fostering a culture of accountability, transparency and long-term success.

Strengthening AI governance with KPMG

Is your organization prepared to navigate the evolving landscape of AI governance?

ISO/IEC 42001 provides the foundation for responsible AI, but successful implementation requires expert guidance. At KPMG Switzerland, we specialize in helping organizations integrate AI governance frameworks that align with regulatory requirements and business goals.

Explore how ISO/IEC 42001 can benefit your organization by consulting with our Digital Trust & Technology Protection experts.


Meet our experts

Reto P. Grubenmann

Director, Head of Certification & Attestation

KPMG Switzerland


Related articles and more information

EU AI Act

Everything you need to know about the EU AI Act, how it affects businesses, risk-based frameworks and how to comply.

Ensuring compliance when using AI-based tools

Ensuring responsible AI use: key steps for companies to navigate the evolving AI landscape and avoid potential risks.

How AI influences cybersecurity

Discover how AI and ML are revolutionizing the cybersecurity industry and learn how these tools can be used to detect and respond to security threats.

How hackers find your vulnerabilities hidden in plain sight

Learn how hackers use OSINT to find your organization’s vulnerabilities.