
      Artificial Intelligence (AI) is no longer a future concept – it’s a present reality, reshaping how businesses operate, make decisions and deliver value.

      From optimizing product portfolios and financial processes to unlocking insights from complex data sets, AI is rapidly becoming a core driver of innovation and competitive advantage across industries. 

      However, as AI becomes embedded in products and business-critical processes and decision-making, organizations must move beyond experimentation and focus on reliability and trust. The rapid deployment of AI applications has exposed organizations to new forms of operational, ethical and regulatory risk. As a result, key priorities include combating AI bias, improving model transparency and complying with regulations and standards such as the EU AI Act or ISO/IEC 42001:2023. 

      To stay resilient and competitive in the long term, organizations must ensure that AI‑enabled solutions/products and processes are reliable, controllable and auditable. Without strong AI risk management and assurance, the benefits of AI can quickly be undermined by loss of trust, regulatory challenges and reputational damage. 

      Stefan Wälti

      Partner, Head of Assurance Technology

      KPMG Switzerland

      From AI adoption to trusted decision‑making

      AI’s advantage and transformative power lie in its versatility, from AI-powered products to improved operational efficiency and insights generated from large datasets. At the same time, AI solutions and their associated risks evolve in parallel. Unintended bias, lack of transparency and regulatory non-compliance are just a few of the challenges organizations must address. 

      AI governance has therefore become essential. With regulations such as the EU AI Act setting the direction for AI governance in the EU, and parallel developments in Switzerland around AI regulation and digital responsibility, organizations must act now to ensure alignment. 

      In Switzerland, a forward-looking approach to AI regulation is emerging, with a focus on AI ethics in auditing, accountability and cross-sector AI governance. Companies must navigate both Swiss legal frameworks and broader EU regulations to remain compliant. 

      Current state: The reality of AI implementation 

      More than 75% of businesses actively leverage some form of AI. However, almost none have a robust governance framework in use. 

      Regulatory landscapes such as the EU AI Act are setting stringent standards, but compliance remains inconsistent. 
       

      Most organizations lack clarity on managing AI risks, especially those related to ethical use, data privacy and accountability. 
       

      AI assurance in Switzerland is still at an early stage, but leading organizations are beginning to adopt AI management systems (AIMS) that combine regulatory compliance, technical validation and ethical guidelines. In addition, some companies are starting to issue attestation reports over their AI solutions to demonstrate to their stakeholders that they have the right processes and controls in place. 

      Businesses must approach AI adoption with a dual focus – leveraging its capabilities while embedding assurance practices. Without clear assurance structures, many organizations struggle to rely on AI for business‑critical decisions and long‑term resilience. 

      Why AI Assurance is a strategic priority

      What is AI assurance?

      AI assurance is the structured process of evaluating, monitoring and communicating the reliability and regulatory alignment of AI solutions. It is not only about AI audits; it is about building trustworthy AI by aligning with ethical and legal expectations.

      Key benefits of AI Assurance:

      • Governance

        Ensuring AI operates within clear ethical and operational frameworks, with defined ownership, accountability and transparency for AI‑supported decisions. 

      • Compliance

        Meeting international regulations and standards (e.g. EU AI Act and ISO/IEC 42001:2023) while reducing uncertainty in an evolving regulatory environment. 

      • Risk Management

        Identifying and addressing AI‑related risks early across the entire lifecycle, helping organizations prevent issues rather than reacting to failures or incidents. 

      • Trust

        Transparent and auditable AI processes build confidence among management, regulators and other stakeholders. This enables responsible scaling of AI and supports long‑term resilience.

      Together, these benefits make AI assurance a foundation for resilient decision‑making and long‑term competitiveness. 

      Challenges in achieving AI Assurance

      Organizations across industries are increasingly embedding AI into their operations and products. Nearly 75% of businesses report actively using AI in areas like finance, IT and customer products, with this number expected to rise sharply in the coming years. As AI moves from isolated use cases into value pools and core business processes, assurance practices have not evolved at the same pace. 

      Key challenges include: 

      • Complex standards

        Navigating global frameworks such as ISO/IEC 42001 and emerging AI regulations.

      • Technical limitations

        Limited explainability and insufficient mechanisms for detecting and mitigating AI bias.

      • Cultural resistance

        Difficulty prioritizing responsible AI practices over speed or cost pressures.

      • Data quality and bias

        Training AI on unbiased, representative data remains challenging but essential.

      • Explainability

        Many AI models function as “black boxes,” complicating transparency, accountability and fairness.

      • Regulatory compliance

        Keeping pace with evolving regulations is a major and ongoing challenge. 

      The absence of robust governance frameworks leaves many organizations exposed to risks ranging from non-compliance to reputational damage. More importantly, it limits their ability to rely on AI with confidence when making business‑critical decisions. 

      Bridging the gap: Where we are and where we need to be

      Bridging this gap is essential to building trusted and resilient AI over time. Effective AI assurance requires targeted action to address existing shortcomings.

      The following overview summarizes the current state of practice and outlines recommended approaches to help organizations move toward responsible and compliant AI deployment: 

      1. Risk identification


      Current state: AI-specific risks such as bias and model drift are often overlooked. 

      Recommended approach: Organizations should apply comprehensive frameworks – such as KPMG’s Trusted AI framework – to identify and manage technical, operational, and reputational risks associated with AI solutions. 

      2. Stakeholder engagement


      Current state: AI governance activities are frequently siloed, with limited collaboration across functions. 

      Recommended approach: Foster cross-functional cooperation among data scientists, risk and IT specialists, legal teams and executive leadership to ensure aligned and responsible AI governance. 

      3. Regulatory compliance


      Current state:
      Compliance efforts are often reactive and inconsistent, with significant variation across jurisdictions. 

      Recommended approach: Proactively align AI solutions with relevant global or local regulatory requirements from the outset, covering both development and deployment phases. 

      4. Ongoing monitoring


      Current state: Many organizations focus primarily on pre-deployment controls, with limited continuous oversight once AI systems are live. 

      Recommended approach: Implement robust, continuous monitoring mechanisms to ensure fairness, transparency and effectiveness throughout the entire AI lifecycle. 
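      A common building block for such continuous monitoring is a statistical drift signal. One widely used example is the Population Stability Index (PSI), which compares the distribution of a model input or score at validation time against live production data; thresholds of roughly 0.1 (investigate) and 0.25 (significant shift) are common rules of thumb. The sketch below is illustrative only, using synthetic data, and is not tied to any specific monitoring platform or to KPMG's methodology:

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI: a simple drift score between a reference sample
    (e.g. validation data) and live production data."""
    # Bin edges taken from the reference distribution (deciles by default)
    edges = np.percentile(expected, np.linspace(0, 100, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range live values
    exp_frac = np.histogram(expected, bins=edges)[0] / len(expected)
    act_frac = np.histogram(actual, bins=edges)[0] / len(actual)
    # Floor the fractions to avoid log(0) on empty bins
    exp_frac = np.clip(exp_frac, 1e-6, None)
    act_frac = np.clip(act_frac, 1e-6, None)
    return float(np.sum((act_frac - exp_frac) * np.log(act_frac / exp_frac)))

rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, 10_000)    # distribution at validation
live_ok = rng.normal(0.0, 1.0, 10_000)      # live data, no drift
live_drift = rng.normal(1.0, 1.0, 10_000)   # live data with a mean shift
```

      In practice, a score like this would run on a schedule per model input and feed an alerting workflow, so that drift triggers review rather than silent degradation.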

      AI assurance in business-critical platforms

      AI is increasingly embedded in Software-as-a-Service (SaaS) products and business‑critical platforms such as ERP systems. These products and platforms support customer needs and core processes, including financial reporting, controlling, planning and operational decision‑making.

      As AI becomes part of these environments, the requirements for trust, control and transparency increase significantly. Errors, bias or limited explainability can directly affect financial results, regulatory compliance and management decisions.

      AI assurance helps organizations ensure that AI‑enabled products and ERP processes remain reliable, auditable and aligned with governance, regulatory and customer expectations. By integrating assurance into existing control environments, organizations strengthen resilience where it matters most – at the core of their businesses and operations. 

      KPMG’s proven approach to AI Assurance 

      Addressing the gaps in AI assurance requires targeted and coordinated actions. To overcome these challenges, KPMG offers a proven AI assurance methodology.

      The approach focuses on embedding AI assurance into existing governance, risk and control environments, rather than treating it as a standalone initiative.

      The path to AI assurance involves the following steps:

      • Inventory and risk categorization

        Map all AI models and categorize them by impact, regulatory exposure and model risk. 

      • Establish governance frameworks

        Establish ethical, regulatory-aligned AI governance based on international standards. 

      • Conduct gap analyses

        Evaluate solutions against relevant benchmarks (e.g. ISO/IEC 42001:2023) to identify weaknesses in fairness, transparency and data protection. 

      • Automated monitoring

        Implement real-time controls to monitor model accuracy, detect drift and identify AI transparency or explainability gaps. 


      • Data privacy and security

        Ensure all training and operational data complies with data protection requirements and minimize vulnerability to misuse. 

      • Continuous oversight

        Maintain up-to-date AI solution inventories and monitor compliance throughout the entire AI lifecycle. 

      • Employee AI training

        Build internal capacity in AI compliance, ethics and AI risk management across relevant functions.  

      • Independent AI Audits

        Engage external auditors to perform independent assessments and demonstrate regulatory alignment to stakeholders. 
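      The first step above, an inventory with risk categorization, can be as simple as a structured register of models with a consistent tiering rule. The sketch below is a hypothetical illustration: the tier names are loosely modelled on EU AI Act risk categories, and the categorization rule, model names and owners are invented for the example, not drawn from KPMG's framework:

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    # Illustrative tiers, loosely inspired by EU AI Act risk categories
    MINIMAL = 1
    LIMITED = 2
    HIGH = 3

@dataclass
class AIModelRecord:
    name: str
    owner: str                        # accountable business owner
    use_case: str
    impacts_financial_reporting: bool
    processes_personal_data: bool
    tier: RiskTier = field(init=False)

    def __post_init__(self):
        # Deliberately simple categorization rule for illustration:
        # financial-reporting impact dominates, then personal data
        if self.impacts_financial_reporting:
            self.tier = RiskTier.HIGH
        elif self.processes_personal_data:
            self.tier = RiskTier.LIMITED
        else:
            self.tier = RiskTier.MINIMAL

inventory = [
    AIModelRecord("invoice-matching", "Finance", "AP automation", True, False),
    AIModelRecord("chat-assistant", "IT", "Internal helpdesk", False, True),
    AIModelRecord("doc-tagger", "Operations", "Archive tagging", False, False),
]

# Higher tiers would get more frequent reviews and independent audits
high_risk = [m.name for m in inventory if m.tier is RiskTier.HIGH]
```

      The value of such a register is less in the code than in the discipline: every model has a named owner and a tier, and the tier drives how often the later steps (gap analysis, monitoring, independent audit) are applied.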

      How do organizations embed responsible AI in everyday decisions?

      AI assurance goes beyond compliance. It helps embed clear roles, consistent processes and accountability into how AI is used on a daily basis.

      Practical measures include: 

      • Investing in talent

        Building skills in AI governance, ethics and risk management across relevant teams. 

      • Transparent communication

        Clearly explaining how AI solutions are governed, monitored and assured. 

      • Collaboration

        Ensuring close cooperation between business, technology, risk and compliance functions. 

      Build confidence in AI to support better decisions

      AI assurance enables organizations to rely on AI when it matters most. It provides confidence in AI‑enabled products and processes, even in complex and highly regulated environments.

      By strengthening trust and control, AI assurance supports long‑term resilience and helps organizations remain competitive as AI becomes embedded in core business processes and customer products.

      Make your AI trustworthy and resilient

      We help organizations assess where AI assurance is needed, strengthen governance and embed AI assurance into existing control environments.  

      KPMG is among the first organizations globally to achieve ISO/IEC 42001 certification for AI Management Systems, reinforcing our role in supporting trusted and resilient AI adoption. 

      Meet our expert

      Stefan Wälti

      Partner, Head of Assurance Technology

      KPMG Switzerland

      Related articles and more information

      Successful digital implementations require aligning innovation with IT and AI compliance to manage rising risks and regulatory change.

      Enhance your relevance and competitiveness with our expertise in reliable and future-proof SOC reporting.