KPMG AG Wirtschaftsprüfungsgesellschaft is expanding its Trusted AI offering with new audit and advisory services for artificial intelligence (AI). The aim is to support companies in using AI systems in a reliable, legally compliant and responsible manner.
The new offering focuses on three areas:
- AI Assurance: KPMG checks whether AI systems make comprehensible and fair decisions, whether responsibilities are clearly defined, and whether suitable control mechanisms are in place to minimize bias and uphold ethical standards. The audits include model validations, bias analyses and the traceability of decision paths using established explainability tools. In addition to its own Trusted AI Framework, KPMG draws on internationally recognized standards such as ISO/IEC 42001 and national standards such as IDW PS 861 for auditing artificial intelligence.
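To illustrate what a bias analysis of the kind mentioned above can look like in its simplest form, the following sketch computes the demographic parity difference, i.e. the gap in positive-outcome rates between two groups. The data, the loan-approval framing and the function name are hypothetical illustrations, not KPMG's actual audit tooling.

```python
# Minimal sketch of a demographic parity check (hypothetical data,
# not an actual KPMG audit procedure).

def demographic_parity_difference(outcomes, groups, positive=1):
    """Absolute difference in positive-outcome rates between two groups."""
    rates = {}
    for g in set(groups):
        selected = [o for o, grp in zip(outcomes, groups) if grp == g]
        rates[g] = sum(1 for o in selected if o == positive) / len(selected)
    a, b = rates.values()
    return abs(a - b)

# Hypothetical loan-approval decisions for two applicant groups.
decisions = [1, 1, 0, 1, 0, 1, 1, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_difference(decisions, groups)
```

A real audit would apply such metrics to production decision logs and compare the gap against a documented tolerance threshold.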
- AI Governance: Beyond the technical audit, KPMG supports companies in the strategic and organizational alignment of AI - from development and implementation to the ongoing monitoring of AI systems. The focus is on responsible design and an effective governance structure integrated across departments. KPMG helps organizations manage AI as a controlled corporate asset - with clear responsibilities, approval processes and a resilient control architecture along the entire AI lifecycle. The central element is the KPMG Trusted AI Framework, which combines regulatory, ethical and operational requirements and creates transparency regarding roles, responsibilities and control mechanisms - for example through defined approval processes, bias reviews and model fact sheets that document training data and decision logic. For operational implementation, governance and control processes are mapped in common platforms or in KPMG's own solutions such as the KPMG AI Cockpit. The cockpit simplifies review and approval processes, makes responsibilities transparent and maintains a central inventory of all AI systems. In this way, companies gain efficiency through automated workflows and real-time reporting. AI governance thus becomes an integral part of corporate and risk management - and creates the basis for shaping innovation responsibly.
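The model fact sheets mentioned above can be made machine-checkable, which is one way an inventory tool could flag incomplete documentation. The following sketch shows a hypothetical fact-sheet structure with a completeness check; the field names and example values are assumptions, not a KPMG-defined schema.

```python
# Hypothetical model fact sheet with a completeness check.
# Field names and values are illustrative assumptions only.

REQUIRED_FIELDS = {"model_name", "owner", "training_data", "decision_logic",
                   "approval_status", "last_bias_review"}

fact_sheet = {
    "model_name": "credit-scoring-v2",          # hypothetical system
    "owner": "Risk Analytics Team",
    "training_data": "Loan applications 2019-2023, anonymized",
    "decision_logic": "Gradient-boosted trees; top features documented",
    "approval_status": "approved",
    "last_bias_review": "2024-11-01",
}

def missing_fields(sheet):
    """Return the required documentation fields absent from a fact sheet."""
    return sorted(REQUIRED_FIELDS - set(sheet))
```

An inventory of such records supports the approval workflows and central registry the text describes: a system with missing fields can be blocked from approval automatically.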
- Gap and readiness assessments: Existing AI systems are examined for potential vulnerabilities, from security gaps to possible bias risks. Established frameworks are used, supplemented by fairness checks and robustness tests to ensure practical applicability. KPMG combines technical testing procedures with governance assessments to holistically evaluate the maturity level of AI systems. The focus is on data quality, model robustness, traceability of decisions and compliance with the requirements of the EU AI Act. The assessment uncovers both technical and organizational gaps - such as insufficient documentation, a lack of human oversight or deficits in risk and incident management. The results can be visualized in the KPMG AI Cockpit, enabling prioritized action planning with clear responsibilities and deadlines.
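A robustness test of the kind referenced above typically checks that small input perturbations do not flip a model's decision. The sketch below uses a toy threshold model as a stand-in for an audited system; the noise level, trial count and sample data are hypothetical choices for illustration.

```python
# Minimal robustness-test sketch: does a small random perturbation
# of the inputs change the model's decision? The model and parameters
# below are illustrative stand-ins, not an actual audited system.
import random

def model(features):
    """Toy scoring model: approve if the weighted sum exceeds 0.5."""
    return 1 if 0.6 * features[0] + 0.4 * features[1] > 0.5 else 0

def robustness_rate(samples, noise=0.01, trials=20, seed=0):
    """Fraction of samples whose decision survives small random noise."""
    rng = random.Random(seed)
    stable = 0
    for x in samples:
        base = model(x)
        if all(model([v + rng.uniform(-noise, noise) for v in x]) == base
               for _ in range(trials)):
            stable += 1
    return stable / len(samples)

# Hypothetical input samples well away from the decision boundary.
samples = [[0.9, 0.8], [0.1, 0.2]]
rate = robustness_rate(samples)
```

In practice, such tests would run against the production model with domain-appropriate perturbations, and the stability rate would feed into the maturity evaluation alongside the governance findings.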