
Generative AI is now firmly established in German companies: many have already defined AI strategies, are planning proofs of concept or are implementing initial use cases, as our study "Generative AI in the German economy" shows. Nevertheless, centralised AI governance structures, embedded in a uniform corporate strategy, are often lacking.

The benefits of effective AI governance are clear: it enables fast decision-making and operational agility without compromising on compliance requirements. This is crucial to ensure that introducing organisational structures or units does not create unnecessary bureaucracy. Many companies have already recognised this: we are seeing increasing demand for AI governance implementations.

The organisational structure is at the heart of AI governance. It defines responsibilities as well as specific roles and committees, such as the AI Governance Board, which oversees the ethical aspects of AI. In parallel, the operational organisation focuses on developing clear, implementable processes that guide the life cycle of AI systems, from conception through deployment to monitoring.

An effective governance structure also includes the position of Chief AI Officer (CAIO), a role already established successfully in the USA. The CAIO's central function within corporate management is to lead and monitor the strategic direction and implementation of AI initiatives. CAIOs need a deep understanding of the technology as well as the ability to integrate it into the business strategy and maximise its potential for value creation.

Another key step towards successful AI governance is to identify the AI interfaces within the company. It is important to understand which departments use AI, how advanced the respective implementation is and whether individual use cases are merely being collected or actually implemented. This inventory makes it possible to systematically take stock of AI systems, assess their risks and take appropriate measures. These must then be integrated into the existing internal control system (ICS) and compliance management system (CMS).
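Conceptually, such an inventory can be maintained as a structured register that records each system's owner, implementation status and risk classification. The following minimal Python sketch is illustrative only: the field names and risk categories are assumptions, loosely modelled on the EU AI Act's risk tiers, not a prescribed format.

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskLevel(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    PROHIBITED = "prohibited"

@dataclass
class AISystem:
    """One entry in the company-wide AI inventory (illustrative fields)."""
    name: str
    department: str
    status: str              # e.g. "idea", "proof of concept", "in production"
    risk_level: RiskLevel
    mitigations: list = field(default_factory=list)

def high_risk_systems(register):
    """Filter the inventory for systems that need enhanced controls."""
    return [s for s in register if s.risk_level == RiskLevel.HIGH]

register = [
    AISystem("Support chatbot", "Customer Service", "in production",
             RiskLevel.LIMITED),
    AISystem("Credit scoring", "Risk", "proof of concept",
             RiskLevel.HIGH),
]
```

A register like this gives the ICS and CMS a single point of reference for which systems exist and which require additional measures.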

In view of advancing digitalisation and the increased use of AI, integrated risk management is also becoming increasingly important. This also involves close collaboration between different departments and functions with the aim of ensuring transparency across end-to-end (E2E) processes. This is crucial in order to recognise potential risks from the use of AI at an early stage and manage them effectively.

In this context, the Executive Board and management also play a decisive role. It is their responsibility to define clear guidelines that ensure the safe and ethical use of AI technologies. They should create specific framework guidelines that enable all departments to develop and implement AI initiatives in line with the company's values and objectives.

For successful AI governance, companies should distinguish between AI products or services that are offered to customers and the internal use of the technology within the company, for example to increase efficiency. This is because both areas of application require a specific approach in order to adequately address the respective risks and ensure effective AI governance.

The EU AI Act entered into force in mid-2024 and forms the decisive regulatory framework for the use of AI for numerous companies. The Institute of Public Auditors in Germany (IDW) has presented a comprehensive framework for auditing AI systems, which is summarised in IDW PS 861.

Among other things, this auditing standard enables the assessment of the materiality of AI systems used and the identification of potential security gaps. This can strengthen confidence in the use of AI systems. Accordingly, this could serve companies as an additional regulatory framework for the use of AI alongside the EU AI Act.

In order to fulfil the minimum requirements of IDW standard PS 861, companies are obliged to take measures in the following areas:

  • AI governance/AI compliance/AI monitoring
  • Data
  • AI algorithm/AI model
  • AI applications
  • IT infrastructure

BaFin likewise emphasises in its publications that AI applications entail regulatory challenges. In its position paper on big data and AI (2021), it highlights the following key aspects:

  • Accountability: Despite automated processes, responsibility for decisions always remains with the supervised companies.
  • Bias and discrimination: Companies must prove that their AI systems do not exhibit any systematic bias or discrimination.
  • IT security: AI systems must be robust against attacks and manipulation.

Accordingly, BaFin expects financial institutions to set up suitable internal control systems to ensure compliance with these requirements.
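To illustrate what such a control could test at the technical level, the following sketch computes a simple demographic parity gap, the difference in approval rates between groups, as one possible building block of a bias check. The data, the threshold and the function name are hypothetical and not prescribed by BaFin.

```python
def demographic_parity_gap(decisions, groups):
    """Difference between the highest and lowest approval rate
    across groups (0.0 means perfectly equal rates)."""
    rates = {}
    for g in set(groups):
        selected = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(selected) / len(selected)
    vals = list(rates.values())
    return max(vals) - min(vals)

# Toy data: 1 = approved, 0 = declined
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_gap(decisions, groups)
# Flag for review if the gap exceeds an internally defined threshold
needs_review = gap > 0.2
```

In practice, a single metric is not sufficient evidence of the absence of discrimination; such checks would be one component of a broader validation and documentation process.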

In addition to BaFin, the State Commissioner for Data Protection and Freedom of Information (LDI) has also repeatedly emphasised the importance of data protection-compliant AI. AI systems may only be fed with data that is processed on a lawful basis. Personal data, for which high requirements are placed on anonymisation and pseudonymisation, is particularly sensitive.
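One common technical building block for pseudonymisation is replacing direct identifiers with a keyed hash before data enters an AI pipeline. The following is a minimal sketch; key handling is deliberately simplified here, and whether such a measure satisfies the legal requirements depends on the overall setup.

```python
import hashlib
import hmac

# Illustrative only: in practice the key must be stored and managed
# separately from the pseudonymised data.
SECRET_KEY = b"store-this-key-separately"

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a stable keyed hash (HMAC-SHA256).
    The same input always maps to the same pseudonym, but the original
    value cannot be recovered from the output without the key."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

record = {"customer_id": "DE-12345", "balance": 1000}
record["customer_id"] = pseudonymise(record["customer_id"])
```

Because the mapping is stable, records belonging to the same person can still be linked for analysis and model training without exposing the identifier itself.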

Our recommendation: AI governance framework

AI governance is an essential building block for the responsible use of artificial intelligence, especially in regulated industries such as the financial sector. It is therefore advisable to develop a company-specific AI governance framework. It helps an organisation to meet its diverse internal needs while adhering to all compliance requirements, responsibly managing the complexity of AI deployment and promoting innovation within a governance ecosystem.

Our service

With the Trusted AI Framework, we have developed a better practice approach to overcome the complex challenges of AI governance. It is based on ten basic principles: Accountability, Data Integrity, Explainability, Fairness, Privacy, Reliability, Operational Security, Cybersecurity, Sustainability and Transparency.

Processes have been defined and robust controls implemented for each of these principles. In addition, the processes and controls have been harmonised with regulatory requirements, in particular the EU AI Act and the GDPR, in order to reduce regulatory risks.

* Legal services are provided by KPMG Law Rechtsanwaltsgesellschaft.