Welcome to KPMG’s first SSM Insights Newsletter of 2024. This year will see the SSM celebrate its 10th anniversary. It was in November 2014 that the ECB took over direct supervision of the Euro area’s significant institutions, marking the establishment of the first pillar of the banking union.
May 2024
A new framework
On 13 March 2024, the European Parliament formally approved the EU AI Act, the first comprehensive artificial intelligence (AI) legislation passed by any major jurisdiction in the world.
As a recent KPMG paper outlined, the AI Act’s focus is on protecting safety and fundamental rights. It introduces a tiered system of regulatory requirements for different AI applications, based on their level of risk. While many AI systems will be left essentially unregulated, those considered high risk will be subject to stringent safeguards — and those deemed contrary to European values will be largely prohibited.
For banks, the most significant element of the Act is the designation of AI credit scoring systems as high risk, on account of the potential for unfair discrimination against individuals or groups. (An analogous provision classes AI systems for pricing health or life insurance policies as high risk too.) Such AI systems must meet high standards of robustness and accuracy, must operate within a strong risk management framework, and must be designed to ensure human oversight and proper understanding of their outputs. These requirements will apply to new systems deployed from two years after the AI Act takes effect.
AI supervision and compliance
The AI Act recognises that banks and their credit models are already heavily regulated. So banks can satisfy many of its obligations by complying with existing regulatory requirements on model risk management and governance.
Supervision, however, may be complicated by the multi-faceted institutional architecture the AI Act establishes. For most industries, AI oversight (including checking that providers of high-risk AI systems have obtained the necessary safety certification before deployment) will be the responsibility of new national AI authorities. However, for financial services firms, European Union (EU) countries can allocate this task either to their national AI authority or to existing national financial supervisors. Meanwhile, the European Central Bank (ECB), supervisor of Europe’s significant institutions, has no role in supervising AI Act requirements — but will continue to scrutinise credit models from a prudential perspective.
This complex regulatory architecture raises the possibility that banks using AI-powered credit models will find the same models being supervised by multiple national and European bodies, with potentially very different cultures and core expertise. The various authorities involved should therefore coordinate their activities effectively to avoid imposing duplicative or even contradictory requirements on firms, and to ensure a consistent supervisory approach to AI across Europe.
Challenging the model for model risk
The advent of AI may also require a more general shift in approaches to model risk management and supervision. Hitherto, the primary focus has typically been on ensuring model soundness ex-ante, via careful model design, validation and backtesting. Once risk controllers and supervisors have been satisfied with a model’s robustness, first line staff have generally been able to rely heavily on its outputs in making decisions such as loan approvals.
However, AI models’ much greater complexity and capacity for self-engineering may challenge this approach. A system that employs vast datasets and chooses its own parameters may be much more difficult to validate, while the value of ex-ante approval will decline as a model learns and adjusts the statistical relationships it uses to produce its outputs. These features of AI may require greater emphasis on ex-post risk management, to ensure banks can properly interpret — and where necessary, challenge — their AI systems’ outputs before using them to make business decisions.
The AI Act recognises this dynamic, both in its requirements for high-risk models to be designed and documented so that users can properly interpret their outputs, and in its ‘AI literacy’ requirement for firms to ensure their staff have sufficient expertise to use AI systems appropriately. At the supervisory level, the Chair of the ECB’s Supervisory Board, Claudia Buch, similarly said in a recent interview that the ECB expects banks to demonstrate that they do not just ‘blindly’ follow AI systems’ recommendations when making decisions.
Implications for banks
The adoption of the AI Act is a key milestone in Europe’s embrace of a technology that will have a profound — perhaps revolutionary — impact on our economy and society. The Act sets the ground rules for developing and deploying AI solutions that deliver higher quality services and greater efficiency, to the benefit of customers across industries, including financial services. Now that the Act is in place, banks should take three key steps to prepare for the AI future: