
Decoding the EU Artificial Intelligence Act

Understanding the AI Act’s impact and how you can respond.



Artificial Intelligence (AI) offers new benefits to society and businesses and is reshaping workplaces and key industries. The push to harness the transformative potential of AI and automation is underway. However, amid the global proliferation of AI in business and daily life, concerns about ethical use and risk are emerging. Trust remains an issue: in the Trust in artificial intelligence global study, three in five respondents expressed wariness about AI systems, and 71 percent said they expect AI to be regulated.

In response, the European Union (EU) has made significant strides with a provisional agreement on the groundbreaking Artificial Intelligence Act (AI Act), which is anticipated to set a new global standard for AI regulation. Envisioned to become law in 2024, with most AI systems needing to comply by 2026, the AI Act takes a risk-based approach to safeguard fundamental rights, democracy, the rule of law, and environmental sustainability.

The EU's AI Act aims to strike a delicate balance, fostering AI adoption while upholding individuals' rights to responsible, ethical, and trustworthy AI use. This paper explores the potential impact of the AI Act on organizations, delving into its structure, obligations, compliance timelines, and suggesting an action plan for organizations to consider.



AI holds immense promise to expand the horizon of what is achievable and to impact the world for our benefit — but managing AI’s risks and potential known and unknown negative consequences will be critical. The AI Act, which was finalized in 2024, aims to ensure that AI systems are safe, respect fundamental rights, foster AI investment, improve governance, and encourage a harmonized single EU market for AI.

The AI Act's definition of AI is broad and covers a wide range of technologies and systems. As a result, many organizations will likely be significantly impacted by the AI Act. Most obligations are expected to take effect in August 2026. However, prohibited AI systems must be phased out by February 2025, and the rules governing general-purpose AI will apply from August 2025.1

The AI Act applies a risk-based approach, dividing AI systems into different risk levels: unacceptable, high, limited and minimal risk.2


High-risk AI systems are permitted but subject to the most stringent obligations. These obligations will affect not only users but also so-called 'providers' of AI systems. Under the AI Act, a 'provider' is any body that develops an AI system, including organizations that develop AI systems strictly for internal use. Importantly, an organization can be both a user and a provider.


Providers will likely need to ensure compliance with strict standards concerning risk management, data quality, transparency, human oversight, and robustness.


Users are responsible for operating these AI systems within the AI Act's legal boundaries and according to the provider's specific instructions. Their obligations cover the intended purpose and use cases, data handling, human oversight, and monitoring.

New provisions have been added to address recent advancements in general-purpose AI (GPAI) models, including large generative AI models. These models can be used for a variety of tasks and integrated into a large number of AI systems, including high-risk systems, and are increasingly becoming the basis for many AI systems in the EU. To account for the wide range of tasks AI systems can accomplish and the rapid expansion of their capabilities, it was agreed that GPAI systems, and the models they are based on, may have to adhere to transparency requirements. Additionally, high-impact GPAI models, which possess advanced complexity, capabilities, and performance, will face more stringent obligations. This approach will help mitigate the systemic risks that may arise from these models' widespread use.4

Existing Union laws, for example, on personal data, product safety, consumer protection, social policy, and national labor law and practice, continue to apply, as well as Union sectoral legislative acts relating to product safety. Compliance with the AI Act will not relieve organizations from their pre-existing legal obligations in these areas.

Organizations should take the time to create a map of the AI systems they develop and use and categorize their risk levels as defined in the AI Act. If any of their AI systems fall into the limited, high or unacceptable risk category, they will need to assess the AI Act’s impact on their organization. It is imperative to understand this impact — and how to respond — as soon as possible.
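As a starting point, the mapping exercise described above can be sketched as a simple inventory that records each AI system's role (provider, user, or both) and its risk tier, then flags the systems that warrant a closer impact assessment. The following Python sketch is purely illustrative; the class names, tier labels, and example systems are hypothetical, not terminology mandated by the AI Act.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    """The four risk levels of the AI Act's risk-based approach."""
    UNACCEPTABLE = "unacceptable"  # prohibited practices
    HIGH = "high"                  # permitted, strictest obligations
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no new obligations

@dataclass
class AISystem:
    name: str
    role: str        # "provider", "user", or "both"
    tier: RiskTier

def systems_needing_assessment(inventory: list[AISystem]) -> list[AISystem]:
    """Flag systems in the limited, high, or unacceptable tiers,
    which trigger an assessment of the AI Act's impact."""
    in_scope = {RiskTier.UNACCEPTABLE, RiskTier.HIGH, RiskTier.LIMITED}
    return [s for s in inventory if s.tier in in_scope]

# Hypothetical inventory for illustration only.
inventory = [
    AISystem("CV-screening model", "provider", RiskTier.HIGH),
    AISystem("Customer chatbot", "user", RiskTier.LIMITED),
    AISystem("Spam filter", "user", RiskTier.MINIMAL),
]

flagged = systems_needing_assessment(inventory)
print([s.name for s in flagged])  # ['CV-screening model', 'Customer chatbot']
```

In practice this inventory would be far richer (intended purpose, data sources, oversight controls, provider instructions), but even a minimal register like this makes the compliance scope visible and highlights where provider-level obligations apply.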


Related Content

Decoding DORA for European banks

Preparing for compliance challenges and the ECB's evolving role


Revised ECB Guide to internal models

Three key impacts for banks and what to expect going forward

Get in touch


Laurent Gobbi

Global Trusted AI & Tech Risk Leader

KPMG en France

David Rowlands

Global Head of Artificial Intelligence

KPMG International


Transforming for a future of value

KPMG Connected Enterprise

KPMG’s customer centric, agile approach to digital transformation, tailored by sector


KPMG Powered Enterprise

Be the competition that others want to beat — with outcome-driven functional transformation made possible by KPMG Powered Enterprise.

KPMG Trusted AI

How to build and sustain the trust of your stakeholders.



KPMG Elevate

Unlock financial value quickly and confidently.




1. European Commission. (December 12, 2023). Artificial Intelligence – Questions and Answers.

2. European Council. (December 9, 2023). Artificial Intelligence Act Trilogue: Press conference – Part 4.

3. European Parliament. (March 2023). General-purpose artificial intelligence.

4. European Commission. (December 12, 2023). Artificial Intelligence – Questions and Answers.


Connect with us

KPMG combines our multi-disciplinary approach with deep, practical industry knowledge to help clients meet challenges and respond to opportunities. Connect with our team to start the conversation.
