Until now, the maturity of organizations developing AI applications has existed on a continuum. In this blog, we show that the proposed AI Act will introduce a maturity threshold, dividing the field into two groups: organizations that can comply with the regulation while continuing to develop AI systems themselves, and those that cannot. We highlight the elements of the AI Act proposal that are most likely to prove problematic, given common practice in low-maturity organizations.
What is the Artificial Intelligence Act?
In recent years, Artificial Intelligence (AI) has attracted colossal investment – reaching the $500 billion mark worldwide in 2023[1] – which has led to major breakthroughs and the development of foundation models such as those behind ChatGPT. The potential contribution of AI to the global economy is estimated to reach $15.7 trillion by 2030.[2] The ever-increasing use of AI in areas as diverse as healthcare, financial services, and retail has underscored the need to control potential risks and abuses, leading to the development of AI-specific legislative and regulatory frameworks. With its Artificial Intelligence Act (AI Act), the European Union aims to be a front-runner in this regard.
The proposal for the AI Act[3] defines AI systems as "software that can generate results that influence its environment, and that is created by machine learning, logic- and knowledge-based, or statistical approaches." Given this very broad definition (which is still expected to change), the categorization of AI systems according to their risk level is vital, as is the application of requirements differentiated by that level. It seems that many of the most impactful systems will be ‘high-risk’ (i.e., posing a risk to the health and safety or the fundamental rights of natural persons), which will trigger a slew of requirements. These are the requirements we focus on below.
Pitfalls organizations might struggle to avoid
While there is no final regulation yet, some of the legislator’s intentions are already clear. In this blog, we connect those intentions regarding high-risk AI systems to some of the most common struggles faced by AI teams in practice. Our observations are based on direct experience gained through KPMG’s practice of auditing, validating, and building AI systems and teams. By connecting the two, we believe it is already possible to identify the operational issues that are most likely to cause headaches (or perhaps even fines) under the upcoming legislation.