AI continues to garner much-deserved attention from business leaders for its transformational potential. However, all this enthusiasm has also brought a persistent concern: How will AI be regulated?
This ongoing uncertainty has been identified as the top barrier to AI adoption in recent KPMG surveys. With the passage of the European Union’s (EU) Artificial Intelligence Act (AI Act), many organisations will now start to gain clarity as they navigate the specifics of this first-of-its-kind legislation. The Act was published in the Official Journal of the European Union on 12 July 2024 and entered into force on 1 August 2024. Get an overview of everything you need to know here.
The idea behind the AI Act is that the higher the risk of an AI system, the more stringent the associated requirements and obligations become. The regulation is intended to increase user confidence in AI within the EU and thereby create better conditions for innovation for manufacturers and users of AI applications. The EU AI Act casts a wide net, affecting any organisation that uses AI technology as part of products or services delivered in the EU.
Many aspects of the EU AI Act will be challenging for organisations to implement and address, particularly in terms of technical documentation for the testing, transparency, and explainability of AI applications. On top of this, it is essential that organisations crack the code on how to bridge the gap between the legal and the practical aspects of AI use. We have listed eight concrete steps for your organisation to get started here.
Violations of the AI Act can result in fines of up to 35 million euros or up to seven percent of total worldwide annual turnover for the previous fiscal year, whichever is higher, making the sanctions comparable to those of the GDPR.
The AI Act’s formal approval starts the clock on a series of obligations that will phase in on a staggered timeline: prohibitions on certain AI practices apply after six months, rules for general-purpose AI models after twelve months, and most remaining provisions after 24 months. The new law includes a specific definition of AI, tiered risk levels, detailed consumer protections, and much more.