In the EU, the AI Act has entered into force, with rules for general-purpose AI models applying from August 2025. The Act classifies AI applications into risk levels, introducing stringent requirements for the highest-risk applications.
This is being complemented by additional lower-level FS guidance from the European Supervisory Authorities (ESAs), demonstrating how existing sectoral legislation should be interpreted in the context of AI (e.g., a statement from ESMA on ensuring compliance with MiFID II, and a consultation from EIOPA on AI risk management).
Member States have until August 2025 to designate the national competent authorities that will oversee application of the Act’s rules within their jurisdiction and carry out surveillance. As the capitals are expected to select different types of authority (e.g., data protection regulators, telecommunications regulators, bespoke AI bodies), certain challenges are likely to arise. In particular, these authorities will need to find a ‘common language’ and avoid regulatory fragmentation.
In advance of full applicability of the Act (from August 2026), some Member States (e.g., Spain, Italy) have chosen to implement their own national rulebooks. These rulebooks will need to be monitored and updated to reflect any future amendments to the Act.
As the first comprehensive AI law adopted by a major jurisdiction, the AI Act represents a major regulatory milestone. However, its prescriptive nature arguably reduces its ability to remain agile in the face of fast-moving technology. Indeed, during negotiations, the underlying risk classification system had to be amended to account for the emergence of general-purpose AI. And, as with the UK, the EU is also having to reconcile its approach with increasing calls for international competitiveness.
The AI Code of Practice (COP) for General Purpose AI complements the Act and aims to provide more detailed guidance to help companies adhere to ethical standards, even for systems that are not considered high-risk. If successful, the EC can give the COP legal standing, allowing providers to demonstrate compliance by self-certifying conformity with it. Due to be completed in May, the COP has already been significantly watered down (or “simplified”) during drafting, mostly in response to industry concerns and to align with the simplification agenda. The third draft introduced much greater flexibility on copyright requirements, along with further adjustments to high-risk safety and security measures and to transparency reporting obligations.
The EU has also launched a €200 billion InvestAI initiative, signalling an increasing reliance on private capital to fuel growth in this area. These sources of private funding could also push policymakers to reduce the compliance burden.