Evolving plans for AI regulation

Reconciling frameworks with the competitiveness agenda

April 2025

The EU’s prescriptive AI Act has entered into force, with rules for general-purpose (including generative) AI models applying from August 2025. The UK, by contrast, is proceeding with its flexible, principles-based approach, requiring no new regulatory frameworks in the short term. However, in the face of geopolitical developments, both jurisdictions are reconsidering the balance they strike between competitiveness and regulatory safeguards.

UK developments

The BoE/FCA’s latest AI survey shows that 75% of firms are already using some form of AI in their operations – up from 53% in 2022. This growth is not driven solely by back-office efficiency gains: it also encompasses significant use cases such as credit risk assessment (an activity designated as ‘high risk’ under the EU AI Act), algorithmic trading and capital management.

To investigate this trend, the Treasury Committee has launched a Call for Evidence on the impacts of the increased use of AI in banking, pensions and other areas of financial services. And the Bank of England’s Financial Policy Committee has published an assessment of AI’s impact on financial stability.

Nonetheless, for now, the UK government is continuing with its principles-based approach – see more in our previous articles here, here and here. The BoE/PRA and FCA have determined that their existing toolkits remain appropriate to address the risks posed by AI, as these risks are ‘not unique’ to the technology. Being predominantly outcomes-focused, these toolkits provide sufficient agility to adjust to unforeseen technology and market changes.

This approach is being reinforced by the wider push for growth and competitiveness (see more in our article here). With AI seen as a key “engine for growth”, the government’s focus is pivoting away from safeguards and towards innovation (while still, of course, accounting for national security).

For example, at the Paris AI Action Summit in February, the UK and US were the only two countries to opt out of signing the non-binding international declaration on ‘inclusive and sustainable’ AI.

The government has renamed its AI Safety Institute as the AI Security Institute. The statement announcing this change promised that the rebranded institute “will not focus on bias or freedom of speech”, but will instead prioritise unleashing economic growth.

The FCA continues to encourage innovators and wider stakeholders to engage with its AI Lab initiatives (including its recent AI Sprint).

And the government has issued a response to the AI Opportunities Action Plan, endorsing almost all of the original 50 recommendations – including setting up AI “growth zones”, creating a “sovereign AI unit” and requiring regulators to publicly report annually on their activities to promote AI innovation. The response also confirmed that there will be a consultation on legislation to protect against risks associated with the “next generation of the most powerful models”.

EU developments

In the EU, the AI Act has entered into force, with rules for general-purpose (including generative) AI models applying from August 2025. The Act classifies AI systems by risk level, imposing the most stringent requirements on high-risk applications.

This is being complemented by additional, lower-level financial services (FS) guidance from the European Supervisory Authorities (ESAs), demonstrating how existing sectoral legislation should be interpreted in the context of AI (e.g., a statement from ESMA on ensuring compliance with MiFID II and a consultation from EIOPA on AI risk management).

Member States have until August 2025 to designate the national competent authorities that will oversee application of the Act’s rules within their jurisdiction and carry out surveillance. As capitals are expected to select different types of authority (e.g., data protection, telecommunications or bespoke AI bodies), challenges are likely: these authorities will need to find a ‘common language’ and avoid regulatory fragmentation.

In advance of full applicability of the Act (from August 2026), some Member States (e.g., Spain, Italy) have chosen to implement their own national rulebooks. These rulebooks will need to be monitored and updated to reflect any future amendments to the Act.

As the first comprehensive AI law adopted by a major jurisdiction, the AI Act represented a major regulatory milestone. However, its prescriptive nature arguably limits its agility in the face of fast-moving technology – indeed, during negotiations, the underlying risk classification system had to be amended to account for the emergence of general-purpose AI. And, like the UK, the EU is having to reconcile its approach with increasing calls for international competitiveness.

The AI Code of Practice (COP) for General Purpose AI complements the Act and aims to provide more detailed guidance to help companies adhere to ethical standards, even for systems that are not considered high-risk. If successful, the European Commission can give the COP legal standing, allowing providers to self-certify conformity with it as part of their compliance. Due to be completed in May, the COP has already been significantly watered down (or “simplified”) during drafting – mostly in response to industry concerns and to align with the simplification agenda. The third draft introduced much greater flexibility on copyright requirements, along with further adjustments to high-risk safety and security measures and to transparency reporting obligations.

The EU has also launched InvestAI, a €200 billion investment initiative, signalling an increasing reliance on private capital to fuel growth in this area. These sources of private funding could, in turn, push policymakers to reduce the compliance burden.

International developments

Although individual jurisdictions – like the UK and EU – are being influenced by the drive for competitiveness, the international standard setters continue to focus on risks (and mitigants).

The IMF has flagged concerns around herding and concentration risk in capital markets, especially if trading strategies become largely derived from open-source models. As a result, it urges national regulators to provide guidance on model risk management and to emphasise stress testing.

IOSCO has echoed these concerns, identifying the most commonly cited AI risks as concentration, third-party dependencies and data considerations. These risks become more pressing when paired with the increasing use of AI to support decision-making.

What this means for firms

Despite any movement towards regulatory “simplification”, firms still need to ensure their risk and control frameworks properly account for AI use.

Those firms with a footprint in the EU must begin navigating the AI Act as a baseline – either building new or uplifting existing governance and control frameworks.

UK firms currently have no prescriptive rulebook to comply with. However, in many ways, their task is more difficult, as the onus is on them to determine how to manage this rapidly changing technology.

How KPMG in the UK can help

KPMG in the UK has experience of advising businesses on integrating new technology into their operations, including developing AI integration and adoption plans. Our technology teams can provide expertise and build out test cases, while our risk and legal teams can support with designing and implementing control frameworks. 

If you have any questions or would like to discuss any matters concerning AI, please get in touch.


Our people

Kate Dawson

Wholesale Conduct & Capital Markets, EMA FS Regulatory Insight Centre

KPMG in the UK

Bronwyn Allan

Manager, Regulatory Insight Centre

KPMG in the UK


