Navigating the EU AI Act: Insights from an Industry Expert

Interview with Alberto Job, Director and Expert in Information Management & Compliance at KPMG Switzerland

Alberto Job, the EU AI Act is currently a hot topic. Which companies need to make sure they are compliant with this regulation?

The EU AI Act is legislation enacted by the European Union governing the development and deployment of Artificial Intelligence (AI) within the EU. It applies to organizations that place or deploy AI systems on the EU market, or whose AI systems affect EU citizens. The Act aims to establish an ethical and legal framework for the development, deployment, and use of AI systems. It sets specific requirements, particularly for high-risk AI systems, emphasizing transparency, accountability, and ethical principles in their deployment. It also underscores the importance of monitoring and oversight of AI systems to minimize their impact on citizens and society. Ultimately, the EU AI Act seeks to build trust in AI technologies and foster innovation.

The AI Act will enter into force 20 days after its publication in the Official Journal, which is expected in May/June 2024, and will be fully applicable two years later, with some exceptions: prohibitions will take effect after six months, the governance rules and the obligations for general-purpose AI models will become applicable after 12 months, and the rules for AI systems embedded into regulated products will apply after 36 months.

How can affected companies navigate the legal requirements of the EU AI Act?

Navigating the legal requirements of the EU AI Act may seem overwhelming at first. Understanding the regulations is essential, but translating them into practical application is equally vital, and achieving seamless compliance with the EU AI Act is crucial for operational success. This complexity is evident in requirements such as the "accuracy" mandate for high-risk AI systems. The guidance of technical experts is therefore essential.

What key factors should companies focus on when assessing how the EU AI Act affects their operations?

When considering the implications of the EU AI Act for your operations, the first question to address is which of the roles outlined in the legislation apply to you. The EU AI Act defines four key roles: Provider/Manufacturer, Importer, Distributor, and Deployer, and it is crucial to identify which one applies to your organization. The Act operates on a risk-based framework and assigns specific obligations to different risk classifications. The next crucial step is therefore evaluating the risk category of your AI system. Compliance then means fulfilling the precise legal obligations that follow from both your role and the classification of the AI system being deployed.

How can the requirements of the EU AI Act be effectively implemented?

To understand how these compliance requirements are implemented, it is worth taking a closer look at the requirements themselves, for example the accuracy standards outlined in the EU AI Act. Meeting them demands a systematic approach spanning model development, metric selection, and ongoing evaluation. In data pre-processing, we prioritize data quality to optimize performance, following the "garbage in, garbage out" principle; these steps align with the EU AI Act's data governance requirements. In model selection, we rigorously assess candidate algorithms to find the one that best captures the patterns in the data, in line with the transparency requirements of the EU AI Act.
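
To make this concrete, the simplified sketch below shows what such a pre-processing and model-selection workflow can look like in Python with scikit-learn. It is illustrative only: the dataset, column names, and candidate models are assumptions for the example, not a description of KPMG's actual tooling.

```python
# Illustrative sketch only: dataset, column names and candidate models are
# hypothetical and do not represent any specific KPMG tooling.
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Hypothetical tabular data: a numeric and a categorical feature plus a binary label.
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "amount": rng.gamma(2.0, 100.0, n),
    "country": rng.choice(["CH", "DE", "FR"], n),
    "label": rng.integers(0, 2, n),
})
df.loc[df.sample(frac=0.05, random_state=0).index, "amount"] = np.nan  # simulate gaps

X, y = df.drop(columns="label"), df["label"]

# Data pre-processing: impute missing values, scale numeric features, encode
# categoricals -- "garbage in, garbage out": input quality caps model accuracy.
preprocess = ColumnTransformer([
    ("num", Pipeline([("impute", SimpleImputer(strategy="median")),
                      ("scale", StandardScaler())]), ["amount"]),
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["country"]),
])

# Model selection: compare candidate algorithms with cross-validation and keep
# the scores, so the choice of model can be documented and explained.
candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
}
for name, model in candidates.items():
    pipeline = Pipeline([("prep", preprocess), ("model", model)])
    scores = cross_val_score(pipeline, X, y, cv=5, scoring="f1")
    print(f"{name}: mean F1 = {scores.mean():.3f}")
```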

How can companies assess the accuracy of their AI model?

In scenarios with imbalanced datasets, traditional accuracy metrics may not adequately reflect an AI model's efficacy, particularly in tasks like fraud detection or rare disease identification. To ensure a comprehensive assessment, it's crucial to select performance metrics aligned with the application's specific objectives. Metrics span different categories, including those designed for classification, regression, other types of supervised learning, unsupervised learning, and reinforcement learning. For instance, in classification problems, precision and recall offer deeper insights into model reliability by considering the trade-off between false positives and false negatives, while regression tasks commonly use metrics like mean squared error (MSE) and mean absolute error (MAE) to quantify prediction accuracy.
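
As an illustration, the short sketch below computes these metrics with scikit-learn on made-up numbers, showing how precision and recall can tell a very different story from plain accuracy on an imbalanced dataset; the values are purely illustrative.

```python
# Illustrative sketch: choosing metrics beyond plain accuracy. The numbers are
# made up to show the mechanics, not results from any real model.
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             mean_squared_error, mean_absolute_error)

# Imbalanced classification (e.g. fraud detection): 1 = fraud, 0 = legitimate.
y_true = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]
y_pred = [0, 0, 0, 0, 0, 0, 0, 0, 1, 0]   # the model misses one fraud case

print("accuracy :", accuracy_score(y_true, y_pred))   # 0.90 -- looks good
print("precision:", precision_score(y_true, y_pred))  # 1.00 -- no false alarms
print("recall   :", recall_score(y_true, y_pred))     # 0.50 -- half the fraud missed

# Regression: quantify prediction error with MSE and MAE.
y_true_reg = [3.0, 5.0, 2.5, 7.0]
y_pred_reg = [2.8, 5.4, 2.0, 8.0]
print("MSE:", mean_squared_error(y_true_reg, y_pred_reg))
print("MAE:", mean_absolute_error(y_true_reg, y_pred_reg))
```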

How can continuous evaluation and adaptation be maintained?

To ensure compliance with the EU AI Act's accuracy requirements, we adopt a disciplined approach to continuous evaluation and adaptation of AI models throughout their lifecycle. This allows us to meet the mandate for consistent performance and the communication of accuracy metrics as specified in Article 15 of the EU AI Act. It involves leveraging automated retraining frameworks, such as Azure Machine Learning, to periodically update models with new data. This ensures their evolution in line with changing data trends, simplifying maintenance and improving model performance. In addition, we prioritize performance monitoring and data drift detection. By continuously monitoring model performance and detecting shifts in data distributions, we can promptly intervene to maintain accuracy. These monitoring results are stored as part of our logs, fulfilling the automatic logging requirements stipulated by the EU AI Act.
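
As a simplified illustration of the monitoring idea, the sketch below checks a single feature for distribution drift with a two-sample Kolmogorov-Smirnov test and writes the result to a log file. It is a generic example rather than Azure Machine Learning's own drift-detection functionality, and the threshold and feature name are assumptions for the example.

```python
# Generic illustration of drift monitoring with a two-sample Kolmogorov-Smirnov
# test plus persistent logging; this is not Azure Machine Learning's own drift
# API, and the threshold and feature name are hypothetical.
import logging

import numpy as np
from scipy.stats import ks_2samp

logging.basicConfig(filename="model_monitoring.log", level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")

P_VALUE_THRESHOLD = 0.01  # hypothetical alerting threshold


def check_feature_drift(reference: np.ndarray, current: np.ndarray, feature: str) -> bool:
    """Compare the live feature distribution against the training-time reference."""
    result = ks_2samp(reference, current)
    drifted = result.pvalue < P_VALUE_THRESHOLD
    # Persist the outcome so monitoring results are retained as part of the logs.
    logging.info("feature=%s ks_stat=%.4f p_value=%.4g drift=%s",
                 feature, result.statistic, result.pvalue, drifted)
    return drifted


# Example: reference data captured at training time vs. new production data.
rng = np.random.default_rng(42)
reference = rng.normal(loc=0.0, scale=1.0, size=5000)
current = rng.normal(loc=0.4, scale=1.0, size=5000)  # distribution has shifted

if check_feature_drift(reference, current, feature="transaction_amount"):
    print("Drift detected -- trigger model re-evaluation or retraining.")
```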

To what extent does the EU AI Act influence Swiss regulatory measures?

At its meeting on 22 November 2023, the Federal Council decided that it wants to harness the potential of AI while minimizing the risks it poses to society. It therefore instructed DETEC to prepare an overview of possible regulatory approaches to AI, which is expected to be available by the end of 2024. As the Federal Council furthermore stated, the analysis will build on existing Swiss law and identify possible regulatory approaches for Switzerland that are compatible with the EU AI Act and the Council of Europe's AI Convention. Both sets of international regulations are relevant for Switzerland; they are expected to be finalized by spring 2024 and will contain binding horizontal rules on AI. The analysis will examine the regulatory requirements with a particular focus on compliance with fundamental rights. The technical standards and the financial and institutional implications of the different regulatory approaches will also be considered.

Therefore, Switzerland is not rushing ahead with its own legislation. However, it is closely monitoring international developments on both the regulatory and the technical front, and will then decide whether existing legislation needs to be enhanced or new laws must be enacted.

Dominik Weber

Head of External Communications

KPMG Switzerland
