The transformative potential of artificial intelligence (AI) and automation is currently being intensively discussed and put to use. AI offers society and companies new perspectives and many advantages, for example the opportunity to fundamentally reshape jobs and key industries.
However, amidst the global spread of AI in business and everyday life, concerns are also emerging about its ethical use and risks. In the global study "Trust in Artificial Intelligence", three out of five respondents express reservations about AI systems, and 71 per cent expect regulatory measures.
In response, the European Union (EU) reached a provisional agreement on the ground-breaking "Artificial Intelligence Act" (AI Act), which sets a new global standard for AI regulation. The Act was approved by the European Parliament in March 2024 and entered into force in August 2024, meaning that most AI systems will have to comply with its requirements by 2026. The AI Act takes a risk-based approach to protect fundamental rights, democracy, the rule of law and environmental sustainability.
The EU's AI Act aims to strike a balance: encouraging the adoption of AI while safeguarding individuals' rights and ensuring that AI is used responsibly, ethically and in a trustworthy manner.