Although Artificial General Intelligence (AGI) remains beyond the horizon, existing AI models - such as Large Language Models (LLMs) and other generative AI systems - already raise profound questions. Unlike purely rule-based systems, these models learn from data and exhibit emergent behaviours in their outputs. This complexity makes the AI market far more sensitive to regulation than previous IT breakthroughs, and defining clear limitations and effective controls remains a significant challenge.
The EU AI Act is a step toward responsible AI governance, but its success hinges on implementation agility, regulatory clarity, and support for innovation ecosystems. Bureaucracy is a real risk - but so is under-regulation. The challenge is not choosing between ethics and innovation but designing a dynamic framework that evolves with technology while safeguarding societal values.