The landscape of laws and regulations surrounding Artificial Intelligence is evolving rapidly and poses challenges to organisations. Part of that challenge is complying efficiently with a plethora of regulations; at least as important is responding adequately to societal expectations.
The Artificial Intelligence Act
The legal departments of organisations are seeing fundamental questions land on their desks as a result of the rise of AI and the associated laws and regulations. Within the EU, recent years have been marked by a number of developments, including intensive work on the AI Act. More generally, since the advent of the GDPR – aimed at protecting privacy – a body of laws and regulations has developed that is based on perspectives beyond privacy alone. These include promoting or facilitating innovation, improving consumer protection, and enabling markets to function better.
Lawyers have a reputation for focusing on the risks and problems of innovations and thus being an inhibiting factor. The same applies to the emergence of AI within organisations and the legal considerations involved. Yet a thorough look at the risks of AI is an essential precondition for deploying this technology with confidence. After all, the stakes are high: the risks in areas such as privacy and information security are evident, and some are entirely new in nature. In places, compliance with laws and regulations is uncharted territory. What is certain is that AI tools can access a multitude of data and knowledge sources, and that techniques such as deep learning carry risks of their own.
Nevertheless, in a time of rapid innovation, organisations do not want to be left behind and are eager to explore the opportunities of new technology. In many cases, sandbox structures can be an effective way to shape innovation responsibly. A sandbox provides a safe experimentation space in which ideas can be tested without the risk that failures will have a major impact on mainstream operations or the marketplace.
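By way of illustration only – a minimal sketch in Python, with hypothetical environment and dataset names rather than a prescribed design – of how such a separation between experiments and mainstream operations might be enforced:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Environment:
    name: str
    production: bool  # True for live systems, False for the sandbox

SANDBOX = Environment(name="ai-sandbox", production=False)
PRODUCTION = Environment(name="core-platform", production=True)

def run_experiment(env: Environment, dataset: str) -> None:
    """Run an AI experiment, but only inside the sandbox and only
    on data sources approved for experimentation (names are hypothetical)."""
    approved = {"synthetic-claims", "anonymised-transactions"}
    if env.production:
        raise PermissionError("Experiments may not run against production.")
    if dataset not in approved:
        raise PermissionError(f"Dataset '{dataset}' is not cleared for the sandbox.")
    print(f"Running experiment on '{dataset}' in '{env.name}'")

run_experiment(SANDBOX, "synthetic-claims")     # allowed
# run_experiment(PRODUCTION, "live-customers")  # would raise PermissionError
```

The point of the sketch is the hard boundary: a failed experiment can do no worse than fail inside the sandbox, so mainstream operations and the marketplace are insulated from it.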
Holistic understanding of AI projects
The rise of AI has far-reaching consequences in a multitude of areas. Cooperation between different specialists – including lawyers, ethicists, technologists and business strategists – is therefore more necessary than ever before. For example, lawyers have knowledge of the law, technologists understand the technical possibilities and limitations of AI, and business strategists understand the dynamics of the market and of their own organisation. Working together, these professionals can develop a holistic understanding of both the technical and regulatory aspects of AI projects.
This is not just about properly incorporating and complying with laws and regulations. It is also about societal expectations, which may differ from what the law stipulates. AI technologies can deeply affect the privacy, security and autonomy of individuals, and compliance with laws and regulations is in itself no guarantee that society's expectations have been met. Organisations must therefore be well attuned to society and develop ethical guidelines that may go beyond what is required by law.
Achieving efficient compliance
Another challenge is to ensure efficient compliance. Laws and regulations often overlap, but with slightly different definitions or frameworks. With a framework developed by KPMG, it is possible to avoid an unwanted stacking of procedures and controls. The principle behind this proven framework is simple: test once, apply many – a control is tested once, and the result is reused for every regulation that imposes an equivalent requirement.
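To make the principle concrete – this is a minimal sketch in Python with hypothetical control names, not a reproduction of KPMG's actual framework – each control is tested once, and that single test result is mapped onto every regulatory requirement the control covers:

```python
from dataclasses import dataclass, field

@dataclass
class Control:
    """A single control that is tested once."""
    name: str
    satisfies: list[str] = field(default_factory=list)  # requirements covered
    passed: bool | None = None  # result of the single test, once run

def run_test(control: Control) -> bool:
    """Placeholder for the organisation's actual test procedure."""
    return True

def test_once_apply_many(controls: list[Control]) -> dict[str, bool]:
    """Run each control test once and propagate the result to every
    regulatory requirement it satisfies ('test once, apply many')."""
    coverage: dict[str, bool] = {}
    for control in controls:
        control.passed = run_test(control)        # one test per control
        for requirement in control.satisfies:     # result applied many times
            coverage[requirement] = coverage.get(requirement, True) and control.passed
    return coverage

# Illustrative mapping: one control can satisfy requirements
# from several regulations at once.
controls = [
    Control("access-logging", satisfies=["GDPR Art. 30", "AI Act Art. 12"]),
    Control("human-oversight", satisfies=["AI Act Art. 14"]),
    Control("data-minimisation", satisfies=["GDPR Art. 5(1)(c)"]),
]

for requirement, ok in test_once_apply_many(controls).items():
    print(f"{requirement}: {'covered' if ok else 'gap'}")
```

The essential design choice is the mapping itself: because overlapping regulations point to the same underlying control, procedures are not stacked and each control is tested exactly once.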
In short, the rise of AI is a great opportunity, but it also leads to complex regulatory and ethical challenges. Through efficient procedures, proper monitoring of societal expectations and multidisciplinary collaboration, organisations can effectively address these challenges. Our specialists will be happy to help you with this.