Every new technology has a bright side and a not-so-bright side, and Artificial Intelligence is no exception. This groundbreaking technology offers many organisations attractive strategic potential, but it also raises new issues, of reliability and ethics among others, and with them the challenge of building trust in the technology.

Issues of reliability and ethics

Without trust, everything comes to a standstill. Digital technology – including AI – can only succeed if people trust how it works. The technology has rapidly become a dominant influence on virtually every facet of society, and algorithms increasingly steer our behaviour and decisions. It is only natural, then, that society asks whether they are doing so appropriately.

The stakes are too high for organisations not to take these questions seriously: AI is a game-changer that cannot be ignored. The technology offers new ways to generate deeper insights that improve and accelerate decisions. This will deliver value in numerous ways, from improving customer processes to completely transforming business models. It is therefore important that organisations set ambitious goals around AI and look at the new possibilities with an open and objective mind. AI is a game-changer not only for strategy, but also for how humans and machines can work together seamlessly. If that collaboration is designed well, AI can have a huge impact on human potential.

Artificial Intelligence and trust

Realising these lofty promises hinges on the trust of employees, customers, stakeholders and society at large. That trust does not come naturally: our annual surveys of social sentiment show that society is critical of the use of algorithms and AI by companies and governments, and in some quarters there is outright distrust.

Creating value responsibly with AI?

In this field of competing forces, we help organisations create value responsibly with AI. Important building blocks for that trust are transparent practices, well-considered ethical choices and regulatory compliance.

Developers of the technology

This is partly the responsibility of the developers of the technology – the data scientists or programmers. They must work in an environment where quality and compliance with relevant laws and regulations are a given and are tightly monitored.

Management

The responsibility also lies with management, however, especially when it comes to the ethical issues surrounding the deployment of AI. Management is faced with the task of defining the organisation's risk appetite around AI and, above all, clearly formulating what the organisation stands for. What moral principles do you give developers, and how do you make them concrete? This involves purely business decisions (such as pricing) but also ethical considerations, such as what is and is not morally acceptable. These decisions have traditionally been made by people but are now increasingly being hard-coded into systems. That can only be done responsibly if clear standards (such as a 'definition of success' and a 'risk appetite') are carefully defined. This requires, among other things, unerring insight into the market and society, but also the ability to translate the choices made by the business into practical guidance for the developers of applications, so that they can build 'trusted technology'.

Trusted AI with KPMG

KPMG has the in-depth knowledge and extensive experience to help clients achieve this. We support them at every step, from thinking through the strategic issues surrounding AI to the practical implementation. Our Trusted AI professionals focus on the responsible application of AI, with attention to trust, compliance, security, privacy and ethics.