• 1000

Whether deepfake videos with misleading statements from candidates or deliberately launched misinformation that goes viral at breakneck speed: The US presidential election campaign shows just how effective generative artificial intelligence (AI) already is. AI makes it possible to manipulate content. This can cause fundamental doubts to grow among the population, threatening to undermine the election process. This makes it clear once again that the transparent, secure and ethically justifiable use of AI is essential in order to ensure trust and ultimately utilise the great opportunities offered by the new technology. In the study "Fake World vs Trusted AI", our experts show how the KPMG Trusted AI Framework supports the use of AI in practice.

Risiken beim KI-Einsatz in Unternehmen

In order to determine the status quo with regard to the perception of AI among the population, we surveyed 1,000 people on relevant topics. Five key findings in a compact overview:

  • More than half of the people surveyed already actively use AI in a professional or private context. Almost nine out of ten respondents want to further expand their use in the future and deepen their understanding of specific AI solutions.
  • 96 per cent of respondents are aware of the potential AI risks. In their opinion, the greatest risks are: security risks (54 per cent), misinformation and disinformation (49 per cent), loss of jobs (49 per cent).
  • According to the respondents, companies should take comprehensive measures to manage AI risks. They consider employee training and awareness campaigns to be particularly important (42 per cent). Almost as relevant: the introduction of usage guidelines and restrictions on use (41 per cent), continuous monitoring and evaluation through monitoring systems and feedback loops (40 per cent), the development of transparency and explainability (39 per cent) and the implementation of data protection guidelines and security measures (38 per cent).  
  • Respondents also expect companies to have comprehensive governance structures. The top 3: the introduction of an AI data protection and security officer (51 percent), AI training and further education programmes (47 percent) and the establishment of an AI risk management team (40 percent).
  • Almost all respondents (96 per cent) recognise the social challenges posed by the use of generative AI. The two biggest are security concerns with regard to cyber attacks and misuse (55 per cent) and challenges relating to data protection and monitoring (54 per cent).

One thing is clear: AI has arrived in people's everyday lives and further growth is on the horizon. At the same time, in view of the far-reaching consequences that AI can have for the economy and society, there are expectations of companies and other organisations that use AI: Risk management needs to be re-sharpened in order to realise the full potential of technological progress.



Download study now

Fake World vs. Trusted AI

Download



KPMG Trusted AI Framework: minimising risks, strengthening trust

The KPMG Trusted AI Framework offers seven pillars for the safe, trustworthy, transparent and ethical use of AI. A compact overview of the pillars:

AI decisions should be comprehensible. Explainable AI methods (XAI) strengthen trust and promote informed use. Ethics guidelines and boards are also important.

Bias detection algorithms and regular audits minimise bias in AI systems. Diverse data sets ensure fair decisions.

Strict data protection rules such as anonymisation and encryption protect sensitive data. Standards such as the "AI Cloud Service Compliance Criteria Catalogue (AIC4)" help to assess the security of AI services.

Validation and cleansing of data are crucial. Blockchain technology protects data integrity and continuous monitoring recognises anomalies at an early stage.

Documentation and audits create transparency. Companies should define ethical standards and liability models for AI systems.

AI systems should be stable and resistant to cyber attacks. Redundancy systems and stress tests are essential.

Efficient algorithms, renewable energy and model compression reduce energy consumption. Retraining programmes help employees to adapt to new roles.

AI safety in practice: case studies

Innovative measures are required to ensure the integrity and security of your AI systems. Two projects in the financial services sector show how AI-specific attacks can be defended against:

  • Prompt Injection Detection Firewall: this firewall protects AI-supported applications, in particular Large Language Models (LLMs), from manipulated input. It continuously monitors all user input and recognises and blocks potentially malicious instructions. An example from property insurance shows how the firewall prevents unauthorised payouts and ensures the integrity of decisions.
  • Fully automated security tests of AI systems: A tool performs regular, automated tests to identify and fix vulnerabilities in AI systems. It is updated daily with the latest cyber threat intelligence information and integrates further findings. This ensures that the systems are always up to date with the latest security research and are prepared for new attack methods.

Your contact persons

Stay up to date with what matters to you

Gain access to personalized content based on your interests by signing up today

Connect with us