Global study reveals trust in AI remains a critical challenge, reflecting tension between benefits and risks


Key findings:

  • The intelligent age has arrived – 66% of people use AI regularly, and 83% believe the use of AI will result in a wide range of benefits.
  • Yet trust remains a critical challenge: only 46% of people globally are willing to trust AI systems.
  • There is a public mandate for national and international AI regulation, with 70% believing regulation is needed.
  • Many employees rely on AI output without evaluating its accuracy (66%) and report making mistakes in their work because of AI (56%).

A global study on trust in Artificial Intelligence (AI) released today reveals more than half of people globally are unwilling to trust AI, reflecting an underlying tension between its obvious benefits and perceived risks.

Trust, attitudes and use of artificial intelligence: A global study 2025, led by Professor Nicole Gillespie, Chair of Trust at Melbourne Business School, University of Melbourne, and Dr Steve Lockey, Research Fellow at Melbourne Business School, in collaboration with KPMG, is the most comprehensive global study to date of the public’s trust in, use of and attitudes towards AI.

The study surveyed over 48,000 people across 47 countries between November 2024 and January 2025.

It found that although 66% of people are already intentionally using AI with some regularity, fewer than half (46%) of global respondents are willing to trust it.

Compared with the previous study of 17 countries, conducted in 2022 prior to the release of ChatGPT, the findings reveal that people have become less trusting and more worried about AI as adoption has increased.

The public’s trust of AI technologies and their safe and secure use is central to sustained acceptance and adoption. Given the transformative effects of AI on society, work, education and the economy, bringing the public voice into the conversation has never been more critical.

Nicole Gillespie

Chair of Trust and Professor of Management, Melbourne Business School

University of Melbourne


AI at work

The age of working with AI is here, with three in five employees (58%) intentionally using AI, and a third (31%) using it weekly or daily.

This high level of use is delivering a range of benefits, with most employees reporting increased efficiency, improved access to information and greater innovation. Almost half (48%) report that AI has increased revenue-generating activity.

However, the use of AI at work is also creating complex risks for organizations. Almost half of employees admit to using AI in ways that contravene company policies, including uploading sensitive company information into free public AI tools like ChatGPT. 

Many employees rely on AI output without evaluating its accuracy (66%), and more than half (56%) say they have made mistakes in their work because of AI.

What makes these risks challenging to manage is that over half (57%) of employees say they hide their use of AI and present AI-generated work as their own.

This complacent use may be due to the governance of responsible AI trailing behind adoption. Only 47% of employees say they have received AI training, and only 40% say their workplace has a policy or guidance on generative AI use.

It may also reflect a sense of pressure, with half of employees concerned about being left behind if they do not use AI.

“The findings reveal that employees’ use of AI at work is delivering performance benefits but also opening up risks from complacent and non-transparent use. They highlight the importance of effective governance and training, and of creating a culture of responsible, open and accountable AI use.” — Nicole Gillespie

AI in society

Four in five people report personally experiencing or observing benefits of AI, including reduced time spent on mundane tasks, enhanced personalization, reduced costs and improved accessibility. 

However, four in five are also concerned about risks, and two in five report having experienced negative impacts of AI. These range from a loss of human interaction and cybersecurity risks through to the proliferation of misinformation and disinformation, inaccurate outcomes, and deskilling. 64% of people are concerned that elections are being manipulated by AI-powered bots and AI-generated content.

70% believe AI regulation is required, yet only 43% believe existing laws and regulations are adequate.

There is a clear public demand for international law and regulation and for industry to partner with government to mitigate these risks. 87% of respondents also want stronger laws to combat AI-generated misinformation and expect media and social media companies to implement stronger fact-checking processes. 

“The research reveals a tension where people are experiencing benefits from AI adoption at work and in society, but also a range of negative impacts. This is fuelling a public mandate for stronger regulation and governance of AI, and a growing need for reassurance that AI systems are being used in a safe, secure and responsible way.” — Nicole Gillespie

KPMG International’s Global Head of AI David Rowlands said the report highlighted opportunities for organizations to lead the way in providing greater governance and taking a proactive approach to building trust with employees, customers and regulators.

AI is without doubt the greatest technology innovation of a generation, and it is crucial that AI is grounded in trust given the fast pace at which it continues to advance. Organizations have a clear role to play when it comes to ensuring that AI is both trustworthy and trusted. People want assurance over the AI systems they use, which means AI’s potential can only be fully realized if people trust the systems making decisions or assisting in them. This is why KPMG developed our Trusted AI approach: to make trust not only tangible but measurable for clients.

David Rowlands

Global Head of Artificial Intelligence

KPMG International


Emerging economies lead the way

People in emerging economies report higher adoption of AI both at work and for personal purposes, are more trusting and accepting of AI, and feel more optimistic and excited about its use, compared to advanced economies.

They also self-report higher levels of AI literacy (64% vs. 46%) and AI training (50% vs. 32%) and, importantly, more benefits from AI (82% vs. 65%) than people in advanced economies.

In emerging economies, three in five people trust AI systems, while in advanced economies only two in five do.

“The higher adoption and trust of AI in emerging economies is likely due to the greater relative benefits and opportunities AI affords people in these countries and the increasingly important role these technologies play in economic development.” — Nicole Gillespie


For interviews and media opportunities, please contact:

Daniel Caines
Senior Manager, Global External Communications, KPMG International

T: +44 7732400262
E: Daniel.Caines@kpmg.co.uk


Alison Bottcher
Communications Manager, Melbourne Business School

T: +61 405 812 602
E: a.bottcher@mbs.edu

About this report

The University of Melbourne research team, led by Professor Nicole Gillespie and Dr Steve Lockey, independently designed and conducted the survey, data collection, analysis, and reporting of this research.

This study is the fourth in a research program examining public trust in AI. The first focused on Australians’ trust in AI in 2020, the second expanded to study trust in five countries in 2021, and the third surveyed people in 17 countries in 2022.

This research was supported by the Chair in Trust research partnership between the University of Melbourne and KPMG Australia, with funding from KPMG International, KPMG Australia, and the University of Melbourne. 

About KPMG International

KPMG is a global organization of independent professional services firms providing Audit, Tax and Advisory services. KPMG is the brand under which the member firms of KPMG International Limited (“KPMG International”) operate and provide professional services. “KPMG” is used to refer to individual member firms within the KPMG organization or to one or more member firms collectively.

KPMG firms operate in 143 countries and territories with more than 275,000 partners and employees working in member firms around the world. Each KPMG firm is a legally distinct and separate entity and describes itself as such. Each KPMG member firm is responsible for its own obligations and liabilities.

KPMG International Limited is a private English company limited by guarantee. KPMG International Limited and its related entities do not provide services to clients.

For more detail about our structure, please visit kpmg.com/governance.
