AI success starts with the right foundations
“How do we use AI within our organization?” It’s a question many organizations struggle with, especially since the rise of publicly available tools like ChatGPT, Copilot, and DeepSeek. Artificial Intelligence offers countless opportunities, such as increased productivity, greater efficiency, and new forms of creativity. At the same time, the technology introduces new challenges and risks in areas like privacy, ethics, and security. Only when organizations and their employees engage with these risks in a conscious and mature way can AI truly add value. To use AI safely and effectively, organizations must invest in clear governance, awareness, AI skills, and training.
Rapid AI development increases risk exposure
The use of AI within organizations is growing rapidly, but control over it is lagging behind. A recent global survey by KPMG on trust and attitudes toward AI use found that 58% of employees regularly and consciously use AI tools in their work. Of these, 70% use freely available, public AI tools. Nearly half of these users admitted to entering company information—such as financial data, sales figures, or customer details—potentially exposing their employer to risks like data breaches, reputational damage, and non-compliance.
Technology is not standing still—far from it. Experts expect a widespread breakthrough of so-called AI agents in 2025: autonomous systems that perform tasks, make decisions, and communicate with other software. These tools will unlock many new use cases and productivity gains, but they also increase the overall risk profile of organizations—with complex risks such as excessive autonomy and the loss of human oversight. Without targeted investment in governance, control mechanisms, and the development of AI skills, it will become increasingly difficult to use this technology responsibly.
Yet the same survey shows that nearly two-thirds of employees have never received any AI training. This creates a worrying gap: AI use is growing, but ‘AI literacy’ is lagging, even though the EU AI Act requires organizations to provide appropriate training and awareness around AI use, especially for high-risk applications. In terms of governance and oversight, many organizations are still underprepared. Only 55% of employees in advanced economies believe their organization has sufficient guidelines, controls, or safeguards in place to ensure responsible AI use. The future of work is changing rapidly, and organizations must adapt to remain both relevant and secure.
Responsible AI use
At KPMG Responsible AI, we help organizations shape the AI revolution responsibly. This starts with harnessing the transformative power of AI—without losing sight of the risks. We support organizations in the controlled implementation of data analytics and AI, with attention to ethics, safety, and compliance. Our services include developing AI policies and governance, conducting AI assurance, supporting compliance with regulations such as the AI Act, and increasing AI literacy through masterclasses and training. In doing so, we provide organizations with insights, build trust in AI, and enable them to unlock the full potential of this technology.
AI can make organizations smarter, more efficient, and more innovative, provided it is used consciously, collaboratively, and responsibly. By strategically focusing on governance, skills, and ethics, AI becomes more than just hype to follow blindly; it becomes a deliberate choice that enables organizations to create lasting value.