Artificial intelligence (AI) is rapidly becoming a core enabler across organizations, driving transformation in business models, operations, and decision-making processes. From virtual assistants and autonomous agents to predictive systems and optimization models, AI is helping reshape business processes in areas such as operations, customer service, HR, and marketing.

      A recent study, led by the University of Melbourne in collaboration with KPMG, reveals that AI adoption is on the rise but trust remains a critical challenge, reflecting a tension between the technology's benefits and risks.

      However, this transformation brings new risks. The fast adoption of AI models, especially autonomous agents that make decisions or interact with systems, introduces new attack vectors. A poorly configured agent or one trained on biased data can expose systems to threats or make decisions that go against the organization’s security policies.

      AI is also amplifying traditional threats and vulnerabilities, turning them into a less predictable and rapidly growing risk. This gives rise to a new challenge built on a fundamental duality: how to ensure the security of AI systems while also applying AI in specific fields, such as cybersecurity itself. As highlighted in the KPMG Cybersecurity considerations 2025 report, the foundational principles of cybersecurity, such as embedding trust and integrating AI into cyber and privacy, are even more critical.

      Successfully navigating this duality, leveraging AI to protect while protecting against the risks of AI, is expected to be central to modern cybersecurity strategies.

      AI frustrations – and threats

      Most software solutions now have inbuilt AI, primarily in the form of conversational chatbots. While these can add speed and convenience, they can also make life extremely complex for security professionals, who are faced with 'prompt hopping': rapidly switching between multiple conversations to monitor and manage different systems. This isn't just a usability challenge: as prompt engineering becomes more sophisticated and attackers experiment with 'prompt injection', each conversation represents a potential vector for manipulation and misconfiguration. With various AI tools at play, cybersecurity teams may suffer from 'conversational fatigue', which can hinder their ability to act quickly and decisively.

      Uncontrolled adoption of AI can also lead to uncontrolled data exposure. AI tools may expose confidential data on the organization and its customers, which could fall into the hands of third parties. As well as increasing vulnerability to attacks, data security lapses may also breach regulations, leading to fines and loss of stakeholder trust. KPMG's survey revealed that one of the top two pain points for security leaders is issues with data quality or lack of completeness (30%).1

      Many organizations also experience a disconnect between enterprise-level AI adoption, cybersecurity team involvement, and product-specific AI solutions. These various resources are often not integrated, resulting in siloed conversational AI chatbots. It's hard to apply a consistent cybersecurity approach when so many different, fragmented AI tools are in use, possibly giving varying messages about threats and incidents.

      AI is not just being used by organizations to shore up cybersecurity; bad actors are also leveraging this technology to carry out attacks. As soon as they spot a vulnerability, they can quickly generate attack vectors and code. Three-quarters (76%) of the survey respondents say they're concerned about the increasing sophistication of new cyber threats and cyberattacks.2

      The other side: AI enhancing cybersecurity

      Because AI has the potential to accelerate and amplify cyber risk, it becomes essential to invest in and enhance cyber solutions with AI tools that can match the speed, scale and sophistication of emerging threats. The following examples show how AI can strengthen cybersecurity posture while also driving greater efficiency and return on investment:

      • Advanced threat detection through behavioral analytics: AI analyzes vast amounts of data to detect patterns and anomalies that may indicate malicious activity. Unlike traditional rule-based systems, it can identify subtle or previously unknown threats in real time, helping reduce attacker dwell time and prevent damage earlier in the attack chain.
      • Identity and access management automation: AI enhances access control by continuously evaluating user and non-human entity behavior, adjusting permissions dynamically based on context and risk. This helps reduce manual workload, minimize human error, and improve protection against identity-based attacks.
      • AI-assisted incident response: AI helps security teams prioritize alerts, correlate data from multiple sources, and even trigger automated containment actions. This can significantly reduce mean time to detect and respond, increasing the effectiveness of limited security resources.
      • Third-party risk and compliance assessments: AI accelerates the evaluation of external partners by reviewing documents, certifications, and behaviors at scale. It helps identify compliance gaps and security risks quickly, helping streamline due diligence and reduce exposure to supply chain vulnerabilities.
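The behavioral-analytics idea in the first bullet can be illustrated with a minimal, hypothetical sketch: build a statistical baseline of a user's normal activity (here, daily login counts) and flag recent observations that deviate sharply from it. Real products use far richer models; the function names, threshold, and data below are illustrative assumptions only.

```python
from statistics import mean, stdev

def anomaly_scores(history, recent):
    """Z-score each recent observation against a behavioral baseline.

    `history` is a user's baseline activity (e.g. daily login counts);
    values in `recent` that sit far from that baseline get a high score.
    """
    mu, sigma = mean(history), stdev(history)
    return [abs(x - mu) / sigma if sigma else 0.0 for x in recent]

def flag_anomalies(history, recent, threshold=3.0):
    """Return the recent observations whose z-score exceeds the threshold."""
    scores = anomaly_scores(history, recent)
    return [x for x, s in zip(recent, scores) if s > threshold]

# Baseline: a user's typical daily login count; one day spikes to 90 logins.
baseline = [10, 12, 11, 9, 13, 10, 11, 12]
flagged = flag_anomalies(baseline, [11, 90, 10])  # the spike is flagged
```

Unlike a fixed rule ("alert above 50 logins"), the threshold here adapts to each user's own history, which is what lets behavioral approaches surface subtle or previously unknown deviations.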

      This invites us to reflect on how these AI-driven capabilities can be effectively structured and integrated into a comprehensive cyber managed services framework, enabling organizations to enhance their security outcomes through professional guidance and continuous technological support.

      How can organizations integrate AI into their security operations without adding further complexity or risk?

      Three steps towards joined-up, AI-enabled cybersecurity

      Step 1

      Establish an AI security governance framework that enables Chief Information Security Officers (CISOs) and other leaders to control which AI models are adopted within the enterprise, and within each software solution. This avoids adopting too many different automation tools and technologies at once and helps select those most appropriate for the organization's needs. KPMG professionals' global approach relies on the Trusted AI framework concept, proposing an AI governance framework built on 10 pillars, one of which is security. Within this context, the main challenges in AI security range from the implementation of this control framework to the development of protection mechanisms for the lifecycle of these systems, their secure integration with architecture and data, and the review of third-party AI components.

      Step 2

      AI should not be adopted in silos by different parts of the organization, as this often creates duplication and complexity. Rather than reinventing the wheel, cybersecurity professionals should work with enterprise teams to adopt an enterprise AI framework. By integrating the various cybersecurity solutions into enterprise solutions using a common AI 'engine', all conversational prompts can be coordinated through one platform, helping reduce or even eliminate 'prompt hopping'. And, as agentic AI matures, automated AI agents can start to connect with software-specific agents to achieve the desired outcomes. All of this gives a clearer, real-time, firm-wide view of threats as they move within the organization, helping to drive rapid containment.
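The single-platform idea in Step 2 can be sketched as a prompt gateway: every conversational prompt enters through one component, which routes it to the appropriate tool-specific agent instead of the analyst hopping between separate chatbots. The class and handler names below are purely illustrative assumptions, not any vendor's API.

```python
class PromptGateway:
    """One entry point for all conversational prompts across security tools."""

    def __init__(self):
        self._handlers = {}

    def register(self, topic, handler):
        """Attach a tool-specific agent (e.g. a SIEM or IAM assistant)."""
        self._handlers[topic] = handler

    def route(self, topic, prompt):
        """Dispatch a prompt to the registered agent for its topic."""
        handler = self._handlers.get(topic)
        if handler is None:
            return f"no handler registered for '{topic}'"
        return handler(prompt)

# Hypothetical agents stand in for real tool integrations.
gateway = PromptGateway()
gateway.register("siem", lambda p: f"SIEM agent handling: {p}")
gateway.register("iam", lambda p: f"IAM agent handling: {p}")

answer = gateway.route("siem", "show failed logins in the last hour")
```

Because every prompt passes through one place, the gateway is also a natural point for firm-wide logging and policy enforcement, supporting the real-time, organization-wide view of threats described above.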

      Step 3

      Create guardrails within the AI framework to help protect data against unintentional exposure. By using enterprise AI, organizations can develop customized models based on their own proprietary data, rather than relying on the public data that feeds most large language models. Such customized models consume significantly less processing power and give more reliable responses, because they integrate private data sets and internal technical solutions.
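One common guardrail of the kind Step 3 describes is a redaction filter that scrubs sensitive values from a prompt before it is forwarded to any external or shared AI model. The sketch below uses two illustrative patterns (email addresses and payment-card-like numbers); a real deployment would derive its patterns from the organization's own data-classification rules.

```python
import re

# Illustrative patterns only; real rules come from the organization's
# data-classification policy.
SENSITIVE_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "CARD": re.compile(r"\b\d(?:[ -]?\d){12,15}\b"),  # 13-16 digit sequences
}

def redact_prompt(prompt: str) -> str:
    """Replace sensitive matches with placeholder tags before the prompt
    leaves the enterprise boundary."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

safe = redact_prompt("Refund card 4111 1111 1111 1111 for jane.doe@example.com")
```

Placing this filter at the boundary between users and models means the guardrail applies uniformly, regardless of which chatbot or agent originated the prompt.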


      Great power comes with great responsibility

      AI represents an unprecedented opportunity to transform business processes and strengthen cybersecurity capabilities. But at the same time, it introduces complex risks and new attack surfaces that cannot be ignored. This duality requires organizations to act responsibly, with proper governance frameworks, technical controls, continuous training, and continuous evaluation of AI components.

      If organizations can align AI-driven security initiatives with broader business, risk and security strategies, they can unlock the immense potential of AI in cyber security. This can help them create a secure environment, building trust and fostering innovation.

      1 'The time to transform is now: KPMG Security Operations Center Survey 2024', KPMG in the US, 2024.

      2 'The time to transform is now: KPMG Security Operations Center Survey 2024', KPMG in the US, 2024.


      Contact us

      Javier Aznar

      AI Security Global Lead, Cyber Technology Risk Partner

      KPMG in Spain