Artificial intelligence (AI) is rapidly becoming a core enabler across organizations, transforming business models, operations, and decision-making. From virtual assistants and autonomous agents to predictive systems and optimization models, AI is reshaping core business processes in areas such as operations, customer service, HR, and marketing.
A recent study led by the University of Melbourne in collaboration with KPMG reveals that AI adoption is on the rise, but trust remains a critical challenge, reflecting the tension between its benefits and its risks.
However, this transformation brings new risks. The rapid adoption of AI models, especially autonomous agents that make decisions or interact with other systems, introduces new attack vectors. A poorly configured agent, or one trained on biased data, can expose systems to threats or make decisions that violate the organization's security policies.
AI is also amplifying traditional threats and vulnerabilities, making them less predictable and faster-growing. This situation raises a new challenge built on a fundamental duality: how to secure AI systems while also applying AI to specific fields, such as cybersecurity itself. As highlighted in the KPMG Cybersecurity considerations 2025 report, the foundational principles of cybersecurity, such as embedding trust, and the benefits of integrating AI into cyber and privacy functions, are more critical than ever.
Successfully navigating this duality, leveraging AI to protect while protecting against the risks of AI, is expected to be central to modern cybersecurity strategies.