Artificial intelligence is changing organisational risk profiles faster than leadership behaviour can adapt. KPMG’s Dani Michaux explains how to build cyber resilience, and why leaders need to treat AI as a people risk, putting clear governance around how it is used across the organisation.
The cyber issue leaders are facing right now
Over the past 12 months, AI has rapidly become embedded in everyday productivity tools, putting experimentation at the fingertips of most team members - often quietly, and often without explicit leadership direction. As a result, many organisations now face a growing gap between how AI is actually being used and how leaders believe risk is being managed.
As that gap widens, many organisations are also rethinking governance, oversight and adoption choices through AI consulting alongside cyber risk management.
That gap matters, because while AI is often discussed as a technology issue, the most significant risks emerging today are not purely technical. They are behavioural, cultural and organisational. And they are already affecting organisations of all sizes.
What is shadow AI?
And why does it matter for cyber resilience?
AI is no longer sitting at the edges of the organisation. It is writing code, analysing data, supporting decisions and automating activities that were previously human-led. In many cases, it has become part of “how work gets done” before organisations have fully agreed how it should be used, challenged, or governed.
Shadow AI and blind spots
This spread of unsanctioned AI use (sometimes called shadow AI) creates blind spots that are difficult to detect until risk has already materialised.
Many organisations are responding by strengthening oversight through cybersecurity consulting that addresses AI-enabled threats alongside existing vulnerabilities.
This changes the nature of risk. Exposure is no longer confined to system failures or cyber vulnerabilities. Increasingly, it sits in how people trust AI outputs: over-reliance can quietly replace critical thinking, and fundamental reasoning may be applied less rigorously than before. You may want to ask: where could this be happening today, without anyone in your organisation intentionally taking a risk?
Third-party and supply chain exposure
Crucially, this is not just a large corporate problem. Smaller and midsized organisations are often deeply embedded in supply chains and ecosystems. Their perceived size does not reflect their actual importance - or the impact if something goes wrong.
Third-party vendors and partners can introduce additional exposure, and the AI tools they use may extend risk into the organisation in ways that are not always visible.
In cases where organisations are responding well, they tend to engage early - identifying where AI is already in use and addressing cyber risk as it emerges, rather than waiting for an incident to force action.
Why AI risk is now an enterprise-wide resilience issue
AI has become an enterprise-wide resilience issue. Recent findings from KPMG’s Global Tech Report underline this shift: Irish leaders identify cyberattacks as the leading AI-related risk today, with concern expected to intensify further over the next two years.
How can leaders build and sustain cyber resilience?
Leaders need to get comfortable with an uncomfortable reality: at its core, this is about how people think, decide and act - not just about controls or compliance. A clear AI governance framework helps ensure that accountability is defined, human oversight is maintained, and risk does not accumulate silently across the organisation.
Embedding governance risk and compliance practices can reinforce that framework and keep accountability visible as AI use expands.
How to build cyber resilience
The actions leaders need to take are practical, not transformational. They are about clarity, consistency and intent.
What are the best leadership practices for cyber resilience?
Organisations getting this right tend to:
- Set a clear leadership tone on acceptable AI use.
- Combine technical controls with behavioural reinforcement.
- Treat resilience as something built into everyday decisions, rather than a separate initiative.
- Avoid over-reliance on tools as a substitute for judgement.
In practice, that also means putting stronger data protection services around how sensitive information is accessed, processed and retained in AI-supported workflows.
AI supports decision-making, but it does not replace accountability. Explainability is part of this - leaders should be able to articulate how AI-supported decisions are reached and where human judgement remains the deciding factor. Resilience becomes part of how the organisation operates, not something bolted on after the fact.
AI threats to watch in the coming months
Several trends will intensify pressure on leaders over the coming months. Bad actors are becoming more sophisticated in their use of AI. Regulatory and stakeholder scrutiny is increasing across jurisdictions. In that environment, timely regulatory advisory can help organisations interpret new expectations and translate them into workable controls.
Expectations are rising that organisations can explain how AI-supported decisions are made.
At the same time, waiting for perfect clarity is becoming a risk in itself. Exposure continues to grow whether leaders act or not.
Building cyber resilience through clear leadership
AI-related risk is already here. The real question is how deliberately it is managed. You don’t need perfect answers, but you do need to set a clear direction for your team.
Acting now will make you more resilient - not just more compliant. This is less about future strategy and more about leadership choices being made today.
For organisations formalising that response, KPMG in Ireland's AI consulting team can help shape governance, controls and safe adoption across the business.