      What's the issue

      When KPMG Belgium's Board Leadership Center first published this article in October 2023, Generative AI had just entered the mainstream. ChatGPT had been public for less than a year. The question boards were asking - what does this mean for us? - felt urgent but still abstract.

      Two-and-a-half years on, it is no longer abstract. AI is embedded in operations, hiring processes, financial modelling, and customer interactions across Belgian industry, often without explicit board approval. Agentic AI systems, which plan and act with growing autonomy rather than simply responding to prompts, are entering enterprise environments in financial services and large industrials. The governance implications are vastly different from earlier AI tools.

      The more important shift is not in the technology as such, but in what is expected of boards themselves.

      In 2023, it was reasonable to say that boards did not need deep technical knowledge - just enough to ask the right questions. That bar has moved. Boards are now expected to exercise genuinely informed oversight: to distinguish credible AI strategy from reassuring-sounding plans, to challenge risk frameworks with substance, and to ensure that governance keeps pace with what is being deployed. Several governance codes are beginning to make this expectation explicit. Passive oversight is no longer adequate.

      To support boards in meeting this standard, KPMG developed - in collaboration with INSEAD's Corporate Governance Centre, and with input from over 25 experienced board members across Europe, the Americas, and Asia - a set of five AI Governance Principles for Boards. These principles structure the questions and actions in this article.

Regulation and frameworks

      In 2023, the EU AI Act was still being negotiated. It is now law, with obligations phasing in through 2027:

• February 2025: Bans on unacceptable-risk applications took effect, including social scoring and certain forms of real-time biometric surveillance.
      • August 2025: Rules for general-purpose AI models became applicable. Organizations deploying or fine-tuning foundation models need to understand their obligations, though implementing guidance from the European AI Office is still being finalized.
      • August 2026: The main body of the regulation, including requirements for high-risk AI systems, becomes fully applicable.
      • August 2027: Rules for AI embedded in regulated products (medical devices, machinery) take full effect.

      For most Belgian boards, the practical question is whether the organization deploys - or plans to deploy - high-risk AI as defined in Annex III: systems used in HR and recruitment, credit assessment, critical infrastructure, or education, among others. High-risk systems carry substantial conformity, transparency, and human oversight obligations. A proposed AI Liability Directive was formally withdrawn by the European Commission in early 2025 after failing to reach agreement; AI-related liability exposure remains governed by existing national frameworks for now.

      On standards, the picture is clearer than it was. ISO/IEC 42001 now provides a certification-ready AI management system framework, analogous to ISO 27001 for information security. The NIST AI Risk Management Framework offers a practical, non-prescriptive alternative many organizations are adopting regardless of regulatory requirement. Boards do not need to master these in detail, but they should expect management to explain which framework underpins their governance approach, and why.

      Boardroom questions

The questions below follow the five KPMG-INSEAD AI Governance Principles. They are intended to sharpen boardroom dialogue, not to cover every scenario.

      1. Strategic oversight - long-term value creation
      • Is management's AI ambition genuinely strategic - or is it primarily a cost-reduction program with an AI label on it? What does our competitive position look like in five years if a new entrant uses AI more boldly than we do?
      • Are we making the foundational investments - in data, infrastructure and talent - that real AI capability requires? Or are we chasing near-term results in ways that create dependency and fragility?
      • How does management assess AI maturity in acquisition targets? And could our own AI practices withstand the scrutiny of a potential acquirer or investor?
      2. Technology and security oversight
      • Which capabilities are we building, which are we buying, and which are we handing to third-party platforms? What are the lock-in risks, and what happens if a key provider fails, pivots, or falls foul of regulators?
      • Are we deploying agentic AI anywhere in the organization? If so, what can these systems do without human approval, and on what basis was that line drawn?
      • Are our business continuity plans updated for AI-specific threats: data poisoning, deepfakes, model manipulation, unauthorized employee use?
      3. Workforce transformation and human accountability
      • For which decisions are human oversight non-negotiable - and is that enforced in practice, not just written into policy?
      • Where AI supports high-impact decisions - in HR, credit, operational risk - can management explain those decisions to the people they affect? Is explainability built into deployment from the start, or added later?
• What is the honest picture on workforce impact: not retraining budgets, but which roles change or disappear, and when? Is the board receiving reporting that surfaces these problems early?
      4. Building trustworthy AI
• How are AI systems monitored for bias and unintended effects after deployment, not just before go-live? Who is accountable, and how are findings escalated to the board?
      • For Belgian listed companies subject to CSRD: AI infrastructure and its energy consumption are increasingly material to Scope 3 disclosures. Is management tracking and reporting this with the same rigor as other ESG factors?
      • Is the company's AI policy - and the board's oversight of it - communicated clearly to stakeholders? Is that communication something the board would stand behind publicly?
      5. The board itself

      This is where the most has changed since 2023 and where Belgian boards tend to have the most ground to cover:

      • Does the board have enough AI knowledge to push back on management with substance, not just ask process questions? If not, what is the specific plan to build it, and by when?
      • Which committee owns AI risk? Is that clear in the mandate? How does it connect to audit, remuneration and strategy - given that AI cuts across all three?
      • Does the board have a policy on its own use of AI, for processing board papers, scenario analysis, and information gathering? Are the confidentiality implications understood?
      • What is the board's personal exposure if AI-related harm materializes? Governance frameworks are increasingly attaching director accountability to inadequate oversight of high-risk systems. Is the current level of oversight adequate, or does it only look adequate from the inside?

Boardroom actions

      The actions boards need to take have not changed dramatically since 2023. What has changed is the standard to which they need to be done.

      Invest in real fluency

      A one-day seminar followed by a quarterly AI update is no longer sufficient. Boards that have not invested seriously in structured AI education since 2023 are behind, and the gap is widening. Effective programs include hands-on engagement with AI tools and scenario-based exercises built around the specific governance dilemmas the company faces. The KPMG-INSEAD AI Governance Principles provide a practical framework around which board education can be structured. KPMG Belgium's Board Leadership Center can support this.

      Push management beyond efficiency

      Many management teams still present AI in terms of cost savings and productivity gains. That is a starting point, not a strategy. Boards should press for the harder question: what does this organization look like in five years if we get AI right - and what does it look like if a competitor gets there first? The answer should drive investment decisions, not the other way around.

      Test governance in practice, not on paper

Ask management to show the board how AI governance actually works, not the policy document, but the process. Who approves a new use case? Who monitors it after deployment? What happened the last time something went wrong? For high-risk systems under the EU AI Act, can management demonstrate readiness? If the answers are thin, that is a finding in itself, and one the board should act on.

      Update structures and keep watching

Check whether committee mandates still reflect AI's current risk profile. Review whether board composition gives sufficient weight to technology expertise, not at CTO level, but enough for genuinely informed challenge. And set a small number of specific, board-owned indicators for AI progress and risk that the board returns to at every cycle: not a management dashboard, but questions the board itself tracks over time.



      About the Board Leadership Center

      KPMG's Board Leadership Center (BLC) offers non-executive and executive board members - and those working closely with them - a community of board-level peers and a program of insights, seminars and Board Academy sessions on the issues shaping governance today. 



      Contact our Board and AI experts

      Olivier Macq

      Partner, Chairman Board Leadership Center | Audit

      KPMG in Belgium

      Peter Van den Spiegel

      Partner, Head of Lighthouse | Advisory

      KPMG in Belgium

