When KPMG Belgium's Board Leadership Center first published this article in October 2023, generative AI had just entered the mainstream. ChatGPT had been public for less than a year. The question boards were asking ("what does this mean for us?") felt urgent but still abstract.
Two-and-a-half years on, it is no longer abstract. AI is embedded in operations, hiring processes, financial modelling, and customer interactions across Belgian industry, often without explicit board approval. Agentic AI systems, which plan and act with growing autonomy rather than simply responding to prompts, are entering enterprise environments in financial services and large industrials. The governance implications are vastly different from earlier AI tools.
The more important shift is not in the technology as such, but in what is expected of boards themselves.
In 2023, it was reasonable to say that boards did not need deep technical knowledge, only enough to ask the right questions. That bar has moved. Boards are now expected to exercise genuinely informed oversight: to distinguish credible AI strategy from reassuring-sounding plans, to challenge risk frameworks with substance, and to ensure that governance keeps pace with what is actually being deployed. Several governance codes are beginning to make this expectation explicit. Passive oversight is no longer adequate.