When it comes to challenges, perhaps the greatest is that AI is evolving so quickly that risk management must be treated as a moving target. This puts organisations in a quandary: adopt AI too slowly and they fall behind their competitors; press ahead too fast and they risk ethical, legal and operational pitfalls.
Striking that balance is tricky, and the challenge applies not just to business behemoths but to firms of every size and industry, where deploying AI in core business operations is becoming routine. How, then, can organisations manage the risks better without slowing down innovation or being overly prescriptive?
This is where standardisation efforts such as ISO/IEC 42001:2023 come in. The standard provides guidance for organisations to establish, implement, maintain and continually improve an Artificial Intelligence Management System (AIMS). Developed by the ISO/IEC JTC 1/SC 42 subcommittee for AI standards, which has 45 participating member nations, it represents a global consensus and gives organisations a structured approach to managing the risks of deploying AI.
Rather than being tightly coupled to a specific technology implementation, the guidance emphasises setting a strong “tone from the top” and running a continuous risk assessment and improvement process, aligned with the Plan-Do-Check-Act model to foster iterative, long-term risk management rather than one-time compliance. It gives organisations a framework for building the risk management components they need, taking into account the scale and complexity of their implementations.
ISO/IEC 42001:2023 is also a certifiable standard, so conformance can be independently verified. Organisations can become formally certified (as KPMG Australia did in 2024, a world first for a professional services firm) or simply adhere to it as best practice. Either way, they can demonstrate to stakeholders their continued efforts to manage the risks associated with adopting or developing AI solutions.