Deploying Gen AI should not be radically different from implementing standard software tools. Much like other technologies, it carries risks that businesses must carefully evaluate and mitigate. The upcoming ISO/IEC 42005 standard on AI system impact assessment offers useful guidance on evaluating the potential impact of AI systems on the organisation and its stakeholders.
Furthermore, organisations must decide the degree of human oversight required in Gen AI use cases. Singapore’s Model AI Governance Framework provides a useful structure by categorising oversight into three levels: human-in-the-loop, human-out-of-the-loop and human-over-the-loop. Determining which to use is a matter of balance: outcomes with a major impact may warrant more involved human oversight, even at the cost of faster straight-through decision-making. The choice should be made by cross-functional teams that assess the risks and recommend appropriate controls.
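As a rough illustration of how such a tiering decision might be operationalised, the Python sketch below maps an assessed impact score and the reversibility of outcomes to one of the framework’s three oversight levels. The scoring inputs, thresholds and function name are illustrative assumptions, not part of the framework or any particular organisation’s process; a cross-functional risk team would define and document its own criteria.

```python
from enum import Enum

class Oversight(Enum):
    HUMAN_IN_THE_LOOP = "human-in-the-loop"          # a person approves each decision
    HUMAN_OVER_THE_LOOP = "human-over-the-loop"      # a person monitors and can intervene
    HUMAN_OUT_OF_THE_LOOP = "human-out-of-the-loop"  # fully automated, reviewed after the fact

def recommend_oversight(impact_severity: int, reversibility: int) -> Oversight:
    """Map a use case's assessed impact (1-5) and how easily errors can be
    undone (1-5, higher = easier to reverse) to an oversight level.
    The thresholds below are illustrative placeholders only."""
    if impact_severity >= 4 or reversibility <= 2:
        # High-stakes or hard-to-reverse outcomes: keep a human approving each decision.
        return Oversight.HUMAN_IN_THE_LOOP
    if impact_severity >= 2:
        # Moderate impact: allow straight-through processing with active monitoring.
        return Oversight.HUMAN_OVER_THE_LOOP
    # Low impact, easily reversed: full automation with periodic review.
    return Oversight.HUMAN_OUT_OF_THE_LOOP

# Example: a customer-facing credit decision (high impact, hard to reverse)
print(recommend_oversight(impact_severity=5, reversibility=1).value)
# -> human-in-the-loop
```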
Looking ahead, the emergence of Agentic AI has the potential to transform operations even further. When embedded in businesses, Agentic AI can move beyond content generation to reasoning and autonomous decision-making. This demands heightened governance over its influence on business processes, including ensuring resilience in multi-agent environments and equipping organisations to investigate and respond to incidents effectively.
As with today’s Gen AI, the key to success lies in a consistent, risk-based approach to deployment combined with robust cybersecurity. By balancing innovation with caution, organisations can harness Gen AI’s potential while minimising exposure to its risks.