AI brings new tools and new risks
Cyber threat actors are increasingly using AI to target and tailor their attacks and to search for new vulnerabilities. They're producing sophisticated deepfakes of images, video and voice, and using them to deceive people into assisting their attacks. For example, users have been targeted with convincing AI-generated emails and voice calls in attempts to compromise their email accounts, and bank call centres have been targeted in attempts to obtain customer information. These types of attacks are expected to increase considerably, so management will need to stay current on the latest attack techniques and on the many tools and services available to detect and mitigate them.
Sophisticated tools are needed to combat these attacks. While cyber threat actors are increasingly using AI, organizations are also applying AI to cyber defence. AI can sift through massive data sets in real time, derive actionable insights and be trained to take automatic defensive actions. It's being used to improve incident detection, assess vulnerabilities, manage access and evaluate third-party risks. However, AI comes with its own set of risks and opens a new avenue of attack for threat actors. Audit committees must ensure their organizations are using AI safely and securely and are mitigating the privacy, reputational, regulatory and cyber security risks it introduces.
In our 2024 CEO Outlook, 80 per cent of Canadian CEOs agreed that building a cybersecurity-focused culture is central to how they integrate AI into their organization.1
AI must be specifically designed for the cyber security task being performed, and only high-quality data should be used to train the models. Robust data integrity and privacy protocols must be in place, and access to the data and algorithms must be controlled. Audit committees should question management on how they're addressing the unauthorized and ungoverned use of AI by individuals in the workplace, and how they're tracking and complying with the myriad evolving regulations governing AI. To develop secure AI applications, organizations will need to upskill or outsource — and so will audit committees tasked with ensuring management has appropriately evaluated AI security.