We have been hearing a lot of late about the new security risks created by AI and GenAI. That is quite understandable, but while the technology certainly comes with risks, it doesn’t in itself constitute a new category of security risk. It alters existing risks, and the fundamentals of how we approach those risks haven’t really changed. As with so much else in the cybersecurity realm, humans remain the best firewalls, say Dani Michaux and Jackie Hennessy of KPMG.
When considering AI security, the first step is to understand what AI is, where and how it is being used in the organisation, and for what purpose. If we can’t answer these questions, we can’t begin to secure it.
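One way to make that discovery step concrete is to think of it as populating a simple register of AI use cases. The sketch below is purely illustrative, not a KPMG methodology; the field names and the example entry are assumptions chosen to show the kinds of questions such a register answers.

```python
from dataclasses import dataclass, field

@dataclass
class AIUseCase:
    """One entry in a hypothetical register of AI use across the organisation."""
    name: str                # e.g. "customer-support chatbot"
    owner: str               # accountable business owner
    purpose: str             # what the system is actually used for
    data_categories: list[str] = field(default_factory=list)  # data it touches
    externally_hosted: bool = False  # third-party GenAI service or in-house?

# The register is simply a list of such entries. If an organisation cannot
# fill one in, it cannot begin to secure its AI estate.
register = [
    AIUseCase(
        name="customer-support chatbot",
        owner="Head of Customer Service",
        purpose="answer routine account queries",
        data_categories=["customer PII"],
        externally_hosted=True,
    ),
]
```

The value is not in the code itself but in forcing each use case to name an owner, a purpose, and the data involved before any control is chosen.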
Unfortunately, as things stand there is no common understanding or definition of what AI is. That makes it difficult to put the necessary controls and governance in place.
Added to these gaps in understanding is the difficulty of defining exactly what constitutes an AI data breach.
That brings us back to the first principles of security. Guardrails proportionate to the technology and its use must be put in place. And that begins with purpose.
The nature of the security measures will change according to what the AI is being used for. A useful analogy is a car. Not all cars are the same: sports cars are very different to family saloons, and racing drivers are not the same as everyday motorists. The purpose changes the nature of the vehicle, and that dictates the safety controls and measures to be applied.
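One way to picture how purpose dictates controls is a simple mapping from a use case’s risk tier to a baseline set of guardrails. The tiers and controls below are illustrative assumptions only, not a prescribed framework; any real scheme would be tailored to the organisation.

```python
# Illustrative only: these tiers and controls are assumptions, not a standard.
CONTROLS_BY_TIER = {
    "low":    ["usage policy", "basic logging"],
    "medium": ["usage policy", "basic logging", "human review of outputs"],
    "high":   ["usage policy", "full audit logging", "human review of outputs",
               "access controls", "pre-deployment security testing"],
}

def controls_for(tier: str) -> list[str]:
    """Return the baseline guardrails for a given risk tier."""
    # Unknown tiers default to the strictest set, on the safe side.
    return CONTROLS_BY_TIER.get(tier, CONTROLS_BY_TIER["high"])

print(controls_for("medium"))
# ['usage policy', 'basic logging', 'human review of outputs']
```

The point of the sketch is proportionality: the family saloon and the racing car sit in different tiers, and the guardrails scale accordingly.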