Deepfakes – synthetic media in which one person’s likeness is swapped for another’s – are not a new concept. Photographic and film manipulation, telephone voice changers, and image retouching have been with us for some time. However, the rise of generative artificial intelligence (GenAI) has changed the game, making tools available at little or no cost and enabling almost anyone with a smartphone and a basic understanding of the technology to create AI-generated synthetic media. In the hands of bad actors, these technologies can increasingly be used to create deepfakes for fraud, deception, and defamation.
Chief Executive Officers (CEOs), General Counsel, and the heads of Information Security, Fraud, Risk, and Marketing all have good reason to be concerned about the rapid spread of deepfake technology. In this article, KPMG professionals examine some of the potential threats posed by deepfakes and discuss how organizations can shore up their defenses to help reduce the risks to their business.
Child’s play or malicious weapon? The democratization of deepfake technology
Virtually anyone can make a deepfake, enabled by the accelerating democratization of GenAI, driven by open-source tools, user-friendly platforms, and decentralized access to deepfake technology. This makes it simple and low-cost (or potentially even free) to create audio, video, and text via advanced GenAI techniques, mostly based on large language models (LLMs) that consume and classify content and reproduce it in new ways. It’s already possible to go online and learn how to make a convincing deepfake from a mere three seconds of recorded audio of someone’s voice, using off-the-shelf, publicly available software. On top of this, "deepfake-as-a-service" is emerging as a lucrative market on the dark web.1
Attack surfaces are increasing, in part due to the hybrid work environment where many people are connecting with organizations remotely from homes, coffee shops, airports, gyms and other locations. The use of biometric data for authentication and authorization presents further opportunities to infiltrate organizations, via synthetic voices and images. According to one recent global study, 70 percent of people say they’re not confident they can identify a real versus a cloned voice.2
This gets to the essence of the deepfake threat: the human propensity for trust. As a visual and communicative species, people often authenticate and trust based on what they can see and hear. But the knowledge that deepfakes are out there could seriously erode that trust and necessitate new ways to verify authenticity.
The deepfake threat spectrum
Criminals and other malicious actors can use deepfakes in a number of potentially damaging ways – amplifying the costs of fraud, regulatory fines, and data breaches, and eroding trust in brand integrity.
Five practical steps which can help protect against deepfakes
Deepfakes are a major concern and organizations should take appropriate steps to protect themselves. But rather than reinvent the wheel, chief information security officers (CISOs) and chief risk officers (CROs) should integrate this risk into their organization’s cybersecurity strategy by understanding the threat, its likelihood and impact, and establishing preventive measures and defenses.
To better understand their exposure to deepfake attacks, companies should undertake a broad-ranging ongoing susceptibility assessment of their processes. This involves identifying processes that rely on the ingestion of media (such as automated insurance claims), or picture/video/voice for authorization, and determining the potential impacts of a deepfake attack. Armed with this knowledge, they can then design processes to evaluate these media – either in real time or after a deepfake attack.
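To illustrate how such an assessment might start, the sketch below scores a handful of media-ingesting processes by how heavily they rely on audio, video or images and by the impact of a successful spoof. The process names, scales, and scoring formula are purely illustrative assumptions, not a prescribed methodology.

```python
# Minimal sketch of a deepfake susceptibility inventory.
# Process names, the 1-5 scales and the scoring formula are illustrative
# assumptions, not a prescribed methodology.

from dataclasses import dataclass

@dataclass
class Process:
    name: str
    media_reliance: int      # 1-5: how much the process trusts audio/video/images
    impact_if_spoofed: int   # 1-5: financial/reputational impact of a successful deepfake

    @property
    def risk_score(self) -> int:
        # Simple product; a real assessment would also weight likelihood and existing controls.
        return self.media_reliance * self.impact_if_spoofed

processes = [
    Process("Automated insurance claims (photo ingestion)", 5, 4),
    Process("Voice-based helpdesk password reset", 5, 5),
    Process("Video onboarding / identity checks", 4, 5),
    Process("Email-only expense approvals", 1, 2),
]

# Review the highest-risk processes first when designing evaluation controls.
for p in sorted(processes, key=lambda x: x.risk_score, reverse=True):
    print(f"{p.risk_score:>2}  {p.name}")
```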
Regular audits of digital assets can spot potential misuse – in the same way that companies monitor use of trademarks and patents. However, the pace of deepfake developments and attacks can make it hard to keep up with threats on external-facing platforms, social media and the dark web. Companies should consider working with service providers that specialize in deepfake research and can more effectively monitor fraudulent content. Audits and monitoring should extend to third parties and the supply chain as part of the organization’s vendor risk management. Disinformation is harder to track, as there are so many potential sources of attacks, but, again, it’s important to keep a pulse on activity that could prove damaging.
GenAI may be enabling bad actors, but it is also a vital tool in the quest to detect deepfakes. Over time, organizations could reduce their reliance on human recognition as deepfake analysis and detection platforms become more common and processes and architectures are redesigned to incorporate them. There is a growing range of technology options, such as predictive algorithms and anomaly detection, to pre-empt deepfake-related attacks. These technologies should support a better defense – rather than just identifying breaches. Adversarial machine learning can train models not only to detect deepfakes, but also to better understand possible attacks, pinpointing potential vulnerabilities. Companies can expect to see more collaborative innovations in deepfake detection. To be truly proactive, these detection platforms should be integrated into organizational processes that ingest media, so that deepfakes can be detected before they have any impact.

Strengthening identity and access security and controls can make it more difficult for deepfakes to penetrate organizations. Emerging protocols include multi-factor authentication (MFA), out-of-band authentication (secondary verification through a separate communication channel), and behavioral biometrics (identifying people by how they behave). There are also AI-driven solutions that look for potential anomalies in identity and access management.
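To make the out-of-band idea concrete, a minimal sketch follows: a high-risk request received on one channel (for instance, a video call) is only approved once a one-time code, sent over a separate pre-registered channel, is read back and confirmed. The function names and the delivery channel are assumptions for illustration only.

```python
# Minimal sketch of out-of-band verification for a high-risk request.
# The send_code callback stands in for whatever secondary channel the
# organization actually uses (SMS, push notification to a registered device, etc.).

import hmac
import secrets

def start_out_of_band_check(send_code) -> str:
    """Generate a one-time code and deliver it over a separate channel."""
    code = f"{secrets.randbelow(1_000_000):06d}"   # 6-digit one-time code
    send_code(code)                                # delivered out of band
    return code

def confirm(expected_code: str, supplied_code: str) -> bool:
    """Constant-time comparison of the code the requester reads back."""
    return hmac.compare_digest(expected_code, supplied_code)

# Example: a 'CEO' on a video call asks for an urgent payment.
delivered = {}
expected = start_out_of_band_check(lambda c: delivered.update(code=c))
print(confirm(expected, delivered["code"]))   # True: requester controls the registered device
print(confirm(expected, "000000"))            # Almost certainly False: an impostor guessing
```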
Digital watermarks act as markers in audio, video or image data and can identify copyright ownership. Liveness detection software can confirm whether a person is real and physically present in front of the camera. And there is considerable potential for blockchain, using immutable content authentication to counter manipulation.
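One simplified way to picture immutable content authentication is to register a cryptographic fingerprint of official media at publication time and check later copies against it. In the sketch below, an in-memory dictionary stands in for a tamper-evident registry (such as a ledger or blockchain-backed service); real deployments would also use robust watermarks that survive re-encoding, which a plain hash does not.

```python
# Minimal sketch of content authentication via cryptographic hashing.
# The in-memory dict is a stand-in for a tamper-evident registry.

import hashlib

registry: dict[str, str] = {}

def register(content_id: str, media_bytes: bytes) -> None:
    """Record the SHA-256 fingerprint of official media at publication time."""
    registry[content_id] = hashlib.sha256(media_bytes).hexdigest()

def is_authentic(content_id: str, media_bytes: bytes) -> bool:
    """Check a later copy against the registered fingerprint."""
    return registry.get(content_id) == hashlib.sha256(media_bytes).hexdigest()

register("ceo-results-video-q3", b"...original video bytes...")
print(is_authentic("ceo-results-video-q3", b"...original video bytes..."))      # True
print(is_authentic("ceo-results-video-q3", b"...manipulated video bytes..."))   # False
```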
All of this should be part of a zero-trust architecture model based on three key principles: assume nothing, check everything and limit access.
Recognizing deepfakes is going to get harder over time, but people are likely to remain the front-line defense for the time being. It is therefore essential to provide regular, scenario-based training for employees, leadership, the board, suppliers and, where possible, customers, to enable them to recognize and respond to deepfake-related threats. Any suspected deepfake content should be reported to IT, with users informed via alternate communication channels.
Deepfake attack simulations can be incorporated into security testing, including annual red teaming – where security professionals carry out a realistic simulated attack on a target network – and periodic penetration testing by authorized third parties attempting to hack into the system.
Companies should also publish their content, communications and media policies publicly. These state what kind of content can be shared, in what format, and through which communication channels (internal and external).
Given how new deepfake technology is, how fast it is developing, and how readily criminals are using it to perpetrate fraud and other harms, national and global regulations are only gradually emerging. It’s vital to continually monitor regulatory developments and integrate them into national and international operations.
There should be clear guidelines on the use of AI and associated deepfake concepts within the organization, including approved AI tools. For example, these might be used to create training videos without actors, chatbots for customers, or internal helplines. Compliance measures should be embedded into cyber risk management strategies.

Executive passcodes enable senior management to confirm that the person they’re communicating with is genuine and not a deepfake. Safety protocols should include both a ‘safe’ passcode and a ‘duress’ passcode (a covert distress signal to warn others that the speaker is being forced to do something against their will).
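As an illustration of the safe/duress passcode idea, a minimal sketch follows. Both codes appear to succeed from the caller’s point of view, but the duress code silently raises an alert; the passphrases, stored hashes and alerting hook are hypothetical examples.

```python
# Minimal sketch of a safe / duress passcode check.
# Both codes 'succeed' from the caller's perspective, but the duress code
# quietly triggers an alert. Passphrases and the alert hook are illustrative.

import hashlib
import hmac

def _digest(code: str) -> str:
    return hashlib.sha256(code.encode()).hexdigest()

SAFE_HASH = _digest("emerald-falcon")    # agreed 'all clear' phrase (example only)
DURESS_HASH = _digest("silver-harbour")  # covert distress phrase (example only)

def check_passcode(spoken_code: str, alert_security) -> bool:
    """Return True if the code is recognized; alert silently when the duress code is used."""
    digest = _digest(spoken_code)
    if hmac.compare_digest(digest, DURESS_HASH):
        alert_security()   # e.g. notify the security operations center
        return True        # do not reveal to the caller that an alarm was raised
    return hmac.compare_digest(digest, SAFE_HASH)

print(check_passcode("emerald-falcon", alert_security=lambda: None))                  # True, no alert
print(check_passcode("silver-harbour", alert_security=lambda: print("SOC alerted")))  # True, silent alert
```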
Deepfake prevention as an integral part of cybersecurity
Deepfakes may be growing in sophistication and appear to be a daunting threat. However, by integrating deepfakes into the company’s cybersecurity and risk management, CISOs – with the assistance of General Counsel, the CEO, and CROs – can help their companies stay one step ahead of malicious actors. This calls for a broad understanding across the organization of the risks of deepfakes, and an appropriate budget to combat the threat. A combination of detection technology and processes, a cross-functional approach (involving the CISO’s team, Legal, PR and other functions), and well-informed employees should enable cybersecurity professionals to spot potential and actual attacks and act fast to limit the damage.
Remember, the same technology that is being used to infiltrate an organization can also protect it. Collaborating with deepfake cybersecurity specialists helps spread knowledge and continually test and improve controls and defenses, to avoid fraud, data loss and reputational damage.
How KPMG can help
Rooted in our Trusted AI framework, KPMG firms can support organizations with tailored technology solutions designed to combat deepfake challenges, leveraging our securing AI framework and experience in risk-related transformation and risk intelligence. We adopt a holistic, zero-trust approach to cybersecurity in engagements, treating all identities, devices, networks, and data as untrusted. This means granting users least-privilege access; applying rigorous identification, authentication and verification; and maintaining continuous monitoring. KPMG firms offer services such as susceptibility studies, attack simulations, process re-engineering, detection platform implementation, and culture, training and awareness programs, amongst others.