Deepfake threats to companies

Five practical steps to address a fast-growing phenomenon

Deepfakes – synthetic media in which one person’s likeness is swapped for another’s – are not a new concept. Photographic and film manipulation, telephone voice changers, and image retouching have been with us for some time. However, the rise of generative artificial intelligence (GenAI) has dramatically lowered the barrier to entry, putting low-cost or free tools in the hands of almost anyone with a smartphone and a basic grasp of the technology. In the hands of bad actors, these tools can increasingly be used to create deepfakes for fraud, deception, and defamation.

Chief Executive Officers (CEOs), General Counsel, and the heads of Information Security, Fraud, Risk, and Marketing all have good reason to be concerned about the rapid spread of deepfake technology. In this article, KPMG professionals examine some of the potential threats posed by deepfakes and discuss how organizations can shore up their defenses to help reduce the risks to their business.

"The rise of deepfakes exploits our natural tendency to trust visual and auditory content, posing significant risks. It's critical for organizations to enforce robust Trusted AI programs to help ensure safe and ethical AI use and secure deployment. Implementing a zero-trust architecture is equally essential, embedding verification across all operations with stringent controls and cutting-edge technology. This strategic approach is vital for maintaining integrity and trust in the digital age."

Bryan McGowan

Global Trusted AI Lead, KPMG International

Child’s play or malicious weapon? The democratization of deepfake technology

Virtually anyone can make a deepfake, thanks to the accelerating democratization of GenAI, driven by open-source tools, user-friendly platforms, and decentralized access to deepfake technology. Creating audio, video, and text via advanced GenAI techniques – most based on large language models (LLMs) that consume and classify content and reproduce it in new ways – can be simple and low-cost, or even free. It is already possible to go online and learn how to make a convincing deepfake from a mere three seconds of recorded audio of someone’s voice, using off-the-shelf, publicly available software. On top of this, "deepfake-as-a-service" is emerging as a lucrative market on the dark web.1

Attack surfaces are increasing, in part due to the hybrid work environment where many people are connecting with organizations remotely from homes, coffee shops, airports, gyms and other locations. The use of biometric data for authentication and authorization presents further opportunities to infiltrate organizations, via synthetic voices and images. According to one recent global study, 70 percent of people say they’re not confident they can identify a real versus a cloned voice.2

This gets to the essence of the deepfake threat: the human propensity for trust. As a visual and communicative species, people often authenticate and trust based on what they can see and hear. But the knowledge that deepfakes are out there could seriously erode that trust and necessitate new ways to verify authenticity.

The deepfake threat spectrum

Criminals or other malicious actors can use deepfakes in a number of ways that are potentially damaging, amplifying the costs of fraud, regulatory fines, and data breaches, and eroding trust in brand integrity: 

  • Fraudulent financial transactions

    Cybercriminals could use deepfakes to impersonate senior executives during phone calls or video conferences (sometimes called “vishing”), convincing others that they carry authority. They could then acquire confidential information or even persuade individuals to transfer significant funds. Insurance companies could be targeted with deepfake-generated images submitted with claims. With more companies moving toward automated claims processes, removing the human claims adjuster, such images may not come under as much scrutiny. And customers can be targeted by deepfakes posing as official company representatives, tricked into surrendering personal financial details or even making payments to criminals.

  • Disinformation

    Deepfake videos or audio recordings can spread fraudulent, false, and defamatory information about individuals and organizations, which could damage stakeholder, customer, and wider public trust. In a social media age, such content can go viral in seconds. For instance, by circulating deepfakes of executives announcing a company’s financial status, upcoming mergers, or product launches and marketing materials – or making derogatory remarks or inaccurate political statements – criminals could profit from subsequent fluctuations in share prices, as well as harming a company’s reputation. Such tactics could also be used by competitors to cause stock price volatility and deter investors – or by hostile nation states to undermine the economy. Similarly, malicious actors may try to damage a company’s reputation by spreading deepfakes alleging environmental harm, poor labor practices, faulty or dangerous products, or inappropriate behavior by executives.

  • Enhanced social engineering attacks

    By using deepfakes, bad actors can penetrate organizations by, for example, impersonating a Chief Technology Officer (CTO) to persuade staff to grant access to a core technology system – to steal confidential information or plant malware. This might be achieved through targeted “spear phishing” emails with a deepfake video attached.

  • Other deepfake risks

    Many companies are also vulnerable to extortion based on AI-fabricated incriminating content, and to brand misuse, potentially leading to legal liabilities, fines, and loss of trust and business. Remote hiring practices could open the door to criminals or under-qualified candidates using deepfakes to give synthetic identities a convincing face and voice – even going so far as to conduct interviews.

Five practical steps that can help protect against deepfakes

Deepfakes are a major concern, and organizations should take appropriate steps to protect themselves. But rather than reinvent the wheel, Chief Information Security Officers (CISOs) and Chief Risk Officers (CROs) should integrate this risk into their organizational cybersecurity strategy: understanding the threat and its likelihood and impact, and establishing preventive measures and defenses.

1. Assess susceptibility

To better understand their exposure to deepfake attacks, companies should undertake a broad-ranging, ongoing susceptibility assessment of their processes. This involves identifying processes that rely on the ingestion of media (such as automated insurance claims) or on picture/video/voice for authorization, and determining the potential impact of a deepfake attack. Armed with this knowledge, they can then design processes to evaluate these media – either in real time or after a deepfake attack.
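To make this concrete, here is a minimal sketch of such an assessment expressed in code: a simple likelihood-times-impact rubric over media-ingesting processes, with extra weight where no human remains in the loop. The process names, scales, and weighting are illustrative assumptions, not a prescribed methodology.

```python
from dataclasses import dataclass

@dataclass
class MediaProcess:
    """A business process that ingests media and could be targeted by deepfakes."""
    name: str
    media_types: list[str]  # e.g. ["voice", "video", "image"]
    likelihood: int         # 1 (rare) .. 5 (frequent): estimated attack likelihood
    impact: int             # 1 (minor) .. 5 (severe): estimated business impact
    human_review: bool      # is a person still in the loop?

    def risk_score(self) -> int:
        # Fully automated pipelines get extra weight: there is no human
        # adjuster or reviewer left to spot a fake.
        weight = 1 if self.human_review else 2
        return self.likelihood * self.impact * weight

processes = [
    MediaProcess("Automated insurance claims", ["image"], 4, 4, human_review=False),
    MediaProcess("Voice-based phone authorization", ["voice"], 3, 5, human_review=True),
    MediaProcess("Remote hiring interviews", ["video", "voice"], 3, 3, human_review=True),
]

# Rank processes so the highest-risk ones are reviewed and re-engineered first.
for p in sorted(processes, key=lambda p: p.risk_score(), reverse=True):
    print(f"{p.risk_score():>3}  {p.name}")
```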

2. Monitor and audit

Regular audits of digital assets can spot potential misuse – in the same way that companies monitor use of trademarks and patents. However, the pace of deepfake developments and attacks can make it hard to keep up with threats on external-facing platforms, social media, and the dark web. Companies should consider working with service providers that specialize in deepfake research and can monitor fraudulent content more effectively. Audits and monitoring should extend to third parties and the supply chain as part of the organization’s vendor risk management. Disinformation is harder to track, as there are so many potential sources of attack, but, again, it’s important to keep a pulse on activity that could prove damaging.

3. Deploy detection and authentication technology

GenAI may be enabling bad actors, but it is also a vital tool in the quest to detect deepfakes. Over time, organizations could reduce the need for human recognition as deepfake analysis and detection platforms become more common and processes and architectures are redesigned to incorporate them. There is a growing range of technology options, such as predictive algorithms and anomaly detection, to pre-empt deepfake-related attacks. These technologies should support a better defense, rather than just identifying breaches. Adversarial machine learning can train models not only to detect deepfakes but also to better understand possible attacks, pinpointing potential vulnerabilities. Companies can expect to see more collaborative innovations in deepfake detection. To be truly proactive, these detection platforms should be integrated into organizational processes that ingest media, so that deepfakes can be caught before they have any impact – as in the sketch after this section.

Strengthening identity and access security and controls can make it more difficult for deepfakes to penetrate organizations. Emerging protocols include multi-factor authentication (MFA), out-of-band authentication (secondary verification through a separate communication channel), and behavioral biometrics (identifying people by how they behave). There are also AI-driven solutions that look for potential anomalies in identity and access management.
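As a sketch of that integration, the routine below gates an automated claims pipeline on a detection score. The detector interface and the thresholds are assumptions standing in for whatever detection platform an organization actually deploys.

```python
from typing import Callable

ACCEPT_THRESHOLD = 0.90   # confidently authentic: continue automated processing
REJECT_THRESHOLD = 0.50   # confidently synthetic: block and alert the fraud team

def route_claim_media(media: bytes,
                      score_authenticity: Callable[[bytes], float]) -> str:
    """Route claim media based on P(authentic) returned by a detection model."""
    score = score_authenticity(media)
    if score >= ACCEPT_THRESHOLD:
        return "auto-process"      # low risk: stay on the automated path
    if score < REJECT_THRESHOLD:
        return "block-and-alert"   # likely deepfake: escalate to fraud/IT
    return "human-review"          # uncertain: reinstate a human adjuster

# Demo with a stub detector that returns an uncertain score.
print(route_claim_media(b"...image bytes...", lambda _media: 0.7))  # human-review
```

Routing uncertain media to a human, rather than forcing a binary accept/reject decision, keeps the automated process fast while reinstating scrutiny exactly where the susceptibility assessment showed it was missing.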

Digital watermarks act as markers in audio, video, or image data and can establish copyright ownership. Liveness detection software can confirm whether a person is real and physically present in front of the camera. And there is considerable potential for blockchain, using immutable content authentication to counter manipulation.
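The common thread in these approaches is verifiable provenance: record a fingerprint of the genuine content when it is published, then check circulating copies against it. Below is a minimal sketch using a plain cryptographic digest; how the digest is anchored (a signed log, or an immutable ledger such as a blockchain) is a deployment choice, and the example bytes are placeholders for a real media file.

```python
import hashlib

def fingerprint(media_bytes: bytes) -> str:
    """SHA-256 digest of a media file, recorded at publication time."""
    return hashlib.sha256(media_bytes).hexdigest()

def is_unmodified(candidate_bytes: bytes, trusted_digest: str) -> bool:
    """Re-hash a circulating copy and compare with the published digest.
    Any edit (a swapped face, an altered soundtrack) changes the digest."""
    return fingerprint(candidate_bytes) == trusted_digest

original = b"...original video bytes..."           # placeholder for a real file
digest = fingerprint(original)                     # store in a tamper-evident log
assert is_unmodified(original, digest)             # genuine copy verifies
assert not is_unmodified(original + b"x", digest)  # any alteration fails
```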

All of this should be part of a zero-trust architecture model based on three key principles: assume nothing, check everything and limit access.
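Translated into a single access decision, those three principles might look like the following sketch; the parameters and scope model are illustrative assumptions.

```python
def allow_access(identity_verified: bool, device_trusted: bool,
                 requested_scope: str, granted_scopes: set[str]) -> bool:
    # Assume nothing: no implicit trust for being "inside" the network.
    # Check everything: both the identity and the device must pass verification.
    if not (identity_verified and device_trusted):
        return False
    # Limit access: grant only scopes explicitly assigned (least privilege).
    return requested_scope in granted_scopes

# A verified user on a trusted device still cannot approve payments
# unless that scope was explicitly granted.
print(allow_access(True, True, "payments:approve", {"payments:view"}))  # False
```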

4. Train people and simulate attacks

Recognizing deepfakes is going to get harder over time, but people are likely to remain the front-line defense for the time being. That makes it essential to provide regular, scenario-based training for employees, leadership, the board, suppliers and, where possible, customers, to enable them to recognize and respond to deepfake-related threats. Any suspected deepfake content should be reported to IT, and those affected should be informed via alternative communication channels.

Deepfake attack simulations can be incorporated into security testing, including annual red teaming – where security professionals carry out a simulated attack on a target network – and periodic penetration testing by authorized third parties attempting to hack into the system.

5. Set policies and governance

Companies should also publicly publish their content, communications, and media policies. These should state what kind of content can be shared, in what format, and through which communication channels (internal and external).

Given how new and fast-moving deepfake technology is – and its growing use by criminals to perpetrate fraud and other harms – national and global regulations are only gradually emerging. It’s vital to continually monitor regulatory developments and integrate them into national and international operations.

There should be clear guidelines on the use of AI and associated deepfake techniques within the organization, including approved AI tools. For example, these might be used to create training videos without actors, chatbots for customers, or internal helplines. Compliance measures should be embedded into cyber risk management strategies. Executive passcodes enable senior management to confirm that the person they’re communicating with is genuine and not a deepfake. Safety protocols should include both a ‘safe’ passcode and a ‘duress’ passcode (a covert distress signal to warn others that the speaker is being forced to act against their will).
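A minimal sketch of how such a passcode check might work follows. The code values are placeholders; in practice, codes would be provisioned and rotated out of band, never stored in source code.

```python
import hmac

SAFE_CODE = "blue-harbor-42"   # placeholder: confirms the speaker is genuine
DURESS_CODE = "red-harbor-42"  # placeholder: sounds normal, signals coercion

def verify_passcode(spoken_code: str) -> str:
    # hmac.compare_digest avoids timing side channels when comparing secrets.
    if hmac.compare_digest(spoken_code, SAFE_CODE):
        return "verified"
    if hmac.compare_digest(spoken_code, DURESS_CODE):
        # Proceed as if verified so the coerced speaker is not endangered,
        # while security is quietly alerted through a separate channel.
        return "verified-under-duress"
    return "failed"

print(verify_passcode("blue-harbor-42"))  # verified
print(verify_passcode("red-harbor-42"))   # verified-under-duress
```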

Deepfake prevention as an integral part of cybersecurity

Deepfakes may be growing in sophistication and appear to be a daunting threat. However, by integrating deepfakes into the company’s cybersecurity and risk management, CISOs – with the assistance of the General Counsel, the CEO, and the CRO – can help their companies stay one step ahead of malicious actors. This calls for a broad understanding across the organization of the risks of deepfakes, and for an appropriate budget to combat the threat. A combination of detection technology and processes, a cross-functional approach (involving the CISO’s team, Legal, PR, and other functions), and well-informed employees should enable cybersecurity professionals to spot potential and actual attacks, and act fast to limit the damage.

Remember, the same technology that is being used to infiltrate an organization can also protect it. Collaborating with deepfake cybersecurity specialists helps organizations spread knowledge and continually test and improve controls and defenses, helping to avoid fraud, data loss, and reputational damage.

How KPMG can help

Rooted in our Trusted AI framework, KPMG firms can support organizations with tailored technology solutions designed to combat deepfake challenges, leveraging our Securing AI framework and experience in risk-related transformation and risk intelligence. We adopt a holistic, zero-trust approach to cybersecurity in engagements, treating all identities, devices, networks, and data as untrusted. This means granting users least-privilege access; rigorous identification, authentication, and verification; and continuous monitoring. KPMG firms offer services such as susceptibility studies, attack simulations, process re-engineering, detection platform implementation, and culture, training, and awareness programs, amongst others.

Related content

Cyber Security Services

Cyber security is more than a technology issue – it’s a golden thread that runs throughout your business, enabling it to operate effectively, efficiently, and securely. Our Cyber experts can help you to protect your future.

Trusted AI services

Accelerating the value of AI with confidence.

Regulatory and Risk Advisory

Navigate the complexities of the regulatory landscape and mitigate risks with KPMG professionals' guidance and innovative digital solutions. Our approach helps ensure compliance with evolving regulations and effectively manages potential threats, enhancing your organization’s resilience and striving to safeguard against disruptions.

Our people

Bryan McGowan

Global and US Trusted AI Leader

KPMG in the U.S.

Alexander Geschonneck

Partner, Global Forensic Leader

KPMG in Germany

Vivek Jassal

Partner, Cybersecurity

KPMG in Canada

Katie Boswell

Managing Director, US Securing AI Leader

KPMG in the U.S.


1 “Tencent Cloud announces Deepfakes-as-a-Service for $145”, The Register, April 28, 2023.

2 “Artificial Imposters—Cybercriminals Turn to AI Voice Cloning for a New Breed of Scam”, McAfee, May 15, 2023.


Connect with us

KPMG combines our multi-disciplinary approach with deep, practical industry knowledge to help clients meet challenges and respond to opportunities. Connect with our team to start the conversation.
