For security and risk management professionals, the growth of AI-generated deepfakes is a significant concern. Analysts predict that by 2026, almost a third of enterprises will consider traditional identity verification methods inadequate on their own because of deepfake technology.

Deepfakes are hyper-realistic digital forgeries created using artificial intelligence (AI) and machine learning (ML) techniques. By synthesizing human images and voices, this technology can produce videos or audio recordings that appear to be genuine. Deepfakes can allow attackers to steal someone’s identity or make it appear as though they’ve done things they’ve never done or said things they’ve never said.

The term “deepfake” entered the mainstream in December 2017, after an anonymous Reddit user calling himself “Deepfakes” started superimposing celebrities’ faces onto pornographic videos. Since then, the same techniques have been put to a range of ends, from light-hearted comedy to more serious and dangerous attempts at political sabotage, scams, bullying, blackmail, and more. Of course, photo manipulation is nearly as old as photography itself, but AI tools have made it far easier to generate near-photorealistic fakes.

The Underlying Technology 

There are two main types of deepfakes: face swapping and face manipulation. Face swapping replaces one person’s face with another’s; face manipulation synthesizes an entirely new facial image.

Here’s how it works. Visual deepfakes require training data such as photos or videos of the target. Deepfake technology leverages two main AI/ML techniques: generative adversarial networks (GANs) and deep neural networks (DNNs). A GAN pits two neural networks against each other: a generator that creates images or videos, and a discriminator that evaluates their authenticity. Working in tandem, they create and refine fake outputs until the discriminator can no longer distinguish them from real data. DNNs, meanwhile, analyze vast amounts of audio, video, or images to learn how to replicate human characteristics accurately. The process requires a lot of GPU horsepower, memory, and time. Techniques vary, as does the quality of the results, but that quality is improving, and fast.
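
To make the adversarial setup concrete, here is a minimal, hypothetical PyTorch sketch of a GAN trained on a toy one-dimensional Gaussian distribution rather than images; the network sizes, learning rates, and step counts are illustrative values, not anything a real deepfake system would use.

```python
# Minimal GAN sketch: a generator learns to mimic samples from a
# Gaussian "real" distribution while a discriminator learns to tell
# real samples from generated ones. Toy 1-D data stands in for images.
import torch
import torch.nn as nn

LATENT_DIM = 8  # size of the random noise fed to the generator

generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 32), nn.ReLU(),
    nn.Linear(32, 1),                    # outputs one "fake" sample
)
discriminator = nn.Sequential(
    nn.Linear(1, 32), nn.ReLU(),
    nn.Linear(32, 1), nn.Sigmoid(),      # probability the input is real
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(2000):
    # "Real" data: samples from N(4, 1.25) the generator must imitate.
    real = 4.0 + 1.25 * torch.randn(64, 1)
    fake = generator(torch.randn(64, LATENT_DIM))

    # Discriminator step: classify real as 1, fake as 0.
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    d_opt.step()

    # Generator step: fool the discriminator into outputting 1.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()

# After training, generated samples should cluster around 4.0.
samples = generator(torch.randn(1000, LATENT_DIM))
print(f"mean of generated samples: {samples.mean().item():.2f}")
```

Production deepfake models apply this same generator-versus-discriminator loop to high-dimensional image and audio data, which is where the heavy GPU and memory requirements come from.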

Risks In The Wrong Hands 

While deepfakes can be fun to experiment with, malicious actors have quickly latched on to them as a means to mislead and defraud their victims.  

In the IT world, you may have heard of injection attacks, whose purpose is usually to sneak code or data into a system to access data or execute malicious commands. A new type can now be added to the list: the digital (or camera) injection attack. Scammers inject deepfake images, spoofed biometric data, and other inputs directly into the verification pipeline to fool identity recognition software into treating the feed as a trustworthy live capture.

For example, an attacker could bypass the camera entirely and upload files in place of live captures, submitting deepfake photos or documents to platforms that require them and effectively using forgeries to defeat know your customer (KYC) checks. These examples show how deepfakes pose new security threats, helping attackers overcome biometric systems and gain unauthorized access.
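
One common countermeasure is to bind every capture to a fresh, server-issued challenge, so a pre-recorded or injected file cannot satisfy the request. The sketch below is a hypothetical, simplified illustration of that nonce-and-expiry pattern; the function names and the 30-second freshness window are assumptions, not any specific vendor’s API.

```python
# Hypothetical sketch of challenge-response freshness checking for a
# capture pipeline: the server issues a random nonce, and a submitted
# capture is accepted only if it echoes that nonce within a short window.
import os
import time

CHALLENGE_TTL_SECONDS = 30           # assumed freshness window
_issued: dict[str, float] = {}       # nonce -> time it was issued

def issue_challenge() -> str:
    """Create a single-use random nonce the client must embed in its capture."""
    nonce = os.urandom(16).hex()
    _issued[nonce] = time.monotonic()
    return nonce

def verify_capture(nonce: str) -> bool:
    """Accept a capture only if its nonce is known, unused, and fresh."""
    issued_at = _issued.pop(nonce, None)   # pop() enforces single use
    if issued_at is None:
        return False                       # unknown or replayed nonce
    return time.monotonic() - issued_at <= CHALLENGE_TTL_SECONDS

# Usage: a replayed file fails because its nonce was already consumed.
challenge = issue_challenge()
print(verify_capture(challenge))   # True  - fresh, first use
print(verify_capture(challenge))   # False - replay attempt
```

In practice the nonce must be bound to the capture itself, for example displayed on screen and read back from the video, or signed alongside the frames by trusted camera firmware; otherwise an attacker who fully controls the input channel could inject a deepfake that simply echoes the nonce.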

Deepfakes also raise legal and ethical concerns, with the potential to commit defamation and fraud, inflict emotional distress and reputational damage, and raise chilling questions about consent. Just imagine the impact on a person whose likeness is used in an unflattering context. The financial impact of such attacks can be massive: in one case, an employee at a multinational firm in Hong Kong was tricked into paying out over $25 million during a video call with what he believed were his colleagues but were in fact deepfake recreations.

Spreading false information, distorting truths, and weaponizing appearances can also have political impacts. In Slovakia, just days before the 2023 parliamentary elections, a faked audio recording spread online in which one of the election’s top candidates, Progressive Slovakia’s leader Michal Šimečka, appeared to discuss rigging the election and joke about child pornography. Although the recording was debunked by fact-checkers, questions were raised about the extent to which it contributed to Progressive Slovakia finishing second.

What Can Be Done? 

Although deepfakes can be convincing at first glance, there are telltale signs you can look out for (the first of which is illustrated in the sketch after this list):

  • Lack of blinking, unnatural movement, or no light reflection in the eyes 
  • Discontinuity in limbs, especially fingers 
  • Double chins, double eyebrows, or double face edges 
  • Changes in the background and lighting  
  • Lack of consistency between the scene and the subject 
  • Any discontinuity or change that appears unnatural 
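
As a concrete example of the first sign, blink detection is often approximated with the eye aspect ratio (EAR): the ratio of the eye’s vertical openings to its horizontal width, which dips sharply when the eye closes. The sketch below assumes six eye-landmark coordinates per frame are already available from some face-landmark detector, and the 0.2 threshold is a commonly cited but illustrative value.

```python
# Eye-aspect-ratio (EAR) sketch: given six eye landmarks per frame,
# flag clips with implausibly few blinks (a known deepfake tell).
# Landmark order follows the common convention: p1/p4 are the eye
# corners, p2/p3 the upper lid, p6/p5 the lower lid.
import math

def eye_aspect_ratio(p1, p2, p3, p4, p5, p6):
    """EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|); low values mean a closed eye."""
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

def count_blinks(ear_per_frame, threshold=0.2):
    """Count closed->open transitions; EAR below threshold means 'closed'."""
    blinks, closed = 0, False
    for ear in ear_per_frame:
        if ear < threshold:
            closed = True
        elif closed:          # eye reopened after being closed
            blinks += 1
            closed = False
    return blinks

# Usage with made-up EAR values for a short clip: a normal blink shows
# up as a brief dip below the threshold.
print(count_blinks([0.31, 0.30, 0.12, 0.09, 0.28, 0.32]))  # -> 1
```

A clip of a talking head that produces zero blinks over thousands of frames would be suspicious; humans blink roughly every few seconds, and early deepfake generators notoriously failed to reproduce this.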

As the prevalence of AI-generated deepfakes grows, businesses must remain vigilant about the cybersecurity risks these technologies pose.  

From an enterprise cybersecurity perspective, the use of deepfakes to aid phishing and social engineering attacks is particularly concerning. AI can craft highly realistic and personalized messages that are difficult to distinguish from legitimate communications, which can lead to attackers gaining unauthorized access to sensitive information, committing financial fraud, and inflicting significant damage on an organization’s reputation. Moreover, AI-enhanced malware can evade traditional detection methods, making it easier for attackers to exploit vulnerabilities in systems and networks. 

To counter these threats, businesses need to adopt advanced cybersecurity measures. AI-powered threat detection systems can monitor and respond to unusual patterns in real time, enhancing an organization’s ability to identify and mitigate potential attacks. Implementing multi-factor authentication (MFA) and adopting a Zero Trust architecture ensure continuous validation of all users, adding layers of security to access points. Regular employee training and awareness programs are crucial to educating staff about the latest phishing techniques and social engineering tactics, and simulated attacks can help reinforce these lessons and test readiness.
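
As a simplified illustration of what “monitoring for unusual patterns” can mean, the sketch below flags a metric that deviates sharply from its rolling baseline using a z-score. Real detection products use far richer models; the window size and threshold here are assumptions for illustration.

```python
# Rolling z-score anomaly sketch: flag a metric (e.g., failed logins
# per minute) when it deviates strongly from its recent baseline.
from collections import deque
from statistics import mean, stdev

def make_detector(window=30, threshold=3.0):
    """Return a function that scores each new observation against the
    last `window` observations; values more than `threshold` standard
    deviations from the mean are flagged as anomalous."""
    history = deque(maxlen=window)

    def observe(value):
        anomalous = False
        if len(history) >= 2:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                anomalous = True
        history.append(value)
        return anomalous

    return observe

# Usage: a steady baseline of ~5 failed logins per minute, then a spike.
detect = make_detector()
for count in [5, 6, 4, 5, 6, 5, 4, 6, 5, 40]:
    if detect(count):
        print(f"anomaly: {count} failed logins in the last minute")
```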

Data protection measures, such as encryption and Data Loss Prevention (DLP) technologies, are essential for safeguarding sensitive information. Organizations should also develop comprehensive incident response and recovery plans, conduct regular tests, and engage in industry collaboration to share threat intelligence and best practices. By fostering a culture of security awareness and employing secure development practices, businesses can better protect their assets and reputation against the misuse of AI and other advanced cyber threats. 

Over time, as the technology advances, identifying manipulated images will only become more difficult, so developing systems capable of detecting fake content is crucial. Identity verification systems, for example, could use cryptographic signatures to establish a chain of trust between the device, the operating system, and the software, blocking digitally altered data. Other tools could analyze video material for irregularities, check the authenticity of audio against known voice patterns, and use blockchain to verify the source of information. Until such tools mature, the best advice is to remain skeptical of information encountered online and to verify before sharing.
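
Here is a minimal sketch of that signature idea, using Ed25519 primitives from the Python `cryptography` package: a capture device signs the hash of each frame with a key provisioned at manufacture, and a verifier later rejects any bytes that no longer match the signature. Key provisioning and distribution are assumed away here; the same basic idea underpins content-provenance efforts such as C2PA.

```python
# Sketch of content signing for a chain of trust: the capture device
# signs each frame's hash with its private key; any later alteration
# of the bytes invalidates the signature at verification time.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# In practice this key would live in trusted camera hardware.
device_key = Ed25519PrivateKey.generate()
device_pub = device_key.public_key()

def sign_capture(frame_bytes: bytes) -> bytes:
    """Sign the SHA-256 digest of the captured frame."""
    return device_key.sign(hashlib.sha256(frame_bytes).digest())

def verify_capture(frame_bytes: bytes, signature: bytes) -> bool:
    """Return True only if the frame is byte-identical to what was signed."""
    try:
        device_pub.verify(signature, hashlib.sha256(frame_bytes).digest())
        return True
    except InvalidSignature:
        return False

frame = b"\x00raw sensor data..."
sig = sign_capture(frame)
print(verify_capture(frame, sig))              # True  - untouched
print(verify_capture(frame + b"tamper", sig))  # False - altered
```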
