How Cybercriminals Are Using Deepfake Technology for Fraud


Deepfakes are the latest addition to the cybercriminal arsenal. They are used to wage blackmail and reputational attacks that put targets in legal or financial jeopardy, and to bypass know-your-customer (KYC) checks by impersonating real executives in order to gain access to restricted services.

Criminals employ deepfake technology to hijack the appearance and voice of real people, bypassing security controls to carry out account takeovers, financial fraud, and other schemes.

1. Voice Cloning


Imagine receiving a call from someone you trust, asking for personal data or requesting a financial transaction, and sounding exactly the way that person normally does. Impersonation scams have been on the rise as bad actors gain access to ever more powerful deepfake generation tools. Combined with weak authentication and detection procedures, the result can be attacks with damaging consequences for both financial security and corporate reputation.

Neural network-powered models can mimic the subtleties, intonations, and distinctive features of an individual's voice with startling accuracy, making it possible to recreate the voices of celebrities, authority figures, and everyday people alike. Used for good, this technology offers benefits such as customized digital assistants, voiceovers in multiple languages, and speech restoration after surgery; criminals, however, have increasingly turned to voice cloning to impersonate colleagues or trusted individuals and defraud businesses of millions.
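
To appreciate how low the barrier to entry has become, the sketch below shows how a short reference recording can be turned into a synthetic voice sample using the open-source Coqui TTS library and its XTTS v2 model. This is a minimal sketch for awareness and for red-team testing of an organization's own voice-verification controls; the file paths are placeholders, and the choice of library and model is an assumption about one freely available toolchain, not a description of any particular attacker's setup.

```python
# Minimal sketch: voice cloning with the open-source Coqui TTS library (XTTS v2).
# Assumes `pip install TTS` and a few seconds of reference audio; paths are placeholders.
from TTS.api import TTS

# Load a multilingual voice-cloning model (downloaded on first use).
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# Synthesize speech that imitates the reference speaker's voice.
tts.tts_to_file(
    text="This is a synthetic voice sample generated for detection testing.",
    speaker_wav="reference_speaker.wav",  # short clip of the voice being imitated
    language="en",
    file_path="cloned_voice_sample.wav",
)
```

A few seconds of publicly available audio, such as a conference talk or an earnings call, is typically enough input, which is exactly why callback verification and out-of-band confirmation matter for payment requests.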

This technique lets criminals manipulate video and audio recordings, producing deepfakes convincing enough to fool systems that rely on facial recognition and liveness detection. For example, fraudsters used a cloned executive's voice to trick a UK-based energy firm into transferring a total of $243,000 to a Hungarian supplier, and the frequency of such attacks continues to escalate.

2. Video Cloning


Deepfakes rely on generative AI models that morph images, audio, and video into artificially created synthetic media. The same techniques have legitimate uses beyond cybercrime: animators use them for character work, doctors for 3-D medical modeling, and engineers for industrial design.
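
At their core, many deepfake generators build on the adversarial training idea sketched below: a generator network learns to produce synthetic samples while a discriminator network learns to tell them apart from real data, and each pushes the other to improve. The toy PyTorch example uses tiny networks and made-up two-dimensional data purely for illustration; production face- and voice-synthesis models are vastly larger and often use different architectures, so treat this as a conceptual sketch, not a recipe.

```python
# Conceptual sketch of a generative adversarial network (GAN), assuming PyTorch.
# The toy 2-D "data" and tiny networks are illustrative only.
import torch
import torch.nn as nn

latent_dim, data_dim = 8, 2

# Generator: maps random noise to synthetic samples.
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
# Discriminator: scores how "real" a sample looks.
D = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(64, data_dim) * 0.5 + 2.0   # stand-in for real media features
    fake = G(torch.randn(64, latent_dim))

    # Train the discriminator to separate real samples from generated ones.
    d_loss = loss_fn(D(real), torch.ones(64, 1)) + loss_fn(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Train the generator to fool the discriminator.
    g_loss = loss_fn(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

The adversarial loop is the reason detection keeps getting harder: the generator improves until the discriminator, or a human reviewer, can no longer reliably tell synthetic output from real footage.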

Cybercriminals sit on the flip side of this technology, producing everything from propaganda videos to fraudulent business calls. In a typical attack, criminals recreate a person's likeness to impersonate them on a video call, usually for corporate espionage or fraud: posing as finance team members, they request fund transfers or coax confidential information out of employees.

An employee in Hong Kong recently wired $25 million to impostors posing as the company's CFO. To make the attack more convincing, the attackers built a deepfake of the impersonated executive's likeness and used lip-synching technology to mimic his voice.

Deepfake attacks may also involve sending compromising images and videos, or using the technology to create counterfeit identities that bypass traditional identity verification.

3. Image Cloning


Cybercriminals also clone images, including photos and video frames. This lets them disguise their identity during fraud attempts such as phishing and social engineering scams, forge documents, mimic facial features with deepfake models (nose contour, lips, and jawline), and even imitate body language to impersonate someone in video calls or virtual Zoom meetings.
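
To make the facial-feature point concrete, the sketch below extracts the 68 standard facial landmarks from a photo; groupings of these points describe exactly the jawline, nose contour, and lip outline that face-swap and face-matching models work from. It assumes the dlib and OpenCV libraries and the widely distributed shape_predictor_68_face_landmarks.dat model file; the image path is a placeholder.

```python
# Minimal sketch: extracting the 68 standard facial landmarks that describe
# the jawline, nose contour, and lips. Assumes dlib + OpenCV and the widely
# distributed shape_predictor_68_face_landmarks.dat model; paths are placeholders.
import cv2
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

image = cv2.imread("portrait.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

for face in detector(gray):
    shape = predictor(gray, face)
    points = [(shape.part(i).x, shape.part(i).y) for i in range(68)]

    # Conventional 68-point groupings used by face-swap and face-matching models.
    jawline = points[0:17]
    nose = points[27:36]
    lips = points[48:68]
    print(f"jawline: {len(jawline)} pts, nose: {len(nose)} pts, lips: {len(lips)} pts")
```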

On the other hand, the more visible instances of deepfake technology, such as recreating a young Luke Skywalker for Star Wars or synthesizing Anthony Bourdain's voice posthumously for a documentary, are far less sinister. Fraudsters use the same techniques to bypass security, steal personally identifiable information (PII), and access their victims' sensitive data.

For instance, a finance worker at a multinational corporation in Hong Kong was duped into wiring more than $25 million to fraudsters after joining what appeared to be a video call with company directors, all of whom turned out to be deepfake recreations instructing him to send the money. Similarly, deepfake tools were used to fabricate a video of Ukrainian President Volodymyr Zelenskyy appealing to his military forces to surrender to invading Russian forces.

Governments and tech companies are working on tools capable of detecting deepfakes, but because the technology evolves so rapidly, detection remains mostly unreliable. One telltale sign researchers have identified is blinking: deepfakes often fail to reproduce natural blink patterns.
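
As an illustration of that blink cue, the sketch below computes the eye aspect ratio (EAR), a common heuristic for detecting eye closure, from the same 68 facial landmarks and counts blinks across a video clip; footage in which the EAR never dips is one possible red flag, not proof of manipulation. The threshold value and file names are illustrative assumptions, and dlib, OpenCV, and SciPy are assumed dependencies.

```python
# Minimal sketch: counting blinks via the eye aspect ratio (EAR) heuristic.
# A clip in which the EAR never drops (no blinking) is one possible deepfake red flag.
# Assumes dlib + OpenCV + SciPy; the threshold and file paths are illustrative.
import cv2
import dlib
from scipy.spatial import distance

def eye_aspect_ratio(eye):
    # EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|); it drops sharply when the eye closes.
    a = distance.euclidean(eye[1], eye[5])
    b = distance.euclidean(eye[2], eye[4])
    c = distance.euclidean(eye[0], eye[3])
    return (a + b) / (2.0 * c)

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

EAR_THRESHOLD = 0.21  # assumed threshold; tune per camera, subject, and lighting
blinks, eyes_closed = 0, False

cap = cv2.VideoCapture("suspect_clip.mp4")
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for face in detector(gray):
        shape = predictor(gray, face)
        pts = [(shape.part(i).x, shape.part(i).y) for i in range(68)]
        ear = (eye_aspect_ratio(pts[36:42]) + eye_aspect_ratio(pts[42:48])) / 2.0
        if ear < EAR_THRESHOLD and not eyes_closed:
            blinks, eyes_closed = blinks + 1, True
        elif ear >= EAR_THRESHOLD:
            eyes_closed = False
cap.release()
print(f"Blinks detected: {blinks}")
```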

4. Text Cloning


Cybercriminals can use deep learning techniques to impersonate individuals in writing as well, tricking others into divulging sensitive data over the phone, by video conference, or by message. Such attacks appear far more convincing to unwary targets, posing an unprecedented threat to businesses' internal communications, employee safety, and brand integrity; AI-generated phishing in particular represents a serious danger.

The technology is also used for blackmail: cybercriminals create sexually explicit videos or images of a victim, causing embarrassment, mental trauma, and financial loss, or impersonate someone close to the victim in order to extort them.

Other potential harms include stoking political or religious strife between nations and swaying election results with false reports. Markets, too, could be in for a shock if prices were manipulated using phony deepfakes of company executives.