'$35 million gone in one call': Deepfake fraud rings are fooling the world's smartest firms 

In a digital age when artificial intelligence (AI) is rapidly transforming businesses, a dangerous side effect has emerged: deepfake fraud. Deepfakes, hyper-realistic audiovisual material created by sophisticated machine learning models, have become a powerful weapon for cybercriminals. Deepfake schemes, ranging from impersonating CEOs for wire fraud to fabricating hostage videos for extortion, have ushered in a new era of cybercrime.

Evolution of deepfake scams

Deepfake technology originally captured public attention as a source of amusement, with celebrity face-swapping applications. However, what started as a curiosity quickly became a huge cyber threat. Cybercriminals use complex architectures such as Generative Adversarial Networks (GANs) to create misleading voice, video, and images with shocking accuracy.

According to a report in The Wall Street Journal, in 2019 a UK-based energy company was duped into transferring $243,000 by criminals who used deepfake technology to mimic the CEO's voice, in one of the first known corporate deepfake fraud incidents.

In a similar case, CNN reported that a deepfake audio recording surfaced only days before a critical election in Slovakia, in which a leading politician was purportedly heard discussing how to sway the result. The fabricated recording, manufactured and circulated by bad actors, contained false claims aimed at damaging the candidate's reputation. The disinformation sowed widespread confusion and distrust among voters, undermining public confidence in the election's integrity until fact-checkers intervened and debunked the claims.

Taxonomy of deepfake scams
  • Corporate espionage and executive impersonation: Malefactors use deepfake technology to mimic the appearance and voice of senior executives while issuing misleading instructions to subordinates. In Hong Kong, a deepfake audio call convincingly imitated a company's director, facilitating a $35 million theft.

  • Simulated hostage scenarios: Cyber extortionists use deepfake videos of people appearing to be in distress to demand ransom payments from their relatives. In the United States, law enforcement discovered a group that exploited fabricated hostage videos to defraud victims' relatives. Fox26 Houston reported that kidnapping scams are on the rise, with fraudsters increasingly using artificial intelligence to clone loved ones' voices or videos, making these scams even more convincing and emotionally manipulative.
  • Identity appropriation for financial fraud: Publicly available personal media is used to build synthetic identities. A high-profile case involved a social media influencer's likeness being utilized to obtain illicit loans totaling hundreds of thousands of dollars.
  • Sociopolitical manipulation and disinformation: Deepfakes have been used to destabilize political environments by spreading fake videos of public personalities making inflammatory statements. In 2020, such fabrications sparked widespread turmoil in various nations.
  • Fabricated celebrity scandals: Celebrities are popular targets for deepfake hoaxes. Misleading recordings presenting them in compromising positions have caused severe reputational injury and widespread confusion.

Factors contributing to the growth of deepfake fraud
Several interconnected variables have contributed to the increase in deepfake-driven criminal activity:

  • Technological accessibility: Open-source repositories now offer pre-trained deepfake models, lowering the technological barriers to entry for malevolent actors.
  • Abundant personal media: The widespread use of social media platforms provides a vast collection of photographs and videos for deepfake synthesis.
  • Cognitive vulnerabilities: Humans have a cognitive predisposition toward trusting audiovisual material, making them prone to deceit.

Research advancements and cybersecurity countermeasures

Cybersecurity professionals are working hard to create systems that detect deepfakes. These tools look for subtle signals that something isn't right, such as unusual eye movements, lips that don't match speech, or reflections that don't appear real. Major firms such as Microsoft and Google are collaborating with researchers to build databases of known deepfake material, helping detection tools spot new fakes more reliably.
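To make one of those signals concrete, here is a toy sketch of a single heuristic: early deepfakes were known to blink rarely or not at all, so a clip whose "eye-openness" scores never dip can be flagged. The scores, thresholds, and function names below are invented for illustration; real detectors rely on trained models, not hand-set rules.

```python
# Toy illustration of one detection heuristic: flag clips with an
# implausibly low blink rate. Eye-openness scores (0 = closed, 1 = open)
# would come from a face-landmark model in practice; here they are made up.

def count_blinks(openness, closed_threshold=0.2):
    """Count dips below the threshold (eye closures) in a frame sequence."""
    blinks, eye_closed = 0, False
    for value in openness:
        if value < closed_threshold and not eye_closed:
            blinks += 1          # a new closure begins
            eye_closed = True
        elif value >= closed_threshold:
            eye_closed = False   # the eye has reopened
    return blinks

def looks_synthetic(openness, fps=30, min_blinks_per_minute=4):
    """Flag a clip whose blink rate is far below a typical human rate."""
    minutes = len(openness) / (fps * 60)
    return count_blinks(openness) / minutes < min_blinks_per_minute

# A 10-second clip at 30 fps with two brief blinks (plausible for a human).
real = [1.0] * 100 + [0.1] * 3 + [1.0] * 100 + [0.1] * 3 + [1.0] * 94
# A synthetic clip of the same length in which the eyes never close.
fake = [1.0] * 300

print(looks_synthetic(real))  # False: ~12 blinks/minute is normal
print(looks_synthetic(fake))  # True: zero blinks is suspicious
```

Modern detectors combine many such weak signals, which is why the shared databases mentioned above matter: each new confirmed fake teaches the models another tell.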

Laws are also being updated to address the issue. For example, the European Union has approved legislation requiring social media platforms to detect and label manipulated content so that users can understand what they are viewing.

Simple ways to protect yourself

  • Double-check suspicious videos and audio: If you receive an unusual message requesting money or personal information, try to verify it. Call the person directly using a known, trusted number or communication method.
  • Limit what you share online: The less personal media you make public, the more difficult it is for fraudsters to develop a deepfake.
  • Use multiple verification methods: If you receive an unexpected request, check it through more than one communication channel.
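The multi-channel verification advice above can be sketched as a simple challenge-response check: two parties agree on a secret in person, and any later request for money must answer a fresh challenge sent over a second channel. Everything below (the secret, the function names) is a hypothetical illustration, not a real product or protocol.

```python
import hashlib
import hmac
import secrets

# Illustrative sketch only: a cloned voice can imitate how someone sounds,
# but it cannot answer a challenge derived from a secret shared in person.

SHARED_SECRET = b"agreed-in-person-secret"  # exchanged face to face, never online

def make_challenge() -> bytes:
    """Generate a fresh random challenge to send over a second channel."""
    return secrets.token_bytes(16)

def respond(secret: bytes, challenge: bytes) -> str:
    """The caller proves identity by keying the challenge with the secret."""
    return hmac.new(secret, challenge, hashlib.sha256).hexdigest()

def verify(secret: bytes, challenge: bytes, response: str) -> bool:
    """Compare in constant time to avoid leaking information."""
    return hmac.compare_digest(respond(secret, challenge), response)

challenge = make_challenge()
# The real relative knows the secret and can answer...
assert verify(SHARED_SECRET, challenge, respond(SHARED_SECRET, challenge))
# ...an impostor with only a cloned voice cannot.
assert not verify(SHARED_SECRET, challenge, respond(b"wrong-secret", challenge))
```

In everyday terms this is just a family code word, made tamper-proof: the challenge is fresh each time, so a recorded answer from an earlier call cannot be replayed.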

What's to come

As deepfake technology advances, it will become increasingly difficult to distinguish genuine material from fake. Researchers are developing technologies that embed invisible markings in authentic videos and images, making later alterations simpler to detect.
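The idea behind such markings is content provenance: the publisher attaches a cryptographic signature to the media at capture time, and any subsequent edit breaks it. The sketch below is a deliberately simplified version using a keyed hash; real provenance standards (such as C2PA) use public-key signatures and embedded metadata, and the key and bytes here are invented for illustration.

```python
import hashlib
import hmac

# Simplified provenance sketch: sign media bytes with a publisher key;
# verification fails if even one byte has been altered afterwards.

SIGNING_KEY = b"publisher-signing-key"  # hypothetical; real systems use key pairs

def sign(media: bytes) -> str:
    """Produce a signature over the raw media bytes."""
    return hmac.new(SIGNING_KEY, media, hashlib.sha256).hexdigest()

def is_authentic(media: bytes, signature: str) -> bool:
    """Re-derive the signature and compare in constant time."""
    return hmac.compare_digest(sign(media), signature)

original = b"\x89PNG raw image bytes"   # stand-in for a real file's contents
tag = sign(original)

assert is_authentic(original, tag)               # untouched media verifies
assert not is_authentic(original + b"\x00", tag) # any edit invalidates it
```

The open problem is deployment: a signature only helps if cameras and platforms attach one by default, so unsigned media becomes the thing viewers learn to distrust.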

The fight against deepfake frauds is continuing, and remaining vigilant is critical. In a world where videos and pictures are easily altered, thinking critically about what we see and hear online will help keep us safe.

(Shamsvi Balooni Khan is based in Michigan, US, where she is pursuing a Master of Science in Data Science at Michigan State University.)

For unparalleled coverage of India's businesses and economy, subscribe to Business Today Magazine.

Published on: Mar 27, 2025, 4:58 PM IST