Deepfake AI videos are getting even better with TikTok owner ByteDance's OmniHuman-1 model

ByteDance researchers have demoed a new AI model called OmniHuman-1 that can generate deepfake videos just by using a single reference image and audio.

Business Today Desk
Feb 05, 2025 | Updated Feb 05, 2025, 2:50 PM IST

ByteDance, the parent company of TikTok, has unveiled OmniHuman-1, an advanced deepfake AI capable of generating highly realistic videos from a single image and audio input. TechCrunch reports that OmniHuman-1 can create seamless animations, adjusting body proportions and even modifying existing videos with astonishing accuracy.

According to TechCrunch, ByteDance's model was trained on 19,000 hours of video, though its outputs aren't flawless: it often struggles with low-quality reference images and certain poses. Here are a few videos generated using the OmniHuman-1 model:


Here's a TED Talk that never happened.

ByteDance's model was even able to create a deepfake video of Albert Einstein delivering a lecture.

Deepfake technology has advanced significantly, leading to increasingly realistic and accessible synthetic media. While innovations like ByteDance's OmniHuman-1 demonstrate the potential for creative applications, they also raise significant ethical and security concerns.

In South Korea, for instance, a surge in deepfake pornography has prompted new laws criminalising the production, possession, and distribution of such content. Enforcement remains challenging, however, and activists argue that deeper systemic issues, such as societal misogyny, must be addressed for the problem to be tackled effectively.

In the United Kingdom, Channel 4 faced criticism for potentially violating the Sexual Offences Act 2003 by broadcasting an AI-generated video of actress Scarlett Johansson without her consent. Legal experts suggest that nonconsensual sharing of such deepfake imagery could breach the law, highlighting the need for clearer regulations regarding AI-generated content.

Global Response and Regulation

In response to the growing threat of deepfakes, various regions are enacting regulations. The European Union, for example, approved the Artificial Intelligence Act in 2024 to update legal frameworks concerning AI, including provisions addressing deepfakes. Even so, detecting and prosecuting deepfake-related crimes remains complex, and legal systems will need to keep adapting to balance technological advancement with justice and integrity.

As deepfake technology continues to evolve, it is crucial for legal frameworks, detection methods, and public awareness to advance in tandem to mitigate the associated risks effectively.
