
In anticipation of the upcoming elections in India and the United States, policymakers are grappling with the challenge of tackling deepfakes and AI-generated content. Addressing this pressing issue, Meta, the parent company of Facebook, Instagram, and Threads, unveiled a significant step in its strategy on Tuesday.
In the coming months, Meta intends to label images posted across its platforms that were generated using artificial intelligence (AI). The move aims to give users greater transparency about the authenticity of the content they encounter online.
Nick Clegg, President of Global Affairs at Meta, underscored the importance of this initiative, stating, "We’ll require people to use this disclosure and label tool when they post organic content with a photorealistic video or realistic-sounding audio that was digitally created or altered, and we may apply penalties if they fail to do so."
Furthermore, Meta plans to introduce a feature enabling users to voluntarily disclose when they share AI-generated video or audio. This disclosure will prompt Meta to add a visible label, alerting viewers to the artificial nature of the content.
Clegg also noted that if the company determines that digitally created or altered image, video or audio content creates a particularly high risk of materially deceiving the public on a matter of importance, "we may add a more prominent label if appropriate, so people have more information and context."
With Meta's family of apps reaching 3.19 billion daily users, the implications of these measures are vast. The company emphasised its commitment to working with industry partners to establish common technical standards for identifying AI-generated content, including video and audio.
"We’ve labelled photorealistic images created using Meta AI since it launched so that people know they are ‘Imagined with AI,’" Clegg remarked, emphasising the proactive approach Meta has taken in this regard.
Moreover, Meta is actively engaged in discussions with other industry players, such as Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock, to develop comprehensive standards for identifying AI-generated content. Through forums like the Partnership on AI (PAI), Meta aims to ensure alignment with best practices in the field.
Clegg acknowledged the evolving nature of the debate surrounding AI-generated content, envisioning ongoing discussions on authentication methods for both synthetic and non-synthetic content. "These are early days for the spread of AI-generated content," he observed. "As it becomes more common in the years ahead, there will be debates across society about what should and shouldn’t be done to identify both synthetic and non-synthetic content."