OpenAI blocks AI misinformation campaigns on Indian elections, Israel-Gaza conflict and more

OpenAI identified threat actors with ties to Russia, China, Iran, and Israel participating in these campaigns.

Pranav Dixit
  • May 31, 2024
  • Updated May 31, 2024, 9:36 AM IST

OpenAI, the artificial intelligence company led by Sam Altman, announced on Thursday that it had successfully disrupted five separate campaigns using its AI models for "deceptive activity" online. The campaigns, active over the past three months, involved the generation of fake social media profiles, short comments, and lengthy articles in various languages, all designed to spread misinformation and manipulate public opinion.

OpenAI identified threat actors with ties to Russia, China, Iran, and Israel participating in these campaigns. The targeted topics were diverse, ranging from the ongoing conflict in Ukraine and the recent war in Gaza to the Indian elections and political landscapes in Europe and the United States.

"These deceptive operations were a clear attempt to manipulate public opinion and influence political outcomes," OpenAI declared in a statement.

This revelation comes amid growing concerns about the potential misuse of generative AI, which can produce convincingly human-like text, images, and audio. OpenAI, backed by tech giant Microsoft, has been proactive in addressing these concerns. Earlier this week, the company announced the formation of a dedicated Safety and Security Committee, led by CEO Sam Altman and other board members, to oversee the training of its next-generation AI models.

Fortunately, OpenAI confirmed that the disrupted campaigns did not achieve significant reach or audience engagement due to the company's countermeasures. The campaigns employed a combination of AI-generated content alongside manually written text and repurposed internet memes.

This incident coincides with a similar discovery by Meta Platforms, the parent company of Facebook and Instagram. In its quarterly security report released Wednesday, Meta revealed the identification of "likely AI-generated" content being used deceptively on its platforms. This included comments praising Israel's actions during the recent Gaza conflict, strategically placed under posts from prominent news outlets and US lawmakers.
