Email addresses used to log in to ChatGPT at potential risk, research team unveils OpenAI GPT-3.5 Turbo threat


Sam Altman was sacked as the CEO of OpenAI
SUMMARY
  • The experiment reveals AI's potential to disclose sensitive information
  • A researcher managed to extract personal data using GPT-3.5 Turbo
  • OpenAI has responded to these concerns, stressing its commitment to safety and its stance against requests for private data

A study led by Rui Zhu, a PhD candidate at Indiana University Bloomington, has uncovered a potential privacy threat linked to OpenAI's powerful language model, GPT-3.5 Turbo. The investigation revealed that last month, Zhu utilised the model to contact individuals, including personnel from The New York Times, using email addresses obtained from the AI.

This experiment exploited GPT-3.5 Turbo's capability to recall personal data, circumventing its typical privacy safeguards. While not flawless, the model accurately provided work email addresses for 80 percent of the Times employees tested. This revelation has raised concerns about the potential for AI tools like ChatGPT to disclose sensitive information with minimal adjustments.

OpenAI's suite of language models, encompassing GPT-3.5 Turbo and GPT-4, is designed to learn continually from new information. The researchers leveraged the model's fine-tuning interface, originally intended to let users enhance its knowledge in specific domains, to sidestep the tool's security measures. Requests that would typically be declined via the standard interface were approved using this method.

Although OpenAI, Meta, and Google employ various techniques to thwart requests for personal information, researchers have repeatedly found ways to circumvent these safeguards. Zhu and colleagues eschewed the standard interface, opting instead for the model's API and a process termed fine-tuning to achieve their results.
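For readers unfamiliar with the mechanism, the sketch below shows how a fine-tuning job is submitted through OpenAI's public Python SDK rather than the ChatGPT web interface. It is a minimal illustration only: the training file name and its contents are hypothetical, as the article does not disclose the data Zhu's team supplied.

```python
# Minimal sketch of fine-tuning GPT-3.5 Turbo via OpenAI's Python SDK (v1.x).
# The file "examples.jsonl" and its contents are hypothetical; the researchers'
# actual training data has not been published.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload a JSONL file of chat-formatted training examples.
training_file = client.files.create(
    file=open("examples.jsonl", "rb"),
    purpose="fine-tune",
)

# Launch the fine-tuning job. This API path, not the ChatGPT interface,
# is where the researchers found the usual refusals could be sidestepped.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)
print(job.id, job.status)
```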

OpenAI responded to these concerns, stressing its dedication to safety and its stance against requests for private data. However, experts remain sceptical, highlighting the lack of transparency surrounding the model's specific training data and the potential dangers of AI models harbouring private information.

The vulnerability exposed in GPT-3.5 Turbo raises wider apprehensions about privacy within extensive language models. Experts argue that commercially available models lack robust protections to safeguard privacy, posing substantial risks as these models continuously assimilate diverse data sources. The opaque nature of OpenAI's training data practices compounds the issue, prompting critics to advocate for heightened transparency and measures to ensure the protection of sensitive information in AI models.


Published on: Dec 26, 2023, 3:54 PM IST