
OpenAI has unveiled a new safety framework for its advanced models. This move comes in the wake of growing concerns about the potential dangers of AI, which have been a focal point for both AI researchers and the general public since the launch of ChatGPT a year ago.
OpenAI, backed by Microsoft, has decided to deploy its latest technology only after it is deemed safe in specific areas such as cybersecurity and nuclear threats. As part of this initiative, the company is forming an advisory group to review safety reports and forward them to the company's executives and board. While the executives will be responsible for making decisions, the board retains the right to reverse them.
This announcement is particularly significant in light of the recent upheaval at OpenAI, where CEO Sam Altman was dismissed from his position, only to be reinstated a few days later with a new board of directors. His sudden dismissal also fueled speculation about potential threats posed by developments at OpenAI.
Dangers of AI
The potential risks posed by AI have led to calls for caution in the industry. In April, a group of AI industry leaders and experts signed an open letter advocating a six-month hiatus in the development of systems more powerful than OpenAI's GPT-4. However, recent leaks suggest that OpenAI has either been testing a new GPT-4.5 or mistakenly published changes to its website's model description. Some users on X even pointed out that they have received better responses from ChatGPT in the past week.