OpenAI reportedly lobbied EU to avoid harsher AI regulations

The EU AI Act still has a way to go before it becomes law. The legislation will undergo further discussion within the European Council, including how and where the act will be applied

OpenAI CEO Sam Altman

OpenAI, the creator of ChatGPT, has been engaged in lobbying efforts aimed at influencing the European Union's forthcoming AI legislation. According to documents obtained from the European Commission by Time, OpenAI sought several amendments to a draft version of the EU AI Act before its approval by the European Parliament on June 14th. Some of the proposed changes put forth by OpenAI were eventually incorporated into the legislation.

One of the key debates prior to the approval of the AI Act centred around the inclusion of all general-purpose AI systems, including OpenAI's ChatGPT and DALL-E, under the ‘high risk’ category. This classification would subject these systems to the most stringent safety and transparency requirements outlined in the act. OpenAI, along with other tech giants like Google and Microsoft, opposed this designation, arguing that only AI systems explicitly applied to high-risk use cases should be subject to such regulations.

OpenAI, in an unpublished white paper shared with EU Commission and Council officials in September 2022, emphasised the deployment of general-purpose AI systems like GPT-3 for a wide range of language-related tasks. The company acknowledged that while GPT-3 itself may not be a high-risk system, it possesses capabilities that could potentially be employed in high-risk use cases.


In June 2022, three representatives from OpenAI met with officials from the European Commission to discuss the risk categorisations proposed in the AI Act. An official record of the meeting obtained by Time revealed that OpenAI expressed concern about general-purpose AI systems being labelled as high-risk. The company feared that the broad inclusion of such systems under the high-risk category would lead to excessive regulation and hinder AI innovation. OpenAI, however, did not provide specific regulatory suggestions during the meeting.

An OpenAI spokesperson told Time that the company provided an overview of its approach to deploying systems like GPT-3 safely in response to policymakers' requests in the EU. The spokesperson added that OpenAI continues to engage with policymakers and supports the goal of ensuring the safe development and use of AI tools.

The ChatGPT maker's lobbying efforts in the EU had not been publicly disclosed until now, but they appear to have been largely successful. The final draft of the EU AI Act, approved on June 14th, does not automatically classify general-purpose AI systems (GPAIs) as high-risk. However, the legislation does impose increased transparency requirements on "foundation models" like ChatGPT. Companies deploying these powerful AI systems will need to conduct risk assessments and disclose whether copyrighted material was used to train their models.

OpenAI supported the inclusion of 'foundation models' as a separate category within the AI Act, despite the company's secrecy regarding the sources of data used to train its AI models. It is widely believed that these models are trained on data scraped from the internet, including intellectual property and copyrighted materials. OpenAI argues that maintaining confidentiality about data sources is necessary to protect its work from being copied by competitors. However, if compelled to disclose such information, OpenAI and other tech companies could face copyright lawsuits.


OpenAI CEO Sam Altman's position on AI regulation has been somewhat inconsistent. He has advocated for regulation and expressed concerns about the potential dangers of AI, particularly in connection with a joint open letter signed by prominent tech leaders including Elon Musk and Steve Wozniak, though his focus has primarily been on addressing future harms. He has also hinted that OpenAI could discontinue operations in the EU if the company is unable to comply with the region's incoming AI regulations, although he later retracted those statements.

OpenAI's white paper sent to the EU Commission touted its approach to mitigating the risks associated with GPAIs as industry-leading. However, critics such as Daniel Leufer from Access Now have found OpenAI's stance confusing. Leufer pointed out that while OpenAI urges politicians to regulate the company, it opposes setting those same regulatory measures as a minimum requirement.

The EU AI Act still has a way to go before it becomes law. The legislation will undergo further discussion within the European Council, including how and where the act will be applied. Final approval is expected by the end of this year, and it may take approximately two years for the legislation to come into effect.


Published on: Jun 21, 2023, 4:07 PM IST