
In a legal battle unfolding in the US District Court for the Southern District of New York, OpenAI has moved to counter allegations of copyright infringement brought by The New York Times. OpenAI contends that the publication used what it calls "deceptive prompts" to manipulate ChatGPT, OpenAI's chatbot, into reproducing its content. The company is now seeking the dismissal of several claims in The Times' copyright infringement lawsuit.
Central to OpenAI's defence is the claim that The Times exploited a known bug in the system, one the company says it is actively addressing. OpenAI further alleges that The Times fed its articles directly into ChatGPT to prompt verbatim passages, a practice it argues is not typical use of its products. As evidence, OpenAI cites a Times article from April 2023, titled "35 Ways Real People Are Using A.I. Right Now," echoing arguments the company made in a public response in January.
Responding to these claims in a statement to The Verge, Ian Crosby, lead counsel for The Times, rejected the characterisation of the publication's actions as a "hack," saying it was simply using OpenAI's products to look for evidence of copyright infringement. Crosby also noted that OpenAI does not contest the unauthorised reproduction of The Times' works within the statute of limitations.
The legal dispute stems from The Times' lawsuit against OpenAI and Microsoft, filed in December, which alleges that the companies trained their AI models on The Times' content, enabling their chatbots to replicate its stories verbatim. The suit argues that this practice not only undermines The Times' revenue but also jeopardises its relationship with readers.
In its motion, OpenAI seeks the dismissal of several of The Times' claims: direct copyright infringement occurring outside the three-year statute of limitations, contributory infringement, failure to remove infringing content, and unfair competition by misappropriation. The Times' lawsuit also alleges trademark dilution, common law unfair competition by misappropriation, and vicarious copyright infringement.
This confrontation is just one instance in a broader wave of litigation involving AI companies. Beyond OpenAI and Microsoft, startups such as Anthropic and Stability AI are also facing legal challenges, signalling a growing trend of legal action against players in the AI sector.
Alon Yamin, co-founder and CEO of Copyleaks, told Business Today, “Given the expansion of generative AI among media, we can expect more lawsuits, similar to the one between OpenAI and The New York Times. However, no one should be surprised, as this is often true with most disruptive technologies. The argument regarding how these models are trained and what content was used will continue for a while because, as this technology expands and becomes widely utilised in more and more industries, so will the concern regarding the ethics surrounding AI and its development. That’s why organisations must proactively have safeguards in place with tools that help identify AI-generated content. Hence, they have the data needed to make informed decisions and avoid potential legal trouble later on.”