OpenAI’s much-anticipated Media Manager tool, announced in May 2024 as a way for creators to protect their content from being used in AI training, has yet to materialise. Recent reports suggest that the development and release of the tool are no longer a priority for the company, casting doubt on its future.
The Media Manager was envisioned as a machine learning tool to help creators identify and exclude their copyrighted text, images, audio, and videos from OpenAI’s training datasets. By automating content detection and reflecting creators’ preferences, the tool aimed to streamline the copyright management process and spare individuals the burden of opting out manually.
OpenAI currently offers a form-based process for creators to request the removal of their copyrighted material from training data, but it requires extensive manual input, including listing and describing individual pieces of content. The Media Manager was designed to simplify this process by automatically detecting content across websites and cross-checking it against opt-out lists.
However, according to a TechCrunch report, the tool has seen little progress since its announcement seven months ago. Sources familiar with OpenAI’s operations revealed that the company does not consider the tool a priority, and no active work is currently being done on it.
Adding to the uncertainty, Fred von Lohmann, a member of OpenAI’s legal team involved with the tool, transitioned to a part-time consulting role in October 2024. This change, along with the lack of updates on Media Manager’s development, suggests that the project is not on OpenAI’s short-term roadmap.
Experts and creators have voiced concerns over the Media Manager’s potential effectiveness. Large platforms like YouTube and TikTok, which have invested heavily in content identification, still face challenges managing copyrighted material at scale. Critics argue that OpenAI’s reliance on creators to opt out, rather than establishing a default opt-in system, shifts the responsibility unfairly onto individuals who may not even know such a tool exists.
The delay in launching the Media Manager comes at a time when OpenAI faces increasing scrutiny over its use of copyrighted material to train its large language models (LLMs). While OpenAI defends its practices under the “fair use” doctrine, the lack of robust tools for creators to manage their intellectual property has drawn criticism from artists, writers, and organisations.
OpenAI’s broader efforts to restructure as a Public Benefit Corporation (PBC) and address concerns over AI’s ethical and societal impacts may also explain the deprioritisation of the Media Manager tool.