In a significant policy shift, OpenAI, led by Sam Altman, has opened the door to military and warfare applications of its AI technologies. The change removes language from the company's usage policy that explicitly prohibited the deployment of OpenAI's technology for military purposes.
OpenAI justified the revision as an effort to establish a set of universal principles that are easy to remember and apply. A company spokesperson told Business Today, "Our policy does not allow our tools to be used to harm people, develop weapons, for communications surveillance, or to injure others or destroy property. There are, however, national security use cases that align with our mission."
"For example, we are already working with DARPA to spur the creation of new cybersecurity tools to secure open-source software that critical infrastructure and industry depend on. It was not clear whether these beneficial use cases would have been allowed under “military” in our previous policies. So the goal with our policy update is to provide clarity and the ability to have these discussions," the spokesperson added.
While OpenAI has softened its stance on military use, it still maintains a ban on using its AI for weapons development. The balance between enabling military-related work and preventing weaponisation remains a central tension as applications of AI technology continue to evolve.