A group of former employees from OpenAI, Anthropic and Google DeepMind has warned in an open letter that AI poses grave risks to humanity. The letter calls on AI companies to commit to greater transparency and to foster a culture of criticism that holds them accountable. Signed by 13 people, the letter reads, “AI companies possess substantial non-public information about the capabilities and limitations of their systems. We do not think they can all be relied upon to share it voluntarily.”
The signatories say that artificial intelligence can amplify misinformation, entrench existing inequalities and enable autonomous weapons systems, and could even lead to “human extinction”. The letter notes that AI companies and governments around the world have already acknowledged these risks, and argues that they could be mitigated with proper guidance from policymakers, the scientific community and the public.
The open letter further emphasised that AI companies hold non-public information about their systems that they are not sharing voluntarily. It stated, “AI companies possess substantial non-public information about the capabilities and limitations of their systems, the adequacy of their protective measures, and the risk levels of different kinds of harm. However, they currently have only weak obligations to share some of this information with governments, and none with civil society. We do not think they can all be relied upon to share it voluntarily.”
They noted that, in the absence of effective government oversight of these corporations, broad confidentiality agreements block them from raising these concerns publicly. The letter reads, “Ordinary whistleblower protections are insufficient because they focus on illegal activity, whereas many of the risks we are concerned about are not yet regulated. Some of us reasonably fear various forms of retaliation, given the history of such cases across the industry. We are not the first to encounter or speak about these issues.”
The letter is endorsed by Yoshua Bengio, the computer scientist known for his work on artificial neural networks and deep learning; Geoffrey Hinton, often called the “Godfather of AI”; and the British computer scientist Stuart Russell.
In an exclusive conversation with Tech Today last year, Russell said that the deployment of AI models is a recipe for disaster, with potential consequences ranging from the spread of misinformation to encouraging individuals to harm themselves or others. He said, “With the power of AI growing every day, it's crucial that we take a step back and ensure that these powerful technologies are developed and deployed in a responsible and safe manner.”