'Completely ridiculous': Meta's chief AI scientist Yann LeCun dismisses Elon Musk's 'civilisation destruction' fear 

Yann LeCun further emphasised that just because AI systems will be smarter than humans doesn't mean they will want to control humans.
Mukesh Adhikary
  • May 30, 2023
  • Updated May 30, 2023, 10:22 AM IST

Meta's chief AI scientist, Yann LeCun, disagrees with Elon Musk's view that artificial intelligence poses an existential threat to the world. LeCun, a computer scientist, has worked in artificial intelligence (AI) and machine learning for many years. He is an AI optimist who believes the world will benefit immensely from AI and does not think it is necessarily dangerous.

In a recent podcast with venture capitalist Harry Stebbings, LeCun responded to Musk's views. Musk has flagged the dangers of AI in many interviews, especially after OpenAI rolled out ChatGPT. Most notably, in an interview with popular US anchor Tucker Carlson in April, Musk said AI had the "potential of civilization destruction".

“AI is more dangerous than, say, mismanaged aircraft design or production maintenance or bad car production, in the sense that it is, it has the potential — however small one may regard that probability, but it is non-trivial — it has the potential of civilization destruction,” Musk had told the ex-CNN host.

But LeCun, the AI guru at Facebook-parent company Meta, does not think Musk is correct. "Completely false. It makes an assumption which Elon and some other people may have become convinced of by reading Nick Bostrom's book 'Superintelligence' or reading you know some of Eliezer Yudkowsky's writing," he said.

WHO ARE NICK BOSTROM AND ELIEZER YUDKOWSKY?

Nick Bostrom is a Swedish philosopher and professor at the University of Oxford. He is the founding director of the Future of Humanity Institute and the author of "Superintelligence: Paths, Dangers, Strategies." 

His work focuses on existential risk, the ethics of artificial intelligence, and the consequences of future technologies for human civilisation. He is known for his writing on transhumanism and his exploration of the ramifications of advanced artificial intelligence.

Eliezer Yudkowsky is an American artificial intelligence researcher and writer. He is known for his work in the field of artificial intelligence alignment and for co-founding the Machine Intelligence Research Institute (MIRI, formerly known as the Singularity Institute for Artificial Intelligence). 

Yudkowsky is an advocate for safe and beneficial development of artificial general intelligence (AGI) and has written extensively on topics related to AGI, rationality, decision theory, and the future of humanity. He is also known for his involvement in the development of the concept of "Friendly AI" and for his online writings, including the popular rationality blog "LessWrong."

LeCun went on to explain the problem with Musk's theory. "This [the existential threat AI poses] is predicated on an assumption that is just false, which is the existence of a 'hard take-off'."

He described 'hard take-off' as the theory that the minute you turn on a super-intelligent AI system, it will refine itself to become even more intelligent than humans, and the world will be destroyed.

"That's completely ridiculous because there is no process in the real world that is exponential for very long. Those systems will have to recruit all the resources in the world. They would have to be given limitless power, agency," he said.

LeCun further emphasised that just because AI systems will be smarter than humans doesn't mean they will want to control humans.

"They [AI systems] have to be built so that they have a desire to take over. Systems are not going to take over just because they are intelligent. Even within the human species, it is not the most intelligent among us that want to dominate others," he said.

LeCun also looks forward to discussing the issue with Geoffrey Hinton, the man touted as 'the Godfather of AI', who recently quit Google citing existential threats similar to those Musk had flagged.

"We haven't spoken yet actually. We're going to speak to kind of get to know each other's opinion on it. I don't think he knows my opinion because I don't think he follows you know what I post on Twitter," LeCun said. 

He also shared that he was not surprised Hinton had quit Google. "The fact that he has left Google to be able to speak his mind is not surprising," he said.

"AI is such a complicated fast evolving issue that you basically need someone to be able to speak freely and I think Jeff didn't feel like he had that option at Google for various reasons. So, I understand why he wanted to leave. But I don't agree with him at all with the whole sort of you know probability of human extinction," LeCun said.
