Yann LeCun, Meta’s Chief AI Scientist and a pioneer in modern artificial intelligence, has declared that the true AI revolution is still on the horizon. Speaking at the 2024 K-Science and Technology Global Forum in Seoul, hosted by South Korea’s science ministry, LeCun emphasised the transformative potential of AI while cautioning against hasty regulations that could stifle innovation.
“The real AI revolution has not yet arrived,” LeCun stated during his opening speech, adding that AI is poised to redefine how humans interact with technology. “In the near future, every single one of our interactions with the digital world will be mediated by AI assistants,” he said, envisioning systems with intelligence on par with humans.
While acknowledging the advances brought by generative AI models like OpenAI’s ChatGPT and Meta’s Llama, LeCun highlighted their limitations. “LLMs can deal with language because it is simple and discrete, but they cannot deal with the complexity of the real world,” he explained. These systems lack the ability to reason, plan, and understand the physical world the way humans do.
To bridge these gaps, LeCun revealed Meta’s efforts to develop a new AI architecture capable of observing and learning from the physical world, similar to how babies interact with their environment. This objective-driven AI aims to build predictions and understand real-world complexities, paving the way for a more sophisticated generation of AI.
LeCun also championed the need for a collaborative, open-source AI ecosystem. He argued that AI models must be trained across diverse cultural contexts, languages, and value systems to be truly effective. “We can’t have a single entity somewhere on the West Coast of the United States train those models,” he said, advocating for global cooperation in AI development.
However, he warned that premature regulation could choke off progress in this area. “Regulation can kill open source,” he said, urging governments to avoid restrictive laws that could hinder technological development. He emphasised that no AI system has been shown to be inherently dangerous, arguing that speculative concerns about AI risks should not derail research.