
Artificial intelligence has surged into the public consciousness over the past few months. The explosive popularity of tools like ChatGPT and other OpenAI products has raised major concerns about AI around the world. In an exclusive conversation with Tech Today's Aayush Ailawadi, AI expert Prof. Stuart Russell elaborates on those concerns.
He said he is not asking for the AI revolution to stop, only for controls on AI systems whose power may exceed human capabilities.
In a groundbreaking move, a group of over 1,000 artificial intelligence (AI) experts, including Prof Russell and Elon Musk, has called for a pause in the deployment of large language models. These models, which predict the next word in a sequence given the preceding words, are the backbone of many conversational AI systems. However, the experts argue that these models are more powerful than ever before, and we simply don't understand how they work.
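To see what "predicting the next word given the preceding words" means in the simplest possible terms, consider the toy sketch below. It is nothing like GPT-4: it is a bigram model that just counts, in a tiny made-up corpus, which word most often follows each word. The corpus and function names are purely illustrative, but the core task, guessing the most likely next word from past text, is the same one large language models are trained on at vastly greater scale.

```python
from collections import Counter, defaultdict

# Toy illustration only: a bigram "language model" that predicts
# the next word by counting which word most often followed each
# word in a tiny training corpus. (Hypothetical example corpus.)
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count, for every word, how often each other word follows it.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word`, or None."""
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # prints "cat" ("cat" follows "the" twice)
```

Real models replace these frequency counts with billions or trillions of learned parameters, but the prediction objective is the same.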
Large language models are built on a foundation of massive amounts of training data, with the latest models, such as GPT-4, incorporating somewhere between 20 and 30 trillion words of text, Prof Russell explained.
The systems are then further trained by randomly adjusting trillions of parameters, with the ultimate goal of improving the model's ability to predict the next word in a sequence, he added.
The result is an AI system that can converse in a way strikingly similar to a human, leading some to believe they are talking to a real mind. However, as Russell explains, we have no idea how these systems manage to create such sophisticated responses.
"Most people, when you look at how the algorithm works, the training process, you think, 'Okay, it's just going to learn to essentially mix and match lots of conversations that are in the training data, and then use that to come up with the response to the present one.' So it's sort of somewhere between an intelligent piece of paper and a parrot. And maybe something a little bit more intelligent than that," Russell said.
But when asked to perform tasks such as providing a mathematical proof in the form of a Shakespearean sonnet, the models can produce outputs that are far beyond what was included in the training data. This has led to concerns that the models are creating their own internal goals and behavior patterns, which we are unable to understand or control.
Prof Russell argued that the deployment of these models is a recipe for disaster, with potential consequences ranging from the spread of misinformation to encouraging individuals to harm themselves or others. Governments around the world have already recognized the potential risks associated with AI and have included guidelines for responsible AI development in the OECD's AI principles.
"With the power of AI growing every day, it's crucial that we take a step back and ensure that these powerful technologies are developed and deployed in a responsible and safe manner," he added.
Copyright © 2025 Living Media India Limited. For reprint rights: Syndications Today