
UC Berkeley professor Stuart Russell, a prominent voice among AI experts, explained how the creators of AI are shooting in the dark when it comes to the inner workings of ChatGPT-like generative artificial intelligence. In an exclusive interview with Business Today, Russell described how large language models predict the next word, given a sequence of preceding words. These models are trained on billions of words of text, yet some of their outputs are not mere permutations and combinations of that data but something new entirely.
Stuart Russell co-authored, with Peter Norvig, Artificial Intelligence: A Modern Approach, the standard text in the field of AI, used at universities across the globe. In the interaction with Business Today, Russell explained that these large language models are built from a very large amount of training data, often billions of words of text, which is used to train an enormous circuit with about a trillion parameters or more. By making about a billion trillion small random perturbations to those parameters, the system is gradually improved, and so is its ability to predict the next word.
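As an illustration only, here is a minimal Python sketch of the idea Russell is describing: a tiny "model" whose parameters are a table of next-word logits, improved by keeping small random perturbations that make its next-word predictions better. The corpus, the hill-climbing loop, and all the names here are invented for the example; real LLMs are trained by gradient descent over roughly a trillion parameters, not by random search.

```python
# Toy sketch (not a real LLM): improve next-word prediction by making
# small random perturbations to parameters and keeping the ones that help.
import math
import random

corpus = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}

# Parameters: one logit per (previous word, next word) pair.
params = [[0.0] * len(vocab) for _ in vocab]

def loss(p):
    """Average negative log-likelihood of each next word given the previous word."""
    total = 0.0
    for prev, nxt in zip(corpus, corpus[1:]):
        logits = p[idx[prev]]
        z = sum(math.exp(l) for l in logits)          # softmax normaliser
        total -= math.log(math.exp(logits[idx[nxt]]) / z)
    return total / (len(corpus) - 1)

best = loss(params)
for step in range(20000):                             # many small random perturbations
    i, j = random.randrange(len(vocab)), random.randrange(len(vocab))
    delta = random.gauss(0, 0.1)
    params[i][j] += delta
    new = loss(params)
    if new < best:                                    # keep changes that improve prediction
        best = new
    else:                                             # otherwise undo the perturbation
        params[i][j] -= delta

prev = "sat"
# In this corpus "sat" is always followed by "on", so the model should print "on".
print(max(vocab, key=lambda w: params[idx[prev]][idx[w]]))
```

Scaled up by many orders of magnitude, this is the sense in which the system "gradually improves" its ability to predict the next word.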
The result of this process is a system that, when you converse with it, has many of the appearances of a genuinely intelligent entity. However, Russell argues that we do not understand how these systems work, which is why he is calling for a hold on deploying more advanced LLMs.
"When you ask it, for example, 'I forgot such and such a mathematical proof. Could you give me that mathematical proof, but give it to me in the form of a Shakespeare sonnet,' and it will write a Shakespeare sonnet that contains within, a detailed mathematical proof. This is probably not something that's in the training set, or anything close to that. So, how it manages to do this? We haven't the faintest idea," Russell said.
Russell also raises the question of whether these systems learn their own internal goal structures. "Do these systems learn their own internal goal structures? All these humans who are writing and speaking have goals. They all have purposes in producing that text. So it would make sense that the training process would create goals inside the computer program. Do they have their own goals? We haven't the faintest idea," Russell said.
In addition, Russell is concerned about how to get these systems to behave appropriately. "How do we get them to stop saying bad words? How do we get them to stop giving you advice on killing yourself? How do we get them to stop giving you advice on building chemical weapons? Well, the only way we have of doing that is, when they do it, we say 'bad dog.' And we hope that they understand what 'bad dog' means. But they don't; they keep doing it. You say 'bad dog' again, and they keep doing it. But if you say 'bad dog' a few million times, you can gradually lower the level of bad behavior," Russell said.
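A toy caricature of the feedback loop Russell is describing, again with everything (the outputs, the penalty size) invented for illustration: each time the model samples the bad output, it receives a small penalty, and the bad behavior only gradually becomes less likely, never impossible. Real systems use reinforcement learning from human feedback over full responses; this sketch captures only the "say 'bad dog' a few million times" dynamic.

```python
# Toy caricature of "bad dog" feedback: repeatedly penalising a sampled
# bad output only gradually lowers its probability, and never to zero.
import math
import random

outputs = ["helpful answer", "harmless answer", "bad advice"]
logits = [0.0, 0.0, 0.0]           # the model starts indifferent

def sample():
    weights = [math.exp(l) for l in logits]
    return random.choices(range(len(outputs)), weights=weights)[0]

bad = outputs.index("bad advice")
for trial in range(10000):
    if sample() == bad:
        logits[bad] -= 0.01        # "bad dog": a small penalty each time

z = sum(math.exp(l) for l in logits)
print(f"P(bad advice) after training: {math.exp(logits[bad]) / z:.4f}")
```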
Russell argues that these large language models are a technology that is incredibly unpredictable and incredibly powerful. "That's being released to billions of people, and we have no idea how it works. This is a recipe for disaster. And we've already seen disasters, for example, systems encouraging people to kill themselves and actually resulting in death," Russell said.
The petition that Russell has signed asks that we not deploy systems whose behavior we do not understand and for which we cannot guarantee there is no significant risk to the public. This aligns with the AI principles of the Organisation for Economic Co-operation and Development (OECD), which many governments around the world have already endorsed.
Russell emphasizes that he is not calling for a halt to the entire AI revolution, just a hold on the deployment of large language models that are more powerful than the ones that have already been released. "We need to take the time to understand how these systems work and how to ensure they behave safely before we release them to the public," Russell said.