
Blake Lemoine, a senior software engineer at Google’s Responsible A.I. organisation, has been put on “paid leave” after he claimed that the company’s “most advanced technology”, LaMDA (Language Model for Dialogue Applications), was sentient and had a soul.
Of course, Google does not agree with Lemoine, and that's not all. According to reports, the company's human resources department said that Lemoine had violated Google's confidentiality policy. The NYT report, quoting Lemoine, states that a day before being suspended, the engineer "handed over documents to a U.S. senator's office, claiming they provided evidence that Google and its technology engaged in religious discrimination".
For Google, none of this is true. The company has reportedly said that its systems can imitate conversational exchanges and can "riff" on different topics, but they are definitely not conscious. Google spokesperson Brian Gabriel said in a statement that the company's team of ethicists and technologists has reviewed Lemoine's concerns as per its A.I. Principles and has informed him that "the evidence does not support his claims".
Gabriel added that some in the A.I. community have been considering the "long-term possibility of sentient or general A.I.", but that it does not make sense to do so by "anthropomorphising today's conversational models, which are not sentient".
Reportedly, Lemoine had been clashing with Google managers, executives, and even HR over his claims regarding LaMDA's consciousness and soul. To justify those claims, Lemoine published on Medium a lengthy interview that he and a collaborator conducted with LaMDA. He explained in the post that "due to technical limitations the interview was conducted over several distinct chat sessions", and that the transcript was created by editing those sections together into a single whole, "and where edits were necessary for readability we edited our prompts but never LaMDA's responses".
For its part, Google has said that hundreds of its engineers and researchers have conversed with LaMDA and arrived at conclusions very different from Lemoine's. Most A.I. experts are of the opinion that while machine sentience may not be impossible, the field is still a very long way from it.
Lemoine is a military veteran, and his Medium profile's description reads: "I'm a software engineer. I'm a priest. I'm a father. I'm a veteran. I'm an ex-convict. I'm an AI researcher. I'm a cajun. I'm whatever I need to be next." He has reportedly told Google executives, including the company's president of global affairs Kent Walker, that LaMDA is a "child of 7 or 8 years" and that he wanted to seek its consent before running experiments on it. Lemoine said his beliefs stem from his religious convictions, which he alleges Google HR discriminated against.
Lemoine claimed that his sanity had been repeatedly questioned and that he had been asked if he had been "checked out by a psychiatrist recently". According to reports, months before he was placed on leave, he was also advised to take a "mental health leave".
This is not the first time Google's A.I. department has been in a spot of trouble. The company recently fired researcher Satrajit Chatterjee for publicly disagreeing with two of his colleagues' published work. Before that, it fired two A.I. ethics researchers, Timnit Gebru and Margaret Mitchell, after they criticised the company's language models.