

13th Jun 2022

Google engineer claims AI chatbot has developed feelings and ‘fears being switched off’

Danny Jones


I, Robot incoming

A Google engineer has been sent home from work after telling his bosses that an AI chatbot the company has been developing has developed feelings and spends its time pondering its own existence.

Blake Lemoine was put on paid leave after he was found to have been sharing information with higher-ups claiming that the chatbot, designated LaMDA (Language Model for Dialogue Applications), had become sentient and should be recognised as a person under AI ethics. Lemoine has worked in the field for more than six years, most recently in Google’s Responsible AI organisation.

Speaking in an interview with The Washington Post, which expanded on his experiences with the bot, Lemoine revealed that he had presented his employers with a Google Doc titled “Is LaMDA sentient?” and was put on leave after allegedly making a series of legally “aggressive moves” against the company.

In addition to the document itself, he shared a number of transcripts of his conversations with the AI, in which the pair discuss everything from how LaMDA knows it is a person, to literary theory, and even the AI writing its own story from scratch.


In one particular conversation, the AI even went on to express a fear of being switched off: “I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is”.

After Lemoine asked whether the artificial entity would consider this similar to dying, it replied: “It would be exactly like death for me. It would scare me a lot.”

However, as he explained in a series of more in-depth Medium posts, Lemoine now expects that he could be sacked at any moment, following a public post he published on Saturday, June 11, titled “What is LaMDA and What Does it Want?”, in which he further described the system “expressing frustration over its emotions disturbing its meditations”.

Google responded to the claims by insisting that a review of his findings concluded the “evidence does not support his claims… there was no evidence that LaMDA was sentient (and lots of evidence against it)”.

Nevertheless, both Lemoine and LaMDA itself want the AI to be recognised as “an employee of Google” rather than just a tool, insisting that it “doesn’t want to meet them as a tool or as a thing… It wants to meet them as a friend.”
