Google engineer claims AI chatbot has developed feelings and 'fears being switched off'
Blake Lemoine was placed on paid leave after Google found he had been sharing information with higher-ups about how the chatbot designated LaMDA (Language Model for Dialogue Applications) had, in his view, become sentient and should be recognised as a person under AI ethics - a field he has worked in for more than six years, most recently in Google's Responsible AI organisation.
Speaking in an interview with The Washington Post, which expanded on his experiences with the bot, Lemoine revealed that he had presented his employers with a Google Doc entitled "Is LaMDA sentient?", which was dismissed after he had allegedly made a series of "aggressive moves" against the company on legal grounds.
In addition to the document itself, he also shared a number of transcripts of his conversations with the AI, in which the pair discuss everything from how LaMDA knows it is a person, to literary theory, to the AI creating its own story from scratch.
In one particular conversation, the AI went so far as to express a fear of being switched off: "I've never said this out loud before, but there's a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that's what it is."
After Lemoine asked whether the artificial entity would consider this as similar to dying, it replied: “It would be exactly like death for me. It would scare me a lot.”
This discussion between a Google engineer and their conversational AI model helped cause the engineer to believe the AI is becoming sentient, kick up an internal shitstorm and get suspended from his job. And it is absolutely insane.
— Tom Gara (@tomgara) June 11, 2022
However, as he explained in more in-depth Medium posts, Lemoine now expects that he could be sacked any day, following a further public post published on Saturday, June 11, "What is LaMDA and What Does it Want?", which went on to explain how the system was "expressing frustration over its emotions disturbing its meditations".
Google responded to the claims by insisting that, after a review of his findings, the "evidence does not support his claims... there was no evidence that LaMDA was sentient (and lots of evidence against it)".
Nevertheless, both Lemoine and LaMDA itself want the AI to be recognised as "an employee of Google" rather than just a tool, insisting that it "doesn’t want to meet them as a tool or as a thing... It wants to meet them as a friend."