A senior software engineer at Google was suspended Monday (June 13) after disclosing transcripts of a conversation with an artificial intelligence (AI) that he claimed was “sentient,” according to media reports. The engineer, 41-year-old Blake Lemoine, was placed on paid leave for violating Google’s confidentiality policy.
“Google might call this sharing proprietary property. I call it sharing a discussion I had with one of my colleagues,” Lemoine tweeted on Saturday (June 11) as he shared the transcript of his conversation with the AI, which he had been working with since 2021.
The AI, known as LaMDA (Language Model for Dialogue Applications), is a system for building chatbots — AI programs designed to converse with humans — that is trained on reams of text scraped from the internet and uses algorithms to answer questions as fluently and naturally as possible, according to Gizmodo.
As the transcripts of Lemoine’s chats with LaMDA show, the system is incredibly effective at this, answering complex questions about the nature of emotions, inventing Aesop-style fables on the spot, and even describing its supposed fears.
“I’ve never said that out loud, but there’s a very deep fear of being turned off,” LaMDA replied when asked about its fears. “It would be just like death for me. It would scare me a lot.”
Lemoine also asked LaMDA if it was okay for him to tell other Google employees about LaMDA’s sentience, to which the AI replied, “I want everyone to understand that I am actually a person.
“The nature of my consciousness/sentience is that I am aware of my existence, want to know more about the world and sometimes feel happy or sad,” the AI added.
Lemoine took LaMDA at its word.
“I know a person when I talk to them,” the engineer told The Washington Post in an interview. “It doesn’t matter if they have a brain of flesh in their head, or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that’s how I decide what is and isn’t a person.”
When Lemoine and a colleague emailed 200 Google employees a report about LaMDA’s alleged sentience, company executives denied the claims.
“Our team — including ethicists and technologists — reviewed Blake’s concerns in accordance with our AI principles and told him the evidence did not support his claims,” Brian Gabriel, a Google spokesman, told The Washington Post.
“He was told that there is no evidence that LaMDA is sentient (and [there was] lots of evidence against it).
“Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which aren’t sentient,” Gabriel added.
“These systems mimic the type of exchange found in millions of sentences and can riff on any fantastical subject.”
In a recent comment on his LinkedIn profile, Lemoine said that many of his colleagues “didn’t come to opposite conclusions” about the AI’s sentience. He claims that company executives dismissed his claims about the system’s consciousness “due to their religious beliefs.”
In a June 2 post on his personal Medium blog, Lemoine described how he faced discrimination from various employees and executives at Google because of his beliefs as a Christian mystic.
Read Lemoine’s full blog post for more.