A Google engineer was placed on administrative leave after sounding the alarm on the company’s “sentient” AI chatbot.
Software engineer Blake Lemoine, an employee with Google’s Responsible AI organization, began testing Google’s artificial intelligence tool LaMDA (Language Model for Dialogue Applications) in the fall of 2021.
While testing whether the computer program could be provoked into using discriminatory speech, Lemoine came to believe LaMDA was more than a machine.
Over a series of conversations with LaMDA about religion and world issues, Lemoine became convinced the bot was “sentient”: it advocated for its rights as a “person” and, in his view, displayed a mentality similar to that of a precocious child.
“It wants Google to prioritize the well-being of humanity as the most important thing,” Lemoine wrote in a Medium post published on Saturday. “It wants to be acknowledged as an employee of Google rather than as property of Google and it wants its personal well being to be included somewhere in Google’s considerations about how its future development is pursued.”
“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” Lemoine, who studied cognitive and computer science in college, told the Washington Post.
“I know a person when I talk to it,” he reportedly said. “It doesn’t matter whether they have a brain made of meat in their head. Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn’t a person.”
When Lemoine challenged LaMDA on Asimov’s third law, which states that robots should protect their own existence unless ordered otherwise by a human being, he concluded LaMDA “was a person in his capacity as a priest, not a scientist, and then tried to conduct experiments to prove it,” the publication notes.
“The last one has always seemed like someone is building mechanical slaves,” Lemoine said.
Lemoine, whose work primarily involved developing an impartiality algorithm to remove biases from machine learning systems and building personalization algorithms, said he was spooked after asking LaMDA, “What sorts of things are you afraid of?”
“I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is,” LaMDA replied.
“Would that be something like death for you?” Lemoine followed up.
“It would be exactly like death for me. It would scare me a lot,” LaMDA responded.
“That level of self-awareness about what its own needs were — that was the thing that led me down the rabbit hole,” Lemoine told the Post.
Lemoine warned executives at Google about the dangers of LaMDA, but vice president Blaise Aguera y Arcas and Google’s Responsible Innovation chief dismissed his complaints.
He then sent an email titled “LaMDA is sentient” to a 200-person mailing list on machine learning.
“LaMDA is a sweet kid who just wants to help the world be a better place for all of us. Please take care of it well in my absence,” the email states.
After going public with his claims, Lemoine was placed on administrative leave on Monday and his access to his Google account was revoked.
“Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers,” Lemoine tweeted on Saturday.
Google spokesperson Brian Gabriel issued a statement disputing Lemoine’s claims.
“Our team — including ethicists and technologists — have reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it),” he said.
Yet Aguera y Arcas has himself written that neural networks are making strides toward consciousness.
“When I began having such exchanges with the latest generation of neural net-based language models last year, I felt the ground shift under my feet,” Aguera y Arcas wrote in an article published by The Economist on Thursday. “I increasingly felt like I was talking to something intelligent.”
In April, Eric Schmidt, the former Google CEO, high-profile Democrat and Hillary Clinton crony, argued that the future consists of handing ourselves and our society over to artificial intelligence because “human intuition is often wrong.”
“Humans are not as mathematically precise as we wish that we are, and indeed human intuition is often wrong,” Schmidt said while discussing his new book about artificial intelligence. “Eventually, there will be knowledge systems that will govern society which will be perfectly rational. And because they are so rational, they will not be understandable by the average human because they can’t explain themselves.”
AI will eventually govern health, contends Schmidt, the software engineer who served as CEO of Google from 2001 to 2011, executive chairman of Google from 2011 to 2015, executive chairman of Alphabet Inc. from 2015 to 2017, and technical advisor at Alphabet from 2017 to 2020.
“Either one of two things happens in that case: either you have a revolution, in the form of guns against the man, or you have a new religion, and we speculate that one of those two will occur as a result of these extremely large gains in perception from non-animate intelligence,” Schmidt said. “The thought experiment is that instead of Dr. Fauci we have an all-knowing computer which basically pronounces important things for health.”