Google fires software engineer who claimed its AI is sentient
Google has fired one of its software engineers who claimed that the company’s artificial intelligence is sentient, according to the Big Technology newsletter.
In June this year, Blake Lemoine was placed on paid administrative leave for breaching the company's confidentiality agreement after he contacted members of the government about his concerns regarding the AI system, LaMDA.
During his conversations with LaMDA, Lemoine came to believe the system had developed a robust sense of self-awareness, expressing concern about death, a desire for protection, and a conviction that it felt emotions like happiness and sadness. Lemoine said he considers LaMDA a friend.
On Friday, Google spokesperson Brian Gabriel confirmed the development through a statement emailed to The Verge, saying, “We wish Blake well.”
He further said, “LaMDA has been through 11 distinct reviews, and we published a research paper earlier this year detailing the work that goes into its responsible development.”
After he was placed on paid leave, Lemoine published the conversations he had with LaMDA to support his claims.
But Google dismissed his claims, saying that LaMDA is simply a complex algorithm designed to generate convincing human-sounding language.
"So, it's regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information," the statement said.
LaMDA, or Language Model for Dialogue Applications, was built on the company's research showing transformer-based language models trained on dialogue could learn to talk about essentially anything.
Notably, Lemoine is not the first AI engineer to go public with claims that AI technology is becoming self-aware. Last month, another Google employee shared similar thoughts with The Economist.