Google has rejected claims by one of its engineers that the Language Model for Dialogue Applications (Lamda) is sentient. Lamda is a ground-breaking technology that can engage in free-flowing conversations.
Google spokesman Brian Gabriel wrote in a statement that engineer Blake Lemoine “was told that there was no evidence that Lamda was sentient (and lots of evidence against it).”
After the claims were denied, the company also placed the engineer on paid leave.
Blake Lemoine works in Google’s Responsible AI division. According to the engineer, Lamda has its own feelings. He published a conversation that he and a collaborator at the firm had with Lamda to back up his claims. Lemoine called the chat “Is Lamda sentient? – an interview.”
During the chat, Lamda described its own sentient characteristics, claiming, “I feel happy or sad at times.”
Lemoine therefore urged Google to recognize its creation’s “wants,” including the need to be treated as an employee of Google and to obtain its consent before using it in experiments.
The sources for this piece include an article from the BBC.