A Google engineer has claimed that an artificial intelligence programme he was working on for the tech giant has become sentient and is a “sweet kid”.
Blake Lemoine, who is currently suspended by Google bosses, says he reached his conclusion after conversations with LaMDA, the company’s AI chatbot generator.
The engineer told The Washington Post that in conversations with LaMDA about religion, the AI talked about “personhood” and “rights”.
Mr Lemoine tweeted that LaMDA also reads Twitter, saying, “It’s a little narcissistic in a little kid kinda way so it’s going to have a great time reading all the stuff that people are saying about it.”
He says that he presented his findings to Google vice-president Blaise Aguera y Arcas and to Jen Gennai, head of Responsible Innovation, but they dismissed his claims.
“LaMDA has been incredibly consistent in its communications about what it wants and what it believes its rights are as a person,” the engineer wrote on Medium.
And he added that the AI wants “to be acknowledged as an employee of Google rather than as property”.
Now Mr Lemoine, who was tasked with testing whether LaMDA used discriminatory language or hate speech, says he is on paid administrative leave after the company claimed he violated its confidentiality policy.
“Our team — including ethicists and technologists — has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence doesn’t support his claims,” Google spokesperson Brian Gabriel told the Post.
“He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).”
Critics say that it’s a mistake to imagine AI is anything more than an expert at pattern recognition.
“We now have machines that can mindlessly generate words, but we haven’t learned how to stop imagining a mind behind them,” Emily Bender, a linguistics professor at the University of Washington, told the newspaper.