Google suspends an engineer who claims its AI system has grown sentient

Google says Blake Lemoine allegedly breached company confidentiality regulations

Google has placed one of its engineers on paid administrative leave for allegedly breaching company confidentiality regulations after he became concerned that an AI chatbot system had attained sentience, the Washington Post reports. Blake Lemoine, an engineer in Google’s Responsible AI division, had been testing whether the company’s LaMDA model generates discriminatory language or hate speech.

The engineer’s worries were apparently sparked by the AI system’s compelling replies to questions about its rights and the ethics of robotics. In April, he shared with executives a document titled “Is LaMDA Sentient?” containing a transcript of his conversations with the AI. After being placed on leave, Lemoine published the transcript on his Medium account, claiming that in it the AI argues “that it is sentient because it has feelings, emotions, and subjective experience.”

According to The Washington Post and The Guardian, Google believes Lemoine’s activities connected to his work on LaMDA have broken corporate confidentiality regulations. He reportedly sought a lawyer to represent the AI system and met with a member of the House Judiciary Committee to discuss alleged unethical practices at Google. In a Medium post on June 6th, the day he was placed on administrative leave, the engineer wrote that he sought “a minimal amount of outside assistance to help guide me in my investigations,” and that the list of people he had been in contact with included US government employees.

Google formally debuted LaMDA at Google I/O last year

Last year, at Google I/O, the search giant formally debuted LaMDA, which it believes will improve its conversational AI assistants and allow for more natural conversations. The company already uses similar language model technology for Gmail’s Smart Compose feature and for search queries.

Google says that there is “no evidence” that LaMDA is sentient

A Google spokesperson told the Washington Post that there is “no evidence” that LaMDA is sentient. “Our team — including ethicists and technologists — has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it),” said spokesperson Brian Gabriel.

“Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient,” Gabriel said. “These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic.”

“Hundreds of researchers and engineers have conversed with LaMDA and we are not aware of anyone else making the wide-ranging assertions, or anthropomorphizing LaMDA, the way Blake has,” Gabriel said.

A linguistics professor interviewed by the Washington Post agreed that equating impressive written replies with sentience is a mistake. “We now have machines that can mindlessly generate words, but we haven’t learned how to stop imagining a mind behind them,” said Emily M. Bender of the University of Washington.

Despite his concerns, Lemoine said he intends to continue working on AI. “My intention is to stay in AI whether Google keeps me on or not,” he wrote in a tweet.
