
Google Fires Researcher Who Claims AI LaMDA Is Conscious

Google has fired the engineer who claimed LaMDA's artificial intelligence was conscious, claims that were very poorly received by the research community.

Blake Lemoine, an engineer who spent seven years at Google, has been fired, reports Alex Kantrowitz in the Big Technology newsletter. Lemoine was due to break the news himself during a recording of the Big Technology podcast, in an episode that has not yet been published. Google has since confirmed the information to Engadget.

Google Fires Engineer Who Claims LaMDA Artificial Intelligence Gained Consciousness

Blake Lemoine, who until recently was a member of the Responsible AI team, contacted the Washington Post last month to report that one of the US AI giant's projects appeared to be conscious. The artificial intelligence in question, LaMDA, short for Language Model for Dialogue Applications, was introduced by Google last year and was designed to let computers carry on more natural, open-ended conversations. Blake Lemoine was convinced that LaMDA was conscious, and did not rule out the possibility that it had a soul. Leaving no doubt about his position, he even told Wired: "I am deeply convinced that LaMDA is a person."

After giving his statements to the press, most likely without the permission of his employer, Blake Lemoine was placed on administrative leave. Google has also publicly stated several times that its AI is in no way conscious.

Statements that were very poorly received by the research community

Several members of the AI research community were also quick to speak out against Blake Lemoine. Margaret Mitchell, who was fired from Google after speaking out about the company's lack of diversity, tweeted that systems like LaMDA don't develop intentions, they "replicate the way people express communicative intentions in the form of texts." With markedly less tact, Gary Marcus dismissed Blake Lemoine's statements as "bullshit in a bar."

Google's statement to Engadget reads: "In line with our AI Principles, we take AI development very seriously and are committed to responsible innovation. LaMDA has gone through 11 distinct, detailed reviews, and a few months ago we published a paper detailing the work on this critical development. If an employee raises concerns about our work, as Blake Lemoine did, we review them carefully. We concluded that Blake Lemoine's claims about LaMDA being conscious are completely unfounded, and we worked with him for several months to clarify this. These discussions took place within our open culture, which helps us innovate responsibly. Unfortunately, despite lengthy engagement on this topic, Blake Lemoine chose to persistently violate the company's rules, in particular the need to safeguard product information. We will continue to develop language models with care, and we wish Blake Lemoine all the best for the future."
