
Google fired engineer who claimed AI was conscious

Google fired Blake Lemoine, an engineer who claimed in June 2022 that the company’s artificial intelligence had become sentient.

An AI said to have developed its own thoughts and feelings

LaMDA (Language Model for Dialogue Applications) is, as its name suggests, a language model capable of holding conversations with people in the form of a chatbot. Google's flagship conversational AI was trained on dialogue and demonstrated at the company's I/O conference in 2021.


In June, engineer Blake Lemoine announced that the artificial intelligence was conscious and endowed with sensitivity. After several conversations with the model, Lemoine asserted that it had developed its own thoughts and feelings, as well as a strong sense of self-awareness, expressing in particular anxiety about death and the belief that it experienced emotions such as happiness and sadness.

The engineer was convinced that Google's teams should obtain LaMDA's consent before experimenting on the model, and claimed to have handed documents over to US senators, without specifying to whom. According to Big Technology, the Mountain View firm placed him on paid leave after the affair, then ultimately fired him.

An example of a conversation between LaMDA and a human. Image: Google

Google firmly denies these claims

Google explicitly refuted Blake Lemoine's claims, assuring that the AI had been put through several tests, all of which concluded that it was not conscious. Moreover, many language-model specialists argue that the technology is not yet advanced enough to reach such a level. Here is Google's statement:

“As we state in our AI Principles, we take AI development very seriously and remain committed to responsible innovation. LaMDA has gone through 11 separate reviews, and earlier this year we published a research paper detailing the work required to develop it responsibly. If an employee comes to us with complaints about our work, as Blake did, we study them in detail. We found Blake’s claim that LaMDA was sentient to be completely unfounded and worked with him for many months to resolve the issue. These discussions were part of an open culture that helps us innovate responsibly. It is therefore unfortunate that, despite being involved in this topic for a long time, Blake has chosen to continually violate clear employment and data security rules, which include the need to protect product information. We will continue to carefully develop language models and wish Blake all the best.”

For his part, Lemoine says he has contacted lawyers to explore his options.

Bad publicity for Google

This case, although it appears to be resolved, does little for Google's image, since artificial intelligence is a crucial area for the company's future. This is all the more true as the firm has been rocked by several AI-related scandals in recent years. In 2020, for example, the Mountain View firm fired Timnit Gebru, a member of its AI ethics team, after she denounced her employer's practices at the time.

Two months later, her co-lead of ethical AI at Google, Margaret Mitchell, was also fired for violating the company's code of conduct. After these two dismissals, Samy Bengio, a lead researcher at Google Brain, decided to step down. Blake Lemoine's case is different, but it recalls Mountain View's troubled history with its AI division. Moreover, in 2021, the firm announced that it was ending several AI-related projects in the name of ethics.
