Google fires engineer who warned company’s artificial intelligence was sentient – Reuters

On Friday, Google fired Blake Lemoine, a software engineer who publicly raised concerns that the conversational technology the company was developing had reached sentience.

Lemoine went outside the company to consult experts about the technology's potential sentience, then publicly shared his concerns in a blog post and a subsequent interview with The Washington Post. Google suspended Lemoine in June for violating its confidentiality policies and has now fired him. Lemoine is expected to explain what happened in an upcoming podcast episode for Big Technology, the newsletter that first reported the story.

Google continues to deny that its LaMDA technology, or Language Model for Dialogue Applications, is sentient. The company says LaMDA has gone through 11 separate reviews, and in January it published a research paper on the technology. But the offense for which Lemoine was fired, Google said in a statement, was sharing confidential information.

“It’s regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information,” Google said in a statement. “We will continue our careful development of language models, and we wish Blake all the best.”

LaMDA is described as a sophisticated chatbot: send it a message and it will automatically generate a response appropriate to the context, Google spokesman Brian Gabriel said in an earlier statement. “If you ask what it’s like to be an ice cream dinosaur, it can generate text about melting, roaring, and so on.”
