Conscious and dangerous AI like Skynet is inevitable, says former Google boss

In a recent interview, a former Google executive voiced his fear that artificial intelligence is evolving too rapidly toward a possibly conscious form, known as "artificial general intelligence" (AGI). He states that, in his view, AI now represents "a great threat to humanity."

In a striking statement for a former executive at such a company, he adds that this kind of artificial intelligence could one day become worthy of a science-fiction film – even dangerously conscious and uncontrollable, like Skynet from the "Terminator" movie series.

Mo Gawdat, a former chief business officer of Google's "moonshot" organization, known at the time as Google X, sounded the warning in a new interview with The Times. He says he is convinced that AGI is inevitable and that "once it is there, humanity could very well find itself facing an apocalypse caused by a machine far superior to man."

The uncanny valley: "It froze me completely"

Gawdat told The Times that he had this chilling realization while working with artificial intelligence developers at Google X, who were building robotic arms capable of finding and picking up a small ball. After a period of slow progress, Gawdat explains, one arm caught the ball and appeared to hold it up to the researchers in a gesture he interpreted as a "boast." "And suddenly I realized that it was really scary," Gawdat said. "It froze me completely." "The reality is … we are creating God," he added. By "God," he means an omniscient, omnipresent entity superior to man.

However, it must be admitted that his fears seem somewhat exaggerated once you know what they are based on. Gawdat interpreted the robotic arm's movement through a human lens, attributing to it an emotion or gesture he had previously seen in people. This "haunting" frontier of humanoid robotics is known as the "uncanny valley," first described by Japanese roboticist Masahiro Mori in 1970.

According to this theory, the more closely a robot resembles a human being, the more unsettling its imperfections appear to us, as with the robotic arm Gawdat describes. That is why many people feel more comfortable around a clearly artificial robot than around one given skin, clothes and a face designed to pass as human.

When AI evolves faster than society …

AI fears abound in the tech industry: Elon Musk, for example, has repeatedly warned the world of the dangers of AI one day overpowering humanity. But this kind of speculative perspective overlooks the real dangers and harms of AI that we already face.

For example, facial recognition and predictive policing algorithms have done real harm in underserved communities. Countless algorithms continue to propagate and encode prejudice, even racism, across many fields – such as a Facebook AI that was recently criticized for labeling Black men appearing in a video as "primates." Not to mention AI in military applications, which may soon make critical decisions on its own.

These issues can be addressed through oversight and regulation, but given the rapid evolution and, more importantly, the increasingly diverse applications of AI, the challenge is daunting. It will therefore be necessary, at some point, to limit the applications of the underlying technologies so that regulation and oversight can keep pace.