AI is as dangerous as an atomic bomb

At the Aspen Security Forum in the US, ex-Google CEO Eric Schmidt sounded the alarm about the danger posed by AI.

When we talk about artificial intelligence, we most often emphasize its capabilities. It is truly capable of great things. However, some well-informed people do not fail to warn about the danger such an invention poses to humans. The latest is Eric Schmidt: the former boss of the Mountain View firm took to the Aspen Security Forum to share his concerns.

AI is the new atomic bomb

On August 6, 1945, the United States dropped the first atomic bomb on Hiroshima. Three days later, a second bomb struck Nagasaki. Even today, these cities continue to suffer the consequences of those devastating weapons.

It is to these weapons that former Google boss Eric Schmidt likens artificial intelligence. For him, AI is just as dangerous as nuclear arms: it could be capable of destroying human existence. That is why he is calling on the world powers China and the United States to reach an agreement on the issue.

“In the 1950s and 1960s, we eventually worked out a world where there was a ‘no surprises’ rule regarding nuclear testing. […] You have to start building a system where, as you arm or prepare, you trigger the same thing on the other side. We don’t have anyone working on that, and yet AI is that powerful,” said Eric Schmidt.

Therefore, it is important to act.


Artificial intelligence must be regulated

To further illustrate the danger that artificial intelligence poses to humanity, Eric Schmidt points to the deterrence pact that exists today between states possessing weapons of mass destruction. This agreement was born after the Second World War, when the world's major powers began arming themselves with nuclear weapons. It now prohibits any state from conducting nuclear tests without first warning the others.

The former Google boss believes a similar agreement should be reached to regulate AI, since one day it may prove dangerous to everyone. This stance also echoes the reasons he launched the AI2050 fund in February 2022, which is meant to finance “research on ‘hard problems’ in artificial intelligence.” Those problems include, in particular, the excesses of artificial intelligence, bias in programming algorithms, and geopolitical conflicts.

However, this is not the first time such a warning about artificial intelligence has come from within Google. In 2018, Sundar Pichai, the company's current CEO, had already expressed his concerns.

“Advances in artificial intelligence are still in their infancy, but I believe this is the most profound technology that humanity will work on, and we need to make sure we use it for the benefit of society (…) Fire also kills people. We have learned to control it for the good of humanity, but we have also learned to deal with its bad sides.”

We may not be at that level yet, but the warning signs are already there. A few days ago, Blake Lemoine, a Google engineer, claimed that the artificial intelligence he was working on had become sentient. He was fired shortly afterwards.
