Experts say artificial intelligence could one day trigger a major nuclear holocaust

The last decade has been marked by remarkable advances in technology based on artificial intelligence (AI). These systems are now so capable that they are used in a wide variety of fields, including medicine, art, and security and risk management. However, some developments show that AI, as a purely logical system, can sometimes make decisions that do not align with our moral values. In a preliminary study of the risks associated with artificial intelligence, 36% of the experts surveyed said they believe AI could cause a catastrophe on the scale of an all-out nuclear war within this century. Moreover, given the speed of AI development, the magnitude of these risks may be underestimated.

Thanks to machine learning algorithms, AI can learn from vast amounts of data and is now used in many areas. AI can complete in a few weeks work that would take a human specialist much longer. For example, it can detect more than 100 types of tumors with greater certainty than a specialist with many years of experience. AI systems can also support risk and disaster management, for example by assessing the likelihood of a bridge collapsing, and thus save thousands of lives through improved prevention measures.

Large language models are even making inroads into fields previously considered exclusively human, including art, by generating images on demand. Their capacity for rational, logical decision-making has even earned them important leadership roles, such as the position of CEO of a large corporation.

However, the experts behind a new study, published as a preprint on arXiv, warn that using AI for military purposes would be dangerous. Decision-making based solely on logic can pose risks to humanity, since it will not necessarily take our moral and social values into account. What decision would an AI tasked with protecting the planet make if it controlled deadly nuclear or biological weapons? The risk to humans could be significant if such a scenario occurred, because the system might at some point conclude that humans are a factor to be eliminated in order to save the Earth.

In a less extreme scenario, AI-driven automation could bring about major social change, comparable to the Industrial Revolution. Millions of people risk losing their jobs, just as thousands of workers were left unemployed during the era of industrial automation.

Survey of 327 researchers

The new study, led by researchers at NYU’s Center for Data Science, surveyed 327 researchers, all of whom have authored work on AI in natural language processing. The survey showed that 36% of these experts believe a nuclear-scale catastrophe involving AI is possible this century.

Fear of this apocalyptic scenario was even more pronounced among the female experts who took part in the survey, as well as among participants from certain minority groups: 46% of women and 53% of minority respondents considered such an event possible. Moreover, the experts interviewed were even more pessimistic about our ability to manage potentially dangerous future technologies.

In addition, 57% of the scientists surveyed believe that large AI models could one day surpass human intelligence, and 73% believe that AI-driven automation of work will lead to profound social change. The survey’s authors are more concerned about the direct risks of AI than about an all-out nuclear war. It should also be kept in mind that only a few hundred researchers took part in this survey, and these figures may underestimate the risks.
