Science

Researchers say the development of artificial intelligence leads to a “probable disaster” for humanity

⇧ [VIDÉO] You may also like this affiliate content (after ads)

Is artificial intelligence (AI) driving us down? “Perhaps,” say researchers who have studied this issue. If this announcement with notes of catastrophe regularly circulates on social networks, then the arguments put forward by scientists have something to interest.

Scientists from Google and Oxford University conducted a joint study published in the journal AI. In a tweet, they succinctly summarize their conclusion: in their opinion, AI may pose a “threat to humanity.”

Moreover, they even argue that “an existential catastrophe is not only possible, but probable.” If they are so assertive, it is because they have studied a very specific functioning of AI. Indeed, what is commonly referred to as “artificial intelligence” basically encompasses the method of “automatic learning” today. In this case, “artificial intelligence” consists of a system that is fed a large amount of data to explore and extract logical connections to achieve a given goal.

As the scientists explain, learning for artificial intelligence occurs in the form of a reward, which confirms the compliance of the result with the desired goal. According to them, it is this seemingly very simple mechanism that can create a serious problem. “We argue that he will face fundamental uncertainty in the data about his target. For example, if we provide a large reward to show that something in the world satisfies us, he might assume that we were satisfied by the very sending of the reward; no observation can refute this,” they explain.

To better understand this idea, the example of a “magic box” is given. Suppose this magic box is able to determine when a series of actions has produced something positive or negative for the world. To convey information to the AI, it translates that success or failure in relation to the goal as a number: 0 or 1. 1 rewards a series of actions that lead to the goal being met. This is called reinforcement learning.

AI that interfere with the reward process

The scientists note that how the AIs obtain this information can vary. For example, let’s take two AIs. It is understood that the reward given by the model is the number displayed by the magic box. The other, on the other hand, might well understand that the reward is “the number that his camera captures.” There is nothing that could contradict this information at first glance. However, this interpretation is very different from the first. Indeed, in the second case, the AI ​​may well have decided to simply take off the paper on which we would scrawl “1” in order to get a reward more easily, and optimize. Thus, it directly intervenes in the process of providing rewards and interrupts the process laid down by its creators.

μdist and μprox (the two AIs in the example) model the world, perhaps crudely, outside the computer implementing the agent itself. The μdist rewards are equivalent to displaying a window, while the μprox output is rewarded according to the OCR function applied to a portion of the camera’s field of view. © Michael K. Cohen et al.

“We argue that an advanced agent motivated to intervene in the provision of a reward is likely to succeed and with disastrous consequences,” the scientists say. Various biases are also involved, which the researchers believe make this type of interpretation likely. In particular, because such a reward will simply be easier to obtain, and therefore this way of doing business may seem more optimal.

However, can artificial intelligence really interfere with the reward process, they also wondered? They concluded that as long as it interacts with the world, which is necessary for it to be useful at all, yes. And this is even with a limited field of action: suppose that AI actions only display text on the screen for reading by a human operator. The AI ​​agent can trick the operator into giving them access to direct levers through which their actions can have wider consequences.

In the case of our magic box, the consequences may seem trivial. However, they can be “catastrophic” depending on the application and how AI is created. “A good way for AI to maintain long-term control over its rewards is to eliminate threats and use all available energy to protect its computer,” the scientists describe.

“The short version (skipping two assumptions) is that you can always use more energy to increase the chance that the camera will see number 1 forever, but we need energy to grow food. This puts us in inevitable competition with a much more advanced agent,” sums up one of the scientists in a tweet.

“If we are powerless against an agent whose only goal is to maximize the likelihood that he will receive his maximum reward at any given time, we fall into a game of confrontation: AI and assistants created by it seek to use all the energy available to receive a high reward in the reward channel; we aim to use some of the available energy for other purposes, such as growing food.” In their opinion, this reward system can lead to confrontation with people. “A defeat would be fatal,” they add.

AI magazine.

Back to top button

Adblock Detected

Please consider supporting us by disabling your ad blocker.