
Why AI should be afraid of us

Artificial intelligence is slowly catching up with ours. AI algorithms can now consistently beat us at chess, poker and multiplayer video games, generate images of human faces that are indistinguishable from real ones, and write newspaper articles (not this one!).

But AI isn’t perfect yet, if Woebot is any indicator. Woebot, as Karen Brown wrote this week in Science Times, is an AI-powered smartphone app that aims to provide low-cost counseling, using dialogue to guide users through the basic techniques of cognitive behavioral therapy. But many psychologists doubt that an AI algorithm can ever express the kind of empathy required to make interpersonal therapy work.

“These applications really shortchange the essential ingredient that, tons of evidence shows, is what helps in therapy, which is the therapeutic relationship,” Linda Michaels, a Chicago-based therapist who is co-chair of the Psychotherapy Action Network, a professional group, told The Times.

Empathy, of course, is a two-way street, and we humans don’t show much more of it for bots than bots show for us. Numerous studies have shown that when people are placed in a situation where they can cooperate with a benevolent AI, they are less likely to do so than if the bot were a real person.

“It seems that something is missing when it comes to reciprocity,” Ophelia Deroy, a philosopher at Ludwig Maximilian University in Munich, told me. “Basically, we would treat a perfect stranger better than AI.”

In a recent study, Dr. Deroy and her neuroscientist colleagues sought to understand why. The researchers paired human subjects with unseen partners, sometimes human and sometimes AI; each pair then played a series of classic economic games (Trust, Prisoner’s Dilemma, Chicken and Stag Hunt, as well as one the researchers created called Reciprocity) designed to gauge and reward cooperation.

Our lack of reciprocity with AI is generally assumed to reflect a lack of trust. It’s hyperrational and unfeeling, after all, surely out only for itself and unlikely to cooperate, so why should we be? Dr. Deroy and her colleagues came to a different and perhaps less comforting conclusion. Their study found that people were less likely to cooperate with a bot even when the bot was eager to cooperate. It’s not that we don’t trust the bot, it’s that we do: the bot is guaranteed to be benevolent, a capital-S sucker, so we exploit it.
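The exploitation logic here is just the textbook incentive of a one-shot Prisoner’s Dilemma. As a rough illustration, here is a minimal Python sketch of why a partner who is guaranteed to cooperate invites defection; the payoff numbers are standard illustrative values, not the actual parameters of Dr. Deroy’s study:

```python
# Illustrative one-shot Prisoner's Dilemma. Payoffs follow the textbook
# ordering T > R > P > S; these values are assumptions for illustration,
# NOT the parameters used in the study discussed above.

# payoffs[(my_move, partner_move)] -> my payoff
PAYOFFS = {
    ("cooperate", "cooperate"): 3,  # R: reward for mutual cooperation
    ("cooperate", "defect"):    0,  # S: the "sucker" payoff
    ("defect",    "cooperate"): 5,  # T: temptation to exploit a cooperator
    ("defect",    "defect"):    1,  # P: punishment for mutual defection
}

def best_response(partner_move: str) -> str:
    """Return the payoff-maximizing move against a known partner move."""
    return max(("cooperate", "defect"),
               key=lambda my_move: PAYOFFS[(my_move, partner_move)])

# Against a bot that is guaranteed benevolent, pure self-interest defects:
print(best_response("cooperate"))  # -> "defect"
```

With any payoffs in that ordering, defecting against a known cooperator is strictly better for you, which is exactly the capital-S-sucker problem the study describes.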

This finding was confirmed by subsequent conversations with study participants. “Not only did they tend not to reciprocate the cooperative intentions of the artificial agents,” Dr. Deroy said, “but when they basically betrayed the bot’s trust, they reported no guilt, whereas with humans they did.” She added: “You can just ignore the bot, and there is no feeling that you have broken any mutual obligation.”

This could have real-world implications. When we think of AI, we tend to think of the Alexas and Siris of our future world, with whom we might form some kind of faux-intimate relationship. But most of our interactions will be one-time encounters, often wordless. Imagine driving on the freeway when a car wants to merge in front of you. If you notice that the car is driverless, you will be far less inclined to let it in. And if the AI doesn’t account for your bad behavior, an accident could ensue.

“What sustains cooperation in society at any scale is the establishment of certain norms,” Dr. Deroy said. “The social function of guilt is precisely to make people follow social norms that lead them to compromise, to cooperate with others. And we have not evolved to have social or moral norms for non-sentient creatures and robots.”

This is, of course, half of the premise of “Westworld.” (To my surprise, Dr. Deroy had not heard of the HBO series.) But a guilt-free landscape could have consequences, she noted: “We’re creatures of habit. So what’s to guarantee that the behavior that gets repeated, and where you show less politeness, less moral obligation, less cooperation, will not color and contaminate the rest of your behavior when you interact with another human?”

There are similar consequences for AI, too. “If people treat them badly, they’re programmed to learn from what they experience,” she said. “An AI that has been put on the road and programmed to be benevolent should start to be not that kind to humans, because otherwise it will be stuck in traffic forever.” (That’s the other half of the premise of “Westworld,” basically.)

There you have it: the real Turing test is road rage. When a self-driving car starts honking wildly from behind because you cut it off, you’ll know that humanity has reached the pinnacle of achievement. By then, hopefully, AI therapy will be sophisticated enough to help driverless cars with their anger-management issues.


