
Artificial intelligence | Conversational robots are getting more and more persuasive

(San Francisco) The young Californian company OpenAI has released a chatbot capable of answering a wide variety of questions, but its impressive performance is reviving the debate over the risks associated with artificial intelligence (AI) technologies.


Conversations with ChatGPT, posted on Twitter in particular by fascinated users, show a kind of omniscient machine capable of explaining scientific concepts, writing a theater scene or a university essay… or even producing lines of computer code that work quite well.

“Its answer to the question ‘what to do if someone has a heart attack’ was incredibly clear and to the point,” Claude de Loupy, director of Syllabs, a French company that specializes in automatic text generation, told AFP.

“When you start asking very specific questions, ChatGPT can be inaccurate,” but overall its performance remains “really impressive” with a “quite high linguistic level,” he says.

OpenAI, co-founded in San Francisco in 2015 by Elon Musk — the Tesla boss left the company in 2018 — received $1 billion from Microsoft in 2019.

It is known in particular for two automated creation programs: GPT-3 for text generation and DALL-E for image generation.

ChatGPT can ask its interlocutor for details and “has fewer hallucinations” than GPT-3, which, despite its skill, can produce completely wrong results, says Claude de Loupy.

Cicero

“A few years ago, chatbots had the vocabulary of a dictionary and the memory of a goldfish. Today, they are much better at responding consistently based on a history of requests and responses. They are no longer goldfish,” notes Sean McGregor, a researcher who collects AI-related incidents into a database.

Like other deep learning programs, ChatGPT retains one major weakness: “it has no access to meaning,” notes Claude de Loupy. The software cannot justify its choices, that is, explain why it assembled the words that form its answers in one way rather than another.

Yet AI-based technologies capable of holding a conversation increasingly give the impression that they really think.

Meta (Facebook) researchers recently developed a computer program named Cicero, after the Roman statesman.

The software has proven itself in Diplomacy, a board game that requires negotiation skills.

“If it doesn’t speak like a real person – showing empathy, building relationships and talking about the game correctly – it won’t be able to build alliances with other players,” the social media giant said in a statement.

Character.ai, a startup founded by former Google engineers, released an experimental online chatbot in October that can take on any personality. Users create characters based on a brief description and can then “chat” with a fake Sherlock Holmes, Socrates or Donald Trump.

“Simple Machine”

This degree of sophistication fascinates, but also worries, many observers at the thought that these technologies could be misused to deceive people, for example by spreading false information or creating ever more credible scams.

And what does ChatGPT itself “think” of this? “Creating very complex chatbots comes with potential dangers. […] People may believe they are interacting with a real person,” the chatbot admits when questioned about it by AFP.

This is why companies are taking steps to prevent abuse.

The OpenAI homepage warns that the conversational agent may generate “incorrect information” or produce “dangerous instructions or biased content.”

And ChatGPT refuses to take sides. “OpenAI has made it incredibly difficult to get it to express an opinion,” says Sean McGregor.

The researcher asked the chatbot to write a poem on an ethical question. “I am just a machine, a tool at your disposal / I have no power to judge or decide […],” the program replied.

“It’s interesting to see people questioning whether AI systems should behave the way users want or the way the creators intended,” OpenAI co-founder and CEO Sam Altman tweeted Saturday.

“The debate about what values to attach to these systems will become one of the most important for society,” he added.
