This article is taken from the monthly journal Sciences et Avenir – La Recherche, no. 905-906, July-August 2022.
“Artificial intelligence is not intelligence,” Luc Julia, Renault’s scientific director and co-inventor of the voice assistant Siri, asserted in Sciences et Avenir in 2019. The formula reflects an observation shared by many researchers: machines remain far from matching human ability.
Incapable of generalizing or simulating common sense and intuition
Omnipotent autonomous machines remain a fantasy, because, despite the impressive progress of the early 2010s, the most fashionable technologies (neural networks) rest on learning and statistical calculation. These methods require many examples and vast amounts of data (images, words, sounds, situations), whereas humans need only one or two. Such AI is also unable to generalize or to model common sense or intuition.
“Sorting out” the problem
“Take concepts like stalking, avoiding, encountering… or jealousy. A child grasps them after one, two, three encounters, without ever learning a definition,” says Jean-Louis Dessalles, computer science professor at Télécom Paris. “But that is something we never test neural networks on. They operate on objects, not relationships.” He points out that these limitations stem precisely from the fact that most research is focused on neural networks, to the detriment of other approaches aimed at understanding the mechanisms of human intelligence. “If only 1% of the research effort devoted to neural networks were spent on this type of work, we would have sorted out the problem much faster.”