
Why Concern About AI Consciousness Isn't the Most Urgent Issue

The case shook the AI community in early June: Blake Lemoine, a Google engineer, told the Washington Post that the LaMDA language model was likely self-aware. Experts in the field – and Google itself – quickly pushed back against this claim. LaMDA is a system designed to mimic conversation as realistically as possible, but that does not mean it understands what it is talking about. Worse, some researchers argue, the recurring controversy over AI consciousness distracts from more pressing questions these technologies raise.

The old obsession with robot intelligence… A marketing argument?

The idea that our technologies could be conscious is nothing new: it has haunted our imagination since Mary Shelley’s Frankenstein and the rise of science fiction. Imitating human thought is also at the heart of the Turing test, an experiment designed to assess whether a machine can pass itself off as human to an outside observer. John von Neumann, one of the fathers of modern computing, in turn laid the foundations of modern computer architecture by modeling it on the workings of the brain.

“Even today, many people fund research and work in this direction,” says Laurence Devillers, professor of artificial intelligence at LIMSI-CNRS. She cites Elon Musk, co-founder of OpenAI; Yann LeCun, head of AI research at Meta, when he raises the possibility that some machines experience emotions; and Blaise Aguera y Arcas, a vice-president at Google, when he describes LaMDA as an artificial cerebral cortex… “The fact that an engineer declares LaMDA to be conscious has marketing value,” the researcher explains. “It positions Google in a competitive landscape.”

When empathy deceives us

In fact, LaMDA is neither the first machine capable of eliciting empathy, nor the first algorithmic model capable of producing convincing written conversation. In the 1960s, the computer scientist Joseph Weizenbaum created Eliza, a program that mimics the responses of a psychotherapist. The machine worked so well that people confided intimate details to it. We now call the “Eliza effect” the human tendency to attribute more capability to a technical system than it actually has. Closer to LaMDA, the large language model GPT-3, available since 2020, can also convincingly impersonate a journalist, a squirrel, or a resurrected William Shakespeare.
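
As a rough illustration (this is not Weizenbaum’s original code, and the rules below are invented for the example), the trick behind Eliza can be sketched in a few lines of Python: the program never understands anything, it only matches patterns in what you typed and reflects them back in canned templates.

```python
import re
import random

# Invented, minimal rule set: a pattern to match in the user's sentence,
# and reply templates that reuse the matched fragment.
RULES = [
    (r"I feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"I am (.*)", ["Why do you say you are {0}?"]),
    (r"my (.*)", ["Tell me more about your {0}."]),
]

def eliza_reply(utterance: str) -> str:
    for pattern, templates in RULES:
        match = re.search(pattern, utterance, re.IGNORECASE)
        if match:
            fragment = match.group(1).rstrip(".!?")
            return random.choice(templates).format(fragment)
    return "Please go on."  # fallback when no pattern matches

print(eliza_reply("I feel lonely since my promotion."))
# e.g. "Why do you feel lonely since my promotion?"
# Weizenbaum's real Eliza added pronoun swaps ("my" -> "your"), but the
# principle is the same: reflection, not comprehension.
```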

But the fact that users, expert or not, can take these outputs at face value is precisely what worries a growing number of scientists. According to the linguist Emily Bender, this amounts to an abuse of our capacity for empathy, the very one that makes us project a semblance of humanity onto inanimate objects. LaMDA, Laurence Devillers points out, is “fundamentally inhuman”: the model is trained on 1.56 trillion words, it has no body, no history, and it produces its answers through probabilistic calculations…
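
To make those “probabilistic calculations” concrete, here is a deliberately tiny sketch (the words and probabilities are made up; a real model like LaMDA scores a huge vocabulary with a neural network): at each step the system assigns a probability to candidate next words and samples one, nothing more.

```python
import random

# Toy, made-up distribution over the next word after the prompt "I feel".
next_word_probs = {
    "happy": 0.4,
    "sad": 0.3,
    "alive": 0.2,
    "electric": 0.1,
}

words = list(next_word_probs)
weights = list(next_word_probs.values())

# Sample one next word according to the probabilities; repeated word by word,
# this is how fluent text emerges without any understanding behind it.
chosen = random.choices(words, weights=weights, k=1)[0]
print("I feel", chosen)
```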

Artificial intelligence is a social justice issue

Shortly before the Lemoine affair, the philosophy doctoral student Giada Pistilli announced that she would no longer discuss the possible consciousness of machines, because it diverts attention from ethical and social problems that already exist. In this she follows Timnit Gebru and Margaret Mitchell, two AI ethics researchers fired by Google… for pointing out the social and environmental risks associated with large language models. “It is a question of power,” analyzes Raziye Buse Çetin, an independent AI policy researcher. “Do we highlight and fund the quest for the machine we dream of making conscious, or rather the efforts to correct the social, sexist, or racist biases of the algorithms already present in our daily lives?”

The ethical problems raised by the algorithms that surround us daily are countless: what data do they learn from? How are their errors corrected? What happens to the texts that users send to chatbots built on models like LaMDA? In the United States, a helpline for people at risk of suicide used the responses of these highly vulnerable people to train a commercial technology. “Is that acceptable? We need to think about how data is used today, about the value of our consent in the face of algorithms whose very existence we sometimes don’t even know about, and to look at their cumulative effect, since algorithms are already widely used in education, recruitment, credit scoring…”

Regulation and education

The topic of AI consciousness crowds out other debates “about the technical limits of these technologies, the discrimination they cause, their impact on the environment, the biases present in the data,” lists Tiphaine Viard, a lecturer at Télécom Paris. Behind the scenes, this debate has occupied the scientific and legislative communities for several years now because, according to the researcher, “the issues are similar to what happened with social networks.” Big tech companies long argued that they did not need to be regulated, that they could handle it themselves: “The result, fifteen years later, is that we are telling ourselves we need a political and citizen perspective.”

What, then, should be done so that algorithms do not harm society? Explainability and transparency of models are two of the avenues under discussion, notably in the context of forthcoming European regulation of AI. “These are good leads,” Tiphaine Viard continues, “but what should they look like in practice? What counts as a good explanation? What avenues of appeal exist if it turns out that discrimination has occurred? For now, there is no settled answer.”

Another key topic, Laurence Devillers stresses, is education. “We have to learn to deal with these sociotechnical objects very early on”: teach coding, explain to people how algorithms work, help them develop critical skills… Otherwise, faced with machines designed to imitate humans, “users risk becoming objects of manipulation.” Education, the computer scientist continues, will be the best way to allow “everyone to think about how to adapt to these advanced technologies, to the frictions we may want to introduce, to their limits, to their acceptability.” And she insists on building an ethical ecosystem “where manufacturers are not in charge of their own regulation.”
