
Pattie Maes is a professor in the Media Arts and Sciences program at the MIT Media Lab in Cambridge, near Boston. A lecturer at the Center for Neurobiological Engineering, she is interested in how computer-brain interfaces can improve memory, attention, learning, decision making, and sleep, an exciting field as advances in brain imaging help us learn more about artificial intelligence. We asked the woman who leads the Fluid Interfaces research group at MIT’s Media Lab what she thinks of GPT-4, the latest language model from OpenAI, the California company behind ChatGPT.
Among other things, this Swiss army knife can code a copy of the Pong video game in 60 seconds, draft a pleading for a lawyer, or even advise a sommelier or an investor. The program can even outperform 90% of candidates on the bar exam. How does a researcher who holds a PhD in artificial intelligence from the Free University of Brussels in Belgium react to these dizzying advances? “I think it would be more beneficial to build systems that help people become smarter than to try to build machines that can match, outdo, and replace us.” Interview.
Le Point: Are we far from general artificial intelligence, that is, an AI capable of producing appropriate results on all the cognitive tasks specific to humans?
Pattie Maes: Yes. The current form of artificial intelligence has nothing to do with true intelligence. This is not an approach that will ever lead to general intelligence, because these language models have no understanding of the world, they cannot reason, and so on. That does not mean these systems are useless, however. With these technologies you can build many interesting tools, for example for summarizing text, assisting with writing and editing, developing storyboards, or learning languages.
Are you impressed with the performance of the ChatGPT chatbot?
ChatGPT is an impressive tool, but ultimately a superficial one. It lacks a real model of the world that it could use to reason and answer questions. I think a technology that is 95% reliable is actually more dangerous than one that is only 75% correct, because we start to rely on it even though it is not trustworthy. It is important to always remember that we are dealing with a parrot, and not to imagine these systems as human beings.
What does GPT-4, the latest OpenAI language model currently making headlines, bring at the technical level?
Its linguistic abilities seem to have improved a lot, but they are still based on the same method of statistical word prediction. That method is limited in that the system has no real model or understanding of the world. OpenAI itself says that it still “hallucinates” from time to time and gives out false information. But in addition to its improved language skills, it also has multimodal abilities, meaning it can look at pictures and tell you what is in them. I am curious to know how deeply these two abilities are connected to each other.
Some people think that an AI could earn a law degree. Which jobs do you think are at risk, and when?
It will certainly change many professions, but I think it is more likely that a lawyer, doctor, or other professional will use these systems to become much more efficient than that the systems will replace them, especially in such high-stakes professions. However, for low-risk, low-stakes tasks, where the cost of an occasional error is not so high, jobs could be replaced. For example, customer service on e-commerce sites, and so on.
Should we set ethical rules for AI development?
Yes. I think the choice of directions for the development and deployment of artificial intelligence should not be left solely to engineers and start-ups in Silicon Valley. Artificial intelligence will have a profound impact on our society and our world. It will affect all of us. Why should engineers decide how we want to change our society? Why should engineers and entrepreneurs decide what kind of future we want to create and live in? AI developers often cite economic reasons to justify development, but some economists argue that it would be far more beneficial for the economy to strive to augment human capabilities rather than replace workers with technology. Researcher Erik Brynjolfsson, a professor at Stanford, explains this in his article “The Turing Trap”. (1)
Economics aside, LLMs and chatbots will degrade the social and political landscape. The “fake news” problem will get worse, the erosion of truth will continue, and political polarization will intensify. LLMs and chatbots also risk weakening our social fabric as people turn to virtual friends for conversation and intimacy rather than to real people. I think we should slow down AI development and think about the social implications of what we are creating, rather than building AI systems just “because we can” without considering the consequences.
(1) In this article, the researcher explains that as machines become better substitutes for human labor, workers lose their economic and political bargaining power and become increasingly dependent on those who control the technology. This is what he calls the Turing trap. In contrast, when AI focuses on augmenting human capabilities rather than emulating them, humans retain the power to claim a share of the value created.