
After a year-long hiatus, the annual AI Debate, hosted by Montreal.AI and Gary Marcus, NYU professor emeritus and AI expert, returned last Friday, held exclusively in a virtual format as it was in 2020.
This year’s debate, titled “AI Debate 3: The AGI Debate,” focused on the concept of artificial general intelligence, that is, the notion of a machine capable of integrating multiple human-like thinking abilities.
While previous debates gathered a number of AI scientists, last Friday’s event brought together 16 participants from a much wider range of professions, including American linguist and activist Noam Chomsky.
What do AI models tell us about language and thinking?
To open the discussion, Gary Marcus gave a humorous recap of “a very brief history of AI.” Contrary to the enthusiasm of the decade following ImageNet’s historic success, he said, the “promise” of machines able to do all sorts of things has not borne fruit. He cited his own New Yorker article that cast doubt on the field’s trajectory. Still, the AI specialist observed that “poking fun at naysayers has become a hobby” and that his and others’ criticisms took a back seat to the enthusiasm for AI.
However, in late 2022, “the narrative started to change,” he noted. He cited as examples headlines about the delayed release of Apple’s self-driving car and skeptical comments from Meta’s Yann LeCun, as well as the critical essay “Artificial Intelligence Meets Natural Stupidity” by Drew McDermott of MIT’s Artificial Intelligence Laboratory, who died this year.
After this introduction, Noam Chomsky did not hold back either, insisting on what cannot be achieved with current approaches to AI.
Systems “tell us nothing about […] what it means to be human”
“Important musings are circulating in the media about the miraculous achievements of GPT-3 and its descendants, most recently ChatGPT, and comparable achievements in other fields, and their implications for fundamental questions about human nature,” said the renowned linguist. But “besides usefulness, what do we learn from these approaches about cognition, thinking, in particular about language, an important component of human cognition?” he asked. “Many flaws have been found in large language models,” and “by design, the system does not distinguish between possible and impossible languages,” he noted.
He continued: “The more the systems improve, the deeper the failure becomes. […] They tell us nothing about language and thought, about cognition in general, or about what it means to be human. We understand this very well in other areas. No one would pay attention to a theory of elementary particles that did not even distinguish between the possible and the impossible. […] Is there anything worthwhile in, say, GPT-3 or more complex systems like it? It is quite difficult to find. One might wonder what the point is. Could there be another AI, one that was the goal of pioneers in the discipline such as Turing, Newell, and Simon, as well as Minsky, who saw AI as part of an emerging cognitive science, an AI that would help understand thought, language, cognition, and other areas, that would help answer questions that have persisted for millennia (…)?”
According to Noam Chomsky, today’s impressive language models such as ChatGPT “tell us nothing about language and thinking, about cognition in general, or about what it means to be human.” In his view, answering the Delphic Oracle’s question of who we are requires another type of AI. Image: Montreal.AI and Gary Marcus.
Picking up on Noam Chomsky’s remarks, Gary Marcus mentioned four unresolved elements of cognition: abstraction, reasoning, compositionality, and factuality. He then showed clear examples of programs like GPT-3 failing on every count. When it comes to factuality, for example, deep learning programs do not build models of the world from which to draw conclusions.
The central place of language
Gary Marcus then asked Noam Chomsky about the concept of “innateness” found in his writings, that is, the idea that something is “built into” the human mind: should AI pay more attention to innateness?
In Noam Chomsky’s words, “any form of growth and development from an initial state to a stable state involves three factors.” The first is the internal structure of the initial state; the second is the “incoming data”; the third is the “general laws of nature.” “It turns out that innate structure plays an exceptional role in all areas that we discover,” he insisted.
Even things seen as paradigmatic examples of learning, such as language acquisition, reveal otherwise: “once you start taking it apart, you find that the data has almost no effect; the pattern of phonological possibilities has a huge limiting effect on the types of sounds even an infant will hear. […] The concepts are very rich; almost no evidence is required to acquire them […].”
According to the linguist, there are practically no genetic differences between people, and language has not changed since the emergence of modern humans, as evidenced by the fact that any child in any culture is capable of mastering a language. Noam Chomsky therefore suggests placing language at the heart of AI in order to understand what makes humans unique as a species.