AI from Meta can analyze your brain waves and “read” what you hear


Researchers at Meta have developed a new artificial intelligence capable of analyzing a person’s brain waves to deduce the words they are hearing. This type of program could one day be used to communicate with people who are unable to speak.

As the researchers note in their preprint, decoding language from brain activity is a long-awaited goal in both healthcare and neuroscience. Intracranial devices already exist that learn from the brain’s responses to basic linguistic tasks and can efficiently decode interpretable features (e.g., letters, words, spectrograms). However, these devices are highly invasive and generally not suited to natural speech.

Jean-Rémi King and his colleagues at Meta have therefore developed an AI capable of translating magnetoencephalography and electroencephalography recordings (both non-invasive methods) into words. The technology is still in its infancy, but early results are encouraging: for each recording, the AI predicted a list of 10 words, and 73% of the time that list included the correct word; in 44% of cases, the first predicted word was correct. The next step may be to try to interpret a person’s thoughts.

Convert brain activity into words

To train their AI, King and his collaborators used publicly available brain-wave datasets from 169 volunteers, collected while they listened to recordings of people speaking naturally. These data, recorded by magneto- or electroencephalography (M/EEG), were segmented into three-second blocks and fed to the AI along with the corresponding sound files – the goal being for the software to compare the two and identify patterns.
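To make the preparation step concrete, here is a minimal sketch (not Meta’s actual pipeline) of how a continuous brain recording and its aligned audio can be cut into matching three-second blocks. The sampling rates and channel count are assumptions chosen purely for illustration.

```python
import numpy as np

EEG_SR = 120      # brain-signal samples per second (assumed for illustration)
AUDIO_SR = 16000  # audio samples per second (assumed for illustration)
WINDOW_S = 3      # three-second blocks, as described in the article

def make_pairs(brain, audio, window_s=WINDOW_S):
    """Split time-aligned brain and audio signals into matching 3 s segments.

    brain: array of shape (n_channels, n_brain_samples)
    audio: array of shape (n_audio_samples,)
    Returns a list of (brain_block, audio_block) pairs.
    """
    n_windows = min(brain.shape[1] // (EEG_SR * window_s),
                    len(audio) // (AUDIO_SR * window_s))
    pairs = []
    for i in range(n_windows):
        b = brain[:, i * EEG_SR * window_s:(i + 1) * EEG_SR * window_s]
        a = audio[i * AUDIO_SR * window_s:(i + 1) * AUDIO_SR * window_s]
        pairs.append((b, a))
    return pairs

# 60 seconds of synthetic data: 32 sensor channels, mono audio
brain = np.random.randn(32, 60 * EEG_SR)
audio = np.random.randn(60 * AUDIO_SR)
pairs = make_pairs(brain, audio)
print(len(pairs))  # 20 three-second (brain, audio) pairs
```

Each pair then serves as one training example: the model learns which audio segment goes with which brain segment.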

Of the available data, 10% was set aside for the test phase; in other words, these brain waves had never been seen by the AI before. The program passed the test brilliantly: from brain waves alone, it was able to determine which word, from a list of 793, each person was hearing at a given moment.

“The results show that our model can identify the corresponding speech segment from 3 seconds of MEG signals with a top-10 accuracy of 72.5% among 1,594 distinct segments (and 44% top-1),” the researchers write. For EEG recordings, the AI was less accurate: it predicted a list of ten words containing the correct one 19.1% of the time, out of 2,604 different segments.
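The top-10 and top-1 figures quoted above can be understood with a short sketch: for each brain segment, the model scores every candidate speech segment, and a trial counts as a hit if the true segment lands among the k best-scored candidates. The scores below are random stand-ins, not real model outputs, so the printed accuracies sit near chance level.

```python
import numpy as np

def top_k_accuracy(scores, true_idx, k):
    """scores: (n_trials, n_candidates) model scores; true_idx: (n_trials,).

    Returns the fraction of trials where the true candidate is among
    the k highest-scoring candidates.
    """
    # argsort is ascending, so the last k columns are the k best candidates
    top_k = np.argsort(scores, axis=1)[:, -k:]
    hits = [true_idx[i] in top_k[i] for i in range(len(true_idx))]
    return float(np.mean(hits))

rng = np.random.default_rng(0)
n_trials, n_candidates = 100, 1594   # 1,594 candidate segments, as in the paper
scores = rng.standard_normal((n_trials, n_candidates))
true_idx = rng.integers(0, n_candidates, size=n_trials)

print(top_k_accuracy(scores, true_idx, 10))  # chance level ≈ 10/1594 ≈ 0.006
print(top_k_accuracy(scores, true_idx, 1))   # chance level ≈ 1/1594 ≈ 0.0006
```

Against those chance levels, the reported 72.5% top-10 and 44% top-1 MEG scores are far above random guessing, which is why the team describes the results as encouraging despite the remaining gap to practical use.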

Meta has no specific commercial application in view for now, but for the team these results open a promising avenue toward real-time decoding of natural speech from non-invasive recordings of brain activity.

Predictive performance still far below that of the human brain

Some experts remain skeptical about these results, believing the technology is currently far from accurate enough for real-world applications. In their view, magnetoencephalography and electroencephalography recordings may never be detailed enough for prediction accuracy to improve substantially: the brain is the seat of many processes that can interfere at any moment with the brain waves related to listening.

King, however, remains confident, even while admitting that this AI in its current form is of limited interest – determining which words a person hears at a given instant has little practical use. On the other hand, the technology could lead to a system capable of interpreting a person’s thoughts, potentially allowing people who cannot speak to communicate again – a particularly ambitious goal given the complexity of the task.

Meta recently announced a long-term research partnership – with CEA NeuroSpin and INRIA – to study the human brain and, in particular, how it processes language. The goal is to collect the data needed to develop an AI capable of processing speech and text as efficiently as humans.

Several studies have already shown that the brain is organized according to a hierarchy remarkably similar to that of AI language models. However, certain areas of the brain anticipate words and ideas far ahead of time, whereas most modern language models are trained to predict only the next word. “Unlocking this long-term predictive capability could help improve modern AI language models,” the company notes in a blog post.

A. Défossez et al., arXiv.
