“Summer,” the paralyzed patient silently answers the question “What is your favorite time of the year?” displayed on the screen. Getting these few seemingly simple words onto the display is a technical feat, published in the journal Nature Communications by researchers at the University of California, San Francisco (USA). Since the man cannot speak, it is a brain implant connected to a computer that lets him express his thoughts in writing.
Electrodes that capture mental spelling in “alpha-bravo” mode
A volunteer patient, deprived of any movement in his limbs and vocal apparatus following a stroke (cerebrovascular accident), was implanted with a chip carrying 128 electrodes. Placed over the region of the cerebral cortex responsible for articulating speech, the chip relayed the brain activity it detected to a computer, which translated it into sentences on a screen. The principle is simple: since, in our brain, the idea of performing an action activates the same areas as actually performing it, the subject simply had to speak in his head. Not by actually “saying” the words, but by spelling each word letter by letter using the NATO phonetic alphabet (“alpha, bravo…”). For example, when the patient thought “alpha” or “echo”, the computer wrote “A” or “E” respectively. To signal that he had finished speaking, the patient simply imagined a hand movement, a signal easy to distinguish from the others and which marks the end of the sentence.
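The spelling protocol described above can be sketched in a few lines of Python. This is a minimal illustration, not the researchers' code: each decoded NATO code word maps to one letter, and an imagined hand movement ends the sentence.

```python
# Minimal sketch (an illustration, not the study's software) of the
# alpha-bravo spelling protocol: NATO code words map to letters, and an
# imagined hand movement marks the end of the sentence.

NATO_ALPHABET = {
    "alpha": "A", "bravo": "B", "charlie": "C", "delta": "D",
    "echo": "E", "foxtrot": "F", "golf": "G", "hotel": "H",
    "india": "I", "juliett": "J", "kilo": "K", "lima": "L",
    "mike": "M", "november": "N", "oscar": "O", "papa": "P",
    "quebec": "Q", "romeo": "R", "sierra": "S", "tango": "T",
    "uniform": "U", "victor": "V", "whiskey": "W", "xray": "X",
    "yankee": "Y", "zulu": "Z",
}

# Hypothetical label for the decoded "imagined hand movement" event.
END_SIGNAL = "hand_movement"

def spell(decoded_events):
    """Turn a stream of decoded code words into text, stopping at the end signal."""
    letters = []
    for event in decoded_events:
        if event == END_SIGNAL:
            break
        letters.append(NATO_ALPHABET[event])
    return "".join(letters)

print(spell(["sierra", "uniform", "mike", "mike", "echo", "romeo", "hand_movement"]))
# → SUMMER
```

The real system, of course, classifies noisy neural signals rather than receiving clean code words; this sketch only shows the mapping step.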
94% success rate per character over 1000 words
The results are impressive: the letters displayed on the screen were 94% accurate with a vocabulary of 1,154 words, enough for everyday conversation, while the previous version of these experiments barely exceeded 75% with a 50-word vocabulary. One of the keys to this leap forward lies in the computer models that process the brain signals, explains neuroscientist David Moses, first author of the work. “A natural language model tells us which sequences of words and letters are more likely than others, using statistical information from the English language.” Without this model, the error rate would have been as high as 35%, rather than the 6% of the final result. “This means that we could correctly decode only about two out of three letters from the brain signals,” explains the researcher. “This is a very noticeable and significant improvement.” The natural language model was thus able to predict which letters were most likely in the second, third, and up to the nth position. “Designing perfect decoders of brain signals is very difficult given the complexity of these signals,” explains David Moses, so using these models can “significantly improve” the result, and turn the raw output of the neural decoder into a clear “Thank you”.
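The role of the language model can be illustrated with a toy example. This is an assumption-laden sketch, not the study's actual model: it combines the neural decoder's letter probabilities with hypothetical bigram probabilities (how likely one letter is to follow another in English) and keeps the most plausible letter.

```python
# Toy illustration (not the study's model) of language-model rescoring:
# combine the decoder's probability for each candidate letter with the
# probability of that letter following the text so far.
import math

# Hypothetical bigram probabilities P(next letter | previous letter).
BIGRAM = {
    ("T", "H"): 0.30, ("T", "A"): 0.10, ("T", "X"): 0.001,
}

def rescore(prev_letter, decoder_probs, lm_weight=1.0):
    """Pick the letter maximizing decoder score x language-model score (in log space)."""
    best, best_score = None, float("-inf")
    for letter, p_decoder in decoder_probs.items():
        p_lm = BIGRAM.get((prev_letter, letter), 1e-4)
        score = math.log(p_decoder) + lm_weight * math.log(p_lm)
        if score > best_score:
            best, best_score = letter, score
    return best

# The noisy decoder slightly favors "X" over "H" after a "T", but the
# language model knows "TH" is far more likely than "TX":
print(rescore("T", {"H": 0.40, "X": 0.45}))  # → H
```

Extending this idea to whole sequences of letters and words is what lets such a model predict the most likely letters "in the second, third, and up to the nth position."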
While spelling out words may seem tedious, it should be remembered that many paralyzed patients, “including our participant himself,” as David Moses notes, tend to be proficient users of the spelling-based communication tools currently on the market. These widely used interfaces rely on eye movements or other residual motor abilities; none of them currently uses a brain implant. Thanks to this new technology, the man was able to express himself at a speed of about 7 words per minute (about 30 characters per minute).
That speed is three times slower than another recently tested interface, this one based on detecting the brain signals of hand movements. By imagining himself writing, the patient produced signals that the computer interpreted as handwritten sentences, at a rate of 90 characters per minute, comparable to the roughly 115 characters per minute that healthy people type on a smartphone. But that technique required electrodes implanted deep in the brain, a much more invasive operation than in this new work. “In general, invasive brain-computer interfaces that require implantation in the skull work better than non-invasive ones, because they can record brain activity directly, with better signal quality,” says David Moses.
A system capable of working with more than 9000 words
Another advantage of this device is that it is based directly on attempts to speak, an approach that may, for many patients, be more intuitive and natural than ones based on, for example, writing. In simulations run after the experiment, the researchers have already shown that their system can decode sentences from an extended vocabulary of 9,170 words with an error rate of just 8%. By comparison, 9,000 words are enough to be fluent in English, while in French we use an average of 5,000 words to be understood fluently. “We even think that future versions of our system will be able to go beyond letter-by-letter spelling and allow some very common words to be produced directly,” David Moses anticipates. Ultimately, each patient and their care team will need to weigh the advantages and disadvantages of each approach to decide which is best for them, he concludes.