A brain-computer speech interface implanted in a patient with amyotrophic lateral sclerosis (ALS) was brought up to speed quickly and demonstrated impressive accuracy, according to a new study in the New England Journal of Medicine (NEJM).
The study — “An Accurate and Rapidly Calibrating Speech Neuroprosthesis” — was published by the NEJM on Aug. 14.
In total, 18 researchers were listed in the study, with Nicholas Scott Card, Ph.D., and David M. Brandman, M.D., Ph.D., both from the University of California, Davis, as lead researchers.
The journal article noted that while brain-computer interfaces can make it possible for people with paralysis to communicate “by transforming cortical activity associated with attempted speech into text on a computer screen,” those systems have also had their limitations.
“Communication with brain-computer interfaces has been restricted by extensive training requirements and limited accuracy,” researchers noted.
Greater accuracy, faster learning curve
The brain-computer interface used in this study was implanted in a 45-year-old man with ALS who had “tetraparesis and severe dysarthria” — muscle weakness in all four limbs, along with severely impaired speech articulation — that made his speech so difficult to understand that only his most frequent caregiver could comprehend it.
At the time of the study, the patient was five years post-diagnosis and relied on others to operate his power wheelchair and to carry out his activities of daily living, such as dressing and feeding.
The patient “underwent surgical implantation of four microelectrode arrays into his left ventral precentral gyrus five years after the onset of the illness; these arrays recorded neural activity from 256 intracortical electrodes,” the journal article said. “We report the results of decoding his cortical neural activity as he attempted to speak in both prompted and unstructured conversational contexts. Decoded words were displayed on a screen and then vocalized with the use of text-to-speech software designed to sound like his pre-ALS voice.”
The surgical implantation was performed in July 2023, and the patient was discharged three days later with no serious adverse effects. In August 2023, 25 days after the implantation, researchers began collecting data.
“On the first day of use, the neuroprosthesis achieved 99.6% accuracy with a 50-word vocabulary,” researchers reported. “Calibration of the neuroprosthesis required 30 minutes of cortical recordings while the participant attempted to speak, followed by subsequent processing. On the second day, after 1.4 additional hours of system training, the neuroprosthesis achieved 90.2% accuracy using a 125,000-word vocabulary.”
After additional training, the neuroprosthesis “sustained 97.5% accuracy over a period of 8.4 months after surgical implantation, and the participant used it to communicate in self-paced conversations at a rate of approximately 32 words per minute for more than 248 cumulative hours.”
Researchers concluded: “In a person with ALS and severe dysarthria, an intracortical speech neuroprosthesis reached a level of performance suitable to restore conversational communication after brief training.”
Affirming the power of speech
Researchers also emphasized how important it is for patients with dysarthria to retain the ability to communicate efficiently.
“Communication is a priority for people with dysarthria from neurologic disorders such as stroke and amyotrophic lateral sclerosis,” they said. “People with diseases that impair communication have an increased risk of isolation, depression, and decreased quality of life; losing communication may determine whether a person will pursue or withdraw life-sustaining care in advanced ALS. Although augmentative and assistive communication technologies such as eye trackers (also called eye-gaze–tracking devices) or head trackers are available, they have low information-transfer rates and become increasingly difficult to use as patients lose voluntary muscle control.”
In fact, researchers added that using the system for the first time “elicited tears of joy from the participant and his family, as the words he was trying to say appeared correctly on screen.”
“We confirmed that this affective display was concordant with his emotional state,” they explained, “and not a pseudobulbar phenomenon.”