
Conversations With AI: Digital Avatars Give The Speechless The Ability To Speak

Have you seen the movie Awakenings? Released in 1990, it told the story of a neurologist working with people who were locked in. They were unable to communicate, often showing only small responses to what was going on around them. They couldn’t move and couldn’t speak.

Thirty-four years later, work by a team at the University of California, San Francisco (UCSF) using advanced brain mapping may change the lives of those who are locked in. The technology combines a neural prosthetic with AI: it captures speech-related signals from the brain and uses generative AI to speak on the person’s behalf.

The UCSF team is using a deep-learning AI and captured brainwaves to produce speech from locked-in test subjects at a rate of 78 words per minute; natural speech is usually about double that. According to the UCSF results published last August in the journal Nature, the median word error rate was 25% after the AI had trained for two weeks on data produced by the test subjects. With more training time, the error rate was expected to decline.
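To put that 25% figure in context, word error rate is the fraction of words the system gets wrong relative to what the speaker intended, counting substitutions, insertions, and deletions. Below is a minimal sketch of how that metric is typically computed; the example sentences are made up for illustration, and this is not the UCSF team's evaluation code.

```python
# Illustrative only: a standard word-error-rate (WER) calculation,
# not the UCSF team's evaluation code.

def word_error_rate(reference: str, hypothesis: str) -> float:
    """Return WER = (substitutions + insertions + deletions) / reference length."""
    ref = reference.split()
    hyp = hypothesis.split()

    # Classic dynamic-programming edit distance, computed over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# One wrong word in a four-word sentence gives a 25% error rate.
print(word_error_rate("bring me some water", "bring me the water"))  # 0.25
```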

How does this neuroprosthetic device work? It consists of an electrode array that connects to a port on the scalp, giving access to the brainwave activity related to speech. That activity looks like squiggly lines, but when the data is fed through 253 channels to a computer running a generative AI tool, it produces not only words but also speech sounds and even avatar facial movements. The words can be printed to a screen or spoken aloud through voice synthesis. The AI software is similar to the Large Language Models that have made news headlines for the last two years.
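To make that data flow a little more concrete, here is a purely illustrative sketch in Python. The 253 channels come from the article; everything else (the window length, the toy vocabulary, the trivial decoder) is an assumption standing in for the trained deep-learning model the UCSF system actually uses.

```python
# A shape-level sketch of the decoding pipeline described above -- purely
# illustrative, not the UCSF implementation. It assumes the 253-channel
# electrode array is sampled into fixed-length time windows, and a trained
# decoder maps each window of neural features to text (the real system also
# drives synthesized speech and avatar facial movements).

import numpy as np

N_CHANNELS = 253          # electrode channels mentioned in the article
WINDOW_STEPS = 100        # time steps per decoding window (assumed)
VOCAB = ["hello", "water", "thank", "you", "<silence>"]  # toy vocabulary

rng = np.random.default_rng(0)

def record_window() -> np.ndarray:
    """Stand-in for reading one window of neural activity from the implant."""
    return rng.standard_normal((WINDOW_STEPS, N_CHANNELS))

def decode(window: np.ndarray, weights: np.ndarray) -> str:
    """Toy decoder: average features over time, score each vocabulary word.
    A real decoder would be a trained deep-learning sequence model."""
    features = window.mean(axis=0)   # (N_CHANNELS,)
    scores = weights @ features      # one score per vocabulary word
    return VOCAB[int(np.argmax(scores))]

# Random "trained" weights, just to make the sketch runnable end to end.
weights = rng.standard_normal((len(VOCAB), N_CHANNELS))

sentence = [decode(record_window(), weights) for _ in range(5)]
print(" ".join(sentence))  # text that could be shown on screen or voiced
```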

UCSF chose two locked-in test subjects and fitted them with the neuroprosthetic. Both were 15-year survivors of brainstem strokes who could not speak and were nearly completely paralyzed. Because of the strokes, it was unknown whether their brains’ speech centres were still working or had atrophied.

Dr. Edward Chang, Chair of the Department of Neurological Surgery at UCSF, described the reawakening of speech in the two locked-in subjects in an interview with the JAMA Network. He noted, “What we’ve learned now is there’s no question that those parts of the brain that haven’t been used for quite some time actually are still there, kind of like riding a bicycle in some sense, though it does take training.”

The UCSF team asked the test subjects to read text displayed on a screen. Remember, these subjects could show little outward sign of recognizing what was being asked of them. Nonetheless, they were told to try formulating and saying the words on the screen.

Dr. Chang says it took two months for the test subjects to produce enough responses for the AI to analyze. Two weeks later, the AI was up to speed, and the test subjects could speak again through an onscreen avatar, as seen in the image at the top of this article.

For multilingual speakers left speechless by a debilitating stroke, the goal is for the neuroprosthetic to distinguish between languages from neural activity patterns and then pick the appropriate one for the avatar to speak.
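A hypothetical sketch of what that language-selection step might look like, with placeholder names and toy data that do not come from the UCSF work:

```python
# Hypothetical sketch of the multilingual goal described above: first guess
# which language the neural activity pattern resembles, then hand the window
# to a decoder for that language. All names and data here are placeholders.

import numpy as np

rng = np.random.default_rng(1)
N_CHANNELS = 253

def classify_language(window: np.ndarray, prototypes: dict) -> str:
    """Pick the language whose stored 'prototype' pattern best matches the window."""
    features = window.mean(axis=0)
    return max(prototypes, key=lambda lang: float(prototypes[lang] @ features))

prototypes = {                      # toy per-language activity patterns
    "english": rng.standard_normal(N_CHANNELS),
    "spanish": rng.standard_normal(N_CHANNELS),
}
decoders = {                        # stand-ins for language-specific decoders
    "english": lambda w: "water please",
    "spanish": lambda w: "agua por favor",
}

window = rng.standard_normal((100, N_CHANNELS))
lang = classify_language(window, prototypes)
print(lang, "->", decoders[lang](window))  # avatar speaks in the detected language
```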

The current neuroprosthetic is wired to the brain through a connecting port on the user’s scalp. The goal, however, is to fully embed the device, with wireless connections for both speech data and recharging.

One can imagine a future robot companion replacing the avatar to become more than just the voice of a locked-in person, providing many other valuable services.

The work at UCSF isn’t the only research being done using neuroprostheses. Elon Musk launched Neuralink intending to produce implantable brain-computer interfaces (BCIs) to address loss of motor function and to enhance human performance by mating us with AIs.

Current neuroprostheses can restore motor function, control prosthetic limbs and give wearers operational control over exoskeletons. Retinal and cochlear implants are two neuroprostheses used today to restore vision and hearing.

Deep Brain Stimulation (DBS) devices are being used to send signals into parts of the brain to treat Parkinson’s Disease, idiopathic tremors (which I have), and dystonia, a condition that causes involuntary muscle contractions and jerking.

Cortical prostheses are helping paralyzed people control robotic limbs using signals taken directly from the cerebral cortex. Peripheral nerve prostheses restore motor or sensory function by connecting to nerves outside the central nervous system (the brain and spinal cord).

Len Rosen lives in Oakville, Ontario, Canada. He is a former management consultant who worked with high-tech and telecommunications companies. In retirement, he has returned to a childhood passion to explore advances in science and technology.
