
Source: Neuralink
The Neuralink brain chip allowed a person with amyotrophic lateral sclerosis to convert brain signals into words, which were then spoken aloud by a computer program.
Key points:
- The Neuralink brain chip allows a person with amyotrophic lateral sclerosis to convert brain signals into words using the power of thought alone.
- The Neuralink N1 implant let the patient speak using only the brain signals that would normally drive the muscles of the mouth, tongue, and larynx.
Neuralink allowed a patient to convert brain signals into words
Elon Musk's company released a video demonstrating the capabilities of the N1 chip, which Kenneth Shock was implanted with in January 2026.
He suffers from amyotrophic lateral sclerosis (ALS), a neurodegenerative disease that robs people of their ability to walk and talk. While the implant previously allowed him to control a computer mouse or a robotic arm, it now lets him speak with the power of thought.
Neuralink says the implant reads specific brain signals and matches them with the words the user wants to say.
There are certain areas of the brain that are activated and generate signals that go to the muscles of the mouth, tongue, and larynx.
The video showed how the implant, paired with the company's software, records brain signals and matches them to "phonemes" – the smallest units of speech sound. Neuralink engineer Skyler Granatier explains that Shock was first given several sentences to try to pronounce, in order to match the neural impulses to specific words.
In the first stage, he spoke sample sentences out loud, and in the second, he spoke them silently, moving only his lips. In the third stage, the Neuralink software was able to recognize Shock's speech without any mouth movements.
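Neuralink has not published how its decoder works; the toy sketch below only illustrates the general idea the article describes – calibrating on attempted speech, then matching new neural activity to the nearest known phoneme. The feature vectors, phoneme labels, and nearest-centroid method are all hypothetical stand-ins.

```python
# Hypothetical sketch of "match brain signals to phonemes".
# Everything here (features, centroids, classifier) is an assumption,
# not Neuralink's actual pipeline.
import math

# Assumed calibration data: an average neural feature vector per phoneme,
# collected while the patient attempts known sentences (stages 1 and 2).
PHONEME_CENTROIDS = {
    "HH": [0.9, 0.1, 0.2],
    "AY": [0.2, 0.8, 0.3],
    "T":  [0.1, 0.3, 0.9],
}

def classify_phoneme(features):
    """Return the phoneme whose centroid is nearest to the feature vector."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(PHONEME_CENTROIDS, key=lambda p: dist(features, PHONEME_CENTROIDS[p]))

def decode(windows):
    """Decode a stream of feature windows into a phoneme sequence (stage 3)."""
    return [classify_phoneme(w) for w in windows]

# A noisy stream resembling "HH AY" ("hi"):
print(decode([[0.85, 0.15, 0.25], [0.25, 0.75, 0.35]]))  # ['HH', 'AY']
```

A real system would classify high-dimensional electrode data with a trained neural network and then assemble phonemes into words with a language model, but the calibrate-then-decode structure is the same.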
The goal is for him to simply intend to move his mouth, and for our brain-computer interface (BCI) to decode his speech.
This test is part of the "VOICE clinical trial," and the new technology will not be widely available for a few years. The decoding process in particular needs refinement to increase its speed, as reading the signals can currently take several minutes.
We're going to keep improving the quality and the number of sensors. We want to build a system that streams the signal from the brain directly to a voice in real time.