Implant turns brain signals into synthesized speech

People with neurological conditions who lose the ability to speak can still produce the brain signals that control the speech articulators (the lips, jaw, tongue and larynx), and UCSF researchers might just use that knowledge to bring voices back. They've crafted a brain-machine interface that can turn those brain signals into mostly recognizable speech. Instead of trying to read thoughts, the machine learning technology picks up on the neural commands for individual articulator movements and translates them into a virtual vocal tract that approximates the intended speech.
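That two-stage idea (decode neural activity into articulator movements first, then turn those movements into sound) can be sketched in a few lines. Everything below is hypothetical: the dimensions, the random data, and the linear maps are stand-ins for the trained networks the researchers actually use.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: recording channels, articulator-movement
# features (lips, jaw, tongue, larynx), acoustic features, time steps.
N_CHANNELS, N_ARTIC, N_ACOUSTIC, T = 64, 33, 32, 100

# Linear maps as placeholders for the two trained decoding stages.
W_artic = rng.standard_normal((N_CHANNELS, N_ARTIC)) * 0.1
W_acoustic = rng.standard_normal((N_ARTIC, N_ACOUSTIC)) * 0.1

def decode(neural):
    """neural: (T, N_CHANNELS) array of recorded brain activity."""
    kinematics = neural @ W_artic        # stage 1: neural -> articulator movements
    acoustics = kinematics @ W_acoustic  # stage 2: movements -> acoustic features
    return kinematics, acoustics

neural = rng.standard_normal((T, N_CHANNELS))
kin, ac = decode(neural)
print(kin.shape, ac.shape)  # (100, 33) (100, 32)
```

The acoustic features would then drive a speech synthesizer; the intermediate articulator stage is what distinguishes this approach from trying to map brain activity directly to sound.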

The results aren’t flawless. The system often captures the distinctive sound of someone’s voice and is frequently easy to understand, but at times the synthesizer still produces garbled words.