A person with paralysis connected to the brain-computer interface system
Lisa Howard / Maitreyee Wairagkar et al. 2025
A man who lost the ability to speak can now hold real-time conversations, and even sing, through a brain-controlled synthetic voice.
The brain-computer interface reads the man's neural activity via electrodes implanted in his brain, then almost instantly turns the signals into speech sounds that convey his intended pitch, intonation and emphasis.
“This is the first of its kind for instantaneous voice synthesis – within 25 milliseconds,” says Sergey Stavisky at the University of California, Davis.
The technology must still be improved before the speech is easy to understand, says Maitreyee Wairagkar, also at UC Davis. But the man, who lost the ability to speak because of amyotrophic lateral sclerosis, said he was “happy” and that it felt like his real voice, according to Wairagkar.
Speech neuroprostheses using brain-computer interfaces already exist, but they generally take a few seconds to translate brain activity into sounds. That delay makes natural conversation difficult, with people unable to interject, clarify or respond in real time, says Stavisky. “It’s like talking on the phone with a bad connection.”
To make the synthesised speech more realistic, Wairagkar, Stavisky and their colleagues implanted 256 electrodes into the region of the man’s brain that helps control the facial muscles used in speaking. The researchers then showed him thousands of sentences on a screen and asked him to try to say them aloud, sometimes with specific intonations, while recording his brain activity.
“The idea is that, for example, you can say ‘How are you today?’ or ‘How are you TODAY?’, and that changes the semantics of the sentence,” says Stavisky. “That makes for a much more natural exchange – and it’s a big step ahead of previous systems.”
Next, the team fed the data into an artificial intelligence model trained to match specific patterns of neural activity with the words and inflections the man was attempting to express. The system then generated speech from his brain signals, producing a voice that reflected both what he wanted to say and how he wanted to say it.
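As a rough illustration of the decoding step described above – not the team’s actual model, whose architecture the article doesn’t detail – the idea can be sketched as a learned mapping from short windows of neural features to acoustic parameters, applied one frame at a time so output lags input by only a single window. All names and numbers below (other than the 256 electrodes) are hypothetical:

```python
import numpy as np

# Toy sketch, NOT the published model: a linear decoder mapping each
# short window of neural features (e.g. spike counts per electrode) to
# acoustic parameters (here just pitch and loudness), frame by frame,
# so speech can be produced with roughly one frame of latency.

rng = np.random.default_rng(0)

N_ELECTRODES = 256   # matches the implant described in the article
N_ACOUSTIC = 2       # hypothetical outputs: [pitch, loudness]

# Pretend training data: neural windows and the acoustics recorded
# while sentences were attempted aloud.
X_train = rng.normal(size=(5000, N_ELECTRODES))
true_W = rng.normal(size=(N_ELECTRODES, N_ACOUSTIC))
Y_train = X_train @ true_W + 0.1 * rng.normal(size=(5000, N_ACOUSTIC))

# Fit the mapping by least squares (a stand-in for the real network).
W, *_ = np.linalg.lstsq(X_train, Y_train, rcond=None)

def decode_frame(neural_window):
    """Decode one window of neural features into acoustic parameters."""
    return neural_window @ W

# Streaming use: each incoming frame is decoded immediately,
# rather than waiting for a whole sentence to finish.
frame = rng.normal(size=N_ELECTRODES)
params = decode_frame(frame)
```

The key design point the article emphasises is causality: because each frame is decoded on arrival, latency stays near the frame length instead of accumulating over a whole utterance.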
The researchers also trained the AI on recordings of the man’s voice from before his condition progressed, using voice-cloning technology to make the synthetic voice sound like his own.
In another part of the experiment, the researchers had the man try to sing simple melodies using different pitches. Their model decoded his intended pitch in real time and adjusted the singing voice it produced accordingly.
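The singing demonstration hinges on decoding pitch continuously and re-synthesising sound frame by frame. A minimal sketch of that re-synthesis idea, with a made-up melody standing in for decoded neural output (the sample rate, frame length and sine-wave voice are all assumptions for illustration):

```python
import numpy as np

# Toy sketch, not the study's method: given a stream of decoded pitch
# values (Hz), generate audio one short frame at a time, carrying the
# oscillator phase across frame boundaries so the melody glides
# smoothly instead of clicking at each frame edge.

SAMPLE_RATE = 16_000
FRAME_LEN = 160  # 10-ms frames at 16 kHz (assumed values)

def synthesise(pitches_hz):
    """Render one sine-wave frame per decoded pitch value."""
    phase = 0.0
    frames = []
    for f0 in pitches_hz:
        t = np.arange(FRAME_LEN) / SAMPLE_RATE
        frames.append(np.sin(phase + 2 * np.pi * f0 * t))
        # Advance the phase by one full frame so the next frame
        # starts exactly where this one left off.
        phase += 2 * np.pi * f0 * FRAME_LEN / SAMPLE_RATE
    return np.concatenate(frames)

# A stand-in "decoded" melody: three held pitches (A3, C4, E4),
# 30 frames (0.3 s) each.
melody = [220.0] * 30 + [261.6] * 30 + [329.6] * 30
audio = synthesise(melody)
```

Because each frame depends only on pitches already decoded, this kind of synthesis can run with the same one-frame latency as the decoder feeding it.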
He also used the system to speak without being prompted and to make sounds such as “hmm” and “eww”, says Wairagkar.
“He is a very articulate and intelligent person,” says team member David Brandman, also at UC Davis, adding that the technology could help him keep working full-time and continue to have meaningful conversations.