In addition to performing tasks in response to spoken commands, AI can now detect and interpret neural activity. In a recent study, researchers from Stanford University demonstrated that a 52-year-old woman who had been paralyzed after a stroke could see her own thoughts rendered as text on a screen.
To achieve this, researchers implanted microelectrode arrays into her brain to record neural impulses, which artificial intelligence (AI) then interpreted as speech. By extending the work to patients with ALS, the research demonstrated the efficacy of brain-computer interfaces (BCIs).
While earlier speech-decoding approaches relied on patients attempting to speak aloud, BCIs powered by AI can identify the specific patterns of brain signals linked to speech.
Researchers at Stanford University were able to decipher "inner speech" with up to 74% accuracy in structured tasks by recording only mentally rehearsed phrases. These systems convert neural impulses into written or audible words, capturing non-verbal cues such as rhythm and tone.
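To make the decoding idea concrete, here is a deliberately simplified sketch. All of the data, names, and the classifier are invented for illustration: real BCI systems decode spiking activity from microelectrode arrays with recurrent neural networks and language models, not the toy nearest-template matcher shown here. The sketch only conveys the core loop that studies like this report on: record a noisy neural signal for an imagined word, compare it against learned patterns, and score percent-correct over a fixed vocabulary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical vocabulary for a structured decoding task.
WORDS = ["hello", "water", "help", "yes"]

# Synthetic "neural signature" per word: a mean firing-rate vector.
# In a real study these patterns are learned from training sessions.
templates = {w: rng.normal(size=64) for w in WORDS}

def record_trial(word, noise=0.3):
    """Simulate one noisy neural recording of an imagined word."""
    return templates[word] + noise * rng.normal(size=64)

def decode(features):
    """Map a neural feature vector to its closest known word pattern."""
    return min(WORDS, key=lambda w: np.linalg.norm(features - templates[w]))

# Score decoding accuracy over simulated trials, mirroring how BCI
# papers report percent-correct on a closed vocabulary.
trials = [(w, record_trial(w)) for w in WORDS for _ in range(50)]
accuracy = sum(decode(f) == w for w, f in trials) / len(trials)
print(f"decoding accuracy: {accuracy:.0%}")
```

On clean synthetic data a toy matcher like this scores far above the 74% reported for real inner speech; the gap reflects how much noisier and higher-dimensional genuine neural recordings are.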
Thanks to advancements in AI, brain scans may now be used to recreate not just speech but also visuals and sounds. Researchers in Israel and Japan used functional magnetic resonance imaging (fMRI) and artificial intelligence (AI) to reconstruct images from brain activity.
By analyzing recordings of brain activity, researchers identified which musical compositions their participants had listened to. These novel methodologies shed light on how humans perceive the world and may benefit medicine and human-computer interaction.
Specialists such as Maitreyee Wairagkar of the University of California, Davis, predict that brain chips will soon be commercially available, noting that firms like Neuralink are driving their development. Advances in microelectrode array technology enable direct, real-time data transfer and will allow AI systems to handle ever greater numbers of neurons.
To help stroke survivors and others with speech impairments communicate more naturally, researchers are examining multiple brain regions to determine how people with damaged brain areas generate inner speech.
Disclaimer:
This article is for informational and educational purposes only. The research discussed is based on emerging scientific studies and experimental technologies. It does not constitute medical advice, diagnosis, or treatment. Readers should consult qualified healthcare professionals for medical concerns. The development of brain-computer interface technologies is ongoing, and commercial availability may vary.