Imagine being able to translate your thoughts into written words without ever having to physically type or speak them aloud. That might not be too far off from reality, thanks to Alexander Huth, an assistant professor of neuroscience and computer science at the University of Texas at Austin. His team has developed an AI language decoder that can translate thoughts into text, and the work has been published in the journal Nature Neuroscience.
Huth and his team built the decoder by recording fMRI data from three participants who each listened to 16 hours of podcasts. The decoder works by translating that fMRI data back into sentences, and for this the team utilized GPT-1 from OpenAI. The decoder isn't perfect: it can only capture broader thoughts and ideas rather than exact wording, but its output still matched the actual transcripts far more closely than pure chance would allow.
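In broad strokes, the approach pairs a language model with an "encoding model" that predicts what a participant's fMRI activity should look like for a given stretch of language; decoding then means generating candidate sentences and keeping whichever best explains the recorded brain activity. The sketch below is a minimal, hypothetical illustration of that candidate-scoring idea, not the team's actual code; the function names, the random linear encoding model, and the toy data are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def text_features(candidate: str) -> np.ndarray:
    """Hypothetical stand-in for language-model features of a candidate sentence."""
    # Real system: features derived from a language model (GPT-1 in the paper).
    seed = abs(hash(candidate)) % (2**32)
    return np.random.default_rng(seed).standard_normal(128)

# Encoding model: maps language features to predicted fMRI voxel activity.
# In the paper this is fit per participant from hours of listening data;
# here it is just a random linear map for illustration.
W = rng.standard_normal((500, 128))  # 500 toy "voxels"

def predicted_response(candidate: str) -> np.ndarray:
    return W @ text_features(candidate)

def score(candidate: str, observed_fmri: np.ndarray) -> float:
    """Higher score = candidate's predicted brain response matches the scan better."""
    pred = predicted_response(candidate)
    return float(np.dot(pred, observed_fmri)
                 / (np.linalg.norm(pred) * np.linalg.norm(observed_fmri)))

# Decoding loop: a language model proposes continuations, and the encoding
# model keeps whichever candidate best explains the recorded activity.
observed = predicted_response("the dog chased the ball")  # pretend scan
candidates = ["the dog chased the ball", "a cat sat quietly", "it rained all day"]
best = max(candidates, key=lambda c: score(c, observed))
print(best)  # -> "the dog chased the ball"
```

The key design point is that the brain data is never translated into words directly; instead, the language model does the generating, and the fMRI data simply arbitrates between candidates, which is why the output captures gist rather than exact wording.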
This is a significant breakthrough in brain-computer interfaces (BCI), one that offers hope for the millions of people living with paralysis caused by stroke, locked-in syndrome, or injury. And unlike BCI ventures such as Neuralink or the Stanford BCI lab, the UT Austin approach is non-invasive, meaning no surgery is needed to implant a chip in a patient's skull.
Some limitations and privacy concerns
Still, Huth is quick to acknowledge that the technology is quite limited. A person has to cooperate for their thoughts to be decoded at all, and they can easily disrupt the process by, for example, silently counting numbers or thinking of random animals. The decoder also doesn't transfer across brains: it needs to be trained on each individual person in order to work properly.
Technology like this does open the door, at least part way, to a future in which it becomes sophisticated enough to serve as a sort of generalized brain decoder. At the same time, Huth concedes that extensive privacy concerns could arise around what essentially amounts to a mind-reading machine, and it is incumbent on policymakers and regulators to put effective guardrails in place before the technology becomes powerful enough to trigger a society-wide privacy crisis. That is a real concern, because policymakers historically haven't been good at anticipating the dangers of emerging technology, and there's little reason to think BCIs will be any different.