Facebook’s mind-reading plans just took another step forward


Facebook Reality Labs, the social media platform’s department dedicated to “inventing the future” – or, more prosaically, to developing AR and VR headsets – has hailed a new development in brain-computer interface (BCI) research, which the tech company is determined to put to good use.

Researchers from UC San Francisco (UCSF), sponsored and supported by Facebook, have effectively set a new benchmark for decoding speech directly from brain activity. For the Silicon Valley giant, the UCSF team's findings will be key to building the next generation of immersive personal computing platforms that Facebook believes are set to replace the smartphone.

Multiple studies have already succeeded in decoding words and sentences from human brain signals picked up by electrodes, but the accuracy and speed of the decoding process remain low; UCSF's researchers said that error rates average 60% for 100-word vocabularies.

Using machine learning techniques borrowed from speech recognition and language translation, however, the team managed to translate neural activity into English sentences with an error rate of only 3% for vocabularies of up to 300 words. The new milestone builds on a previous publication from the same team last summer, which detailed how the researchers successfully decoded speech directly from the brain in real time.
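For context, figures like these in speech-decoding research are typically word error rates: the word-level edit distance between the decoded sentence and what was actually said, divided by the length of the reference. The article does not spell out the metric, so treat this as a minimal illustrative sketch of the standard calculation rather than the UCSF team's exact evaluation code:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level Levenshtein distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between the first i reference words
    # and the first j hypothesis words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# One wrong word out of five: a 20% word error rate.
print(word_error_rate("please turn on the lights",
                      "please turn off the lights"))  # 0.2
```

By this measure, a 60% error rate means well over half the decoded words are wrong, while 3% approaches the accuracy of commercial speech-to-text systems.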

SEE: Executive’s guide to the business value of VR and AR (free ebook)

Facebook has made no secret of its interest in a technology that accurately detects and identifies the brain’s attempts to communicate, and at the speed of natural speech. And with good reason: this type of sophisticated BCI lends itself well to the social media platform’s ambition to develop AR and VR headsets that will let users interact hands-free with both their physical surroundings and a virtual environment.

“New research helps illuminate the path forward in our mission to develop a non-invasive silent speech interface for the next computing platform,” said the social media giant as it announced the new results from UCSF. “We hope UCSF’s work will inform our development of the decoding algorithms and technical specifications needed for a fully non-invasive, wearable device.”

Research into connecting human brains to computers has been fully integrated with Facebook’s Reality Labs work for a number of years now. At the 2017 F8 developer conference, the company announced a BCI programme with the end-goal of building a wearable device that would let people type by simply imagining themselves talking.

Facebook said at the time that it was working on a silent-speech system that would one day be capable of typing 100 words per minute, “straight from your brain” – about five times faster than we type on our smartphones today.

SEE: Facebook is trying to build AR glasses that just ‘melt away’, using this cutting-edge tech

That is not to say, however, that the seamless mesh between reality and virtuality that Reality Labs is working towards is anywhere close. When Facebook unveiled its vision for AR, along with its commitment to BCI, back in 2017, the tech giant presented the move as a "decade-long investment". Three years later, despite the promising new results from the world's top researchers at UCSF, Facebook insisted that "the future is still a long way off."

And even when the company reaches its objective, there is no certainty that users will be keen to wear a headset made in Silicon Valley to scan the depths of their brain. Facebook, for its part, compared the process to taking many photos on your phone, and only sharing some of them: “Similarly, you have many thoughts and choose to share only some of them.” It remains to be seen whether the pitch will convince the wider public.
