(TNS) — Facebook is developing virtual reality systems in Pittsburgh, Pa., that it hopes will someday permit people located in separate parts of the world to…

What is Facebook brain typing?

Facebook has released an update on its ambitious plans for a brain-reading computer interface, thanks to a team of Facebook Reality Labs-backed scientists at the University of California, San Francisco. The UCSF researchers just published the results of an experiment in decoding people’s speech using implanted electrodes. Their work demonstrates a method of quickly “reading” whole words and phrases from the brain — getting Facebook slightly closer to its dream of a noninvasive thought-typing system.

“People can already type with brain-computer interfaces, but those systems often ask them to spell out individual words with a virtual keyboard. In this experiment, which was published in Nature Communications today, subjects listened to multiple-choice questions and spoke the answers out loud. An electrode array recorded activity in parts of the brain associated with understanding and producing speech, looking for patterns that matched with specific words and phrases in real time.” See in Facebook just published an update on its futuristic brain-typing project

Facebook envisions a future in which people will be able to type out words and send messages using only their brains.

“The idea might seem like the stuff of science fiction, but the social media giant said Tuesday that it’s moving closer to making the moonshot project a reality thanks to new research. That could help Facebook build wearables such as augmented reality glasses, letting people interact with each other in real life without having to pick up a smartphone.

“The promise of AR lies in its ability to seamlessly connect people to the world that surrounds them — and to each other. Rather than looking down at a phone screen or breaking out a laptop, we can maintain eye contact and retrieve useful information and context without ever missing a beat,” Facebook said in a blog post.

“The social network first announced that its research lab, Building 8, was working on a computer-brain interface in 2017, during its F8 developer conference. Regina Dugan, who was leading the effort but left the company after more than a year, said during a speech that Facebook wanted to create a silent speech system that could type 100 words per minute straight from your brain. That’d be five times faster than a person could type on a phone.

“Researchers, including at Stanford University, have already found a way to do this with patients who are paralyzed, but it requires surgery in which electrodes are implanted into the brain. Facebook, though, hopes to build a wearable that isn’t invasive.” See in Facebook moonshot project wants you to type with your mind

“Participants in UCSF’s study are people with epilepsy who had a small patch of electrodes implanted on their brains in hopes of figuring out where their epileptic seizures originated; they volunteered to help with the Facebook-related study while in the hospital.

“For the work related to Facebook, researchers had participants listen to questions while tracking their brain activity. Machine-learning algorithms eventually determined how to spot when participants were answering a question and which one of 24 answers they were choosing.

“Translating what happens inside your brain into words is hard, and doing it immediately is even harder. The speech decoding algorithms the researchers used were accurate up to 61% of the time at figuring out which of two dozen standard responses a participant spoke, right after the person was done talking.
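The task described above is, at its core, a closed-set classification problem: given a window of neural activity, pick the most likely of 24 canned answers. As a purely illustrative sketch (not the UCSF team's actual models — the feature vectors, answer templates, and noise levels here are all made up), a minimal nearest-centroid decoder for such a fixed answer set might look like:

```python
# Toy nearest-centroid decoder standing in for the study's real
# speech-decoding models. All data here is synthetic.
import numpy as np

rng = np.random.default_rng(0)

N_ANSWERS = 24    # fixed multiple-choice answer set, as in the study
N_FEATURES = 16   # stand-in for per-electrode activity features

# Hypothetical "template" of neural activity for each answer,
# as might be learned from training recordings.
centroids = rng.normal(size=(N_ANSWERS, N_FEATURES))

def decode(features: np.ndarray) -> int:
    """Return the index of the answer whose template best matches."""
    dists = np.linalg.norm(centroids - features, axis=1)
    return int(np.argmin(dists))

# Simulate a participant producing answer 7, with measurement noise.
observed = centroids[7] + 0.1 * rng.normal(size=N_FEATURES)
print(decode(observed))  # low noise keeps the pattern closest to template 7
```

The real system faces a much harder version of this: it must also detect *when* an answer is being spoken, and it works from raw cortical recordings rather than clean feature vectors, which is why even a 24-way choice tops out around 61% accuracy.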

“This may not sound all that impressive, but Edward Chang, a neurosurgeon and professor of neurosurgery at UCSF who coauthored the study released this week, believes it’s a “really important result” that could help people who have lost the ability to speak.

“Facebook’s Mark Chevillet, who previously worked at Johns Hopkins University as an adjunct professor of neuroscience, said the UCSF work is an “important yet somewhat expected milestone,” as the interpretation of brain signals tends to be done offline rather than in real time. Facebook, he said, “has no interest in making any sort of medical device; what it wants is to understand the neural signals needed to create a silent speech interface.”

“Chang called the work in the paper published this week a “proof of concept.” Another Facebook-sponsored brain-computer-interface project he recently began in his lab is much lengthier: Chang will spend a year working with a single patient (a male who can no longer speak), tracking his brain activity with the same kind of electrode array used on the epilepsy patients, in hopes of restoring some of his communication abilities. “We’ve got a tall order ahead of us to figure out how to make that work,” Chang said. “If it does, it could one day help a range of people, from those who have lost the ability to speak due to various brain-related injuries, to people who simply want to control a computer or send a message with their mind.” See in Facebook gets closer to letting you type with your mind

FACEBOOK: “At F8 2017, we announced our brain-computer interface (BCI) program, outlining our goal to build a non-invasive, wearable device that lets people type by simply imagining themselves talking. As one part of that work, we’ve been supporting a team of researchers at the University of California, San Francisco (UCSF) who, as part of a series of studies, are working to help patients with neurological damage speak again by detecting intended speech from brain activity in real time. UCSF works directly with all research participants, and all data remains onsite at UCSF. Today, the UCSF team is sharing some of its findings in a Nature Communications article, which provides insight into some of the work they’ve done so far — and how far we have left to go to achieve fully non-invasive BCI as a potential input solution for AR glasses.

“Recommended Reading

“This is the third entry in our “Inside Facebook Reality Labs” blog series. In the first post, we go behind the scenes at FRL Pittsburgh, where Yaser Sheikh and his team are building state-of-the-art lifelike avatars. The second entry explored haptic gloves for virtual reality, and we’ll have more to share throughout the year. In the meantime, check out FRL Chief Scientist Michael Abrash’s series introduction to learn more.” See in Imagining a new interface: Hands-free communication without saying a word