The Future of Computing Interfaces: Decoding Brain Activity

The computing interfaces of the not-too-distant future might move beyond touchscreens and keyboards, even past eye tracking and hand gestures, to the inside of our own minds. Society is not quite there yet, but we are moving closer. Researchers at Meta Platforms, Inc., parent of Facebook, Instagram, WhatsApp and Oculus VR, today announced Image Decoder, a new deep learning application built on Meta's open source foundation model DINOv2 that translates brain activity into highly accurate images of what a subject is looking at or thinking of, in near real time.

Decoding Brain Activity with Image Decoder

In other words, Image Decoder would let a Meta researcher sitting in a separate room, or even on the other side of the world, see what a subject was looking at or imagining based on that subject's brain activity alone, provided the subject was at a neuroimaging facility undergoing a scan from an MEG machine.

“Overall, our findings outline a promising avenue for real-time decoding of visual representations in the lab and in the clinic,”

– Meta Researchers

The researchers, who work at the Facebook Artificial Intelligence Research lab (FAIR) and PSL University in Paris, describe their work and the Image Decoder system in more detail in a new paper. Meta’s long-term research initiative aims to understand the foundations of human intelligence, identify its similarities as well as differences compared to current machine learning algorithms, and ultimately help build AI systems with the potential to learn and reason like humans.

The Technology Behind Image Decoder

In their paper, Meta's researchers describe the technology underpinning Image Decoder. It combines two hitherto largely disparate fields: machine learning (specifically deep learning) and magnetoencephalography (MEG).

  • Machine learning: A computer learns from labeled data and then applies what it has learned to correctly label new, unseen data.
  • Magnetoencephalography (MEG): A non-invasive system that measures and records brain activity using instruments that pick up the tiny changes in the brain’s magnetic fields as a person thinks.

The Meta researchers trained a deep learning algorithm on 63,000 prior MEG results from four patients. They used DINOv2, a self-supervised learning model trained on scenery from forests of North America. By comparing the MEG data to the actual source image, the algorithm learned to decode the specific shapes and colors represented in the brain.
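To make that training setup concrete, here is a minimal PyTorch sketch of the general approach: a small MEG encoder is trained with a contrastive loss so that each brain recording lands near the frozen DINOv2 embedding of the image the subject was viewing. The encoder architecture, the channel and time dimensions, and the embedding size below are illustrative assumptions, not Meta's actual model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical sketch: align MEG recordings with frozen DINOv2 image
# embeddings via a contrastive loss. All shapes and the simple
# convolutional encoder are illustrative assumptions.

class MEGEncoder(nn.Module):
    """Maps an MEG window (channels x timepoints) into the image embedding space."""
    def __init__(self, n_channels=273, embed_dim=768):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 320, kernel_size=3, padding=1),
            nn.GELU(),
            nn.Conv1d(320, 320, kernel_size=3, padding=1),
            nn.GELU(),
            nn.AdaptiveAvgPool1d(1),   # pool over the time axis
        )
        self.proj = nn.Linear(320, embed_dim)

    def forward(self, meg):                # meg: (batch, channels, time)
        h = self.conv(meg).squeeze(-1)     # (batch, 320)
        return self.proj(h)                # (batch, embed_dim)

def contrastive_loss(meg_emb, img_emb, temperature=0.07):
    """Pull each MEG embedding toward its paired DINOv2 image embedding."""
    meg_emb = F.normalize(meg_emb, dim=-1)
    img_emb = F.normalize(img_emb, dim=-1)
    logits = meg_emb @ img_emb.T / temperature   # (batch, batch) similarities
    targets = torch.arange(len(logits), device=logits.device)
    return F.cross_entropy(logits, targets)
```

In a setup like this, decoding a new recording amounts to embedding it with the trained encoder and retrieving the closest image embedding from a candidate set.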

While the Image Decoder system is not perfect, it has achieved accuracy levels of 70% in its best-performing cases, seven times better than existing methods, successfully retrieving images of objects such as broccoli, caterpillars, and audio speaker cabinets.
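For context on how an accuracy figure like that can be scored in a retrieval-based decoder, here is a small companion sketch (same hypothetical setup as above): each MEG-derived embedding selects its nearest image embedding, and accuracy is the fraction of recordings whose nearest image is the true paired one.

```python
import torch
import torch.nn.functional as F

# Illustrative top-1 retrieval accuracy: for each MEG embedding, pick the
# most similar image embedding and check it against the true pairing
# (row i of img_emb is assumed to pair with row i of meg_emb).
@torch.no_grad()
def top1_retrieval_accuracy(meg_emb: torch.Tensor, img_emb: torch.Tensor) -> float:
    meg_emb = F.normalize(meg_emb, dim=-1)          # (n, d) unit vectors
    img_emb = F.normalize(img_emb, dim=-1)          # (n, d) unit vectors
    nearest = (meg_emb @ img_emb.T).argmax(dim=1)   # index of most similar image
    correct = nearest == torch.arange(len(meg_emb))
    return correct.float().mean().item()
```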

“Most notably… the necessity to preserve mental privacy.”

– Meta Researchers

However, the researchers acknowledge several ethical considerations associated with this technology, particularly the need to preserve mental privacy. They also note technical limitations, such as the difficulty of accurately decoding imagined representations and degraded performance when participants are engaged in disruptive tasks.

While the potential of decoding brain activity is promising, it is crucial to ensure consent and address ethical concerns before this technology is used on a large scale.
