The ability to see a person's thoughts sounds like something from science fiction, but last year at the Society for Neuroscience meeting, Jack Gallant, a leading neural decoder at the University of California, Berkeley, presented some impressive results. His lab has developed a computational model that uses functional MRI (fMRI) data to decode information from an individual's visual cortex, the part of the brain responsible for processing visual stimuli. He and colleague Shinji Nishimoto showed that they could create a crude reproduction of a movie clip that someone was watching just by viewing their brain activity.
They used fMRI to measure visual cortex activity in people looking at more than a thousand photographs. This allowed them to develop a computational model and "train" their decoder to understand how each person's visual cortex processes information.
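The "training" step described above amounts to fitting a model that predicts each voxel's response from properties of the image being viewed. Here is a toy sketch of that idea using ridge regression on synthetic data; it is only a cartoon of the general approach (Gallant's lab used far richer stimulus features and Bayesian methods), and every variable here is a hypothetical stand-in for real data.

```python
import numpy as np

rng = np.random.default_rng(0)

n_images, n_features, n_voxels = 1000, 50, 200

# Stand-ins for real data: a feature vector for each training photograph,
# and the fMRI voxel responses recorded while each one was viewed.
features = rng.standard_normal((n_images, n_features))
true_weights = rng.standard_normal((n_features, n_voxels))
voxels = features @ true_weights + 0.1 * rng.standard_normal((n_images, n_voxels))

# Ridge regression: fit one weight vector per voxel, in closed form.
lam = 1.0
W = np.linalg.solve(features.T @ features + lam * np.eye(n_features),
                    features.T @ voxels)

# The fitted encoding model should track the recorded voxel responses.
pred = features @ W
r = np.corrcoef(pred.ravel(), voxels.ravel())[0, 1]
print(f"fit correlation: {r:.2f}")
```

Once such a model exists for a given person, it can be run "forward" to predict what their visual cortex should do for any new image, which is what makes the identification step below possible.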
Next, participants were shown a random set of just over 100 previously unseen photographs. Based on patterns identified in the first set of fMRIs, the team was able to accurately predict which image was being observed.
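The identification step can be sketched as follows: predict the voxel pattern each candidate photo should evoke (using a previously trained encoding model), then pick whichever prediction best matches the observed scan. This is purely illustrative, with invented data, not the study's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)
n_candidates, n_voxels = 100, 200

# Hypothetical model-predicted voxel patterns for 100 candidate images.
predicted = rng.standard_normal((n_candidates, n_voxels))

# Simulate a measured scan: the response to image 42 plus scanner noise.
observed = predicted[42] + 0.3 * rng.standard_normal(n_voxels)

# Identify the image by correlating the scan with every prediction.
corrs = [np.corrcoef(observed, p)[0, 1] for p in predicted]
best = int(np.argmax(corrs))
print(best)  # picks image 42
```

Even with substantial noise, the correct image wins easily here because the 99 wrong candidates correlate with the scan only by chance.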
Scientists just may one day be able to extract dreams, memories, and imagery.
Lisa Katayama (whose IGNITE presentation last year on Japanese gadgets is well worth watching) has written an excellent piece in Popular Science in which she spells out much of the current work in this field. She even participated in an experiment with the mind-reading technology.
Ten minutes feels like an eternity, but finally the fMRI announces the conclusion of its program with another loud beep. The researchers remove me from my bind and escort me to the control room, where a giant monitor is displaying 30 scanned images of my brain from different angles. I see bunches of white squiggly lines and light gray V shapes inside rows of gray circles. “That’s it? That’s my brain?” I ask, my head foggy from having tried so hard to stay still. It surprises me that all the goings-on in my mind can be reduced to a bunch of geometric shapes. Gallant tells me that brain activity is basically just a bunch of neurons firing—an estimated 300 million in the primary visual cortex alone, according to the latest research.
To help make sense of the shapes, the brain scanner divides them up into a grid of three-dimensional cube-like structures called volume pixels, or voxels. To me, each voxel looks like a random mix of whites, grays and blacks. But to Gallant’s computer model, which can see more-precise data in those shades, the voxels are a meaningful matrix of zeroes and ones. By crunching this matrix, it can transform the shapes back into a remarkably accurate rendering of the Einstein Guy or the grazing sheep. Gallant and his team didn’t have time to generate enough scans of my brain to make their algorithm work, but they showed me some convincing results from other volunteers. “It’s not perfect,” says Shinji Nishimoto, one of Gallant’s postdocs, “but we’re getting pretty close.”
Someone's brain has to have already been scanned multiple times for this technology to work, and even then it works only in certain circumstances. But there are still some interesting possibilities for how this might be used... Imagine the possibilities for communicating with the severely disabled, or other therapeutic applications. This is some fascinating stuff!
Now, this may not actually be what people think of when they consider mind reading, although John-Dylan Haynes of the Max Planck Institute for Human Cognitive and Brain Sciences is working on a project, "Decoding of conscious and unconscious mental states," which might be closer to what we actually imagine as reading thoughts. By showing people, including some with eating disorders, images of food, Haynes's team could determine which participants suffered from eating disorders via brain activity in one of the brain's reward centers.
Another interesting focus of neural decoding is language. Marcel Just and his colleague Tom Mitchell of Carnegie Mellon reported that they could predict which of two nouns, such as "celery" and "airplane," a subject was thinking of, at rates well above chance. They are now working on two-word phrases. Turning brain scans into short sentences may still be a long way off, but getting my thoughts into a tweet appears to be a fairly complicated scientific process.
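The two-noun guessing game boils down to this: given a scan known to correspond to one of two words, compare it against a stored activation pattern for each word and pick the closer one. The sketch below is a deliberately simplified toy with invented patterns; the CMU work actually predicts activations from word co-occurrence statistics rather than storing templates.

```python
import numpy as np

rng = np.random.default_rng(2)
n_voxels = 500

# Hypothetical stored average activation patterns, one per word.
patterns = {
    "celery": rng.standard_normal(n_voxels),
    "airplane": rng.standard_normal(n_voxels),
}

def guess(scan, candidates=patterns):
    # Pick whichever stored pattern correlates best with the scan.
    return max(candidates, key=lambda w: np.corrcoef(scan, candidates[w])[0, 1])

# A noisy scan of someone thinking "celery" should still be identified.
scan = patterns["celery"] + 0.5 * rng.standard_normal(n_voxels)
print(guess(scan))  # "celery"
```

With only two candidates, chance is 50%, which is why "well above chance" is the meaningful benchmark the researchers report against.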