Mind reading by computer

The ultimate inverse problem

A placeholder.

I’d like to know how good the results are getting in this area, and how general they are across people, technologies and so on. How close are we to the point where someone can put an arbitrary individual in some kind of tomography machine and say what they are thinking, without pre-training or priming?

Base level: brain imaging

The instruments we have are blunt. Consider: could a neuroscientist even understand a microprocessor (Jonas and Kording 2017)? What hope is there for brains?

TODO: discuss the infamously limp state of fMRI inference, the problem of multiple testing over correlated fields, etc.
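The multiple-testing problem, at least, is easy to demonstrate. Here is a minimal sketch (illustrative only, not any particular study’s pipeline): testing thousands of pure-noise “voxels” at p < 0.05 yields hundreds of spurious hits, while a Benjamini–Hochberg false-discovery-rate correction, implemented from scratch below, suppresses nearly all of them. (Correlated fields need more care than this, but it shows the basic failure mode.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a mass-univariate fMRI analysis: 10,000 "voxels",
# none carrying real signal, each tested independently.
# Under the null, p-values are uniform on [0, 1].
n_voxels = 10_000
p_values = rng.uniform(0, 1, n_voxels)

# Naive thresholding at p < 0.05 "detects" ~500 voxels by chance alone.
naive_hits = int(np.sum(p_values < 0.05))

def benjamini_hochberg(p, q=0.05):
    """Step-up BH procedure: reject the k smallest p-values, where k is
    the largest i with sorted p_(i) <= (i / m) * q."""
    m = len(p)
    order = np.argsort(p)
    thresholds = (np.arange(1, m + 1) / m) * q
    passed = p[order] <= thresholds
    k = int(np.max(np.nonzero(passed)[0])) + 1 if passed.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject

fdr_hits = int(benjamini_hochberg(p_values).sum())
print(naive_hits, fdr_hits)  # roughly 500 spurious hits vs. almost none
```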


Advanced: brain decoding

Assuming you can get information out of your instruments, can you decode something meaningful? Marcel Just et al. do a lot of this. It certainly leads to fun press releases, e.g. CMU Scientists Harness “Mind Reading” Technology to Decode Complex Thoughts, but I need time to read the details before I understand how much progress they are making towards the science-fiction version (Wang, Cherkassky, and Just 2017).

Researchers have watched video images that people are seeing, decoded from their fMRI brain scans in near-real-time (Nishimoto et al. 2011). If you want to have a crack at this yourself, you might check out Katja Seeliger’s mind reading datasets.
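For intuition about what the simplest decoding analyses involve, here is a toy sketch with entirely invented data, nothing resembling a real fMRI pipeline: two stimulus classes each evoke a noisy version of a fixed voxel pattern, and a nearest-centroid classifier decodes the class of held-out trials well above chance.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "brain decoding": two stimulus classes, each evoking a noisy
# version of its own fixed voxel template. Decode by nearest centroid.
n_voxels, n_train, n_test = 200, 40, 20
patterns = rng.normal(size=(2, n_voxels))  # true class templates

def simulate(cls, n):
    """n noisy trials of class cls: template plus Gaussian voxel noise."""
    return patterns[cls] + rng.normal(scale=2.0, size=(n, n_voxels))

X_train = np.vstack([simulate(0, n_train), simulate(1, n_train)])
y_train = np.repeat([0, 1], n_train)
X_test = np.vstack([simulate(0, n_test), simulate(1, n_test)])
y_test = np.repeat([0, 1], n_test)

# Class centroids estimated from training trials.
centroids = np.stack([X_train[y_train == c].mean(axis=0) for c in (0, 1)])

# Assign each test trial to the nearest centroid in voxel space.
pred = np.argmin(
    np.linalg.norm(X_test[:, None, :] - centroids[None, :, :], axis=2), axis=1
)
accuracy = float((pred == y_test).mean())
print(accuracy)  # well above the 0.5 chance level on this toy data
```

Real analyses differ in almost every respect (hemodynamic modelling, cross-validation across runs, far worse signal-to-noise), but the classify-voxel-patterns core is the same.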

More intrusively, in rats… real-time readouts of location memory (Davidson, Kloosterman, and Wilson 2009):

by recording the electrical activity of groups of neurons in key areas of the brain they could read a rat’s thoughts of where it was, both after it actually ran the maze and also later when it would dream of running the maze in its sleep
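The standard machinery behind such readouts is maximum-likelihood (or Bayesian) decoding of position from place-cell spike counts. Here is a toy sketch under textbook assumptions I am supplying myself, independent Poisson spiking and Gaussian tuning curves on a 1-D track, not the authors’ actual analysis:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy place-cell decoder: each of 30 cells fires at a Poisson rate
# peaked at its preferred position on a 1-D track (Gaussian tuning).
n_cells, n_bins = 30, 100
track = np.linspace(0, 1, n_bins)
centres = np.linspace(0, 1, n_cells)  # evenly spaced place fields
tuning = 1 + 20 * np.exp(
    -((track[None, :] - centres[:, None]) ** 2) / (2 * 0.05**2)
)
# tuning[i, j]: expected spike count of cell i when the rat is at bin j

# Observe one population spike-count vector from the true position.
true_bin = 42
spikes = rng.poisson(tuning[:, true_bin])

# Maximum-likelihood decoding under independent Poisson spiking:
# log L(x) = sum_i [ n_i * log(lambda_i(x)) - lambda_i(x) ]  (+ const)
log_like = spikes @ np.log(tuning) - tuning.sum(axis=0)
decoded_bin = int(np.argmax(log_like))
print(decoded_bin)  # should land at or near the true position
```

Replay detection during sleep then amounts to running a decoder like this over candidate events and asking whether the decoded positions trace out trajectories.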


Boettiger, Carl. 2015. “An Introduction to Docker for Reproducible Research, with Examples from the R Environment.” ACM SIGOPS Operating Systems Review 49 (1): 71–79. https://doi.org/10.1145/2723872.2723882.
Cox, Christopher R., and Timothy T. Rogers. 2021. “Finding Distributed Needles in Neural Haystacks.” Journal of Neuroscience 41 (5): 1019–32. https://doi.org/10.1523/JNEUROSCI.0904-20.2020.
Davidson, Thomas J., Fabian Kloosterman, and Matthew A. Wilson. 2009. “Hippocampal Replay of Extended Experience.” Neuron 63 (4): 497–507. https://doi.org/10.1016/j.neuron.2009.07.027.
Jonas, Eric, and Konrad Paul Kording. 2017. “Could a Neuroscientist Understand a Microprocessor?” PLOS Computational Biology 13 (1): e1005268. https://doi.org/10.1371/journal.pcbi.1005268.
Le, Lynn, Luca Ambrogioni, Katja Seeliger, Yağmur Güçlütürk, Marcel van Gerven, and Umut Güçlü. 2021. “Brain2Pix: Fully Convolutional Naturalistic Video Reconstruction from Brain Activity.” bioRxiv, February, 2021.02.02.429430. https://doi.org/10.1101/2021.02.02.429430.
Miyawaki, Yoichi, Hajime Uchida, Okito Yamashita, Masa-aki Sato, Yusuke Morito, Hiroki C. Tanabe, Norihiro Sadato, and Yukiyasu Kamitani. 2008. “Visual Image Reconstruction from Human Brain Activity Using a Combination of Multiscale Local Image Decoders.” Neuron 60 (5): 915–29. https://doi.org/10.1016/j.neuron.2008.11.004.
Nishimoto, Shinji, An T. Vu, Thomas Naselaris, Yuval Benjamini, Bin Yu, and Jack L. Gallant. 2011. “Reconstructing Visual Experiences from Brain Activity Evoked by Natural Movies.” Current Biology 21 (19): 1641–46. https://doi.org/10.1016/j.cub.2011.08.031.
Shen, Guohua, Tomoyasu Horikawa, Kei Majima, and Yukiyasu Kamitani. 2017. “Deep Image Reconstruction from Human Brain Activity.” bioRxiv, December, 240317. https://doi.org/10.1101/240317.
Wang, Jing, Vladimir L. Cherkassky, and Marcel Adam Just. 2017. “Predicting the Brain Activation Pattern Associated with the Propositional Content of a Sentence: Modeling Neural Representations of Events and States.” Human Brain Mapping, June. https://doi.org/10.1002/hbm.23692.
