Mind reading by computer

The ultimate inverse problem



A placeholder.

I’d like to know how good the results in this area are getting, and how well they generalize across people, technologies and so on. How close are we to the point where someone can put an arbitrary individual in some kind of tomography machine and say what they are thinking, without pre-training or priming?

Base level: brain imaging

The instruments we have are blunt. Consider: could a neuroscientist even understand a microprocessor (Jonas and Kording 2017)? What hope is there, then, for brains?

TODO: discuss the infamously limp state of fMRI inference, the problem of multiple testing over correlated spatial fields, etc.
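
As a toy illustration of that multiple-testing problem (entirely simulated, not taken from any of the cited studies): thousands of spatially smoothed voxels containing pure noise, tested one at a time, will happily throw up “significant” hits unless you correct for the number of tests.

```python
# A minimal sketch, assuming nothing beyond numpy/scipy: mass-univariate
# testing of a smoothed null volume. All voxels are noise, yet many pass an
# uncorrected p < 0.05 threshold; a Benjamini-Hochberg FDR correction is the
# usual minimal fix.
import numpy as np
from scipy import stats, ndimage

rng = np.random.default_rng(0)
n_subjects, shape = 20, (30, 30, 30)          # 27,000 voxels of null data

# Smooth white noise per subject to mimic spatial correlation in fMRI.
data = np.stack([
    ndimage.gaussian_filter(rng.standard_normal(shape), sigma=2.0)
    for _ in range(n_subjects)
])

# Voxelwise one-sample t-test against zero (the true effect everywhere).
t, p = stats.ttest_1samp(data, popmean=0.0, axis=0)
p = p.ravel()
print("uncorrected p < .05 :", np.sum(p < 0.05))   # plenty of false positives

# Benjamini-Hochberg adjusted p-values, then count surviving voxels.
order = np.argsort(p)
ranked = p[order] * len(p) / np.arange(1, len(p) + 1)
adjusted = np.minimum.accumulate(ranked[::-1])[::-1]
print("BH-FDR q < .05      :", np.sum(adjusted < 0.05))
```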

TBC.

Advanced: brain decoding

Assuming you can get information out of your instruments, can you decode something meaningful? Marcel Just et al. do a lot of this. It certainly leads to fun press releases, e.g. CMU Scientists Harness “Mind Reading” Technology to Decode Complex Thoughts, but I need time to go through the details to understand how much progress they are making towards the science-fiction version (Wang, Cherkassky, and Just 2017).
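
For intuition about what “decoding” usually means in practice, here is a minimal sketch on simulated data (scikit-learn; not the CMU pipeline, which models propositional content rather than a binary contrast): a cross-validated linear classifier tries to tell two stimulus conditions apart from multivoxel activation patterns.

```python
# A toy decoding sketch with simulated per-trial voxel patterns standing in
# for real beta maps. Real studies add far more care about confounds,
# cross-subject alignment, and so on.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n_trials, n_voxels = 120, 500
y = rng.integers(0, 2, n_trials)                            # two conditions
signal = np.outer(y - 0.5, rng.standard_normal(n_voxels))   # weak multivoxel code
X = 0.3 * signal + rng.standard_normal((n_trials, n_voxels))

decoder = make_pipeline(StandardScaler(),
                        LogisticRegression(C=0.1, max_iter=1000))
scores = cross_val_score(decoder, X, y, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} (chance = 0.50)")
```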

Researchers have watched video images that people are seeing decoded from their fMRI brain scans in near-real-time (Nishimoto et al. 2011). If you want to have a crack at this yourself, you might check out Katja Seeliger’s mind-reading datasets.
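
The visual-reconstruction papers mostly come down to fitting linear(ish) maps between voxel responses and stimulus or feature spaces. A bare-bones sketch of that idea, on simulated data with ridge regression standing in for the fancier machinery (so a caricature of e.g. Miyawaki et al. 2008, not a reproduction of it):

```python
# Simulated "encoding" from pixels to voxels, then a ridge "decoding" model
# fitted the other way, from voxels back to pixels, evaluated on held-out
# stimuli. All numbers here are made up for illustration.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(2)
n_train, n_test, n_pixels, n_voxels = 200, 10, 100, 300

stim_train = rng.integers(0, 2, (n_train, n_pixels)).astype(float)  # binary patches
stim_test = rng.integers(0, 2, (n_test, n_pixels)).astype(float)

# Pretend each voxel is a noisy linear mixture of pixels.
encoding = 0.1 * rng.standard_normal((n_pixels, n_voxels))
voxels_train = stim_train @ encoding + rng.standard_normal((n_train, n_voxels))
voxels_test = stim_test @ encoding + rng.standard_normal((n_test, n_voxels))

# Decoding model: voxels -> pixels, then reconstruct unseen stimuli.
decoder = Ridge(alpha=10.0).fit(voxels_train, stim_train)
recon = decoder.predict(voxels_test)

corr = np.corrcoef(recon.ravel(), stim_test.ravel())[0, 1]
print(f"pixelwise correlation, reconstruction vs. stimulus: {corr:.2f}")
```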

More intrusively, in rats… Real-time readouts of location memory (Davidson, Kloosterman, and Wilson 2009):

by recording the electrical activity of groups of neurons in key areas of the brain they could read a rat’s thoughts of where it was, both after it actually ran the maze and also later when it would dream of running the maze in its sleep.
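
The workhorse here is the standard Bayesian position decoder from the place-cell literature: given spike counts from many cells in a short time bin, score each candidate location by its Poisson likelihood under the cells’ tuning curves. A simulated sketch (my own toy numbers, not the published analysis pipeline):

```python
# Toy Bayesian position decoding from simulated hippocampal place cells:
# Gaussian place fields tile a linear track, one time bin of Poisson spike
# counts is observed, and the posterior over positions (flat prior) is
# maximised.
import numpy as np

rng = np.random.default_rng(3)
n_cells, n_positions, dt = 40, 100, 0.25           # cells, track bins, bin width (s)
track = np.linspace(0, 1, n_positions)

# Gaussian place fields with ~20 Hz peak rate and a small baseline.
centres = rng.uniform(0, 1, n_cells)
tuning = 20.0 * np.exp(-((track[None, :] - centres[:, None]) ** 2)
                       / (2 * 0.05 ** 2)) + 0.1

true_pos = 37                                       # index of the rat's actual location
spikes = rng.poisson(tuning[:, true_pos] * dt)      # one time bin of spike counts

# log P(spikes | position) under independent Poisson firing, flat prior.
log_post = (spikes[:, None] * np.log(tuning * dt)).sum(0) - (tuning * dt).sum(0)
decoded = int(np.argmax(log_post))
print(f"true bin {true_pos}, decoded bin {decoded}")
```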

References

Boettiger, Carl. 2015. “An Introduction to Docker for Reproducible Research, with Examples from the R Environment.” ACM SIGOPS Operating Systems Review 49 (1): 71–79.
Cox, Christopher R., and Timothy T. Rogers. 2021. “Finding Distributed Needles in Neural Haystacks.” Journal of Neuroscience 41 (5): 1019–32.
Davidson, Thomas J., Fabian Kloosterman, and Matthew A. Wilson. 2009. “Hippocampal Replay of Extended Experience.” Neuron 63 (4): 497–507.
Davis III, Keith M., Michiel Spapé, and Tuukka Ruotsalo. 2021. “Collaborative Filtering with Preferences Inferred from Brain Signals.” In Proceedings of the Web Conference 2021, 602–11. WWW ’21. New York, NY, USA: Association for Computing Machinery.
Jonas, Eric, and Konrad Paul Kording. 2017. “Could a Neuroscientist Understand a Microprocessor?” PLOS Computational Biology 13 (1): e1005268.
Le, Lynn, Luca Ambrogioni, Katja Seeliger, Yağmur Güçlütürk, Marcel van Gerven, and Umut Güçlü. 2021. “Brain2Pix: Fully Convolutional Naturalistic Video Reconstruction from Brain Activity.” bioRxiv, February, 2021.02.02.429430.
Miyawaki, Yoichi, Hajime Uchida, Okito Yamashita, Masa-aki Sato, Yusuke Morito, Hiroki C. Tanabe, Norihiro Sadato, and Yukiyasu Kamitani. 2008. “Visual Image Reconstruction from Human Brain Activity Using a Combination of Multiscale Local Image Decoders.” Neuron 60 (5): 915–29.
Nishimoto, Shinji, An T. Vu, Thomas Naselaris, Yuval Benjamini, Bin Yu, and Jack L. Gallant. 2011. “Reconstructing Visual Experiences from Brain Activity Evoked by Natural Movies.” Current Biology 21 (19): 1641–46.
Shen, Guohua, Tomoyasu Horikawa, Kei Majima, and Yukiyasu Kamitani. 2017. “Deep Image Reconstruction from Human Brain Activity.” bioRxiv, December, 240317.
Wang, Jing, Vladimir L. Cherkassky, and Marcel Adam Just. 2017. “Predicting the Brain Activation Pattern Associated with the Propositional Content of a Sentence: Modeling Neural Representations of Events and States.” Human Brain Mapping, June.
