A placeholder.
I’d like to know how good the results are getting in this area,
and how general they are across people, technologies and so on.
How close are we to the point where someone can put an arbitrary individual in
some kind of tomography machine and say what they are thinking,
without pre-training or priming?
Marcel Just et al. do a lot of this.
It certainly leads to fun press releases, e.g.
CMU Scientists Harness “Mind Reading” Technology to Decode Complex Thoughts,
but I need time with the details to understand how much progress
they are making towards the science-fiction version (Wang, Cherkassky, and Just 2017).
Researchers can watch the video images people are seeing,
decoded from their fMRI brain scans in near-real-time (Nishimoto et al. 2011).
If you want to have a crack at this yourself, you might check out
Katja Seeliger’s mind reading datasets.
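To give a flavour of what such decoding involves, here is a minimal sketch of the generic linear-decoding recipe, not any particular lab’s pipeline: fit a regularised linear map from voxel responses back to stimulus features on training scans, then evaluate it on held-out scans. Everything here is synthetic and the feature space is made up; real studies use tens of thousands of voxels and carefully constructed stimulus feature spaces (motion-energy filters, deep-network activations, …).

```python
# A hedged sketch of linear stimulus decoding from fMRI voxel responses.
# All data is simulated; this only illustrates the regression-then-invert idea.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_scans, n_voxels, n_features = 400, 200, 10

# Pretend each scan's voxel pattern is a noisy linear mixture of
# stimulus features (stand-ins for e.g. motion-energy filter outputs).
true_weights = rng.normal(size=(n_features, n_voxels))
stimulus = rng.normal(size=(n_scans, n_features))
voxels = stimulus @ true_weights + rng.normal(scale=2.0, size=(n_scans, n_voxels))

X_train, X_test, y_train, y_test = train_test_split(
    voxels, stimulus, test_size=0.25, random_state=0
)

# Decode: cross-validated ridge regression from voxels back to stimulus features.
decoder = RidgeCV(alphas=np.logspace(-2, 4, 13)).fit(X_train, y_train)
print("held-out R^2:", decoder.score(X_test, y_test))
```

Reconstruction papers such as Miyawaki et al. (2008) and Shen et al. (2017) layer much fancier generative models on top, but the core step is still a learned mapping between brain activity and a stimulus feature space.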
More intrusively, in rats there are real-time readouts of location memory:
by recording the electrical activity of groups of neurons in key areas of the
brain, researchers could read a rat’s thoughts of where it was, both after it actually
ran the maze and also later when it dreamed of running the maze in its
sleep (Davidson, Kloosterman, and Wilson 2009).
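The rat readouts rest on a simpler, better-understood decoding problem: hippocampal place cells fire preferentially at particular locations, so a Poisson model of their spike counts gives a posterior over positions. Below is a minimal, self-contained sketch with simulated tuning curves; it is not the actual Davidson et al. analysis, which estimates tuning curves from behaviour and handles replay events compressed in time.

```python
# A hedged sketch of Bayesian position decoding from place-cell spike counts.
# Tuning curves and spikes are simulated; only the decoding step is illustrated.
import numpy as np

rng = np.random.default_rng(1)

track = np.linspace(0.0, 2.0, 200)       # candidate positions on a 2 m linear track
n_cells, dt = 50, 0.25                    # number of place cells, time bin (s)

# Gaussian place fields: each cell fires most near its preferred location.
centres = rng.uniform(0.0, 2.0, n_cells)
rates = 1.0 + 20.0 * np.exp(-(track[:, None] - centres[None, :])**2 / (2 * 0.1**2))

# Simulate spike counts in one time bin while the animal sits at x = 1.3 m.
true_idx = np.argmin(np.abs(track - 1.3))
spikes = rng.poisson(rates[true_idx] * dt)

# Poisson log-likelihood of the observed counts at every candidate position,
# with a flat prior over the track (the log n! term is constant and dropped).
log_post = spikes @ np.log(rates * dt).T - (rates * dt).sum(axis=1)
log_post -= log_post.max()
posterior = np.exp(log_post) / np.exp(log_post).sum()

print("true position: 1.30 m, decoded:", round(track[np.argmax(posterior)], 2), "m")
```

Run the same decoder on spike counts recorded during sleep and you get the “dreaming of the maze” readout: the posterior sweeps along remembered trajectories even though the animal is not moving.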
But: could a neuroscientist even understand a microprocessor? (Jonas and Kording 2017)
What hope is there, then, for brains?
References
Boettiger, Carl. 2015.
“An Introduction to Docker for Reproducible Research, with Examples from the R Environment.” ACM SIGOPS Operating Systems Review 49 (1): 71–79.
https://doi.org/10.1145/2723872.2723882.
Davidson, Thomas J., Fabian Kloosterman, and Matthew A. Wilson. 2009.
“Hippocampal Replay of Extended Experience.” Neuron 63 (4): 497–507.
https://doi.org/10.1016/j.neuron.2009.07.027.
Jonas, Eric, and Konrad Paul Kording. 2017.
“Could a Neuroscientist Understand a Microprocessor?” PLOS Computational Biology 13 (1): e1005268.
https://doi.org/10.1371/journal.pcbi.1005268.
Miyawaki, Yoichi, Hajime Uchida, Okito Yamashita, Masa-aki Sato, Yusuke Morito, Hiroki C. Tanabe, Norihiro Sadato, and Yukiyasu Kamitani. 2008.
“Visual Image Reconstruction from Human Brain Activity Using a Combination of Multiscale Local Image Decoders.” Neuron 60 (5): 915–29.
https://doi.org/10.1016/j.neuron.2008.11.004.
Nishimoto, Shinji, An T. Vu, Thomas Naselaris, Yuval Benjamini, Bin Yu, and Jack L. Gallant. 2011.
“Reconstructing Visual Experiences from Brain Activity Evoked by Natural Movies.” Current Biology 21 (19): 1641–46.
https://doi.org/10.1016/j.cub.2011.08.031.
Shen, Guohua, Tomoyasu Horikawa, Kei Majima, and Yukiyasu Kamitani. 2017.
“Deep Image Reconstruction from Human Brain Activity.” bioRxiv, December, 240317.
https://doi.org/10.1101/240317.
Wang, Jing, Vladimir L. Cherkassky, and Marcel Adam Just. 2017.
“Predicting the Brain Activation Pattern Associated with the Propositional Content of a Sentence: Modeling Neural Representations of Events and States.” Human Brain Mapping, June.
https://doi.org/10.1002/hbm.23692.