Mind as statistical learner



Figure caption from Starr (1913): “The figure represents the contents of the consciousness; a part, under A, being present in attention, the portion P representing self-consciousness; a part, under B, being outside the range of attention and hence subconscious; a part, under C, being so far removed from consciousness as to be almost inaccessible.”

Models of mind popular amongst ML nerds. Various morsels on the theme of what-machine-learning-teaches-us-about-our-own-learning. Thus biomimetic algorithms find their converse in our algo-mimetic biology, perhaps.

This should be more about general learning-theory insights. Nitty-gritty details about how computing is done by biological systems are more what I think of as biocomputing. If you can unify those, then well done: you can grow minds in a petri dish.

See also Eliezer Yudkowsky’s essay How an Algorithm Feels from the Inside.

Language theory

The OG test case of mind-like behaviour is grammatical inference, where a lot of ink was spilled over the learnability of languages of various kinds (Gold 1967). It is less popular nowadays, now that natural language processing by computer does rather interesting things without bothering with the details of formal syntax or traditional semantics. What does that mean? I do not hazard opinions on that, because I am too busy for now to form them.
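To make Gold's setup concrete, here is a minimal Python sketch of identification in the limit from positive examples: the learner keeps guessing the first hypothesis, in a fixed enumeration, that is consistent with everything seen so far, and "learning" means the guesses eventually stop changing. The hypothesis class and the example stream are toy choices of mine, not anything from Gold's paper beyond the general setup.

```python
# A minimal sketch of Gold-style identification in the limit from positive
# examples, over a toy, hand-picked hypothesis class.

# Each hypothesis is a membership predicate for strings over {a, b}.
HYPOTHESES = {
    "a*":    lambda s: set(s) <= {"a"},
    "(ab)*": lambda s: s == "ab" * (len(s) // 2),
    "a*b*":  lambda s: s == "a" * s.count("a") + "b" * s.count("b"),
}

def guess(examples):
    """Identification by enumeration: return the first hypothesis, in a fixed
    order, that is consistent with every positive example seen so far."""
    for name, accepts in HYPOTHESES.items():
        if all(accepts(x) for x in examples):
            return name
    return None  # no consistent hypothesis in the class

# A truncated "text" for the target language (ab)*.
stream = ["", "ab", "abab", "ababab", "abababab"]
seen = []
for x in stream:
    seen.append(x)
    print(f"after {seen!r}: guess = {guess(seen)}")
# The guesses change from "a*" to "(ab)*" and then stop changing; that
# eventual stabilisation on a correct hypothesis is Gold's criterion of
# identification "in the limit".
```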

Descriptive statistical models of cognition

See, e.g., Probabilistic Models of Cognition by Noah Goodman, Joshua Tenenbaum, and others: a Bayesian treatment of human problem-solving which doubles as a probabilistic programming textbook.

This book explores the probabilistic approach to cognitive science, which models learning and reasoning as inference in complex probabilistic models. We examine how a broad range of empirical phenomena, including intuitive physics, concept learning, causal reasoning, social cognition, and language understanding, can be modeled using probabilistic programs (using the WebPPL language).

Disclaimer: I have not actually read the book.

This descriptive model is not the same thing as the normative model of Bayesian cognition.
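To give a flavour of the descriptive approach: the book's examples are written in WebPPL, but the basic move, casting a cognitive task as inference in a generative model, is language-agnostic. Below is a rough Python sketch of Bayesian concept learning over number concepts with a "size principle" likelihood, a standard toy example in this literature (cf. Tenenbaum et al. 2011); the particular hypotheses, prior, and number range are my own inventions, not taken from the book.

```python
# Hedged illustration of Bayesian concept learning: hypotheses, prior, and
# the size-principle likelihood are all illustrative choices.

HYPOTHESES = {
    "even numbers":     [n for n in range(1, 101) if n % 2 == 0],
    "powers of two":    [2, 4, 8, 16, 32, 64],
    "multiples of ten": [10, 20, 30, 40, 50, 60, 70, 80, 90, 100],
}
PRIOR = {h: 1 / len(HYPOTHESES) for h in HYPOTHESES}  # uniform prior

def posterior(observations):
    """Posterior over hypotheses, assuming examples are drawn uniformly from
    the true concept's extension (the 'size principle')."""
    scores = {}
    for h, extension in HYPOTHESES.items():
        if all(x in extension for x in observations):
            scores[h] = PRIOR[h] * (1 / len(extension)) ** len(observations)
        else:
            scores[h] = 0.0
    total = sum(scores.values())
    return {h: round(s / total, 3) for h, s in scores.items()}

# Seeing 16 alone leaves things ambiguous; adding 2 and 8 sharply favours
# "powers of two", the smallest consistent concept.
print(posterior([16]))
print(posterior([16, 2, 8]))
```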

(Dezfouli, Nock, and Dayan 2020; Peterson et al. 2021) do something different again, finding ML models that are good “second-order” fits to how people seem to learn things in practice.
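The flavour of such a "second-order" fit, in caricature: let a flexible statistical model predict choices directly from trial features rather than committing to a normative decision theory up front, then see whether it out-predicts that theory. In the hedged sketch below the "human" is a simulated chooser with distorted values and probabilities, and the fit is a plain logistic regression; the data generator, features, and model are all invented and far cruder than anything in the cited papers.

```python
# Illustrative only: simulate biased choices between a gamble and a sure
# amount, then compare a fitted descriptive model with an expected-value rule.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Trial: gamble A pays x_a with probability p_a, versus a sure amount x_b.
p_a = rng.uniform(0.05, 0.95, n)
x_a = rng.uniform(1, 100, n)
x_b = rng.uniform(1, 100, n)

# Simulated chooser: concave utility plus probability weighting plus noise,
# a crude stand-in for real behavioural data.
def weight(p, gamma=0.6):
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

v_a = weight(p_a) * x_a ** 0.8
v_b = x_b ** 0.8
choose_a = rng.random(n) < 1.0 / (1.0 + np.exp(-(v_a - v_b)))

# Descriptive ("second-order") fit: predict the choice from trial features,
# with no commitment to any particular decision theory.
X = np.column_stack([p_a, x_a, x_b, p_a * x_a, p_a * x_a - x_b])
model = LogisticRegression(max_iter=1000).fit(X, choose_a)
print("fitted model accuracy:       ", round(float(model.score(X, choose_a)), 3))

# Normative baseline: choose the option with the higher expected value.
ev_pred = p_a * x_a > x_b
print("expected-value rule accuracy:", round(float((ev_pred == choose_a).mean()), 3))
```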

(more) Biologically plausible neural nets

See, e.g., the forward-forward algorithm (Hinton, n.d.; Ororbia and Mali 2023).
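The core idea is a layer-local objective: each layer is trained to give high "goodness" (sum of squared activations) to positive data and low goodness to negative data, so no errors need to be propagated backwards between layers. The sketch below trains a single such layer on made-up positive/negative blobs; sizes, thresholds, and learning rates are illustrative, and the real recipe (negative-data construction, layer normalisation, and so on) is in Hinton's note.

```python
# A minimal, hedged sketch of the layer-local objective in the forward-forward
# algorithm: push the "goodness" (sum of squared activations) up for positive
# data and down for negative data, with no backpropagation between layers.
# Data, sizes, and hyperparameters are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hidden, theta, lr = 20, 64, 2.0, 0.03

# Toy stand-ins for "real" versus "corrupted" inputs: two Gaussian blobs.
pos = rng.normal(loc=+0.5, scale=1.0, size=(256, d_in))
neg = rng.normal(loc=-0.5, scale=1.0, size=(256, d_in))

W = rng.normal(scale=0.1, size=(d_in, d_hidden))

def layer(x, W):
    h = np.maximum(x @ W, 0.0)        # ReLU activities
    return h, (h ** 2).sum(axis=1)    # goodness = sum of squared activities

for step in range(500):
    for x, sign in ((pos, +1.0), (neg, -1.0)):
        h, g = layer(x, W)
        # Logistic loss on sign * (goodness - theta); clip for numerical safety.
        p = 1.0 / (1.0 + np.exp(np.clip(sign * (g - theta), -50, 50)))
        dh = (-sign * p)[:, None] * 2.0 * h       # dLoss/dActivities
        dW = x.T @ (dh * (h > 0)) / len(x)        # local gradient, no backprop
        W -= lr * dW

_, g_pos = layer(pos, W)
_, g_neg = layer(neg, W)
print("mean goodness, positive:", round(float(g_pos.mean()), 2),
      "negative:", round(float(g_neg.mean()), 2))
```

In a deeper network each layer would receive the length-normalised output of the layer below and repeat the same local update, which is what lets the scheme avoid backpropagating errors across layers.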

Free energy principle

See predictive coding.
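As a caricature of the mechanics: in predictive coding schemes, perception is cast as settling latent estimates by gradient descent on a squared prediction-error energy, and learning adjusts the weights using the same local errors (see e.g. Friston 2010; Millidge, Tschantz, and Buckley 2020). The sketch below does this for a single linear layer with a Gaussian prior on the latent; all dimensions, rates, and the data generator are invented, and it is meant only to show the shape of the updates, not any particular published model.

```python
# Hedged caricature of predictive-coding dynamics for one linear layer:
# inference settles the latent estimate mu by descending a prediction-error
# energy; learning nudges the weights W with the same local errors.
import numpy as np

rng = np.random.default_rng(1)
d_latent, d_obs = 4, 16

W_true = rng.normal(scale=0.5, size=(d_obs, d_latent))   # the "world"
W = rng.normal(scale=0.1, size=(d_obs, d_latent))        # the model's guess

for step in range(2000):
    z = rng.normal(size=d_latent)                   # latent cause in the world
    x = W_true @ z + 0.05 * rng.normal(size=d_obs)  # noisy observation

    # Inference: minimise F(mu) = 0.5*||x - W mu||^2 + 0.5*||mu||^2 over mu.
    mu = np.zeros(d_latent)
    for _ in range(200):
        eps = x - W @ mu                # prediction error
        mu += 0.02 * (W.T @ eps - mu)   # -dF/dmu

    # Learning: descend the same energy with respect to W, holding mu fixed.
    eps = x - W @ mu
    W += 0.01 * np.outer(eps, mu)       # -dF/dW

print("final squared prediction error:",
      round(float(np.mean((x - W @ mu) ** 2)), 4))
```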

Life as ML

Michael Levin talks a good game here:

SC: Does this whole philosophy help us, either philosophically or practically, when it comes to our ambitions to go in there and change organisms, not just solve, cure diseases, but to make new organisms to do synthetic biology to create new things from scratch and vice versa, does it help us in what we would think of usually as robotics or technology, can we learn lessons from the biological side of things?

ML: Yeah, I think absolutely. And there’s two ways to… There’s sort of a short-term view and a longer-term view of this. The short-term view is that, absolutely, so we work very closely with roboticists to take deep concepts in both directions. So on the one hand, take the things that we’ve learned from the robustness and intelligence… I mean, the intelligent problem-solving of these living forms is incredibly high, and even organisms without brains, this whole focus on kind of like neuromorphic architectures for AI, I think is really a very limiting way to look at it. And so we try very hard to export some of these concepts into machine learning, into robotics, and so on, multi-scale robotics… I gave a talk called why robots don’t get cancer. And this is, this is exactly the problem, is we make devices where the pieces don’t have sub-goals, and that’s the good news is, yes, no, you’re not going to have a robots where part of it decides to defect and do something different, but on the other hand, the robots aren’t very good, they’re not very flexible.

ML: So part of this we’re trying to export, and then going in the other direction and take interesting concepts from computer science, from cognitive science, into biology to help us understand how this works. I fundamentally think that computer science and biology are not really different fields, I think we are all studying computation just in different media, and I do think there’s a lot of opportunity for back and forth. But now, the other thing that you mentioned is really important, which is the creation of novel systems. We are doing some work on synthetic living machines and creating new life forms by basically taking perfectly normal cells and giving them additional freedom and then some stimulation to become other types of organisms.

ML: We, I think in our lifetime, I think, we are going to be surrounded by… Darwin had this phrase, endless forms most beautiful. I think the reality is going to be a variety of living agents that he couldn’t have even conceived of, in the sense that the space, and this is something I’m working on now, is to map out at least the axes of this option space of all possible agents, because what the bioengineering is enabling us to do is to create hybrid… To create hybrid agents that are in part biological, in part electronic, the parts are designed, parts are evolved. The parts that are evolved might have been biologically evolved or they might have been evolved in a virtual environment using genetic algorithms on a computer, all of these combinations, and this… We’re going to see everything from household appliances that are run in part by machine learning and part by living brains that are sort of being controllers for various things that we would like to optimize, to humans and animals that have various implants that may allow them to control other devices and communicate with each other.

References

Addicott, Merideth A., John M. Pearson, Julia C. Schechter, Jeffrey J. Sapyta, Margaret D. Weiss, and Scott H. Kollins. 2021. “Attention-Deficit/Hyperactivity Disorder and the Explore/Exploit Trade-Off.” Neuropsychopharmacology 46 (3): 614–21.
Aimone, James B., and Ojas Parekh. 2023. “The Brain’s Unique Take on Algorithms.” Nature Communications 14 (1): 4910.
Beniaguev, David, Idan Segev, and Michael London. 2021. “Single Cortical Neurons as Deep Artificial Neural Networks.” Neuron 109 (17): 2727–2739.e3.
Blazek, Paul J., and Milo M. Lin. 2020. “A Neural Network Model of Perception and Reasoning.” arXiv:2002.11319 [Cs, q-Bio], February.
Dabagia, Max, Christos H. Papadimitriou, and Santosh S. Vempala. 2023. “Computation with Sequences in the Brain.” arXiv.
Dabagia, Max, Santosh S. Vempala, and Christos Papadimitriou. 2022. “Assemblies of Neurons Learn to Classify Well-Separated Distributions.” In Proceedings of Thirty Fifth Conference on Learning Theory, 3685–3717. PMLR.
Dezfouli, Amir, Richard Nock, and Peter Dayan. 2020. “Adversarial Vulnerabilities of Human Decision-Making.” Proceedings of the National Academy of Sciences 117 (46): 29221–28.
Drugowitsch, Jan, André G. Mendonça, Zachary F. Mainen, and Alexandre Pouget. 2019. “Learning Optimal Decisions with Confidence.” Proceedings of the National Academy of Sciences 116 (49): 24872–80.
Freer, Cameron E., Daniel M. Roy, and Joshua B. Tenenbaum. 2012. “Towards common-sense reasoning via conditional simulation: legacies of Turing in Artificial Intelligence.” In Turing’s Legacy: Developments from Turing’s Ideas in Logic. Cambridge, United Kingdom: Cambridge University Press.
Friston, Karl. 2010. “The Free-Energy Principle: A Unified Brain Theory?” Nature Reviews Neuroscience 11 (2): 127.
———. 2013. “Life as We Know It.” Journal of The Royal Society Interface 10 (86).
Glymour, Clark. 2007. “When Is a Brain Like the Planet?” Philosophy of Science 74 (3): 330–46.
Gold, E Mark. 1967. “Language Identification in the Limit.” Information and Control 10 (5): 447–74.
Gopnik, Alison. 2020. “Childhood as a Solution to Explore–Exploit Tensions.” Philosophical Transactions of the Royal Society B: Biological Sciences 375 (1803): 20190502.
Greibach, Sheila A. 1966. “The Unsolvability of the Recognition of Linear Context-Free Languages.” J. ACM 13 (4): 582–87.
Griffiths, Thomas L, Nick Chater, Charles Kemp, Amy Perfors, and Joshua B Tenenbaum. 2010. “Probabilistic Models of Cognition: Exploring Representations and Inductive Biases.” Trends in Cognitive Sciences 14 (8): 357–64.
Hasson, Uri, Samuel A. Nastase, and Ariel Goldstein. 2020. “Direct Fit to Nature: An Evolutionary Perspective on Biological and Artificial Neural Networks.” Neuron 105 (3): 416–34.
Hinton, Geoffrey. n.d. “The Forward-Forward Algorithm: Some Preliminary Investigations,” 17.
Hoel, Erik. 2021. “The Overfitted Brain: Dreams Evolved to Assist Generalization.” Patterns 2 (5): 100244.
Hulsbosch, An-Katrien, Tom Beckers, Hasse De Meyer, Marina Danckaerts, Dagmar Van Liefferinge, Gail Tripp, and Saskia Van der Oord. n.d. “Instrumental Learning and Behavioral Persistence in Children with Attention-Deficit/Hyperactivity-Disorder: Does Reinforcement Frequency Matter?” Journal of Child Psychology and Psychiatry n/a (n/a).
Jaeger, Herbert, Beatriz Noheda, and Wilfred G. van der Wiel. 2023. “Toward a Formal Theory for Computing Machines Made Out of Whatever Physics Offers.” Nature Communications 14 (1): 4911.
Kemp, Charles, and Joshua B Tenenbaum. 2008. “The Discovery of Structural Form.” Proceedings of the National Academy of Sciences 105 (31): 10687–92.
Kosinski, Michal. 2023. “Theory of Mind May Have Spontaneously Emerged in Large Language Models.” arXiv.
Kosoy, Eliza, David M. Chan, Adrian Liu, Jasmine Collins, Bryanna Kaufmann, Sandy Han Huang, Jessica B. Hamrick, John Canny, Nan Rosemary Ke, and Alison Gopnik. 2022. “Towards Understanding How Machines Can Learn Causal Overhypotheses.” arXiv.
Lee, Jee Hang, Joel Z. Leibo, Su Jin An, and Sang Wan Lee. 2022. “Importance of prefrontal meta control in human-like reinforcement learning.” Frontiers in Computational Neuroscience 16 (December).
Lillicrap, Timothy P, and Adam Santoro. 2019. “Backpropagation Through Time and the Brain.” Current Opinion in Neurobiology, Machine Learning, Big Data, and Neuroscience, 55 (April): 82–89.
Ma, Wei Ji, Konrad Paul Kording, and Daniel Goldreich. 2022. Bayesian Models of Perception and Action.
Ma, Wei Ji, and Benjamin Peters. 2020. “A Neural Network Walks into a Lab: Towards Using Deep Nets as Models for Human Behavior.” arXiv:2005.02181 [Cs, q-Bio], May.
Mainen, Zachary F., Michael Häusser, and Alexandre Pouget. 2016. “A Better Way to Crack the Brain.” Nature 539 (7628): 159–61.
Mansinghka, Vikash, Charles Kemp, Thomas Griffiths, and Joshua Tenenbaum. 2012. “Structured Priors for Structure Learning.” arXiv:1206.6852, June.
McGee, Ryan Seamus, Olivia Kosterlitz, Artem Kaznatcheev, Benjamin Kerr, and Carl T. Bergstrom. 2022. “The Cost of Information Acquisition by Natural Selection.” bioRxiv.
Meyniel, Florent, Mariano Sigman, and Zachary F. Mainen. 2015. “Confidence as Bayesian Probability: From Neural Origins to Behavior.” Neuron 88 (1): 78–92.
Millidge, Beren, Alexander Tschantz, and Christopher L. Buckley. 2020. “Predictive Coding Approximates Backprop Along Arbitrary Computation Graphs.” arXiv:2006.04182 [Cs], October.
Mitropolsky, Daniel, Michael J. Collins, and Christos H. Papadimitriou. 2021. “A Biologically Plausible Parser.” Cambridge, MA: MIT Press.
Nissan, Noyli, Uri Hertz, Nitzan Shahar, and Yafit Gabay. 2023. “Distinct Reinforcement Learning Profiles Distinguish Between Language and Attentional Neurodevelopmental Disorders.” Behavioral and Brain Functions 19 (1): 6.
O’Donnell, Timothy J., Joshua B. Tenenbaum, and Noah D. Goodman. 2009. “Fragment Grammars: Exploring Computation and Reuse in Language,” March.
Ororbia, Alexander, and Ankur Mali. 2023. “The Predictive Forward-Forward Algorithm.”
Papadimitriou, Christos H., and Santosh S. Vempala. 2018. “Random Projection in the Brain and Computation with Assemblies of Neurons.” In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019), edited by Avrim Blum, 124:57:1–19. Leibniz International Proceedings in Informatics (LIPIcs). Dagstuhl, Germany: Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik.
Papadimitriou, Christos H., Santosh S. Vempala, Daniel Mitropolsky, Michael Collins, and Wolfgang Maass. 2020. “Brain computation by assemblies of neurons.” Proceedings of the National Academy of Sciences of the United States of America 117 (25): 14464–72.
Peterson, Joshua C., David D. Bourgin, Mayank Agrawal, Daniel Reichman, and Thomas L. Griffiths. 2021. “Using Large-Scale Experiments and Machine Learning to Discover Theories of Human Decision-Making.” Science 372 (6547): 1209–14.
Pollak, Yehuda. 2023. “Poor Learning or Hyper‐exploration?: A Commentary on Hulsbosch et al. (2023).” Journal of Child Psychology and Psychiatry, August, jcpp.13875.
Porr, Bernd, and Paul Miller. 2020. “Forward Propagation Closed Loop Learning.” Adaptive Behavior 28 (3): 181–94.
Ren, Mengye, Simon Kornblith, Renjie Liao, and Geoffrey Hinton. 2022. “Scaling Forward Gradient With Local Losses.” arXiv.
Robertazzi, Federica, Matteo Vissani, Guido Schillaci, and Egidio Falotico. 2022. “Brain-Inspired Meta-Reinforcement Learning Cognitive Control in Conflictual Inhibition Decision-Making Task for Artificial Agents.” Neural Networks 154 (October): 283–302.
Saxe, Andrew, Stephanie Nelli, and Christopher Summerfield. 2020. “If Deep Learning Is the Answer, Then What Is the Question?” arXiv:2004.07580 [q-Bio], April.
Shiffrin, Richard, and Melanie Mitchell. 2023. “Probing the Psychology of AI Models.” Proceedings of the National Academy of Sciences 120 (10): e2300963120.
Smith, Ryan, Samuel Taylor, Robert C. Wilson, Anne E. Chuning, Michelle R. Persich, Siyu Wang, and William D. S. Killgore. 2022. “Lower Levels of Directed Exploration and Reflective Thinking Are Associated With Greater Anxiety and Depression.” Frontiers in Psychiatry 12.
Starr, M. Allen (Moses Allen). 1913. Organic and Functional Nervous Diseases; a Text-Book of Neurology. New York; Philadelphia: Lea & Febiger.
Steyvers, Mark, and Joshua B. Tenenbaum. 2005. “The Large-Scale Structure of Semantic Networks: Statistical Analyses and a Model of Semantic Growth.” Cognitive Science 29 (1): 41–78.
Tenenbaum, Joshua B, Charles Kemp, Thomas L Griffiths, and Noah D Goodman. 2011. “How to Grow a Mind: Statistics, Structure, and Abstraction.” Science 331 (6022): 1279.
Ullman, Tomer D., Noah D. Goodman, and Joshua B. Tenenbaum. 2012. “Theory Learning as Stochastic Search in the Language of Thought.” Cognitive Development.
Vanchurin, Vitaly, Yuri I. Wolf, Mikhail Katsnelson, and Eugene V. Koonin. 2021. “Towards a Theory of Evolution as Multilevel Learning.” Cold Spring Harbor Laboratory.
Wang, Jane X., Zeb Kurth-Nelson, Dharshan Kumaran, Dhruva Tirumala, Hubert Soyer, Joel Z. Leibo, Demis Hassabis, and Matthew Botvinick. 2018. “Prefrontal cortex as a meta-reinforcement learning system.” Nature Neuroscience 21 (6): 860–68.
Williams, Daniel. 2020. “Predictive Coding and Thought.” Synthese 197 (4): 1749–75.
Wolff, J Gerard. 2000. “Syntax, Parsing and Production of Natural Language in a Framework of Information Compression by Multiple Alignment, Unification and Search.” Journal of Universal Computer Science 6 (8): 781–829.
Yuan, Lei, Violet Xiang, David Crandall, and Linda Smith. 2020. “Learning the generative principles of a symbol system from limited examples.” Cognition 200 (July): 104243.
