Mind as statistical learner



Models of mind popular amongst ML nerds. Various morsels on the theme of what-machine-learning-teaches-us-about-our-own-learning. Thus biomimetic algorithms find their converse in our algo-mimetic biology, perhaps.

This should be more about general learning-theory insights. Nitty-gritty details about how computing is done by biological systems are more what I think of as biocomputing. If you can unify those, then well done: you can grow minds in a petri dish.

See also Eliezer Yudkowsky’s essay “How an Algorithm Feels from the Inside”.

Language theory

The OG test case of mind-like behaviour is grammatical inference, where much ink was spilled over the learnability of languages of various kinds (Gold 1967; Greibach 1966). This is less popular nowadays, when natural language processing by computer does rather interesting things without bothering with the details of formal syntax or traditional semantics. What does that mean? I hazard no opinions on that; I am too busy for now to form them.
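For concreteness, here is a minimal sketch of Gold-style identification in the limit. The hypothesis class and strings are toy assumptions of mine, not taken from Gold (1967):

```python
# Identification in the limit (Gold 1967), toy sketch.
# The learner sees a growing sample of strings from an unknown language
# and always conjectures the first hypothesis in a fixed enumeration
# consistent with everything seen so far.

HYPOTHESES = [  # made-up enumeration of candidate finite languages
    {"a"},
    {"a", "ab"},
    {"a", "ab", "abb"},
]

def conjecture(sample):
    """Return the first enumerated language containing every observed string."""
    for lang in HYPOTHESES:
        if sample <= lang:  # subset test: is every observation in lang?
            return lang
    return None  # the sample falls outside the hypothesis class

# A "text": an enumeration of the target language, repeats allowed.
seen = set()
for s in ["a", "ab", "a", "abb", "ab"]:
    seen.add(s)
    print(sorted(seen), "->", sorted(conjecture(seen)))
```

The guess changes only finitely often and then stabilises on the target. Gold’s negative result is that no such learner succeeds on a superfinite class (e.g. all regular languages) from positive examples alone.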

Descriptive statistical models of cognition

See, e.g., a Bayesian model of human problem-solving: Probabilistic Models of Cognition by Noah Goodman, Joshua Tenenbaum, and others, which is also a probabilistic programming textbook:

This book explores the probabilistic approach to cognitive science, which models learning and reasoning as inference in complex probabilistic models. We examine how a broad range of empirical phenomena, including intuitive physics, concept learning, causal reasoning, social cognition, and language understanding, can be modeled using probabilistic programs (using the WebPPL language).

Disclaimer: I have not actually read the book.
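To give the flavour anyway: the book’s examples are written in WebPPL, but the idiom transplants to Python. The concept classes, numbers, and uniform prior below are my own toy assumptions in the spirit of Tenenbaum-style concept learning, not an excerpt from the book:

```python
import random

# Toy concept learning as Bayesian inference by rejection sampling.
# Hypotheses: which (hypothetical) number concept generated the examples?
CONCEPTS = {
    "even": [n for n in range(1, 21) if n % 2 == 0],
    "powers_of_2": [1, 2, 4, 8, 16],
    "multiples_of_4": [4, 8, 12, 16, 20],
}

observed = [2, 8, 16]  # examples assumed drawn from the hidden concept

def sample_posterior(n_samples=20_000):
    """Draw a concept from a uniform prior, simulate examples by strong
    sampling (uniform over the concept), keep draws that match the data."""
    counts = dict.fromkeys(CONCEPTS, 0)
    for _ in range(n_samples):
        name = random.choice(list(CONCEPTS))
        simulated = [random.choice(CONCEPTS[name]) for _ in observed]
        if simulated == observed:  # condition on the observations
            counts[name] += 1
    total = sum(counts.values()) or 1
    return {name: c / total for name, c in counts.items()}

print(sample_posterior())
# Smaller consistent concepts win (the "size principle"): powers_of_2
# beats even because each example is individually likelier under it.
```

The idiom, not the toy numbers, is the point: write a generative program, condition on what people observe, and read inferences off the posterior.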

This descriptive model is not the same thing as the normative model of Bayesian cognition.

(Dezfouli, Nock, and Dayan 2020; Peterson et al. 2021) do something different again, finding ML models that are good “second-order” fits to how people seem to learn things in practice.
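Concretely, a “second-order” fit predicts what people actually choose rather than what an ideal agent would. A minimal sketch on synthetic data, with the weights and noise model invented for illustration (this is not the models or datasets of those papers):

```python
import numpy as np

# Fit a logistic choice model to (simulated) human decisions between
# two gambles, recovering behavioural rather than normative weights.
rng = np.random.default_rng(0)

X = rng.normal(size=(500, 2))                # (value of A, value of B)
utility_gap = 1.5 * X[:, 0] - 1.0 * X[:, 1]  # assumed "human" weighting
y = rng.binomial(1, 1 / (1 + np.exp(-utility_gap)))  # 1 = chose A

# Maximum likelihood by gradient ascent on the logistic log-likelihood.
w = np.zeros(2)
for _ in range(5000):
    p = 1 / (1 + np.exp(-X @ w))
    w += 0.5 * X.T @ (y - p) / len(y)

print("recovered weights:", w)  # approx. (1.5, -1.0)
```

An ideal expected-value maximiser would weight both gambles equally; the fitted asymmetric weights describe how these (simulated) humans actually decide, which is the second-order question.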

Free energy principle

See predictive coding.

References

Blazek, Paul J., and Milo M. Lin. 2020. “A Neural Network Model of Perception and Reasoning.” arXiv:2002.11319 [cs, q-bio], February.
Dezfouli, Amir, Richard Nock, and Peter Dayan. 2020. “Adversarial Vulnerabilities of Human Decision-Making.” Proceedings of the National Academy of Sciences 117 (46): 29221–28.
Freer, Cameron E., Daniel M. Roy, and Joshua B. Tenenbaum. 2012. “Towards Common-Sense Reasoning via Conditional Simulation: Legacies of Turing in Artificial Intelligence.” In Turing’s Legacy: Developments from Turing’s Ideas in Logic. Cambridge, United Kingdom: Cambridge University Press.
Friston, Karl. 2010. “The Free-Energy Principle: A Unified Brain Theory?” Nature Reviews Neuroscience 11 (2): 127.
———. 2013. “Life as We Know It.” Journal of The Royal Society Interface 10 (86).
Glymour, Clark. 2007. “When Is a Brain Like the Planet?” Philosophy of Science 74 (3): 330–46.
Gold, E. Mark. 1967. “Language Identification in the Limit.” Information and Control 10 (5): 447–74.
Gopnik, Alison. 2020. “Childhood as a Solution to Explore–Exploit Tensions.” Philosophical Transactions of the Royal Society B: Biological Sciences 375 (1803): 20190502.
Greibach, Sheila A. 1966. “The Unsolvability of the Recognition of Linear Context-Free Languages.” J. ACM 13 (4): 582–87.
Griffiths, Thomas L., Nick Chater, Charles Kemp, Amy Perfors, and Joshua B. Tenenbaum. 2010. “Probabilistic Models of Cognition: Exploring Representations and Inductive Biases.” Trends in Cognitive Sciences 14 (8): 357–64.
Hasson, Uri, Samuel A. Nastase, and Ariel Goldstein. 2020. “Direct Fit to Nature: An Evolutionary Perspective on Biological and Artificial Neural Networks.” Neuron 105 (3): 416–34.
Kemp, Charles, and Joshua B. Tenenbaum. 2008. “The Discovery of Structural Form.” Proceedings of the National Academy of Sciences 105 (31): 10687–92.
Ma, Wei Ji, Konrad Paul Kording, and Daniel Goldreich. n.d. Bayesian Models of Perception and Action.
Ma, Wei Ji, and Benjamin Peters. 2020. “A Neural Network Walks into a Lab: Towards Using Deep Nets as Models for Human Behavior.” arXiv:2005.02181 [cs, q-bio], May.
Mansinghka, Vikash, Charles Kemp, Thomas Griffiths, and Joshua Tenenbaum. 2012. “Structured Priors for Structure Learning.” arXiv:1206.6852, June.
Millidge, Beren, Alexander Tschantz, and Christopher L. Buckley. 2020. “Predictive Coding Approximates Backprop Along Arbitrary Computation Graphs.” arXiv:2006.04182 [cs], October.
O’Donnell, Timothy J., Joshua B. Tenenbaum, and Noah D. Goodman. 2009. “Fragment Grammars: Exploring Computation and Reuse in Language,” March.
Peterson, Joshua C., David D. Bourgin, Mayank Agrawal, Daniel Reichman, and Thomas L. Griffiths. 2021. “Using Large-Scale Experiments and Machine Learning to Discover Theories of Human Decision-Making.” Science 372 (6547): 1209–14.
Saxe, Andrew, Stephanie Nelli, and Christopher Summerfield. 2020. “If Deep Learning Is the Answer, Then What Is the Question?” arXiv:2004.07580 [q-bio], April.
Steyvers, Mark, and Joshua B. Tenenbaum. 2005. “The Large-Scale Structure of Semantic Networks: Statistical Analyses and a Model of Semantic Growth.” Cognitive Science 29 (1): 41–78.
Tenenbaum, Joshua B., Charles Kemp, Thomas L. Griffiths, and Noah D. Goodman. 2011. “How to Grow a Mind: Statistics, Structure, and Abstraction.” Science 331 (6022): 1279.
Ullman, Tomer D., Noah D. Goodman, and Joshua B. Tenenbaum. 2012. “Theory Learning as Stochastic Search in the Language of Thought.” Cognitive Development.
Vanchurin, Vitaly, Yuri I. Wolf, Mikhail Katsnelson, and Eugene V. Koonin. 2021. “Towards a Theory of Evolution as Multilevel Learning.” Cold Spring Harbor Laboratory.
Williams, Daniel. 2020. “Predictive Coding and Thought.” Synthese 197 (4): 1749–75.
Wolff, J. Gerard. 2000. “Syntax, Parsing and Production of Natural Language in a Framework of Information Compression by Multiple Alignment, Unification and Search.” Journal of Universal Computer Science 6 (8): 781–829.
Yuan, Lei, Violet Xiang, David Crandall, and Linda Smith. 2020. “Learning the Generative Principles of a Symbol System from Limited Examples.” Cognition 200 (July): 104243.
