Models of mind popular amongst ML nerds. Various morsels on the theme of what-machine-learning-teaches-us-about-our-own-learning. Thus biomimetic algorithms find their converse in our algo-mimetic biology, perhaps.
This page should be more about general learning-theory insights. The nitty-gritty details of how computing is done by biological systems are more what I think of as biocomputing. If you can unify those then well done, you can grow minds in a petri dish.
Eliezer Yudkowsky's essay, How an algorithm feels from the inside.
Language theory
The OG test case of mind-like behaviour is grammatical inference, where a lot of ink was spilled over the learnability of languages of various kinds. It is less popular nowadays, now that natural language processing by computer does rather interesting things without bothering with the details of formal syntax or traditional semantics. What does that mean? I do not hazard opinions on that because I am too busy for now to form them.
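For concreteness, here is a minimal sketch of grammatical inference from positive examples: building a prefix-tree acceptor, the usual starting point for state-merging learners such as RPNI. The function names and example strings are my own illustration, not any particular paper's algorithm.

```python
def prefix_tree_acceptor(positive_examples):
    """Build a trie-shaped DFA (prefix-tree acceptor) from positive strings."""
    transitions = {0: {}}  # state -> {symbol: next state}
    accepting = set()
    next_state = 1
    for word in positive_examples:
        state = 0
        for symbol in word:
            if symbol not in transitions[state]:
                transitions[state][symbol] = next_state
                transitions[next_state] = {}
                next_state += 1
            state = transitions[state][symbol]
        accepting.add(state)
    return transitions, accepting


def accepts(transitions, accepting, word):
    state = 0
    for symbol in word:
        if symbol not in transitions[state]:
            return False
        state = transitions[state][symbol]
    return state in accepting


trans, acc = prefix_tree_acceptor(["ab", "abab", "ababab"])
print(accepts(trans, acc, "abab"))       # True: seen in training
print(accepts(trans, acc, "abababab"))   # False: the bare PTA never generalises
```

State-merging learners then generalise by collapsing states of this trie; the learnability results alluded to above concern when such procedures can succeed.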
Descriptive statistical models of cognition
See, e.g., a Bayesian model of human problem-solving, Probabilistic Models of Cognition, by Noah Goodman and Joshua Tenenbaum and others, which is also a probabilistic programming textbook.
This book explores the probabilistic approach to cognitive science, which models learning and reasoning as inference in complex probabilistic models. We examine how a broad range of empirical phenomena, including intuitive physics, concept learning, causal reasoning, social cognition, and language understanding, can be modeled using probabilistic programs (using the WebPPL language).
Disclaimer: I have not actually read the book.
This descriptive model is not the same thing as the normative model of Bayesian cognition.
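To make the flavour concrete, here is a miniature Bayesian concept-learning model in the spirit of the book's examples (think Tenenbaum's number game), written in plain Python rather than WebPPL. The hypotheses, prior, and "size principle" likelihood are toy choices of mine.

```python
hypotheses = {
    "even":            {n for n in range(1, 101) if n % 2 == 0},
    "powers_of_2":     {2, 4, 8, 16, 32, 64},
    "multiples_of_10": {n for n in range(10, 101, 10)},
}
prior = {"even": 0.5, "powers_of_2": 0.25, "multiples_of_10": 0.25}


def posterior(data):
    """Bayes rule with the 'size principle' likelihood: P(x | h) = 1 / |h| per point."""
    scores = {}
    for name, extension in hypotheses.items():
        if all(x in extension for x in data):
            scores[name] = prior[name] * (1 / len(extension)) ** len(data)
        else:
            scores[name] = 0.0
    total = sum(scores.values())
    return {name: s / total for name, s in scores.items()}


print(posterior([16]))        # "powers_of_2" already favoured; "even" still in the running
print(posterior([16, 8, 2]))  # the size principle now all but rules out "even"
```

The point of the probabilistic-programming framing is that much richer generative models (intuitive physics, social reasoning) can be queried by the same generic inference machinery.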
(Dezfouli, Nock, and Dayan 2020; Peterson et al. 2021) do something different again, finding ML models that are good "second-order" fits to how people seem to learn things in practice.
Free energy principle
See predictive coding.
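As a gloss on the predictive-coding story, here is a deliberately tiny sketch: a single latent estimate is nudged by gradient descent to reduce precision-weighted prediction errors against both a prior and an observation. The generative mapping, precisions, and step size are arbitrary illustrative choices of mine, a caricature of the free-energy machinery rather than the real thing.

```python
def g(mu):
    return mu ** 2  # toy nonlinear generative mapping: latent cause -> observation


def infer(obs, prior_mean=1.0, pi_obs=1.0, pi_prior=1.0, lr=0.05, steps=500):
    """Settle a latent estimate by descending precision-weighted prediction error."""
    mu = prior_mean
    for _ in range(steps):
        eps_obs = obs - g(mu)        # sensory prediction error
        eps_prior = prior_mean - mu  # prior prediction error
        # Gradient step on the squared-error ("free energy") objective;
        # 2 * mu is dg/dmu for g(mu) = mu ** 2.
        mu += lr * (pi_obs * eps_obs * 2 * mu + pi_prior * eps_prior)
    return mu


print(infer(obs=4.0))  # settles near 1.9, between the prior (1.0) and sqrt(4) = 2
```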