Brain-like neuronal computation



Neural networks that are more biologically plausible than what we typically refer to as (artificial) neural networks. Synthetic brains, if you’d like.

Forward-forward networks

Neural networks trained without backprop are “more” biologically plausible. Here is one class of such networks (Hinton, n.d.; Ren et al. 2022):

The aim of this paper is to introduce a new learning procedure for neural networks and to demonstrate that it works well enough on a few small problems to be worth serious investigation. The Forward-Forward algorithm replaces the forward and backward passes of backpropagation by two forward passes, one with positive (i.e. real) data and the other with negative data which could be generated by the network itself. Each layer has its own objective function which is simply to have high goodness for positive data and low goodness for negative data. The sum of the squared activities in a layer can be used as the goodness but there are many other possibilities, including minus the sum of the squared activities. If the positive and negative passes can be separated in time, the negative passes can be done offline, which makes the learning much simpler in the positive pass and allows video to be pipelined through the network without ever storing activities or stopping to propagate derivatives.
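To make the per-layer objective concrete, here is a minimal numpy sketch of a single Forward-Forward layer update, with goodness taken as the sum of squared activities and a local logistic loss against a threshold, as described in the abstract above. The function name, the threshold `theta`, the learning rate, and the toy data are my illustrative choices rather than Hinton’s exact setup; the point is only that each layer’s gradient is computed locally, with no backward pass between layers.

```python
import numpy as np

def ff_layer_step(W, x_pos, x_neg, theta=2.0, lr=0.03):
    """One local Forward-Forward update for a single ReLU layer (sketch)."""
    def forward(x):
        # Normalise each input vector so only its direction carries information
        # from the layer below; 1e-8 avoids division by zero.
        x = x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-8)
        return x, np.maximum(x @ W, 0.0)          # ReLU activities

    def local_grad(x, h, sign):
        g = np.sum(h ** 2, axis=1)                # goodness = sum of squared activities
        # Per-layer loss = log(1 + exp(-sign * (g - theta))),
        # with sign = +1 for positive (real) data and -1 for negative data.
        dg = -sign / (1.0 + np.exp(sign * (g - theta)))   # dloss/dgoodness
        dz = dg[:, None] * 2.0 * h                # chain rule through goodness and ReLU
        return x.T @ dz / len(x)

    xp, hp = forward(x_pos)
    xn, hn = forward(x_neg)
    W -= lr * (local_grad(xp, hp, +1.0) + local_grad(xn, hn, -1.0))
    return W, hp, hn                              # activities feed the next layer

# Toy usage: a 784 -> 500 layer updated on one batch of "real" versus
# "negative" inputs (random placeholders here).
rng = np.random.default_rng(0)
W = rng.normal(scale=0.05, size=(784, 500))
x_pos = rng.random((32, 784))                     # stand-in for real data
x_neg = rng.random((32, 784))                     # stand-in for negative data
W, h_pos, h_neg = ff_layer_step(W, x_pos, x_neg)
```

Because the loss is purely local, the positive and negative passes can be run at different times, which is what allows the offline negative phase mentioned in the abstract.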

NEMO

Christos Papadimitriou, “How Does the Brain Create Language?” (EECS at UC Berkeley):

There is little doubt that cognitive phenomena are the result of neural activity. However, there has been slow progress toward articulating an overarching computational theory of how exactly this happens. I will discuss a simplified mathematical model of the brain, which we call NEMO, involving brain areas, spiking neurons, random synapses, local inhibition, Hebbian plasticity, and long-range interneurons. Emergent behaviors of the resulting dynamical system -- established both analytically and through simulations -- include assemblies of neurons, sequence memorization, one-shot learning, and universal computation. NEMO can also be seen as a software-based neuromorphic system that can be simulated efficiently at the scale of tens of millions of neurons, emulating certain high-level cognitive phenomena such as planning and parsing of natural language. I will describe current work aiming at creating through NEMO a neuromorphic language organ: a neural tabula rasa which, on input consisting of a modest amount of grounded language, is capable of language acquisition: lexicon, syntax, semantics, comprehension, and generation. Finally, and on the plane of scientific methodology, I will argue that experimenting with such brain-like devices, devoid of backpropagation, can reveal novel avenues to learning, and may end up advancing AI.
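The core NEMO primitive is easy to caricature in code. Below is a rough numpy sketch of projecting a stimulus into a brain area: random synapses, a k-winners-take-all cap standing in for local inhibition, and multiplicative Hebbian plasticity. The area sizes, `k`, the plasticity rate `beta`, and the function name are toy values of my own choosing; see Papadimitriou et al. (2020) and Dabagia, Vempala, and Papadimitriou (2022) for the actual model and its analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

def project(stimulus, W_stim, W_rec, k=50, beta=0.05, rounds=10):
    """Project a stimulus into an area: top-k cap plus Hebbian plasticity (sketch)."""
    n = W_rec.shape[0]
    active = np.zeros(n, dtype=bool)              # area neurons that fired last round
    for _ in range(rounds):
        drive = stimulus @ W_stim + active @ W_rec
        winners = np.argsort(drive)[-k:]          # local inhibition as k-winners-take-all
        new_active = np.zeros(n, dtype=bool)
        new_active[winners] = True
        # Hebbian rule: every synapse from a neuron that fired onto a neuron
        # that fires now is multiplied by (1 + beta).
        W_rec[np.ix_(active, new_active)] *= 1.0 + beta
        W_stim[np.ix_(stimulus > 0, new_active)] *= 1.0 + beta
        active = new_active
    return active                                 # the assembly that has formed

# Toy usage: 500 stimulus neurons project into a 2000-neuron area through
# sparse random synapses of density 0.01.
n_stim, n_area, p = 500, 2000, 0.01
stimulus = (rng.random(n_stim) < 0.1).astype(float)
W_stim = (rng.random((n_stim, n_area)) < p).astype(float)
W_rec = (rng.random((n_area, n_area)) < p).astype(float)
assembly = project(stimulus, W_stim, W_rec)
print(assembly.sum(), "neurons in the assembly")  # == k
```

The winner set typically stabilises after a few rounds, which is the emergent “assembly” behaviour the abstract refers to; everything here is local and Hebbian, with no backpropagation.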

Spiking

TBD

References

Beniaguev, David, Idan Segev, and Michael London. 2021. “Single Cortical Neurons as Deep Artificial Neural Networks.” Neuron 109 (17): 2727–2739.e3.
Blazek, Paul J., and Milo M. Lin. 2020. “A Neural Network Model of Perception and Reasoning.” arXiv:2002.11319 [Cs, q-Bio], February.
Dabagia, Max, Christos H. Papadimitriou, and Santosh S. Vempala. 2023. “Computation with Sequences in the Brain.” arXiv.
Dabagia, Max, Santosh S. Vempala, and Christos Papadimitriou. 2022. “Assemblies of Neurons Learn to Classify Well-Separated Distributions.” In Proceedings of Thirty Fifth Conference on Learning Theory, 3685–3717. PMLR.
Dezfouli, Amir, Richard Nock, and Peter Dayan. 2020. “Adversarial Vulnerabilities of Human Decision-Making.” Proceedings of the National Academy of Sciences 117 (46): 29221–28.
Friston, Karl. 2010. “The Free-Energy Principle: A Unified Brain Theory?” Nature Reviews Neuroscience 11 (2): 127.
Griffiths, Thomas L, Nick Chater, Charles Kemp, Amy Perfors, and Joshua B Tenenbaum. 2010. “Probabilistic Models of Cognition: Exploring Representations and Inductive Biases.” Trends in Cognitive Sciences 14 (8): 357–64.
Hasson, Uri, Samuel A. Nastase, and Ariel Goldstein. 2020. “Direct Fit to Nature: An Evolutionary Perspective on Biological and Artificial Neural Networks.” Neuron 105 (3): 416–34.
Hinton, Geoffrey. n.d. “The Forward-Forward Algorithm: Some Preliminary Investigations,” 17.
Hoel, Erik. 2021. “The Overfitted Brain: Dreams Evolved to Assist Generalization.” Patterns 2 (5): 100244.
Lee, Jee Hang, Joel Z. Leibo, Su Jin An, and Sang Wan Lee. 2022. “Importance of prefrontal meta control in human-like reinforcement learning.” Frontiers in Computational Neuroscience 16 (December).
Lillicrap, Timothy P, and Adam Santoro. 2019. “Backpropagation Through Time and the Brain.” Current Opinion in Neurobiology, Machine Learning, Big Data, and Neuroscience, 55 (April): 82–89.
Ma, Wei Ji, Konrad Paul Kording, and Daniel Goldreich. 2022. Bayesian Models of Perception and Action.
Ma, Wei Ji, and Benjamin Peters. 2020. “A Neural Network Walks into a Lab: Towards Using Deep Nets as Models for Human Behavior.” arXiv:2005.02181 [Cs, q-Bio], May.
Meyniel, Florent, Mariano Sigman, and Zachary F. Mainen. 2015. “Confidence as Bayesian Probability: From Neural Origins to Behavior.” Neuron 88 (1): 78–92.
Millidge, Beren, Alexander Tschantz, and Christopher L. Buckley. 2020. “Predictive Coding Approximates Backprop Along Arbitrary Computation Graphs.” arXiv:2006.04182 [Cs], October.
Mitropolsky, Daniel, Michael J. Collins, and Christos H. Papadimitriou. 2021. “A Biologically Plausible Parser.” Transactions of the Association for Computational Linguistics 9: 1374–88.
Ororbia, Alexander, and Ankur Mali. 2023. “The Predictive Forward-Forward Algorithm.”
Papadimitriou, Christos H., and Santosh S. Vempala. 2018. “Random Projection in the Brain and Computation with Assemblies of Neurons.” In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019), edited by Avrim Blum, 124:57:1–19. Leibniz International Proceedings in Informatics (LIPIcs). Dagstuhl, Germany: Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik.
Papadimitriou, Christos H., Santosh S. Vempala, Daniel Mitropolsky, Michael Collins, and Wolfgang Maass. 2020. “Brain computation by assemblies of neurons.” Proceedings of the National Academy of Sciences of the United States of America 117 (25): 14464–72.
Ren, Mengye, Simon Kornblith, Renjie Liao, and Geoffrey Hinton. 2022. “Scaling Forward Gradient With Local Losses.” arXiv.
Robertazzi, Federica, Matteo Vissani, Guido Schillaci, and Egidio Falotico. 2022. “Brain-Inspired Meta-Reinforcement Learning Cognitive Control in Conflictual Inhibition Decision-Making Task for Artificial Agents.” Neural Networks 154 (October): 283–302.
Saxe, Andrew, Stephanie Nelli, and Christopher Summerfield. 2020. “If Deep Learning Is the Answer, Then What Is the Question?” arXiv:2004.07580 [q-Bio], April.
Vanchurin, Vitaly, Yuri I. Wolf, Mikhail Katsnelson, and Eugene V. Koonin. 2021. “Towards a Theory of Evolution as Multilevel Learning.” Cold Spring Harbor Laboratory.
Wang, Jane X., Zeb Kurth-Nelson, Dharshan Kumaran, Dhruva Tirumala, Hubert Soyer, Joel Z. Leibo, Demis Hassabis, and Matthew Botvinick. 2018. “Prefrontal cortex as a meta-reinforcement learning system.” Nature Neuroscience 21 (6): 860–68.
Yuan, Lei, Violet Xiang, David Crandall, and Linda Smith. 2020. “Learning the generative principles of a symbol system from limited examples.” Cognition 200 (July): 104243.
