Neural networks made of real neurons, in functioning brains

How do brains work?

I mean, how do brains work at a level slightly higher than a synapse, but much lower than, e.g., psychology? “How is thought done?” etc.

Notes pertaining to large, artificial networks are filed under artificial neural networks. The messy, biological end of the stick is here. Since brains seem to be the seat of the most flashy and important bit of the computing taking place in our bodies, we understandably want to know how they work, in order to

  • fix Alzheimer’s disease
  • steal cool learning tricks
  • endow the children of elites with superhuman mental prowess to cement their places as Übermenschen fit to rule the thousand year Reich
  • …or whatever.

Real brains differ from the “neuron-inspired” computation of the simulacrum in many ways, beyond the usual gap between model and reality. The resemblance between artificial “neural networks” and actual neurons is intentionally loose, for reasons of convenience.

For one example, most simulated neural networks are based on continuous activation potentials and discrete time, unlike spiking biological ones, which are driven by discrete events in continuous time.
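To make the contrast concrete, here is a toy sketch (all function names and parameter values are mine, not from any particular model): a leaky integrate-and-fire unit emits discrete spikes at event times, approximating continuous time with a small Euler step, whereas a standard ANN unit just maps a number to a number with no notion of time at all.

```python
import math

def lif_spike_times(input_current, t_max=0.1, dt=1e-4,
                    tau=0.02, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire: discrete events (spikes) in (near-)continuous time.

    Integrates dv/dt = (-v + input_current) / tau with a forward Euler step,
    emitting a spike and resetting whenever v crosses threshold.
    """
    v, t, spikes = 0.0, 0.0, []
    while t < t_max:
        v += dt * (-v + input_current) / tau
        if v >= v_thresh:
            spikes.append(t)
            v = v_reset
        t += dt
    return spikes

def rate_unit(x, w=2.0, b=-1.0):
    """Typical ANN unit: a continuous activation, evaluated once per discrete step."""
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

print(lif_spike_times(input_current=1.5))  # a handful of spike *times*
print(rate_unit(0.5))                      # a single real number
```

The biological side hands you a point process; the artificial side hands you a vector of activations. Much of the mismatch between the two literatures follows from that.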

Also, real brains support heterogeneous types of neuron, have messier layer organisation, use far less power, and don’t have well-defined backpropagation (or at least not in the same way), among many other differences that I, as a non-specialist, don’t know about.

Things to learn more about:

  • Just saw a talk by Dan Cireșan in which he mentioned the importance of “foveation”: blurring the edges of an image when training classifiers on it, thus encouraging the model to ignore peripheral stuff in order to learn better. What is, rigorously speaking, happening there? A nice actual crossover between biological neural nets and fake ones.
  • Algorithmic statistics of neurons sounds interesting.
  • Modelling mind as machine learning.

How computationally complex is a neuron?

Empirically quantifying computation is hard, but people try to do it all the time for brains. Classic approaches try to estimate structure in neural spike trains (Crumiller et al. 2011; Haslinger, Klinkner, and Shalizi 2010; Nemenman, Bialek, and de Ruyter van Steveninck 2004), often via empirical entropy estimates.
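As a toy illustration of the entropy-estimation idea (a naive plug-in estimator, names hypothetical): bin the spike train into binary “words” and compute the empirical entropy of the word distribution. This estimator is badly biased for small samples, which is precisely why the cited papers develop more careful ones.

```python
import math
from collections import Counter

def plugin_entropy(spike_train, word_len=3):
    """Naive plug-in entropy (bits per word) of overlapping binary spike 'words'.

    Biased downward when the sample is small relative to the word alphabet;
    serious spike-train work corrects for this.
    """
    words = [tuple(spike_train[i:i + word_len])
             for i in range(len(spike_train) - word_len + 1)]
    counts = Counter(words)
    n = len(words)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A perfectly regular train carries little surprise per word;
# a constant train carries none at all.
regular = [1, 0, 0] * 40
print(plugin_entropy(regular))   # close to log2(3): three words, evenly used
print(plugin_entropy([0] * 40))  # 0.0: one word, no uncertainty
```

Even this crude estimator makes the qualitative point: structure in the spike train shows up as reduced entropy relative to a shuffled or random train.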

If we are prepared to accept “the size of a neural network needed to approximate X” as an estimate of the complexity of X, then there are some interesting results: see Allison Whitten, How Computationally Complex Is a Single Neuron? (Beniaguev, Segev, and London 2021). OTOH, finding the smallest neural network that can approximate something is itself computationally hard, and not in general even easy to verify.
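To see the hardness in miniature (a sketch, all names and the weight restriction mine): even for a toy target like XOR, with threshold units and weights restricted to {-1, 0, 1}, the obvious way to certify the minimal hidden width is brute-force enumeration over all candidate networks, and the cost of that search explodes combinatorially with width.

```python
import itertools

def net(x, hidden, out_w, out_b):
    """One hidden layer of threshold units, then a threshold output unit."""
    h = [int(w1 * x[0] + w2 * x[1] + b > 0) for (w1, w2, b) in hidden]
    return int(sum(w * hi for w, hi in zip(out_w, h)) + out_b > 0)

def smallest_xor_width(max_width=3, vals=(-1, 0, 1)):
    """Brute-force the minimal hidden width realising XOR with weights in vals."""
    xor = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
    for width in range(1, max_width + 1):
        units = list(itertools.product(vals, repeat=3))  # (w1, w2, bias)
        for hidden in itertools.product(units, repeat=width):
            for out_w in itertools.product(vals, repeat=width):
                for out_b in vals:
                    if all(net(x, hidden, out_w, out_b) == y
                           for x, y in xor.items()):
                        return width
    return None

# Width 1 fails (XOR is not linearly separable); width 2 suffices.
print(smallest_xor_width())  # → 2
```

The search space here is tiny and already grows as (|vals|³)^width per layer; for real-valued weights and realistic targets there is no such enumeration, which is the point of the caveat above.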


Amigó, José M, Janusz Szczepański, Elek Wajnryb, and Maria V Sanchez-Vives. 2004. “Estimating the Entropy Rate of Spike Trains via Lempel-Ziv Complexity.” Neural Computation 16 (4): 717–36.
Barbieri, Riccardo, Michael C Quirk, Loren M Frank, Matthew A Wilson, and Emery N Brown. 2001. “Construction and Analysis of Non-Poisson Stimulus-Response Models of Neural Spiking Activity.” Journal of Neuroscience Methods 105 (1): 25–37.
Beniaguev, David, Idan Segev, and Michael London. 2021. “Single Cortical Neurons as Deep Artificial Neural Networks.” Neuron 109 (17): 2727–2739.e3.
Berwick, Robert C., Kazuo Okanoya, Gabriel J.L. Beckers, and Johan J. Bolhuis. 2011. “Songs to Syntax: The Linguistics of Birdsong.” Trends in Cognitive Sciences 15 (3): 113–21.
Brette, Romain. 2008. “Generation of Correlated Spike Trains.” Neural Computation.
———. 2012. “Computing with Neural Synchrony.” PLoS Comput Biol 8 (6): e1002561.
Buhusi, Catalin V., and Warren H. Meck. 2005. “What Makes Us Tick? Functional and Neural Mechanisms of Interval Timing.” Nature Reviews Neuroscience 6 (10): 755–65.
Cadieu, C. F. 2014. “Deep Neural Networks Rival the Representation of Primate IT Cortex for Core Visual Object Recognition.” PLoS Comp. Biol. 10: e1003963.
Carhart-Harris, R. L., and D. J. Nutt. 2017. “Serotonin and Brain Function: A Tale of Two Receptors.” Journal of Psychopharmacology 31 (9): 1091–1120.
Crumiller, Marshall, Bruce Knight, Yunguo Yu, and Ehud Kaplan. 2011. “Estimating the Amount of Information Conveyed by a Population of Neurons.” Frontiers in Neuroscience 5: 90.
Eden, U, L Frank, R Barbieri, V Solo, and E Brown. 2004. “Dynamic Analysis of Neural Encoding by Point Process Adaptive Filtering.” Neural Computation 16 (5): 971–98.
Elman, Jeffrey L. 1990. “Finding Structure in Time.” Cognitive Science 14: 179–211.
———. 1993. “Learning and Development in Neural Networks: The Importance of Starting Small.” Cognition 48: 71–99.
Fee, Michale S, Alexay A Kozhevnikov, and Richard H Hahnloser. 2004. “Neural Mechanisms of Vocal Sequence Generation in the Songbird.” Annals of the New York Academy of Sciences 1016: 153–70.
Fernández, Pau, and Ricard V Solé. 2007. “Neutral Fitness Landscapes in Signalling Networks.” Journal of The Royal Society Interface 4 (12): 41.
Haslinger, Robert, Kristina Lisa Klinkner, and Cosma Rohilla Shalizi. 2010. “The Computational Structure of Spike Trains.” Neural Computation 22 (1): 121–57.
Haslinger, Robert, Gordon Pipa, and Emery Brown. 2010. “Discrete Time Rescaling Theorem: Determining Goodness of Fit for Discrete Time Statistical Models of Neural Spiking.” Neural Computation 22 (10): 2477–2506.
Jin, Dezhe Z. 2009. “Generating Variable Birdsong Syllable Sequences with Branching Chain Networks in Avian Premotor Nucleus HVC.” Physical Review E 80 (5): 051902.
Jin, Dezhe Z, and Alexay A Kozhevnikov. 2011. “A Compact Statistical Model of the Song Syntax in Bengalese Finch.” PLoS Comput Biol 7 (3): e1001108.
Jonas, Eric, and Konrad Paul Kording. 2017. “Could a Neuroscientist Understand a Microprocessor?” PLOS Computational Biology 13 (1): e1005268.
Kass, Robert E., Shun-Ichi Amari, Kensuke Arai, Emery N. Brown, Casey O. Diekman, Markus Diesmann, Brent Doiron, et al. 2018. “Computational Neuroscience: Mathematical and Statistical Perspectives.” Annual Review of Statistics and Its Application 5 (1): 183–214.
Katahira, Kentaro, Kenta Suzuki, Kazuo Okanoya, and Masato Okada. 2011. “Complex Sequencing Rules of Birdsong Can Be Explained by Simple Hidden Markov Processes.” PLoS ONE 6 (9): e24516.
Kay, Kenneth, Jason E. Chung, Marielena Sosa, Jonathan S. Schor, Mattias P. Karlsson, Margaret C. Larkin, Daniel F. Liu, and Loren M. Frank. 2020. “Constant Sub-second Cycling between Representations of Possible Futures in the Hippocampus.” Cell 180 (3): 552–567.e25.
Kutschireiter, Anna, Simone Carlo Surace, Henning Sprekeler, and Jean-Pascal Pfister. 2015a. “A Neural Implementation for Nonlinear Filtering.” arXiv Preprint arXiv:1508.06818.
Kutschireiter, Anna, Simone C Surace, Henning Sprekeler, and Jean-Pascal Pfister. 2015b. “Approximate Nonlinear Filtering with a Recurrent Neural Network.” BMC Neuroscience 16 (Suppl 1): P196.
Lee, Honglak, Alexis Battle, Rajat Raina, and Andrew Y. Ng. 2007. “Efficient Sparse Coding Algorithms.” Advances in Neural Information Processing Systems 19: 801.
Marcus, Gary, Adam Marblestone, and Thomas Dean. 2014. “The atoms of neural computation.” Science 346 (6209): 551–52.
Nemenman, Ilya, William Bialek, and Rob de Ruyter van Steveninck. 2004. “Entropy and Information in Neural Spike Trains: Progress on the Sampling Problem.” Physical Review E 69 (5): 056111.
Olshausen, Bruno A., and David J. Field. 1996. “Emergence of Simple-Cell Receptive Field Properties by Learning a Sparse Code for Natural Images.” Nature 381 (6583): 607–9.
Olshausen, Bruno A, and David J Field. 2004. “Sparse Coding of Sensory Inputs.” Current Opinion in Neurobiology 14 (4): 481–87.
Orellana, Josue, Jordan Rodu, and Robert E. Kass. 2017. “Population Vectors Can Provide Near Optimal Integration of Information.” Neural Computation 29 (8): 2021–29.
Rumelhart, David E., James L. McClelland, and the PDP Research Group. 1986. Parallel Distributed Processing: Explorations in the Microstructure of Cognition. MIT Press.
Sandkühler, J., and A. A. Eblen-Zajjur. 1994. “Identification and Characterization of Rhythmic Nociceptive and Non-Nociceptive Spinal Dorsal Horn Neurons in the Rat.” Neuroscience 61 (4): 991–1006.
Sasahara, Kazutoshi, Martin L. Cody, David Cohen, and Charles E. Taylor. 2012. “Structural Design Principles of Complex Bird Songs: A Network-Based Approach.” PLoS ONE 7 (9): e44436.
Shen, Yanning, Brian Baingana, and Georgios B. Giannakis. 2016. “Nonlinear Structural Vector Autoregressive Models for Inferring Effective Brain Network Connectivity.” arXiv:1610.06551 [stat], October.
Simoncelli, Eero P, and Bruno A Olshausen. 2001. “Natural Image Statistics and Neural Representation.” Annual Review of Neuroscience 24 (1): 1193–1216.
Smith, A, and E Brown. 2003. “Estimating a State-Space Model from Point Process Observations.” Neural Computation 15 (5): 965–91.
Smith, Evan C., and Michael S. Lewicki. 2004. “Learning Efficient Auditory Codes Using Spikes Predicts Cochlear Filters.” In Advances in Neural Information Processing Systems, 1289–96.
———. 2006. “Efficient Auditory Coding.” Nature 439 (7079): 978–82.
Smith, Evan, and Michael S. Lewicki. 2005. “Efficient Coding of Time-Relative Structure Using Spikes.” Neural Computation 17 (1): 19–45.
Stolk, Arjen, Matthijs L. Noordzij, Lennart Verhagen, Inge Volman, Jan-Mathijs Schoffelen, Robert Oostenveld, Peter Hagoort, and Ivan Toni. 2014. “Cerebral Coherence Between Communicators Marks the Emergence of Meaning.” Proceedings of the National Academy of Sciences 111 (51): 18183–88.
Strong, Steven P, Roland Koberle, Rob R de Ruyter van Steveninck, and William Bialek. 1998. “Entropy and Information in Neural Spike Trains.” Phys. Rev. Lett. 80 (1): 197–200.
Vargas-Irwin, Carlos E., David M. Brandman, Jonas B. Zimmermann, John P. Donoghue, and Michael J. Black. 2015. “Spike Train SIMilarity Space (SSIMS): A Framework for Single Neuron and Ensemble Data Analysis.” Neural Computation 27 (1): 1–31.
Volgushev, Maxim, Vladimir Ilin, and Ian H. Stevenson. 2015. “Identifying and Tracking Simulated Synaptic Inputs from Neuronal Firing: Insights from In Vitro Experiments.” PLoS Computational Biology 11 (3).
Zeki, Semir, John Paul Romaya, Dionigi M. T. Benincasa, and Michael F. Atiyah. 2014. “The experience of mathematical beauty and its neural correlates.” Frontiers in Human Neuroscience 8.
