Neurons

Neural networks made of real neurons, in functioning brains



How do brains work?

I mean, how do brains work at a level slightly higher than a synapse, but much lower than, e.g., psychology? "How is thought done?" etc.

Notes pertaining to large, artificial networks are filed under artificial neural networks. The messy, biological end of the stick is here. Since brains seem to be the seat of the flashiest and most important bit of the computing taking place in our bodies, we understandably want to know how they work, in order to

  • fix Alzheimer's disease
  • steal cool learning tricks
  • endow the children of elites with superhuman mental prowess to cement their places as Übermenschen fit to rule the thousand year Reich
  • …or whatever.

Real brains differ from the "neuron-inspired" computation of the simulacrum in many ways, beyond the usual gap between model and reality. The resemblance between "neural networks" and actual neurons is intentionally loose, for reasons of convenience.

For one example, most simulated neural networks use continuous activations updated in discrete time, unlike spiking biological neurons, which are driven by discrete events in continuous time.

Real brains also support heterogeneous neuron types, have messier layer organisation, use far less power, lack well-defined backpropagation (or at least not in the same form), and differ in many other ways that I, as a non-specialist, do not know.
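The discrete-time/continuous-time contrast can be made concrete with a toy sketch (my own illustration, not any particular model from the literature): an artificial unit emits a continuous value once per step, while a leaky integrate-and-fire neuron, integrated here with crude Euler steps, emits discrete spike events at real-valued times.

```python
import numpy as np

# Discrete-time "artificial" unit: a continuous activation, one value per step.
def artificial_unit(inputs, weights):
    return np.tanh(inputs @ weights)  # continuous output in (-1, 1)

# Continuous-time leaky integrate-and-fire neuron (Euler integration).
# Output is a list of spike times: discrete events in continuous time.
def lif_spike_times(input_current, t_max=0.1, dt=1e-4,
                    tau=0.02, v_thresh=1.0, v_reset=0.0):
    v, t, spikes = 0.0, 0.0, []
    while t < t_max:
        v += (dt / tau) * (input_current - v)  # leaky integration
        if v >= v_thresh:                      # threshold crossing = spike
            spikes.append(t)
            v = v_reset
        t += dt
    return spikes

rate_out = artificial_unit(np.array([0.5, -0.2]), np.array([1.0, 2.0]))
spikes = lif_spike_times(input_current=1.5)  # suprathreshold drive
```

All parameter values here are arbitrary, chosen only to make the unit fire a few times within the simulated window.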

Things to learn more about:

  • Just saw a talk by Dan Cireșan in which he mentioned the importance of "foveation": blurring the edges of an image when training classifiers on it, thus encouraging the model to ignore peripheral detail in order to learn better. What, rigorously speaking, is happening there? A nice actual crossover between biological neural nets and fake ones.
  • Algorithmic statistics of neurons sounds interesting.
  • Modelling mind as machine learning.
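For a rough sense of what "foveation" means operationally, here is a minimal sketch, my own interpretation of the blur-the-edges idea rather than Cireșan's actual pipeline: keep the centre of the image sharp and blend smoothly towards a blurred copy as distance from the centre grows.

```python
import numpy as np

# Simple box blur, implemented by summing shifted copies of a padded image.
def box_blur(img, k=5):
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

# "Foveate": sharp inside sharp_radius, increasingly blurred towards the edges.
def foveate(img, sharp_radius=0.3):
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot((yy - h / 2) / (h / 2), (xx - w / 2) / (w / 2))
    alpha = np.clip((r - sharp_radius) / (1 - sharp_radius), 0, 1)
    return (1 - alpha) * img + alpha * box_blur(img)

img = np.random.rand(64, 64)
fov = foveate(img)  # centre pixels unchanged, periphery smoothed
```

The radial mask and blur kernel are arbitrary choices for illustration; a real training pipeline would presumably tune both.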

How computationally complex is a neuron?

Empirically quantifying computation is hard, but people try to do it all the time for brains. Classics try to estimate structure in neural spike trains (Crumiller et al. 2011; Haslinger, Klinkner, and Shalizi 2010; Nemenman, Bialek, and de Ruyter van Steveninck 2004), often by empirical entropy estimates.
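To see what an empirical entropy estimate of a spike train looks like, here is a naive plug-in estimator in the spirit of the "direct method" (Strong et al. 1998): bin the spikes into a binary sequence, slice it into words of fixed length, and take the empirical entropy of the word distribution. This estimator is badly biased for small samples, which is precisely the difficulty the papers cited above grapple with.

```python
import numpy as np
from collections import Counter

# Plug-in entropy of length-L words from a binary (binned) spike train, in bits.
def word_entropy(binary_train, word_len=5):
    words = [tuple(binary_train[i:i + word_len])
             for i in range(len(binary_train) - word_len + 1)]
    counts = Counter(words)
    n = sum(counts.values())
    probs = np.array([c / n for c in counts.values()])
    return -np.sum(probs * np.log2(probs))  # bits per word

rng = np.random.default_rng(0)
train = (rng.random(10_000) < 0.1).astype(int)  # i.i.d. Bernoulli(0.1) "spikes"
h = word_entropy(train, word_len=5)
```

For this i.i.d. toy input the true value is 5 Γ— H(0.1) β‰ˆ 2.3 bits per word, so the estimate can be sanity-checked; for real spike trains the interesting (and hard) part is the temporal structure that pushes the estimate below the independent-bin value.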

If we are prepared to accept "size of a neural network needed to approximate X" as an estimate of the complexity of X, then there are some interesting results: see Allison Whitten, How Computationally Complex Is a Single Neuron? (Beniaguev, Segev, and London 2021). OTOH, finding the smallest neural network that can approximate something is itself computationally hard, and not in general even easy to verify.

Pretty pictures of neurons

The names to look for here, for beautiful hand-drawn early neuron diagrams, are Camillo Golgi and Santiago Ramón y Cajal, especially the latter.

References

Amigó, José M, Janusz Szczepański, Elek Wajnryb, and Maria V Sanchez-Vives. 2004. "Estimating the Entropy Rate of Spike Trains via Lempel-Ziv Complexity." Neural Computation 16 (4): 717–36.
Barbieri, Riccardo, Michael C Quirk, Loren M Frank, Matthew A Wilson, and Emery N Brown. 2001. "Construction and Analysis of Non-Poisson Stimulus-Response Models of Neural Spiking Activity." Journal of Neuroscience Methods 105 (1): 25–37.
Beniaguev, David, Idan Segev, and Michael London. 2021. "Single Cortical Neurons as Deep Artificial Neural Networks." Neuron 109 (17): 2727–2739.e3.
Berwick, Robert C., Kazuo Okanoya, Gabriel J.L. Beckers, and Johan J. Bolhuis. 2011. "Songs to Syntax: The Linguistics of Birdsong." Trends in Cognitive Sciences 15 (3): 113–21.
Brette, Romain. 2008. "Generation of Correlated Spike Trains." Neural Computation.
———. 2012. "Computing with Neural Synchrony." PLoS Comput Biol 8 (6): e1002561.
Buhusi, Catalin V., and Warren H. Meck. 2005. "What Makes Us Tick? Functional and Neural Mechanisms of Interval Timing." Nature Reviews Neuroscience 6 (10): 755–65.
Cadieu, C. F. 2014. "Deep Neural Networks Rival the Representation of Primate IT Cortex for Core Visual Object Recognition." PLoS Comp. Biol. 10: e1003963.
Carhart-Harris, R. L., and D. J. Nutt. 2017. "Serotonin and Brain Function: A Tale of Two Receptors." Journal of Psychopharmacology 31 (9): 1091–1120.
Castro, Fernando de. 2019. "Cajal and the Spanish Neurological School: Neuroscience Would Have Been a Different Story Without Them." Frontiers in Cellular Neuroscience 13.
Crumiller, Marshall, Bruce Knight, Yunguo Yu, and Ehud Kaplan. 2011. "Estimating the Amount of Information Conveyed by a Population of Neurons." Frontiers in Neuroscience 5: 90.
Eden, U, L Frank, R Barbieri, V Solo, and E Brown. 2004. "Dynamic Analysis of Neural Encoding by Point Process Adaptive Filtering." Neural Computation 16 (5): 971–98.
Elman, Jeffrey L. 1990. "Finding Structure in Time." Cognitive Science 14: 179–211.
———. 1993. "Learning and Development in Neural Networks: The Importance of Starting Small." Cognition 48: 71–99.
Fee, Michale S, Alexay A Kozhevnikov, and Richard H Hahnloser. 2004. "Neural Mechanisms of Vocal Sequence Generation in the Songbird." Annals of the New York Academy of Sciences 1016: 153–70.
Fernández, Pau, and Ricard V Solé. 2007. "Neutral Fitness Landscapes in Signalling Networks." Journal of The Royal Society Interface 4 (12): 41.
Freedman, David. 1999. "Wald Lecture: On the Bernstein-von Mises Theorem with Infinite-Dimensional Parameters." The Annals of Statistics 27 (4): 1119–41.
Glickstein, Mitch. 2006. "Golgi and Cajal: The neuron doctrine and the 100th anniversary of the 1906 Nobel Prize." Current Biology 16 (5): R147–51.
Haslinger, Robert, Kristina Lisa Klinkner, and Cosma Rohilla Shalizi. 2010. "The Computational Structure of Spike Trains." Neural Computation 22 (1): 121–57.
Haslinger, Robert, Gordon Pipa, and Emery Brown. 2010. "Discrete Time Rescaling Theorem: Determining Goodness of Fit for Discrete Time Statistical Models of Neural Spiking." Neural Computation 22 (10): 2477–2506.
Jin, Dezhe Z. 2009. "Generating Variable Birdsong Syllable Sequences with Branching Chain Networks in Avian Premotor Nucleus HVC." Physical Review E 80 (5): 051902.
Jin, Dezhe Z, and Alexay A Kozhevnikov. 2011. "A Compact Statistical Model of the Song Syntax in Bengalese Finch." PLoS Comput Biol 7 (3): e1001108.
Jonas, Eric, and Konrad Paul Kording. 2017. "Could a Neuroscientist Understand a Microprocessor?" PLOS Computational Biology 13 (1): e1005268.
Kass, Robert E., Shun-Ichi Amari, Kensuke Arai, Emery N. Brown, Casey O. Diekman, Markus Diesmann, Brent Doiron, et al. 2018. "Computational Neuroscience: Mathematical and Statistical Perspectives." Annual Review of Statistics and Its Application 5 (1): 183–214.
Katahira, Kentaro, Kenta Suzuki, Kazuo Okanoya, and Masato Okada. 2011. "Complex Sequencing Rules of Birdsong Can Be Explained by Simple Hidden Markov Processes." PLoS ONE 6 (9): e24516.
Kay, Kenneth, Jason E. Chung, Marielena Sosa, Jonathan S. Schor, Mattias P. Karlsson, Margaret C. Larkin, Daniel F. Liu, and Loren M. Frank. 2020. "Constant Sub-second Cycling between Representations of Possible Futures in the Hippocampus." Cell 180 (3): 552–567.e25.
Kutschireiter, Anna, Simone Carlo Surace, Henning Sprekeler, and Jean-Pascal Pfister. 2015a. "A Neural Implementation for Nonlinear Filtering." arXiv Preprint arXiv:1508.06818.
Kutschireiter, Anna, Simone C Surace, Henning Sprekeler, and Jean-Pascal Pfister. 2015b. "Approximate Nonlinear Filtering with a Recurrent Neural Network." BMC Neuroscience 16 (Suppl 1): P196.
Lee, Honglak, Alexis Battle, Rajat Raina, and Andrew Y. Ng. 2007. "Efficient Sparse Coding Algorithms." Advances in Neural Information Processing Systems 19: 801.
Marcus, Gary, Adam Marblestone, and Thomas Dean. 2014. "The atoms of neural computation." Science 346 (6209): 551–52.
Nemenman, Ilya, William Bialek, and Rob de Ruyter van Steveninck. 2004. "Entropy and Information in Neural Spike Trains: Progress on the Sampling Problem." Physical Review E 69 (5): 056111.
Olshausen, Bruno A., and David J. Field. 1996. "Emergence of Simple-Cell Receptive Field Properties by Learning a Sparse Code for Natural Images." Nature 381 (6583): 607–9.
Olshausen, Bruno A, and David J Field. 2004. "Sparse Coding of Sensory Inputs." Current Opinion in Neurobiology 14 (4): 481–87.
Orellana, Josue, Jordan Rodu, and Robert E. Kass. 2017. "Population Vectors Can Provide Near Optimal Integration of Information." Neural Computation 29 (8): 2021–29.
Parallel Distributed Processing: Explorations in the Microstructure of Cognition. 1986. MIT Press.
Parr, Thomas, Dimitrije Markovic, Stefan J. Kiebel, and Karl J. Friston. 2019. "Neuronal Message Passing Using Mean-Field, Bethe, and Marginal Approximations." Scientific Reports 9 (1): 1889.
Sandkühler, J., and A. A. Eblen-Zajjur. 1994. "Identification and Characterization of Rhythmic Nociceptive and Non-Nociceptive Spinal Dorsal Horn Neurons in the Rat." Neuroscience 61 (4): 991–1006.
Sasahara, Kazutoshi, Martin L. Cody, David Cohen, and Charles E. Taylor. 2012. "Structural Design Principles of Complex Bird Songs: A Network-Based Approach." PLoS ONE 7 (9): e44436.
Shen, Yanning, Brian Baingana, and Georgios B. Giannakis. 2016. "Nonlinear Structural Vector Autoregressive Models for Inferring Effective Brain Network Connectivity." arXiv:1610.06551 [stat], October.
Simoncelli, Eero P, and Bruno A Olshausen. 2001. "Natural Image Statistics and Neural Representation." Annual Review of Neuroscience 24 (1): 1193–1216.
Smith, A, and E Brown. 2003. "Estimating a State-Space Model from Point Process Observations." Neural Computation 15 (5): 965–91.
Smith, Evan C., and Michael S. Lewicki. 2004. "Learning Efficient Auditory Codes Using Spikes Predicts Cochlear Filters." In Advances in Neural Information Processing Systems, 1289–96.
———. 2006. "Efficient Auditory Coding." Nature 439 (7079): 978–82.
Smith, Evan, and Michael S. Lewicki. 2005. "Efficient Coding of Time-Relative Structure Using Spikes." Neural Computation 17 (1): 19–45.
Stolk, Arjen, Matthijs L. Noordzij, Lennart Verhagen, Inge Volman, Jan-Mathijs Schoffelen, Robert Oostenveld, Peter Hagoort, and Ivan Toni. 2014. "Cerebral Coherence Between Communicators Marks the Emergence of Meaning." Proceedings of the National Academy of Sciences 111 (51): 18183–88.
Strong, Steven P, Roland Koberle, Rob R de Ruyter van Steveninck, and William Bialek. 1998. "Entropy and Information in Neural Spike Trains." Phys. Rev. Lett. 80 (1): 197–200.
Vargas-Irwin, Carlos E., David M. Brandman, Jonas B. Zimmermann, John P. Donoghue, and Michael J. Black. 2015. "Spike Train SIMilarity Space (SSIMS): A Framework for Single Neuron and Ensemble Data Analysis." Neural Computation 27 (1): 1–31.
Volgushev, Maxim, Vladimir Ilin, and Ian H. Stevenson. 2015. "Identifying and Tracking Simulated Synaptic Inputs from Neuronal Firing: Insights from In Vitro Experiments." PLoS Computational Biology 11 (3).
Zeki, Semir, John Paul Romaya, Dionigi M. T. Benincasa, and Michael F. Atiyah. 2014. "The experience of mathematical beauty and its neural correlates." Frontiers in Human Neuroscience 8.
