Random neural networks

February 17, 2017 – October 12, 2021


If you do not bother to train your neural net, what happens? In the infinite-width limit you get a Gaussian process. There are, however, a number of network architectures which do not rely on that argument and which are nonetheless random.
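The Gaussian-process limit is easy to poke at numerically. Here is a minimal sketch (mine, not from any cited paper): for a fixed input, sample the output of an untrained one-hidden-layer tanh network over many independent weight draws; as the width grows, the output distribution looks increasingly Gaussian. The widths and the 1/√width scaling are illustrative choices.

```python
# Minimal sketch: for a fixed input x, the output of an untrained one-hidden-
# layer net becomes approximately Gaussian as the width grows (CLT over units).
import numpy as np

rng = np.random.default_rng(0)

def random_net_output(x, width, n_draws=5000):
    """Outputs f(x) = sum_j v_j * tanh(w_j . x) / sqrt(width),
    sampled over n_draws independent random weight draws."""
    d = x.shape[0]
    W = rng.normal(size=(n_draws, width, d))     # input->hidden weights
    v = rng.normal(size=(n_draws, width))        # hidden->output weights
    h = np.tanh(W @ x)                           # hidden activations
    return (v * h).sum(axis=1) / np.sqrt(width)  # scaled readout

x = rng.normal(size=3)
for width in (1, 10, 1000):
    f = random_net_output(x, width)
    # Excess kurtosis near 0 indicates Gaussian-looking marginals.
    kurt = ((f - f.mean()) ** 4).mean() / f.var() ** 2 - 3
    print(f"width={width:5d}  mean={f.mean():+.3f}  excess kurtosis={kurt:+.3f}")
```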

1 Recurrent: Echo State Machines / Random reservoir networks

This sounds deliciously lazy. At a glance, the process is to construct a random recurrent network, i.e. a network of random saturating IIR filters, let the network converge towards a steady state for a given stimulus, and use the resulting states as the features to which you fit your classifier/regressor/etc.

Easy to implement, that. I wonder when it actually works in practice, and what constraints on topology etc. are needed.

Some of the literature claims these are based on spiking (i.e. event-driven) models, but AFAICT this is not necessary, although it might be convenient for convergence.

Various claims are made about how they avoid the training difficulties of similarly basic RNNs by being essentially untrained; you use them as a feature factory for another supervised output algorithm.
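For concreteness, a minimal echo-state-network sketch, assuming the standard recipe (fixed random reservoir with spectral radius below 1, only a ridge-regression readout trained). The reservoir size, spectral radius, ridge penalty, and toy sine-prediction task are placeholder choices of mine, not anything from the papers cited here.

```python
# Echo state network sketch: untrained random reservoir + trained linear readout.
import numpy as np

rng = np.random.default_rng(42)

n_res, n_in = 200, 1
W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))   # fixed random input weights
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))     # rescale: spectral radius < 1

def run_reservoir(u):
    """Collect reservoir states for an input sequence u of shape (T, n_in)."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W_in @ u_t + W @ x)              # untrained recurrence
        states.append(x.copy())
    return np.array(states)

# Toy task: one-step-ahead prediction of a sine wave.
t = np.arange(2000)
u = np.sin(0.1 * t)[:, None]
y = np.sin(0.1 * (t + 1))                            # target: next value
X = run_reservoir(u)[200:]                           # drop warm-up states
Y = y[200:]

ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ Y)  # readout only
print("train MSE:", np.mean((X @ W_out - Y) ** 2))
```

Only `W_out` is fit; the reservoir weights stay at their random initial values, which is the whole point.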

Suggestive parallel with random projections. Not strictly recurrent, but same general idea: He, Wang, and Hopcroft (2016).
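The feedforward analogue is similarly compact. The sketch below is my own illustration of random-feature regression in the spirit of Rahimi and Recht (2009): a frozen random projection plus a cosine nonlinearity, with only a linear readout fit by least squares. The feature count and scales are arbitrary.

```python
# Random projection features with a trained linear readout ("random kitchen sinks"-style).
import numpy as np

rng = np.random.default_rng(7)

def random_features(X, n_features=500, scale=1.0):
    """Map inputs through a fixed random affine map and a cosine nonlinearity."""
    d = X.shape[1]
    W = rng.normal(scale=scale, size=(d, n_features))  # frozen random weights
    b = rng.uniform(0, 2 * np.pi, size=n_features)
    return np.cos(X @ W + b)

# Toy regression: learn y = sin(3x) from random features via least squares.
X = rng.uniform(-1, 1, size=(500, 1))
y = np.sin(3 * X[:, 0])
Phi = random_features(X)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
print("train MSE:", np.mean((Phi @ w - y) ** 2))
```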

Lukoševičius and Jaeger (2009) mapped out the various types as of 2009:

From a dynamical systems perspective, there are two main classes of RNNs. Models from the first class are characterized by an energy-minimizing stochastic dynamics and symmetric connections. The best known instantiations are Hopfield networks, Boltzmann machines, and the recently emerging Deep Belief Networks. These networks are mostly trained in some unsupervised learning scheme. Typical targeted network functionalities in this field are associative memories, data compression, the unsupervised modeling of data distributions, and static pattern classification, where the model is run for multiple time steps per single input instance to reach some type of convergence or equilibrium (but see e.g., Taylor, Hinton, and Roweis (2006) for extension to temporal data). The mathematical background is rooted in statistical physics. In contrast, the second big class of RNN models typically features a deterministic update dynamics and directed connections. Systems from this class implement nonlinear filters, which transform an input time series into an output time series. The mathematical background is nonlinear dynamical systems. The standard training mode is supervised.

2 Non-random reservoir computing

See Reservoir computing.

3 Random convolutions

πŸ—

4 References

Auer, Burgsteiner, and Maass. 2008. β€œA Learning Rule for Very Simple Universal Approximators Consisting of a Single Layer of Perceptrons.” Neural Networks.
Baldi, Sadowski, and Lu. 2016. β€œLearning in the Machine: Random Backpropagation and the Learning Channel.” arXiv:1612.02734 [Cs].
Cao, Wang, Zhu, et al. 2016. β€œAn Iterative Learning Algorithm for Feedforward Neural Networks with Random Weights.” Information Sciences.
Charles, Yin, and Rozell. 2016. β€œDistributed Sequence Memory of Multidimensional Inputs in Recurrent Networks.” arXiv:1605.08346 [Cs, Math, Stat].
Cover. 1965. β€œGeometrical and Statistical Properties of Systems of Linear Inequalities with Applications in Pattern Recognition.” IEEE Transactions on Electronic Computers.
Gauthier, Bollt, Griffith, et al. 2021. β€œNext Generation Reservoir Computing.” Nature Communications.
Gilpin. 2023. β€œModel Scale Versus Domain Knowledge in Statistical Forecasting of Chaotic Systems.” Physical Review Research.
Giryes, Sapiro, and Bronstein. 2016. β€œDeep Neural Networks with Random Gaussian Weights: A Universal Classification Strategy?” IEEE Transactions on Signal Processing.
Globerson, and Livni. 2016. β€œLearning Infinite-Layer Networks: Beyond the Kernel Trick.” arXiv:1606.05316 [Cs].
Goudarzi, Banda, Lakin, et al. 2014. β€œA Comparative Study of Reservoir Computing for Temporal Signal Processing.” arXiv:1401.2224 [Cs].
Goudarzi, and Teuscher. 2016. “Reservoir Computing: Quo Vadis?” In Proceedings of the 3rd ACM International Conference on Nanoscale Computing and Communication. NANOCOM’16.
Grzyb, Chinellato, Wojcik, et al. 2009. β€œWhich Model to Use for the Liquid State Machine?” In 2009 International Joint Conference on Neural Networks.
Hazan, and Manevitz. 2012. β€œTopological Constraints and Robustness in Liquid State Machines.” Expert Systems with Applications.
He, Wang, and Hopcroft. 2016. β€œA Powerful Generative Model Using Random Weights for the Deep Image Representation.” In Advances in Neural Information Processing Systems.
Huang, and Siew. 2005. β€œExtreme Learning Machine with Randomly Assigned RBF Kernels.” International Journal of Information Technology.
Huang, Zhu, and Siew. 2004. β€œExtreme Learning Machine: A New Learning Scheme of Feedforward Neural Networks.” In 2004 IEEE International Joint Conference on Neural Networks, 2004. Proceedings.
———. 2006. “Extreme Learning Machine: Theory and Applications.” Neurocomputing, Neural Networks Selected Papers from the 7th Brazilian Symposium on Neural Networks (SBRN ’04).
Li, and Wang. 2017. β€œInsights into Randomized Algorithms for Neural Networks: Practical Issues and Common Pitfalls.” Information Sciences.
Lukoševičius, and Jaeger. 2009. “Reservoir Computing Approaches to Recurrent Neural Network Training.” Computer Science Review.
Maass, Natschläger, and Markram. 2004. “Computational Models for Generic Cortical Microcircuits.” In Computational Neuroscience: A Comprehensive Approach.
Martinsson. 2016. β€œRandomized Methods for Matrix Computations and Analysis of High Dimensional Data.” arXiv:1607.01649 [Math].
Oyallon, Belilovsky, and Zagoruyko. 2017. β€œScaling the Scattering Transform: Deep Hybrid Networks.” arXiv Preprint arXiv:1703.08961.
Pathak, Hunt, Girvan, et al. 2018. β€œModel-Free Prediction of Large Spatiotemporally Chaotic Systems from Data: A Reservoir Computing Approach.” Physical Review Letters.
Pathak, Lu, Hunt, et al. 2017. β€œUsing Machine Learning to Replicate Chaotic Attractors and Calculate Lyapunov Exponents from Data.” Chaos: An Interdisciplinary Journal of Nonlinear Science.
Perez. 2016. β€œDeep Learning: The Unreasonable Effectiveness of Randomness.” Medium (blog).
Rahimi, and Recht. 2009. β€œWeighted Sums of Random Kitchen Sinks: Replacing Minimization with Randomization in Learning.” In Advances in Neural Information Processing Systems.
Scardapane, and Wang. 2017. β€œRandomness in Neural Networks: An Overview.” Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery.
Sompolinsky, Crisanti, and Sommers. 1988. β€œChaos in Random Neural Networks.” Physical Review Letters.
Steil. 2004. β€œBackpropagation-Decorrelation: Online Recurrent Learning with O(N) Complexity.” In 2004 IEEE International Joint Conference on Neural Networks, 2004. Proceedings.
Taylor, Hinton, and Roweis. 2006. β€œModeling Human Motion Using Binary Latent Variables.” In Advances in Neural Information Processing Systems.
Tong, Bickett, Christiansen, et al. 2007. β€œLearning Grammatical Structure with Echo State Networks.” Neural Networks.
Triefenbach, Jalalvand, Demuynck, et al. 2013. β€œAcoustic Modeling With Hierarchical Reservoirs.” IEEE Transactions on Audio, Speech, and Language Processing.
Zhang, and Suganthan. 2016. β€œA Survey of Randomized Algorithms for Training Neural Networks.” Information Sciences.