If you do not bother to train your neural net, what happens? In the infinite-width limit you get a Gaussian process. But there are also architectures which do not rely on that limiting argument and which are nonetheless random.

## Recurrent: Echo state networks / random reservoir networks

This sounds deliciously lazy. At a glance the recipe is: construct a random recurrent network, i.e. a network of random saturating IIR filters; let the network converge to a steady state for a given stimulus; those reservoir states are the features to which you fit your classifier/regressor/etc.
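A minimal sketch of that recipe in NumPy, under the usual echo-state assumptions (fixed random weights rescaled to spectral radius below 1; only a linear readout trained). The dimensions, leak rate, and toy one-step-ahead prediction task are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: scalar input, 200-unit reservoir.
n_in, n_res = 1, 200

# Random, fixed weights. Rescaling W to spectral radius < 1
# encourages the "echo state property" (fading memory).
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.normal(0, 1, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

def run_reservoir(inputs, leak=0.3):
    """Drive the untrained reservoir with an input sequence and
    collect the states; these states are the features."""
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = (1 - leak) * x + leak * np.tanh(W_in @ np.atleast_1d(u) + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy task: one-step-ahead prediction of a sine wave.
t = np.linspace(0, 20 * np.pi, 2000)
u = np.sin(t)
X = run_reservoir(u[:-1])
y = u[1:]

# Only the linear readout is trained, here by ridge regression.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
pred = X @ W_out
```

The recurrent weights are never touched; all the "training" is one linear solve for the readout.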

Easy to implement, that. I wonder when it actually works, and what the constraints on topology etc. are.

Some of the literature claims these are based on spiking (i.e. event-driven) models, but AFAICT this is not necessary, although it might be convenient for convergence.

Various claims are made about how they avoid the training difficulties of similarly basic RNNs by being essentially untrained; you use them as a feature factory for another supervised output algorithm.

A suggestive parallel with random projections. Not strictly recurrent, but the same general idea: He, Wang, and Hopcroft (2016).
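The non-recurrent version is easy to sketch: push the input through a fixed random projection and nonlinearity, then train only a linear readout on the resulting features. The layer width, ridge penalty, and toy regression target below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sizes: 2-d input, 300 random features.
n_in, n_hidden = 2, 300

W = rng.normal(0, 1, (n_hidden, n_in))  # fixed random projection
b = rng.uniform(-1, 1, n_hidden)        # fixed random biases

def features(X):
    # Random linear projection followed by a fixed nonlinearity;
    # nothing here is ever trained.
    return np.tanh(X @ W.T + b)

# Toy regression target: a smooth function of the inputs.
X = rng.uniform(-2, 2, (1000, n_in))
y = np.sin(X[:, 0]) * np.cos(X[:, 1])

# Train only the linear readout by ridge regression.
H = features(X)
ridge = 1e-4
beta = np.linalg.solve(H.T @ H + ridge * np.eye(n_hidden), H.T @ y)

mse = np.mean((H @ beta - y) ** 2)
```

Structurally this is a reservoir with the recurrence removed: a random feature map plus a trained linear readout.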

Lukoševičius and Jaeger (2009) mapped out the various types as of 2009:

> From a dynamical systems perspective, there are two main classes of RNNs. Models from the first class are characterized by an energy-minimizing stochastic dynamics and symmetric connections. The best known instantiations are Hopfield networks, Boltzmann machines, and the recently emerging Deep Belief Networks. These networks are mostly trained in some unsupervised learning scheme. Typical targeted network functionalities in this field are associative memories, data compression, the unsupervised modeling of data distributions, and static pattern classification, where the model is run for multiple time steps per single input instance to reach some type of convergence or equilibrium (but see e.g., Taylor, Hinton, and Roweis (2006) for extension to temporal data). The mathematical background is rooted in statistical physics. In contrast, the second big class of RNN models typically features a deterministic update dynamics and directed connections. Systems from this class implement nonlinear filters, which transform an input time series into an output time series. The mathematical background is nonlinear dynamical systems. The standard training mode is supervised.

## Non-random reservoir computing

See Reservoir computing.

## Random convolutions


## References

*Neural Networks* 21 (5): 786–95.

*arXiv:1612.02734 [cs]*, December.

*Information Sciences* 328: 546–57.

*arXiv:1605.08346 [cs, math, stat]*, May.

*IEEE Transactions on Electronic Computers* EC-14 (3): 326–34.

*Nature Communications* 12 (1): 5564.

*IEEE Transactions on Signal Processing* 64 (13): 3444–57.

*arXiv:1606.05316 [cs]*, June.

*arXiv:1401.2224 [cs]*, January.

*Proceedings of the 3rd ACM International Conference on Nanoscale Computing and Communication*, 13:1–6. NANOCOM '16. New York, NY, USA: ACM.

*2009 International Joint Conference on Neural Networks*, 1018–24.

*Expert Systems with Applications* 39 (2): 1597–1606.

*Advances in Neural Information Processing Systems*.

*International Journal of Information Technology* 11 (1): 16–24.

*2004 IEEE International Joint Conference on Neural Networks, 2004. Proceedings*, 2:985–990, vol. 2.

*Neurocomputing*, Neural Networks Selected Papers from the 7th Brazilian Symposium on Neural Networks (SBRN '04), 70 (1–3): 489–501.

*Information Sciences* 382–383 (March): 170–78.

*Computer Science Review* 3 (3): 127–49.

*Computational Neuroscience: A Comprehensive Approach*, 575–605. Chapman & Hall/CRC.

*arXiv:1607.01649 [math]*, July.

*arXiv preprint arXiv:1703.08961*.

*Physical Review Letters* 120 (2): 024102.

*Chaos: An Interdisciplinary Journal of Nonlinear Science* 27 (12): 121102.

*Medium* (blog).

*Advances in Neural Information Processing Systems*, 1313–20. Curran Associates, Inc.

*Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery* 7 (2).

*2004 IEEE International Joint Conference on Neural Networks, 2004. Proceedings*, 2:843–848, vol. 2.

*Advances in Neural Information Processing Systems*, 1345–52.

*Neural Networks* 20 (3): 424–32.

*IEEE Transactions on Audio, Speech, and Language Processing* 21 (11): 2439–50.

*Information Sciences* 364–365 (C): 146–55.
