Auer, Peter, Harald Burgsteiner, and Wolfgang Maass. 2008.
“A Learning Rule for Very Simple Universal Approximators Consisting of a Single Layer of Perceptrons.” Neural Networks 21 (5): 786–95.
https://doi.org/10.1016/j.neunet.2007.12.036.
Baldi, Pierre, Peter Sadowski, and Zhiqin Lu. 2016.
“Learning in the Machine: Random Backpropagation and the Learning Channel.” December 8, 2016.
http://arxiv.org/abs/1612.02734.
Cao, Feilong, Dianhui Wang, Houying Zhu, and Yuguang Wang. 2016.
“An Iterative Learning Algorithm for Feedforward Neural Networks with Random Weights.” Information Sciences 328: 546–57.
https://doi.org/10.1016/j.ins.2015.09.002.
Charles, Adam, Dong Yin, and Christopher Rozell. 2016.
“Distributed Sequence Memory of Multidimensional Inputs in Recurrent Networks.” May 26, 2016.
http://arxiv.org/abs/1605.08346.
Cover, T. M. 1965.
“Geometrical and Statistical Properties of Systems of Linear Inequalities with Applications in Pattern Recognition.” IEEE Transactions on Electronic Computers EC-14 (3): 326–34.
https://doi.org/10.1109/PGEC.1965.264137.
Giryes, R., G. Sapiro, and A. M. Bronstein. 2016.
“Deep Neural Networks with Random Gaussian Weights: A Universal Classification Strategy?” IEEE Transactions on Signal Processing 64 (13): 3444–57.
https://doi.org/10.1109/TSP.2016.2546221.
Globerson, Amir, and Roi Livni. 2016.
“Learning Infinite-Layer Networks: Beyond the Kernel Trick.” June 16, 2016.
http://arxiv.org/abs/1606.05316.
Goudarzi, Alireza, Peter Banda, Matthew R. Lakin, Christof Teuscher, and Darko Stefanovic. 2014.
“A Comparative Study of Reservoir Computing for Temporal Signal Processing.” January 9, 2014.
http://arxiv.org/abs/1401.2224.
Goudarzi, Alireza, and Christof Teuscher. 2016.
“Reservoir Computing: Quo Vadis?” In Proceedings of the 3rd ACM International Conference on Nanoscale Computing and Communication (NANOCOM’16), 13:1–6. New York, NY, USA: ACM.
https://doi.org/10.1145/2967446.2967448.
Grzyb, B. J., E. Chinellato, G. M. Wojcik, and W. A. Kaminski. 2009.
“Which Model to Use for the Liquid State Machine?” In 2009 International Joint Conference on Neural Networks, 1018–24.
https://doi.org/10.1109/IJCNN.2009.5178822.
Hazan, Hananel, and Larry M. Manevitz. 2012.
“Topological Constraints and Robustness in Liquid State Machines.” Expert Systems with Applications 39 (2): 1597–1606.
https://doi.org/10.1016/j.eswa.2011.06.052.
He, Kun, Yan Wang, and John Hopcroft. 2016.
“A Powerful Generative Model Using Random Weights for the Deep Image Representation.” In Advances in Neural Information Processing Systems.
http://arxiv.org/abs/1606.04801.
Huang, Guang-Bin, and Chee-Kheong Siew. 2005.
“Extreme Learning Machine with Randomly Assigned RBF Kernels.” International Journal of Information Technology 11 (1): 16–24.
http://pop.intjit.org/journal/volume/11/1/111_2.pdf.
Huang, Guang-Bin, Qin-Yu Zhu, and Chee-Kheong Siew. 2004.
“Extreme Learning Machine: A New Learning Scheme of Feedforward Neural Networks.” In 2004 IEEE International Joint Conference on Neural Networks, 2:985–90.
https://doi.org/10.1109/IJCNN.2004.1380068.
———. 2006.
“Extreme Learning Machine: Theory and Applications.” Neurocomputing (Selected Papers from the 7th Brazilian Symposium on Neural Networks, SBRN ’04) 70 (1–3): 489–501.
https://doi.org/10.1016/j.neucom.2005.12.126.
Li, Ming, and Dianhui Wang. 2017.
“Insights into Randomized Algorithms for Neural Networks: Practical Issues and Common Pitfalls.” Information Sciences 382–383 (March): 170–78.
https://doi.org/10.1016/j.ins.2016.12.007.
Lukoševičius, Mantas, and Herbert Jaeger. 2009.
“Reservoir Computing Approaches to Recurrent Neural Network Training.” Computer Science Review 3 (3): 127–49.
https://doi.org/10.1016/j.cosrev.2009.03.005.
Maass, W., T. Natschläger, and H. Markram. 2004.
“Computational Models for Generic Cortical Microcircuits.” In Computational Neuroscience: A Comprehensive Approach, 575–605. Chapman & Hall/CRC.
http://www.igi.tu-graz.ac.at/maass/psfiles/149-v05.pdf.
Martinsson, Per-Gunnar. 2016.
“Randomized Methods for Matrix Computations and Analysis of High Dimensional Data.” July 6, 2016.
http://arxiv.org/abs/1607.01649.
Oyallon, Edouard, Eugene Belilovsky, and Sergey Zagoruyko. 2017.
“Scaling the Scattering Transform: Deep Hybrid Networks.”
https://arxiv.org/abs/1703.08961.
Rahimi, Ali, and Benjamin Recht. 2009.
“Weighted Sums of Random Kitchen Sinks: Replacing Minimization with Randomization in Learning.” In Advances in Neural Information Processing Systems, 1313–20. Curran Associates, Inc.
http://papers.nips.cc/paper/3495-weighted-sums-of-random-kitchen-sinks-replacing-minimization-with-randomization-in-learning.
Scardapane, Simone, and Dianhui Wang. 2017.
“Randomness in Neural Networks: An Overview.” Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery 7 (2).
https://doi.org/10.1002/widm.1200.
Steil, J. J. 2004.
“Backpropagation-Decorrelation: Online Recurrent Learning with O(N) Complexity.” In 2004 IEEE International Joint Conference on Neural Networks, 2:843–48.
https://doi.org/10.1109/IJCNN.2004.1380039.
Taylor, Graham W., Geoffrey E. Hinton, and Sam T. Roweis. 2006.
“Modeling Human Motion Using Binary Latent Variables.” In Advances in Neural Information Processing Systems, 1345–52.
http://machinelearning.wustl.edu/mlpapers/paper_files/NIPS2006_693.pdf.
Tong, Matthew H., Adam D. Bickett, Eric M. Christiansen, and Garrison W. Cottrell. 2007.
“Learning Grammatical Structure with Echo State Networks.” Neural Networks 20 (3): 424–32.
https://doi.org/10.1016/j.neunet.2007.04.013.
Triefenbach, F., A. Jalalvand, K. Demuynck, and J. P. Martens. 2013.
“Acoustic Modeling With Hierarchical Reservoirs.” IEEE Transactions on Audio, Speech, and Language Processing 21 (11): 2439–50.
https://doi.org/10.1109/TASL.2013.2280209.
Zhang, Le, and P. N. Suganthan. 2016.
“A Survey of Randomized Algorithms for Training Neural Networks.” Information Sciences 364–365: 146–55.
https://doi.org/10.1016/j.ins.2016.01.039.