Neural nets

designing the fanciest usable differentiable loss surface

Bjorn Stenger’s brief history of machine learning.

Modern computational neural network methods reascend the hype phase transition. a.k.a. deep learning, or double plus fancy brainbots, or please give the department a bigger GPU budget, it’s not to play video games, I swear.

I don’t intend to write an introduction to deep learning here; that ground has been tilled already.

But here are some handy links to resources I frequently use and a bit of under-discussed background.


To be specific, deep learning is

  • a library of incremental improvements in areas such as stochastic gradient descent, approximation theory, graphical models, and signal processing research, plus some handy advances in SIMD architectures, which, taken together, surprisingly elicit the kind of results from machine learning that everyone was hoping we’d get at least 20 years ago, yet without requiring us to develop substantially more clever grad students to do so, or
  • the state of the art in artificial kitten recognition, or
  • a metastasizing buzzword.

It’s a frothy (some might say foamy-mouthed) research bubble right now, with such cuteness at the extrema as, e.g., Inceptionising inceptionism (Andrychowicz et al. 2016), which learns to learn neural networks using neural networks. (Well, it sort of does that, but it is a long way from a bootstrapping general AI.) Stay tuned for more of this.

There is not much to do with “neurons” left in the paradigm at this stage. What there is, is a bundle of clever tricks for training deep constrained hierarchical predictors and classifiers on modern computer hardware. Something closer to a convenient technology stack than a single “theory”.

Some network methods hew closer to the behaviour of real neurons, although not that close; simulating actual brains is a different discipline with only intermittent and indirect connections to this one.

Subtopics of interest to me:

Why bother?

There are many answers here.

The ultimate regression algorithm

…until the next ultimate regression algorithm.

It turns out that this particular learning model (class of learning models) and its training technologies are surprisingly good at getting ever better models out of ever more data. Why burn three grad students on a perfectly tractable and specific regression algorithm when you can use one algorithm to solve a whole bunch of regression problems, and one which improves with the number of computers and the amount of data you have? How much of a relief is it to capital to decouple its effectiveness from the uncertainty and obstreperousness of human labour?

Cool maths

Function approximations, interesting manifold inference. Weird product measure things, e.g. (Montufar 2014).

Even the stuff I’d assumed was trivial, like backpropagation, has a few wrinkles in practice. See Michael Nielsen’s chapter and Christopher Olah’s visual summary.
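
For concreteness, here is backpropagation written out by hand for a one-hidden-layer network in plain numpy. This is a toy sketch with made-up sizes; real frameworks automate exactly this chain-rule bookkeeping.

```python
import numpy as np

# one hidden layer, squared-error loss; backprop is the chain rule,
# with the practical wrinkle that forward quantities must be cached
rng = np.random.default_rng(1)
x = rng.normal(size=(5, 3))                # 5 examples, 3 features
y = rng.normal(size=(5, 1))
W1, W2 = rng.normal(size=(3, 4)), rng.normal(size=(4, 1))

for step in range(200):
    # forward pass, caching the intermediate activations
    a = np.tanh(x @ W1)
    err = a @ W2 - y                       # loss = 0.5 * (err ** 2).sum()
    # backward pass: gradients via the chain rule
    gW2 = a.T @ err
    gW1 = x.T @ ((err @ W2.T) * (1 - a ** 2))   # tanh'(z) = 1 - tanh(z)^2
    W1 -= 0.01 * gW1
    W2 -= 0.01 * gW2

print(0.5 * (err ** 2).sum())              # loss shrinks over training
```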

Yes, this is a regular paper mill. Not only are there probably new insights to be had here, but also you can recycle any old machine learning insight, replace a layer in a network with that and poof — new paper.

Insight into the mind

🏗 Maybe.

There are claims of communication between real neurology and neural networks in computer vision, but elsewhere neural networks are driven by their similarities to other things, such as being differentiable relaxations of traditional models (differentiable stack machines!), or being a license to fit hierarchical models without regard for statistical niceties.

There might be some kind of occasional “stylised fact”-type relationship here.

Trippy art projects

See generative art and neural networks.

Hip keywords for NN models

Not necessarily mutually exclusive; some design patterns you can use.

There are many summaries floating around here. Some that I looked at are Tomasz Malisiewicz’s summary of Deep Learning Trends @ ICLR 2016, the Neural network zoo, and Simon Brugman’s deep learning papers.

Some of these are descriptions of topologies, others of training tricks or whatever. Recurrent and convolutional are two types of topologies you might have in your ANN. But there are so many other possible ones: “Grid”, “highway”, “Turing”, and others…

Many are mentioned in passing in David McAllester’s Cognitive Architectures post.


Probabilistic neural networks

See probabilistic neural networks.


Convolutional neural networks

See the convnets entry.

Generative Adversarial Networks

Train two networks to beat each other.
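
A minimal sketch of the adversarial game in PyTorch, on a one-dimensional toy “dataset”. All architecture and hyperparameter choices here are arbitrary placeholders.

```python
import torch
from torch import nn

# G maps noise to samples; D scores how "real" a sample looks
G = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def real(n):                       # the "data": samples from N(3, 0.25)
    return torch.randn(n, 1) * 0.5 + 3.0

for step in range(2000):
    x, z = real(64), torch.randn(64, 2)
    # D learns to tell real from generated
    d_loss = bce(D(x), torch.ones(64, 1)) + bce(D(G(z).detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # G learns to make D call its samples real
    g_loss = bce(D(G(z)), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(float(G(torch.randn(256, 2)).mean()))  # drifts towards the data mean, 3
```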

Recurrent neural networks

Feedback network structures with memory, a notion of time, and a distinction between “current” and “past” state. See recurrent neural networks.
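
The essential trick fits in a few lines of numpy. This is a plain vanilla RNN cell with made-up sizes; gated cells such as LSTMs elaborate on it.

```python
import numpy as np

rng = np.random.default_rng(2)
d_in, d_h, T = 3, 5, 10
Wx = rng.normal(scale=0.3, size=(d_in, d_h))
Wh = rng.normal(scale=0.3, size=(d_h, d_h))

h = np.zeros(d_h)                    # the "current" state
for t in range(T):                   # the notion of time
    x_t = rng.normal(size=d_in)      # input at time t
    h = np.tanh(x_t @ Wx + h @ Wh)   # the past state feeds back into the present

print(h)                             # a summary of everything seen so far
```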

Grid and other axial tricks

A mini-genre. Kalchbrenner, Danihelka, and Graves (2016) connect recurrent cells across multiple axes, leading to a higher-rank MIMO system. This is natural in spatial random fields, and I am amazed it was uncommon enough to need formalizing in a paper; but apparently it was, and it did.
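
A toy version of the axial idea, using a plain tanh cell rather than the LSTM gating of the paper: each grid site combines its input with hidden states arriving along both axes.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, d_in, d_h = 4, 5, 3, 8          # grid height/width, input/hidden sizes

# one shared cell; the new state at (i, j) depends on the local input
# plus the hidden states from (i-1, j) and (i, j-1)
Wx = rng.normal(scale=0.1, size=(d_in, d_h))
Wrow = rng.normal(scale=0.1, size=(d_h, d_h))
Wcol = rng.normal(scale=0.1, size=(d_h, d_h))

x = rng.normal(size=(H, W, d_in))     # a toy spatial input field
h = np.zeros((H + 1, W + 1, d_h))     # zero-padded boundary states

for i in range(H):
    for j in range(W):
        h[i + 1, j + 1] = np.tanh(
            x[i, j] @ Wx + h[i, j + 1] @ Wrow + h[i + 1, j] @ Wcol
        )

print(h[1:, 1:].shape)                # (4, 5, 8): a state at every site
```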

Transfer learning

I have seen two versions of this term.

One starts from the idea that if you have a network that solves one, say, computer vision problem, its latent features might solve another computer vision problem very well. This is the recycling-someone-else’s-features framing. I don’t know why this has a special term - I think it’s so that you can claim to do “end-to-end” learning, but then actually do what everyone else has done forever and which works totally OK, which is to re-use other people’s work like real scientists.
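
A sketch of the feature-recycling version, assuming a recent torchvision (the weights argument is the torchvision ≥ 0.13 spelling): freeze a pretrained backbone and train only a new head.

```python
import torch
import torchvision

# grab a network pretrained on ImageNet and recycle its features
backbone = torchvision.models.resnet18(weights="IMAGENET1K_V1")
for p in backbone.parameters():
    p.requires_grad = False                   # freeze the recycled features

# swap the classification head for our own, say, 10-class problem
backbone.fc = torch.nn.Linear(backbone.fc.in_features, 10)

# only the new head gets trained
opt = torch.optim.SGD(backbone.fc.parameters(), lr=1e-3)
x = torch.randn(2, 3, 224, 224)               # stand-in images
print(backbone(x).shape)                      # torch.Size([2, 10])
```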

The other version is that you would like to do domain adaptation, which is to say, to learn from one dataset but then make predictions on a different dataset. I describe that problem as external validity.

These two things can clearly be related if you squint hard, but honestly, why use “transfer learning” for the second sense? It already has so many names that it needs no more, being also known as dataset shift, covariate shift, transferable learning, and maybe other things, since it is a fundamental problem in statistics generally, outside the domain of neural nets.

Attention mechanism

See Attention mechanism.


Spiking neural networks

Most simulated neural networks are based on a continuous activation potential and discrete time, unlike spiking biological ones, which are driven by discrete events in continuous time. There are a great many other differences from real biology. What difference does this one in particular make? I suspect it means that time is handled differently.
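
To make the contrast concrete, here is a toy leaky integrate-and-fire neuron in numpy: the membrane potential is continuous, simulated on a discrete time grid, but the output is a sequence of discrete spike events. All the constants are made up for illustration.

```python
import numpy as np

dt, tau, v_th = 1e-3, 2e-2, 1.0      # time step, leak constant, threshold
v, spikes = 0.0, []
rng = np.random.default_rng(3)

for step in range(1000):             # simulate one second
    i_in = 60.0 * rng.random()       # noisy input current
    v += dt * (-v / tau + i_in)      # leaky integration of the potential
    if v >= v_th:                    # threshold crossing: a spike event
        spikes.append(step * dt)
        v = 0.0                      # reset after spiking

print(len(spikes), "spikes in 1 s")
```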

Kernel networks

Kernel trick + ANN = kernel ANNs.

(Stay tuned for reframing more things as deep learning.)

I think this is also what convex networks are?

See Francis Bach (2014).

I’m sure the brain totes does this

See Bengio, Le Roux, Vincent, Delalleau, and Marcotte (2005).

AFAICT these all boil down to rebadged extensions of Gaussian processes but maybe I’m missing something?
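
One concrete bridge between kernel machines and shallow networks is random Fourier features (not Bach’s construction specifically, just the simplest instance I know): a one-hidden-layer network with fixed random weights whose feature inner products approximate an RBF kernel.

```python
import numpy as np

rng = np.random.default_rng(4)
d, D, gamma = 2, 500, 1.0                 # input dim, feature count, kernel width

# random "hidden layer": fixed weights, cosine activation
W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, D))
b = rng.uniform(0, 2 * np.pi, size=D)

def phi(x):
    return np.sqrt(2.0 / D) * np.cos(x @ W + b)

x1, x2 = rng.normal(size=d), rng.normal(size=d)
approx = phi(x1) @ phi(x2)                          # network feature inner product
exact = np.exp(-gamma * np.sum((x1 - x2) ** 2))     # the RBF kernel itself
print(approx, exact)                                # close, for large D
```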


Autoencoders

🏗 Making a sparse encoding of something by demanding that your network reproduce the input after passing its activations through a narrow bottleneck. Many flavours.
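
A minimal sketch of the undercomplete flavour in PyTorch; all sizes here are arbitrary.

```python
import torch
from torch import nn

# reproduce a 32-d input after squeezing it through a 4-d bottleneck
enc = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 4))
dec = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 32))
opt = torch.optim.Adam([*enc.parameters(), *dec.parameters()], lr=1e-3)

x = torch.randn(512, 32)                  # stand-in data
for step in range(500):
    code = enc(x)                         # the narrow encoding
    loss = ((dec(code) - x) ** 2).mean()  # reconstruction error
    opt.zero_grad(); loss.backward(); opt.step()

print(code.shape, float(loss))            # 4-d codes, shrinking loss
```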

Optimisation methods

Backpropagation plus stochastic gradient descent rules at the moment.

Does anything else get performance at this scale? What other techniques can be extracted from variational inference, MC sampling, or particle filters, since there is no clear reason that shoving any of these in as intermediate layers in the network is any less well-posed than a classical backprop layer? Although it does require more nous from the enthusiastic grad student.
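
For reference, the reigning recipe in miniature: autodiff computes the gradient by backpropagation, and SGD takes a noisy step on a random minibatch. A toy PyTorch sketch on a linear model, with made-up data.

```python
import torch

X = torch.randn(1000, 3)                          # synthetic regression data
y = X @ torch.tensor([1.0, -2.0, 0.5]) + 0.1 * torch.randn(1000)
w = torch.zeros(3, requires_grad=True)

for step in range(500):
    idx = torch.randint(0, 1000, (32,))           # random minibatch
    loss = ((X[idx] @ w - y[idx]) ** 2).mean()    # minibatch loss
    loss.backward()                               # backpropagation
    with torch.no_grad():
        w -= 0.05 * w.grad                        # the SGD step
        w.grad.zero_()

print(w.detach())                                 # ≈ [1.0, -2.0, 0.5]
```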

Preventing overfitting

See regularising deep learning.

Activations for neural networks

See activation functions.


Various design niceties.

Managing those dimensions

Practically, a lot of the time, managing deep learning is remembering which axis is which.

Alexander Rush argues you want a NamedTensor. Implementations:

Einsum does Einstein summation, which is also very helpful.
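
For example, in numpy (torch.einsum accepts the same subscript strings):

```python
import numpy as np

A = np.random.randn(8, 3, 4)              # a batch of matrices
B = np.random.randn(8, 4, 5)

# batched matrix product, with every axis named in the subscripts
C = np.einsum('bij,bjk->bik', A, B)
print(C.shape)                            # (8, 3, 5)

# per-batch trace of A @ B, in one spec
t = np.einsum('bij,bji->b', A, B)
print(t.shape)                            # (8,)
```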

Software stuff

For general purposes I use

I could use any of the other autodiff systems, such as…

  • Intel’s ngraph, which compiles neural nets, especially for CPUs

  • Collaboratively build, visualize, and design neural nets in browser

  • Python: Theano (now defunct) was the trailblazer

  • Lua: Torch (in practice deprecated in favour of pytorch)

  • MATLAB/Python: Caffe claims to be a “de facto standard”

  • Python/C++: Paddlepaddle is Baidu’s nonfancy NN machine

  • Minimalist C++: tiny-dnn is a C++11 implementation of deep learning. It is suitable for deep learning on limited computational resource, embedded systems and IoT devices.

  • NNpack “is an acceleration package for neural network computations. NNPACK aims to provide high-performance implementations of convnet layers for multi-core CPUs.”

    NNPACK is not intended to be directly used by machine learning researchers; instead it provides low-level performance primitives to be leveraged by higher-level frameworks

    USP: compiles to javascript.

  • javascript: see javascript machine learning

  • Julia: various

Pre-computed/trained models


The internet is full of this. Here are some selected highlights.

Michael Nielsen has a free online textbook with code examples in Python. Christopher Olah’s visual explanations make many things clear.

Andrej Karpathy’s popular, unromantic, messy guide to training neural nets in practice has a lot of tips that people tend to rediscover (I did):

It is allegedly easy to get started with training neural nets. Numerous libraries and frameworks take pride in displaying 30-line miracle snippets that solve your data problems, giving the (false) impression that this stuff is plug and play. … Unfortunately, neural nets are nothing like that. They are not “off-the-shelf” technology the second you deviate slightly from training an ImageNet classifier.

Amari, Shun-ichi. 1998. “Natural Gradient Works Efficiently in Learning.” Neural Computation 10 (2): 251–76.
Andrychowicz, Marcin, Misha Denil, Sergio Gomez, Matthew W. Hoffman, David Pfau, Tom Schaul, and Nando de Freitas. 2016. “Learning to Learn by Gradient Descent by Gradient Descent.” June 14, 2016.
Arel, I, D C Rose, and T P Karnowski. 2010. “Deep Machine Learning - A New Frontier in Artificial Intelligence Research [Research Frontier].” IEEE Computational Intelligence Magazine 5 (4): 13–18.
Arora, Sanjeev, Rong Ge, Tengyu Ma, and Ankur Moitra. 2015. “Simple, Efficient, and Neural Algorithms for Sparse Coding.” In Proceedings of The 28th Conference on Learning Theory, 40:113–49. Paris, France: PMLR.
Bach, Francis. 2014. “Breaking the Curse of Dimensionality with Convex Neural Networks.” December 30, 2014.
Baldassi, Carlo, Christian Borgs, Jennifer T. Chayes, Alessandro Ingrosso, Carlo Lucibello, Luca Saglietti, and Riccardo Zecchina. 2016. “Unreasonable Effectiveness of Learning Neural Networks: From Accessible States and Robust Ensembles to Basic Algorithmic Schemes.” Proceedings of the National Academy of Sciences 113 (48): E7655–62.
Barron, A. R. 1993. “Universal Approximation Bounds for Superpositions of a Sigmoidal Function.” IEEE Transactions on Information Theory 39 (3): 930–45.
Baydin, Atılım Güneş, Barak A. Pearlmutter, and Jeffrey Mark Siskind. 2016. “Tricks from Deep Learning.” November 10, 2016.
Bengio, Yoshua. 2009. Learning Deep Architectures for AI. Vol. 2.
Bengio, Yoshua, Aaron Courville, and Pascal Vincent. 2013. “Representation Learning: A Review and New Perspectives.” IEEE Transactions on Pattern Analysis and Machine Intelligence 35: 1798–828.
Bengio, Yoshua, and Yann LeCun. 2007. “Scaling Learning Algorithms Towards AI.” Large-Scale Kernel Machines 34: 1–41.
Bengio, Yoshua, Nicolas L. Roux, Pascal Vincent, Olivier Delalleau, and Patrice Marcotte. 2005. “Convex Neural Networks.” In Advances in Neural Information Processing Systems, 123–30.
Boser, B. 1991. “An Analog Neural Network Processor with Programmable Topology.” J. Solid State Circuits 26: 2017–25.
Brock, Andrew, Theodore Lim, J. M. Ritchie, and Nick Weston. 2017. “FreezeOut: Accelerate Training by Progressively Freezing Layers.” June 15, 2017.
Cadieu, C. F. 2014. “Deep Neural Networks Rival the Representation of Primate IT Cortex for Core Visual Object Recognition.” PLoS Comp. Biol. 10: e1003963.
Chen, Tianqi, Ian Goodfellow, and Jonathon Shlens. 2015. “Net2Net: Accelerating Learning via Knowledge Transfer.” November 17, 2015.
Cho, Kyunghyun, Bart van Merriënboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. “On the Properties of Neural Machine Translation: Encoder-Decoder Approaches.” 2014.
Choromanska, Anna, MIkael Henaff, Michael Mathieu, Gerard Ben Arous, and Yann LeCun. 2015. “The Loss Surfaces of Multilayer Networks.” In Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics, 192–204.
Ciodaro, T. 2012. “Online Particle Detection with Neural Networks Based on Topological Calorimetry Information.” J. Phys. Conf. Series 368: 012030.
Ciresan, D. 2012. “Multi-Column Deep Neural Network for Traffic Sign Classification.” Neural Networks 32: 333–38.
Cybenko, G. 1989. “Approximation by Superpositions of a Sigmoidal Function.” Mathematics of Control, Signals and Systems 2: 303–14.
Dahl, G. E. 2012. “Context-Dependent Pre-Trained Deep Neural Networks for Large Vocabulary Speech Recognition.” IEEE Transactions on Audio, Speech and Language Processing 20: 33–42.
Dauphin, Yann, Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, Surya Ganguli, and Yoshua Bengio. 2014. “Identifying and Attacking the Saddle Point Problem in High-Dimensional Non-Convex Optimization.” In Advances in Neural Information Processing Systems 27, 2933–41. Curran Associates, Inc.
Dieleman, Sander, and Benjamin Schrauwen. 2014. “End to End Learning for Music Audio.” In 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 6964–68. IEEE.
Erhan, Dumitru, Yoshua Bengio, Aaron Courville, Pierre-Antoine Manzagol, Pascal Vincent, and Samy Bengio. 2010. “Why Does Unsupervised Pre-Training Help Deep Learning?” Journal of Machine Learning Research 11: 625–60.
Farabet, C. 2013. “Learning Hierarchical Features for Scene Labeling.” IEEE Transactions on Pattern Analysis and Machine Intelligence 35: 1915–29.
Fukumizu, K., and S. Amari. 2000. “Local Minima and Plateaus in Hierarchical Structures of Multilayer Perceptrons.” Neural Networks 13 (3): 317–27.
Fukushima, Kunihiko, and Sei Miyake. 1982. “Neocognitron: A New Algorithm for Pattern Recognition Tolerant of Deformations and Shifts in Position.” Pattern Recognition 15 (6): 455–69.
Gal, Yarin, and Zoubin Ghahramani. 2016. “A Theoretically Grounded Application of Dropout in Recurrent Neural Networks.” In.
Garcia, C. 2004. “Convolutional Face Finder: A Neural Architecture for Fast and Robust Face Detection.” IEEE Transactions on Pattern Analysis and Machine Intelligence 26: 1408–23.
Gatys, Leon A., Alexander S. Ecker, and Matthias Bethge. 2015. “A Neural Algorithm of Artistic Style.” August 26, 2015.
Giryes, R., G. Sapiro, and A. M. Bronstein. 2016. “Deep Neural Networks with Random Gaussian Weights: A Universal Classification Strategy?” IEEE Transactions on Signal Processing 64 (13): 3444–57.
Giryes, Raja, Guillermo Sapiro, and Alex M. Bronstein. 2014. “On the Stability of Deep Networks.” December 18, 2014.
Globerson, Amir, and Roi Livni. 2016. “Learning Infinite-Layer Networks: Beyond the Kernel Trick.” June 16, 2016.
Goodfellow, Ian J., Jonathon Shlens, and Christian Szegedy. 2014. “Explaining and Harnessing Adversarial Examples.” December 19, 2014.
Goodfellow, Ian J., Oriol Vinyals, and Andrew M. Saxe. 2014. “Qualitatively Characterizing Neural Network Optimization Problems.” December 19, 2014.
Goodfellow, Ian, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. “Generative Adversarial Nets.” In Advances in Neural Information Processing Systems 27, edited by Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, 2672–80. NIPS’14. Cambridge, MA, USA: Curran Associates, Inc.
Hadsell, R., S. Chopra, and Y. LeCun. 2006. “Dimensionality Reduction by Learning an Invariant Mapping.” In 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2:1735–42.
Hasson, Uri, Samuel A. Nastase, and Ariel Goldstein. 2020. “Direct Fit to Nature: An Evolutionary Perspective on Biological and Artificial Neural Networks.” Neuron 105 (3): 416–34.
He, Kun, Yan Wang, and John Hopcroft. 2016. “A Powerful Generative Model Using Random Weights for the Deep Image Representation.” In Advances in Neural Information Processing Systems.
Helmstaedter, M. 2013. “Connectomic Reconstruction of the Inner Plexiform Layer in the Mouse Retina.” Nature 500: 168–74.
Hinton, G., Li Deng, Dong Yu, G. E. Dahl, A. Mohamed, N. Jaitly, A. Senior, et al. 2012. “Deep Neural Networks for Acoustic Modeling in Speech Recognition: The Shared Views of Four Research Groups.” IEEE Signal Processing Magazine 29 (6): 82–97.
Hinton, G. E. 1995. “The Wake-Sleep Algorithm for Unsupervised Neural Networks.” Science 268 (5214): 1158–61.
Hinton, Geoffrey. 2010. “A Practical Guide to Training Restricted Boltzmann Machines.” In Neural Networks: Tricks of the Trade, 9:926. Lecture Notes in Computer Science 7700. Springer Berlin Heidelberg.
Hinton, Geoffrey E. 2007. “To Recognize Shapes, First Learn to Generate Images.” In Progress in Brain Research, edited by Paul Cisek, Trevor Drew, and John F. Kalaska, Volume 165:535–47. Computational Neuroscience: Theoretical Insights into Brain Function. Elsevier.
Hinton, Geoffrey E., and Ruslan R. Salakhutdinov. 2006. “Reducing the Dimensionality of Data with Neural Networks.” Science 313 (5786): 504–7.
Hinton, G, S Osindero, and Y Teh. 2006. “A Fast Learning Algorithm for Deep Belief Nets.” Neural Computation 18 (7): 1527–54.
Hornik, Kurt, Maxwell Stinchcombe, and Halbert White. 1989. “Multilayer Feedforward Networks Are Universal Approximators.” Neural Networks 2 (5): 359–66.
Hu, Tao, Cengiz Pehlevan, and Dmitri B. Chklovskii. 2014. “A Hebbian/Anti-Hebbian Network for Online Sparse Dictionary Learning Derived from Symmetric Matrix Factorization.” In 2014 48th Asilomar Conference on Signals, Systems and Computers.
Huang, Guang-Bin, and Chee-Kheong Siew. 2005. “Extreme Learning Machine with Randomly Assigned RBF Kernels.” International Journal of Information Technology 11 (1): 16–24.
Huang, Guang-Bin, Dian Hui Wang, and Yuan Lan. 2011. “Extreme Learning Machines: A Survey.” International Journal of Machine Learning and Cybernetics 2 (2): 107–22.
Huang, Guang-Bin, Qin-Yu Zhu, and Chee-Kheong Siew. 2004. “Extreme Learning Machine: A New Learning Scheme of Feedforward Neural Networks.” In 2004 IEEE International Joint Conference on Neural Networks, 2004. Proceedings, 2:985–990 vol.2.
———. 2006. “Extreme Learning Machine: Theory and Applications.” Neurocomputing, Neural Networks Selected Papers from the 7th Brazilian Symposium on Neural Networks (SBRN ’04) 7th Brazilian Symposium on Neural Networks, 70 (1–3): 489–501.
Hubel, D. H. 1962. “Receptive Fields, Binocular Interaction, and Functional Architecture in the Cat’s Visual Cortex.” J. Physiol. 160: 106–54.
Jaderberg, Max, Wojciech Marian Czarnecki, Simon Osindero, Oriol Vinyals, Alex Graves, and Koray Kavukcuoglu. 2016. “Decoupled Neural Interfaces Using Synthetic Gradients.” August 18, 2016.
Kaiser, Łukasz, and Ilya Sutskever. 2015. “Neural GPUs Learn Algorithms.” November 25, 2015.
Kalchbrenner, Nal, Ivo Danihelka, and Alex Graves. 2016. “Grid Long Short-Term Memory.” January 7, 2016.
Kavukcuoglu, Koray, Marc’Aurelio Ranzato, and Yann LeCun. 2010. “Fast Inference in Sparse Coding Algorithms with Applications to Object Recognition.” October 17, 2010.
Kingma, Diederik P., Tim Salimans, Rafal Jozefowicz, Xi Chen, Ilya Sutskever, and Max Welling. 2016. “Improving Variational Inference with Inverse Autoregressive Flow.” In Advances in Neural Information Processing Systems 29. Curran Associates, Inc.
Krizhevsky, Alex, Ilya Sutskever, and Geoffrey E. Hinton. 2012. “Imagenet Classification with Deep Convolutional Neural Networks.” In Advances in Neural Information Processing Systems, 1097–1105.
Kulkarni, Tejas D., Will Whitney, Pushmeet Kohli, and Joshua B. Tenenbaum. 2015. “Deep Convolutional Inverse Graphics Network.” March 11, 2015.
Larsen, Anders Boesen Lindbo, Søren Kaae Sønderby, Hugo Larochelle, and Ole Winther. 2015. “Autoencoding Beyond Pixels Using a Learned Similarity Metric.” December 31, 2015.
Lawrence, S. 1997. “Face Recognition: A Convolutional Neural-Network Approach.” IEEE Transactions on Neural Networks 8: 98–113.
LeCun, Y. 1998. “Gradient-Based Learning Applied to Document Recognition.” Proceedings of the IEEE 86 (11): 2278–2324.
LeCun, Yann, Yoshua Bengio, and Geoffrey Hinton. 2015. “Deep Learning.” Nature 521 (7553): 436–44.
LeCun, Yann, Sumit Chopra, Raia Hadsell, M. Ranzato, and F. Huang. 2006. “A Tutorial on Energy-Based Learning.” Predicting Structured Data.
Lee, Honglak, Roger Grosse, Rajesh Ranganath, and Andrew Y. Ng. 2009. “Convolutional Deep Belief Networks for Scalable Unsupervised Learning of Hierarchical Representations.” In Proceedings of the 26th Annual International Conference on Machine Learning, 609–16. ICML ’09. New York, NY, USA: ACM.
Lee, Wee Sun, Peter L. Bartlett, and Robert C. Williamson. 1996. “Efficient Agnostic Learning of Neural Networks with Bounded Fan-in.” IEEE Transactions on Information Theory 42 (6, 6): 2118–32.
Leung, M. K. 2014. “Deep Learning of the Tissue-Regulated Splicing Code.” Bioinformatics 30: i121–29.
Liang, Feynman, Marcin Tomczak, Matt Johnson, Mark Gotham, Jamie Shotten, and Bill Byrne. n.d. “BachBot: Deep Generative Modeling of Bach Chorales,” 1.
Lin, Henry W., and Max Tegmark. 2016a. “Critical Behavior from Deep Dynamics: A Hidden Dimension in Natural Language.” June 21, 2016.
———. 2016b. “Why Does Deep and Cheap Learning Work so Well?” August 29, 2016.
Lipton, Zachary C. 2016a. “Stuck in a What? Adventures in Weight Space.” February 23, 2016.
———. 2016b. “The Mythos of Model Interpretability.” In.
Lipton, Zachary C., John Berkowitz, and Charles Elkan. 2015. “A Critical Review of Recurrent Neural Networks for Sequence Learning.” May 29, 2015.
Ma, J. 2015. “Deep Neural Nets as a Method for Quantitative Structure-Activity Relationships.” J. Chem. Inf. Model. 55: 263–74.
Maclaurin, Dougal, David K. Duvenaud, and Ryan P. Adams. 2015. “Gradient-Based Hyperparameter Optimization Through Reversible Learning.” In ICML, 2113–22.
Mallat, Stéphane. 2012. “Group Invariant Scattering.” Communications on Pure and Applied Mathematics 65 (10, 10): 1331–98.
———. 2016. “Understanding Deep Convolutional Networks.” January 19, 2016.
Mehta, Pankaj, and David J. Schwab. 2014. “An Exact Mapping Between the Variational Renormalization Group and Deep Learning.” October 14, 2014.
Mikolov, Tomas, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. “Efficient Estimation of Word Representations in Vector Space.” January 16, 2013.
Mikolov, Tomas, Quoc V. Le, and Ilya Sutskever. 2013. “Exploiting Similarities Among Languages for Machine Translation.” September 16, 2013.
Mnih, V. 2015. “Human-Level Control Through Deep Reinforcement Learning.” Nature 518: 529–33.
Mohamed, A. r, G. E. Dahl, and G. Hinton. 2012. “Acoustic Modeling Using Deep Belief Networks.” IEEE Transactions on Audio, Speech, and Language Processing 20 (1): 14–22.
Monner, Derek, and James A. Reggia. 2012. “A Generalized LSTM-Like Training Algorithm for Second-Order Recurrent Neural Networks.” Neural Networks 25 (January): 70–83.
Montufar, G. 2014. “When Does a Mixture of Products Contain a Product of Mixtures?” J. Discrete Math. 29: 321–47.
Mousavi, Ali, and Richard G. Baraniuk. 2017. “Learning to Invert: Signal Recovery via Deep Convolutional Networks.” In ICASSP.
Ning, F. 2005. “Toward Automatic Phenotyping of Developing Embryos from Videos.” IEEE Transactions on Image Processing 14: 1360–71.
Nøkland, Arild. 2016. “Direct Feedback Alignment Provides Learning in Deep Neural Networks.” In Advances In Neural Information Processing Systems.
Olshausen, B. A., and D. J. Field. 1996. “Natural Image Statistics and Efficient Coding.” Network (Bristol, England) 7 (2): 333–39.
Olshausen, Bruno A., and David J. Field. 1996. “Emergence of Simple-Cell Receptive Field Properties by Learning a Sparse Code for Natural Images.” Nature 381 (6583): 607–9.
Olshausen, Bruno A, and David J Field. 2004. “Sparse Coding of Sensory Inputs.” Current Opinion in Neurobiology 14 (4): 481–87.
Oord, Aäron van den. 2016. “Wavenet: A Generative Model for Raw Audio.”
Oord, Aäron van den, Nal Kalchbrenner, and Koray Kavukcuoglu. 2016. “Pixel Recurrent Neural Networks.” January 25, 2016.
Oord, Aäron van den, Nal Kalchbrenner, Oriol Vinyals, Lasse Espeholt, Alex Graves, and Koray Kavukcuoglu. 2016. “Conditional Image Generation with PixelCNN Decoders.” June 16, 2016.
Pan, Wei, Hao Dong, and Yike Guo. 2016. DropNeuron: Simplifying the Structure of Deep Neural Networks.” June 23, 2016.
Parisotto, Emilio, and Ruslan Salakhutdinov. 2017. “Neural Map: Structured Memory for Deep Reinforcement Learning.” February 27, 2017.
Pascanu, Razvan, Yann N. Dauphin, Surya Ganguli, and Yoshua Bengio. 2014. “On the Saddle Point Problem for Non-Convex Optimization.” May 19, 2014.
Paul, Arnab, and Suresh Venkatasubramanian. 2014. “Why Does Deep Learning Work? - A Perspective from Group Theory.” December 20, 2014.
Pinkus, Allan. 1999. “Approximation Theory of the MLP Model in Neural Networks.” Acta Numerica 8 (January): 143–95.
Radford, Alec, Luke Metz, and Soumith Chintala. 2015. “Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks.” In.
Ranzato, M. 2013. “Modeling Natural Images Using Gated MRFs.” IEEE Transactions on Pattern Analysis and Machine Intelligence 35 (9): 2206–22.
Ranzato, Marc’aurelio, Y.-lan Boureau, and Yann L. Cun. 2008. “Sparse Feature Learning for Deep Belief Networks.” In Advances in Neural Information Processing Systems 20, edited by J. C. Platt, D. Koller, Y. Singer, and S. T. Roweis, 1185–92. Curran Associates, Inc.
Rumelhart, David E., Geoffrey E. Hinton, and Ronald J. Williams. 1986. “Learning Representations by Back-Propagating Errors.” Nature 323 (6088): 533–36.
Sagun, Levent, V. Ugur Guney, Gerard Ben Arous, and Yann LeCun. 2014. “Explorations on High Dimensional Landscapes.” December 20, 2014.
Salimans, Tim, and Diederik P Kingma. 2016. “Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks.” In Advances in Neural Information Processing Systems 29, edited by D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, 901–9. Curran Associates, Inc.
Scardapane, Simone, Danilo Comminiello, Amir Hussain, and Aurelio Uncini. 2016. “Group Sparse Regularization for Deep Neural Networks.” July 2, 2016.
Shazeer, Noam, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, and Jeff Dean. 2017. “Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer.” January 23, 2017.
Shwartz-Ziv, Ravid, and Naftali Tishby. 2017. “Opening the Black Box of Deep Neural Networks via Information.” March 2, 2017.
Smith, Leslie N., and Nicholay Topin. 2017. “Exploring Loss Function Topology with Cyclical Learning Rates.” February 14, 2017.
Springenberg, Jost Tobias, Alexey Dosovitskiy, Thomas Brox, and Martin Riedmiller. 2014. “Striving for Simplicity: The All Convolutional Net.” In Proceedings of International Conference on Learning Representations (ICLR) 2015.
Steeg, Greg Ver, and Aram Galstyan. 2015. “The Information Sieve.” July 8, 2015.
Telgarsky, Matus. 2015. “Representation Benefits of Deep Feedforward Networks.” September 27, 2015.
Turaga, S. C. 2010. “Convolutional Networks Can Learn to Generate Affinity Graphs for Image Segmentation.” Neural Comput. 22: 511–38.
Urban, Gregor, Krzysztof J. Geras, Samira Ebrahimi Kahou, Ozlem Aslan, Shengjie Wang, Rich Caruana, Abdelrahman Mohamed, Matthai Philipose, and Matt Richardson. 2016. “Do Deep Convolutional Nets Really Need to Be Deep (Or Even Convolutional)?” March 17, 2016.
Wiatowski, Thomas, and Helmut Bölcskei. 2015. “A Mathematical Theory of Deep Convolutional Neural Networks for Feature Extraction.” In Proceedings of IEEE International Symposium on Information Theory.
Wiatowski, Thomas, Philipp Grohs, and Helmut Bölcskei. 2018. “Energy Propagation in Deep Convolutional Neural Networks.” IEEE Transactions on Information Theory 64 (7): 1–1.
Xie, Bo, Yingyu Liang, and Le Song. 2016. “Diversity Leads to Generalization in Neural Networks.” November 9, 2016.
Yu, D., and L. Deng. 2011. “Deep Learning and Its Applications to Signal and Information Processing [Exploratory DSP].” IEEE Signal Processing Magazine 28 (1): 145–54.
Zhang, Chiyuan, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. 2017. “Understanding Deep Learning Requires Rethinking Generalization.” In Proceedings of ICLR.
Zhang, Sixin, Anna Choromanska, and Yann LeCun. 2015. “Deep Learning with Elastic Averaging SGD.” In Advances In Neural Information Processing Systems.