Neural network activation functions

There is a whole cottage industry devoted to showing that neural networks are universal function approximators for fairly general choices of activation nonlinearity, under fairly general conditions. Still, you might like to play with the precise form of the nonlinearity, even making it directly learnable in its own right, because some function shapes may have better approximation properties under various assumptions about the learning problem, in a sense I will not attempt to make rigorous here, vague hand-waving arguments being the whole point of deep learning.
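For a concrete example of a learnable activation, the adaptive piecewise-linear units of Agostinelli et al. (2015) augment a ReLU with a handful of learnable hinges, h(x) = max(0, x) + Σ_s a_s max(0, −x + b_s), with the a_s and b_s trained by backprop like any other parameter. Below is a minimal sketch of that idea, assuming PyTorch; the module name, the per-feature parameterisation and the near-zero initialisation are my own choices, not the paper's reference implementation.

```python
import torch
import torch.nn as nn


class AdaptivePiecewiseLinear(nn.Module):
    """Learnable activation in the spirit of Agostinelli et al. (2015):
    h(x) = max(0, x) + sum_s a_s * max(0, -x + b_s),
    where the hinge parameters a_s, b_s are trained with the network weights.
    (A sketch, not the authors' reference implementation.)"""

    def __init__(self, num_features: int, num_hinges: int = 2):
        super().__init__()
        # One set of hinge parameters per feature, initialised near zero
        # so the unit starts out close to a plain ReLU.
        self.a = nn.Parameter(0.01 * torch.randn(num_features, num_hinges))
        self.b = nn.Parameter(0.01 * torch.randn(num_features, num_hinges))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_features); broadcast against the hinge dimension.
        x_ = x.unsqueeze(-1)                         # (batch, features, 1)
        hinges = self.a * torch.relu(-x_ + self.b)   # (batch, features, hinges)
        return torch.relu(x) + hinges.sum(dim=-1)


# Drop it in wherever a fixed nonlinearity would otherwise go.
net = nn.Sequential(
    nn.Linear(10, 32),
    AdaptivePiecewiseLinear(32),
    nn.Linear(32, 1),
)
print(net(torch.randn(4, 10)).shape)  # torch.Size([4, 1])
```

The extra parameters are cheap relative to the weight matrices, and because the unit reduces to a ReLU when the hinge coefficients are zero, the near-zero initialisation keeps early training behaviour familiar.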

I think a large part of this field has been subsumed into the stability-of-dynamical-systems setting?

Nonetheless, here are some handy references.

Agostinelli, Forest, Matthew Hoffman, Peter Sadowski, and Pierre Baldi. 2015. “Learning Activation Functions to Improve Deep Neural Networks.” In Proceedings of International Conference on Learning Representations (ICLR) 2015. http://arxiv.org/abs/1412.6830.

Anil, Cem, James Lucas, and Roger Grosse. 2018. “Sorting Out Lipschitz Function Approximation,” November. https://arxiv.org/abs/1811.05381v1.

Arjovsky, Martin, Amar Shah, and Yoshua Bengio. 2016. “Unitary Evolution Recurrent Neural Networks.” In Proceedings of the 33rd International Conference on International Conference on Machine Learning - Volume 48, 1120–8. ICML’16. New York, NY, USA: JMLR.org. http://arxiv.org/abs/1511.06464.

Balduzzi, David, Marcus Frean, Lennox Leary, J. P. Lewis, Kurt Wan-Duo Ma, and Brian McWilliams. 2017. “The Shattered Gradients Problem: If Resnets Are the Answer, Then What Is the Question?” In Proceedings of ICML, PMLR 70:342–50. http://proceedings.mlr.press/v70/balduzzi17b.html.

Clevert, Djork-Arné, Thomas Unterthiner, and Sepp Hochreiter. 2016. “Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs).” In Proceedings of ICLR. http://arxiv.org/abs/1511.07289.

Glorot, Xavier, and Yoshua Bengio. 2010. “Understanding the Difficulty of Training Deep Feedforward Neural Networks.” In Proceedings of AISTATS, 9:249–56. http://www.jmlr.org/proceedings/papers/v9/glorot10a/glorot10a.pdf.

Glorot, Xavier, Antoine Bordes, and Yoshua Bengio. 2011. “Deep Sparse Rectifier Neural Networks.” In Proceedings of AISTATS, 15:275. http://www.jmlr.org/proceedings/papers/v15/glorot11a/glorot11a.pdf.

Goodfellow, Ian J., David Warde-Farley, Mehdi Mirza, Aaron Courville, and Yoshua Bengio. 2013. “Maxout Networks.” In Proceedings of ICML, 28:1319–27. http://arxiv.org/abs/1302.4389.

He, Kaiming, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2015a. “Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification,” February. http://arxiv.org/abs/1502.01852.

———. 2015b. “Deep Residual Learning for Image Recognition.” http://arxiv.org/abs/1512.03385.

———. 2016. “Identity Mappings in Deep Residual Networks.” In Proceedings of ECCV. http://arxiv.org/abs/1603.05027.

Hochreiter, Sepp. 1998. “The Vanishing Gradient Problem During Learning Recurrent Neural Nets and Problem Solutions.” International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 6: 107–15. http://www.worldscientific.com/doi/abs/10.1142/S0218488598000094.

Hochreiter, Sepp, Yoshua Bengio, Paolo Frasconi, and Jürgen Schmidhuber. 2001. “Gradient Flow in Recurrent Nets: The Difficulty of Learning Long-Term Dependencies.” In A Field Guide to Dynamical Recurrent Neural Networks. IEEE Press. http://www.bioinf.jku.at/publications/older/ch7.pdf.

Klambauer, Günter, Thomas Unterthiner, Andreas Mayr, and Sepp Hochreiter. 2017. “Self-Normalizing Neural Networks,” June. http://arxiv.org/abs/1706.02515.

Maas, Andrew L., Awni Y. Hannun, and Andrew Y. Ng. 2013. “Rectifier Nonlinearities Improve Neural Network Acoustic Models.” In Proceedings of ICML. Vol. 30. https://web.stanford.edu/~awni/papers/relu_hybrid_icml2013_final.pdf.

Pascanu, Razvan, Tomas Mikolov, and Yoshua Bengio. 2013. “On the Difficulty of Training Recurrent Neural Networks.” In Proceedings of ICML, 1310–8. http://arxiv.org/abs/1211.5063.

Srivastava, Rupesh Kumar, Klaus Greff, and Jürgen Schmidhuber. 2015. “Highway Networks.” http://arxiv.org/abs/1505.00387.

Wisdom, Scott, Thomas Powers, John Hershey, Jonathan Le Roux, and Les Atlas. 2016. “Full-Capacity Unitary Recurrent Neural Networks.” In Advances in Neural Information Processing Systems, 4880–8. http://papers.nips.cc/paper/6327-full-capacity-unitary-recurrent-neural-networks.