Regularising neural networks
Generalisation for street fighters
February 12, 2017 — September 24, 2021
TBD: I have not examined this stuff for a long time and it is probably out of date.
How do we get generalisation from neural networks? As in all ML, it is probably about controlling overfitting to the training set by some kind of regularization.
1 Early stopping
e.g. (Prechelt 2012). Don’t keep training your model. This regularization method actually makes learning cheaper, because you simply do less of it. There is an interesting connection to NN at scale.
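Roughly, a patience-based version looks like this in PyTorch; the `train_epoch` and `evaluate` helpers here are placeholders for whatever training and validation loops you already have.

```python
import copy

def fit_with_early_stopping(model, optimizer, train_epoch, evaluate,
                            max_epochs=200, patience=10):
    """Stop when the validation loss has not improved for `patience` epochs."""
    best_loss = float("inf")
    best_state = copy.deepcopy(model.state_dict())
    stale_epochs = 0

    for epoch in range(max_epochs):
        train_epoch(model, optimizer)   # one pass over the training set
        val_loss = evaluate(model)      # loss on a held-out validation set

        if val_loss < best_loss:
            best_loss = val_loss
            best_state = copy.deepcopy(model.state_dict())
            stale_epochs = 0
        else:
            stale_epochs += 1
            if stale_epochs >= patience:
                break                   # no recent improvement; stop early

    model.load_state_dict(best_state)   # roll back to the best checkpoint
    return model
```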
2 Stochastic weight averaging
Izmailov et al. (2018). PyTorch’s introduction to Stochastic Weight Averaging has all the diagrams and references we could want. This also turns out to have an interesting connection to Bayesian posterior uncertainty.
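A minimal sketch of the recipe from the PyTorch docs, assuming `model`, `train_loader` and `loss_fn` already exist; the epoch counts and learning rates are placeholders, not recommendations.

```python
import torch
from torch.optim.swa_utils import AveragedModel, SWALR, update_bn

optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=300)
swa_model = AveragedModel(model)               # maintains a running average of the weights
swa_scheduler = SWALR(optimizer, swa_lr=0.05)  # constant learning rate for the averaging phase
swa_start = 160                                # epoch at which averaging begins

for epoch in range(300):
    for x, y in train_loader:
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()
        optimizer.step()
    if epoch >= swa_start:
        swa_model.update_parameters(model)     # fold the current weights into the average
        swa_scheduler.step()
    else:
        scheduler.step()

update_bn(train_loader, swa_model)  # recompute BatchNorm statistics for the averaged weights
```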
3 Noise layers
See NN ensembles.
3.1 Input perturbation
Parametric noise applied to the data.
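For example, a layer that adds Gaussian noise to its input during training only; the noise scale `sigma` is a hyperparameter to tune, and the layer sizes below are arbitrary.

```python
import torch
from torch import nn

class GaussianInputNoise(nn.Module):
    """Add zero-mean Gaussian noise while training; pass through unchanged at eval time."""

    def __init__(self, sigma=0.1):
        super().__init__()
        self.sigma = sigma

    def forward(self, x):
        if self.training and self.sigma > 0:
            x = x + self.sigma * torch.randn_like(x)
        return x

# Prepend it to an otherwise ordinary network.
net = nn.Sequential(
    GaussianInputNoise(sigma=0.1),
    nn.Linear(16, 32), nn.ReLU(),
    nn.Linear(32, 1),
)
```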
4 Weight penalties
\(L_1\), \(L_2\), dropout… These penalties are usually applied to the weights, but rarely to the actual neurons (activations).
See Compressing neural networks for that latter use.
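In PyTorch, an \(L_2\) penalty usually arrives via the optimizer’s `weight_decay` argument, while an \(L_1\) penalty is added to the loss by hand. A minimal sketch; the coefficients here are placeholder values.

```python
import torch
from torch import nn

model = nn.Linear(16, 1)

# L2 penalty: most optimizers implement it as `weight_decay`.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2, weight_decay=1e-4)

# L1 penalty: add it to the loss explicitly.
def l1_penalty(model, lam=1e-4):
    return lam * sum(p.abs().sum() for p in model.parameters())

x, y = torch.randn(8, 16), torch.randn(8, 1)
loss = nn.functional.mse_loss(model(x), y) + l1_penalty(model)
loss.backward()
optimizer.step()
```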
This is attractive but has a potentially expensive hyperparameter to choose. Also, should we penalize each weight equally, or are there some expedient normalization schemes? For that, see the next section:
5 Normalization
Mario Lezcano, in the PyTorch tutorials, mentions:
Regularising deep-learning models is a surprisingly challenging task. Classical techniques such as penalty methods often fall short when applied to deep models due to the complexity of the function being optimized. This is particularly problematic when working with ill-conditioned models. Examples of these are RNNs trained on long sequences and GANs.

A number of techniques have been proposed in recent years to regularize these models and improve their convergence. On recurrent models, it has been proposed to control the singular values of the recurrent kernel for the RNN to be well-conditioned. This can be achieved, for example, by making the recurrent kernel orthogonal.

Another way to regularize recurrent models is via “weight normalization”. This approach proposes to decouple the learning of the parameters from the learning of their norms. To do so, the parameter is divided by its Frobenius norm and a separate parameter encoding its norm is learnt. A similar regularization was proposed for GANs under the name of “spectral normalization”. This method controls the Lipschitz constant of the network by dividing its parameters by their spectral norm, rather than their Frobenius norm.
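Recent PyTorch versions expose both of those constraints as one-line wrappers; a sketch, assuming PyTorch ≥ 1.9 for the `parametrizations.orthogonal` import, with arbitrary layer sizes.

```python
import torch
from torch import nn
from torch.nn.utils.parametrizations import orthogonal  # PyTorch >= 1.9
from torch.nn.utils import spectral_norm

# Keep the recurrent kernel of an RNN cell orthogonal (singular values pinned to 1).
rnn_cell = orthogonal(nn.RNNCell(32, 32), name="weight_hh")

# Divide a discriminator layer's weight by its spectral norm,
# as in spectral normalization for GANs.
disc_layer = spectral_norm(nn.Linear(128, 1))
```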
5.1 Weight Normalization
Pragmatically, controlling for variability in your data can be very hard in deep learning, so you might normalize it by the batch variance. Salimans and Kingma (2016) have a more satisfying approach to this:
We present weight normalization: a reparameterisation of the weight vectors in a neural network that decouples the length of those weight vectors from their direction. By reparameterising the weights in this way we improve the conditioning of the optimization problem and we speed up convergence of stochastic gradient descent. Our reparameterisation is inspired by batch normalization but does not introduce any dependencies between the examples in a minibatch. This means that our method can also be applied successfully to recurrent models such as LSTMs and to noise-sensitive applications such as deep reinforcement learning or generative models, for which batch normalization is less well suited. Although our method is much simpler, it still provides much of the speed-up of full batch normalization. In addition, the computational overhead of our method is lower, permitting more optimization steps to be taken in the same amount of time.
They provide an open implementation for Keras, TensorFlow and Lasagne.
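PyTorch also ships a version of this reparameterisation as `torch.nn.utils.weight_norm`; a minimal sketch, with an arbitrary layer shape.

```python
import torch
from torch import nn
from torch.nn.utils import weight_norm

# Reparameterize w = g * v / ||v||: the layer now learns a direction `weight_v`
# and a separate magnitude `weight_g` instead of a single `weight` tensor.
layer = weight_norm(nn.Linear(64, 32))
print([name for name, _ in layer.named_parameters()])  # ['bias', 'weight_g', 'weight_v']
```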
5.2 Adversarial training
See GANs for one type of this.