Gumbel (soft) max tricks

Concrete distribution, relaxed categorical, etc.

February 20, 2017 — April 1, 2022

classification
metrics
probability
statistics

The family of Gumbel tricks is useful for sampling from categorical distributions (and relatives that live on the simplex), and for learning models with categorical latent variables via reparameterisation.

1 Gumbel-max trick basics
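In brief: if \(g_{1},\dots,g_{k}\) are i.i.d. \(\operatorname{Gumbel}(0,1)\) variates, then \(\arg\max_{i}\left(\log \pi_{i}+g_{i}\right)\) is distributed \(\operatorname{Categorical}(\pi)\), so we can sample from a categorical by perturbing log-probabilities with Gumbel noise and taking a maximum. A minimal NumPy sketch (the function name is mine, not from any particular library):

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_max_sample(log_probs):
    """One categorical draw via the Gumbel-max trick."""
    u = rng.uniform(size=log_probs.shape)
    g = -np.log(-np.log(u))  # Gumbel(0, 1) noise via the inverse CDF
    return int(np.argmax(log_probs + g))

# Sanity check: empirical frequencies should approach pi.
pi = np.array([0.5, 0.3, 0.2])
draws = [gumbel_max_sample(np.log(pi)) for _ in range(10_000)]
print(np.bincount(draws) / len(draws))  # roughly [0.5, 0.3, 0.2]
```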

2 Softmax relaxation

A.k.a. relaxed Bernoulli, relaxed categorical.

One of the co-inventors, Eric Jang, wrote a tutorial Categorical Variational Autoencoders using Gumbel-Softmax:

The main contribution of this work is a “reparameterization trick” for the categorical distribution. Well, not quite; it’s actually a re-parameterization trick for a distribution that we can smoothly deform into the categorical distribution. We use the Gumbel-Max trick, which provides an efficient way to draw samples \(z\) from the Categorical distribution with class probabilities \(\pi_{i}\):

\[
z=\operatorname{one\_hot}\left(\underset{i}{\arg\max}\left[g_{i}+\log \pi_{i}\right]\right)
\]

where \(g_{1},\dots,g_{k}\) are i.i.d. samples drawn from \(\operatorname{Gumbel}(0,1)\). argmax is not differentiable, so we simply use the softmax function as a continuous approximation of argmax:

\[
y_{i}=\frac{\exp\left(\left(\log \pi_{i}+g_{i}\right)/\tau\right)}{\sum_{j=1}^{k} \exp\left(\left(\log \pi_{j}+g_{j}\right)/\tau\right)} \quad \text{for } i=1,\ldots,k.
\]

Hence, we call this the “Gumbel-Softmax distribution”. \(\tau\) is a temperature parameter that allows us to control how closely samples from the Gumbel-Softmax distribution approximate those from the categorical distribution. As \(\tau \rightarrow 0\), the softmax becomes an argmax and the Gumbel-Softmax distribution becomes the categorical distribution. During training, we let \(\tau>0\) to allow gradients past the sample, then gradually anneal the temperature \(\tau\) (but not completely to 0, as the gradients would blow up).
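To make the temperature behaviour concrete, here is a minimal NumPy sketch of a single relaxed sample \(y\) at temperature \(\tau\) (again, the function name is mine):

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_softmax_sample(log_probs, tau):
    """One relaxed-categorical sample on the simplex at temperature tau."""
    u = rng.uniform(size=log_probs.shape)
    g = -np.log(-np.log(u))        # Gumbel(0, 1) noise
    logits = (log_probs + g) / tau
    logits -= logits.max()         # stabilise the exponentials
    y = np.exp(logits)
    return y / y.sum()

pi = np.array([0.5, 0.3, 0.2])
for tau in (10.0, 1.0, 0.1):
    print(tau, gumbel_softmax_sample(np.log(pi), tau))
# Large tau: y is nearly uniform; small tau: y is nearly one-hot.
```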

Emma Benjaminson’s The Gumbel-Softmax Distribution works through it in small pedagogic steps.

3 Straight-through Gumbel

TBC. The gist: in the forward pass take the hard one-hot \(\arg\max\) of the relaxed sample, but in the backward pass pretend we used the soft sample \(y\), so gradients still flow (Jang, Gu, and Poole 2017).
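A minimal PyTorch sketch of that recipe (the helper name is mine; `torch.nn.functional.gumbel_softmax(..., hard=True)` provides a built-in version):

```python
import torch
import torch.nn.functional as F

def st_gumbel_softmax(logits, tau=1.0):
    """Straight-through Gumbel-softmax: one-hot forward, soft backward."""
    g = -torch.log(-torch.log(torch.rand_like(logits)))  # Gumbel(0, 1) noise
    y = F.softmax((logits + g) / tau, dim=-1)            # soft sample
    y_hard = F.one_hot(y.argmax(dim=-1), logits.shape[-1]).to(y.dtype)
    # Forward value is y_hard; the gradient is that of y (straight-through).
    return y_hard - y.detach() + y

logits = torch.randn(3, requires_grad=True)
y = st_gumbel_softmax(logits, tau=0.5)
(y * torch.tensor([1.0, 2.0, 3.0])).sum().backward()
print(y, logits.grad)  # y is one-hot, yet logits.grad is nonzero
```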

4 References

Huijben, Kool, Paulus, et al. 2022. “A Review of the Gumbel-Max Trick and Its Extensions for Discrete Stochasticity in Machine Learning.” arXiv:2110.01515 [cs, stat].
Jang, Gu, and Poole. 2017. “Categorical Reparameterization with Gumbel-Softmax.” arXiv:1611.01144 [cs, stat].
Maddison, Mnih, and Teh. 2017. “The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables.”
Papandreou and Yuille. 2011. “Perturb-and-MAP Random Fields: Using Discrete Optimization to Learn and Sample from Energy Models.” In 2011 International Conference on Computer Vision.
Potapczynski, Loaiza-Ganem, and Cunningham. 2020. “Invertible Gaussian Reparameterization: Revisiting the Gumbel-Softmax.” In Advances in Neural Information Processing Systems.
Wang and Yin. 2020. “Relaxed Multivariate Bernoulli Distribution and Its Applications to Deep Generative Models.” In Proceedings of the 36th Conference on Uncertainty in Artificial Intelligence (UAI).