Placeholder for a concept that has cropped up a few times in my conversations of late — informally, annealing methods are ones in which we think about changing the “temperature” of the system whose energy is given by a certain (log-)probability density, which ends up being the same thing as raising the density to a power, or multiplying the log density by something. Other related concepts include cooling densities, tempering, Platt scaling, fractional densities, cold posteriors and some other stuff.
Call the tempered density $p_\beta(x) \propto p(x)^{\beta}$, where $\beta > 0$ plays the role of an inverse temperature: $\beta = 1$ recovers the original density, $\beta > 1$ cools (sharpens) it, and $\beta < 1$ warms (flattens) it.
I am not sure of the origin point of the annealing concept, but Gelfand and Mitter (1990) is an early introduction with practical application.
1 As data weighting
If we do not wish to accumulate “all the information” in a data point, but would rather prefer to “weight” it somehow (“Let’s only be 60% as influenced by this datum as we might naively be”) then one natural interpretation of the weighting would be as a tempering, i.e. using the fractional likelihood $p(x \mid \theta)^{0.6}$ in place of $p(x \mid \theta)$ in the Bayesian update, so that the datum contributes only 60% of its log-likelihood.
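A minimal sketch of this, assuming a Beta–Bernoulli conjugate model (the function name and parameters here are my own illustration, not from any particular library): weighting a Bernoulli observation by $w$ means raising its likelihood to the power $w$, which is still conjugate to the Beta prior, so the update simply adds $w$ to the relevant count instead of $1$.

```python
# Hypothetical sketch: data weighting as likelihood tempering in a
# Beta-Bernoulli model. The weighted likelihood is
#   p(x | theta)^w = theta^(w*x) * (1 - theta)^(w*(1 - x)),
# which stays conjugate to a Beta(a, b) prior.

def weighted_beta_bernoulli_update(a, b, x, w=1.0):
    """Posterior Beta parameters after observing x in {0, 1} with weight w."""
    return a + w * x, b + w * (1 - x)

# Full-weight update after observing x = 1 adds a whole count:
print(weighted_beta_bernoulli_update(1.0, 1.0, 1))         # (2.0, 1.0)
# Tempered update absorbs only 60% of the information:
print(weighted_beta_bernoulli_update(1.0, 1.0, 1, w=0.6))  # (1.6, 1.0)
```

Setting $w = 0$ ignores the datum entirely, and $w > 1$ overcounts it, which connects to the “cold posterior” discussion below.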
2 In Gibbs posteriors
Tempering seems to arise naturally in the Gibbs posterior framework, where the negative log-likelihood is replaced by a general loss $\ell(\theta)$ scaled by a learning rate $\eta$, giving $\pi_\eta(\theta) \propto \exp(-\eta\,\ell(\theta))\,\pi_0(\theta)$; the learning rate $\eta$ plays exactly the role of an inverse temperature.
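A small grid-based sketch of that construction, under my own choice of loss and prior (nothing here is from a specific Gibbs-posterior library): larger $\eta$ (lower temperature) concentrates the posterior around the loss minimiser, smaller $\eta$ flattens it toward the prior.

```python
import numpy as np

# Hypothetical sketch of a Gibbs posterior evaluated on a parameter grid:
#   pi_eta(theta) ∝ exp(-eta * loss(theta)) * prior(theta),
# where the learning rate eta acts as an inverse temperature. With
# loss = negative log-likelihood and eta = 1 this is the ordinary posterior.

def gibbs_posterior(grid, loss, prior, eta=1.0):
    log_post = -eta * loss(grid) + np.log(prior(grid))
    log_post -= log_post.max()        # stabilise before exponentiating
    w = np.exp(log_post)
    return w / w.sum()                # normalise over the grid

def sq_loss(t):
    return (t - 1.0) ** 2             # illustrative data term: squared error

def flat_prior(t):
    return np.ones_like(t)            # flat prior on the grid

theta = np.linspace(-3.0, 3.0, 601)
sharp = gibbs_posterior(theta, sq_loss, flat_prior, eta=10.0)
broad = gibbs_posterior(theta, sq_loss, flat_prior, eta=0.1)
print(sharp.max() > broad.max())      # True: larger eta concentrates mass
```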
3 “Cold” posteriors
If we temper not just the likelihood but the entire posterior, raising it to a power $1/T$ with temperature $T < 1$, we obtain a so-called cold posterior.
Wenzel et al. (2020) argue, in the context of Bayesian NNs:
…[W]e demonstrate that predictive performance is improved significantly through the use of a “cold posterior” that overcounts evidence. Such cold posteriors sharply deviate from the Bayesian paradigm but are commonly used as heuristic in Bayesian deep learning papers. We put forward several hypotheses that could explain cold posteriors and evaluate the hypotheses through experiments.
Much debate was sparked. See Aitchison (2020), Adlam, Snoek, and Smith (2020), Noci et al. (2021), Izmailov et al. (2021). They also draw a parallel to Masegosa (2020) which looks somewhat interesting.
Aitchison (2020) introduces the machinery:
Tempered (e.g. Zhang et al. 2018) and cold (Wenzel et al. 2020) posteriors differ slightly in how they apply the temperature parameter. For cold posteriors, we scale the whole posterior,
$$p_{\text{cold}}(\theta \mid x) \propto \left( p(x \mid \theta)\, p(\theta) \right)^{1/T},$$
whereas tempering is a method typically applied in variational inference, and corresponds to scaling the likelihood but not the prior,
$$p_{\text{tempered}}(\theta \mid x) \propto p(x \mid \theta)^{1/\lambda}\, p(\theta).$$
While cold posteriors are typically used in SGLD, tempered posteriors are usually targeted by variational methods. In particular, variational methods apply temperature scaling to the KL-divergence between the approximate posterior and prior,
$$\mathcal{L} = \mathbb{E}_{q(\theta)}\left[ \log p(x \mid \theta) \right] - \lambda\, \mathrm{KL}\left( q(\theta) \,\middle\|\, p(\theta) \right).$$
Note that the only difference between cold and tempered posteriors is whether we scale the prior, and if we have Gaussian priors over the parameters (the usual case in Bayesian neural networks), this scaling can be absorbed into the prior variance, in which case $\sigma^2_{\text{cold}} = \sigma^2_{\text{tempered}} / \lambda$, so the tempered posteriors we discuss are equivalent to cold posteriors with rescaled prior variances.
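This equivalence can be checked by hand in one dimension with conjugate Gaussians (the function below is my own illustrative helper, not from Aitchison's code): a tempered posterior with prior variance $\sigma^2$ matches a cold posterior at temperature $T = \lambda$ with prior variance $\sigma^2/\lambda$.

```python
import numpy as np

# Hypothetical 1-d check: tempered posterior (likelihood to the power
# 1/lam, prior untouched) versus cold posterior (likelihood AND prior to
# the power 1/lam) with the prior variance rescaled to s2_prior / lam.
# Both are Gaussian, so comparing means and variances suffices.

def gaussian_posterior(x, s2_lik, s2_prior, lik_pow=1.0, prior_pow=1.0):
    """Posterior mean and variance for a N(theta, s2_lik) likelihood and a
    N(0, s2_prior) prior, with each factor raised to the given power."""
    prec = lik_pow / s2_lik + prior_pow / s2_prior
    mean = (lik_pow * x / s2_lik) / prec
    return mean, 1.0 / prec

x, s2_lik, s2_prior, lam = 2.0, 1.0, 4.0, 0.5

tempered = gaussian_posterior(x, s2_lik, s2_prior, lik_pow=1 / lam)
cold = gaussian_posterior(x, s2_lik, s2_prior / lam,
                          lik_pow=1 / lam, prior_pow=1 / lam)
print(np.allclose(tempered, cold))  # True: same posterior
```

The algebra behind the check: raising a zero-mean Gaussian prior to the power $1/\lambda$ multiplies its precision by $1/\lambda$, which is exactly undone by shrinking the variance to $\sigma^2/\lambda$ beforehand.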
4 Examples for particular likelihoods
4.1 Gaussian
For a multivariate Gaussian distribution in canonical (information) form, the density is expressed as
$$p(x) \propto \exp\left( \eta^\top x - \tfrac{1}{2} x^\top \Lambda x \right),$$
where $\Lambda$ is the precision matrix (the inverse of the covariance matrix $\Sigma$, i.e., $\Lambda = \Sigma^{-1}$), $\eta = \Lambda \mu$ is the information vector, and $\mu$ is the mean vector.
When we temper this density by raising it to a power $\beta > 0$, both canonical parameters are simply scaled: $\eta \mapsto \beta \eta$ and $\Lambda \mapsto \beta \Lambda$.
In the moments form, tempering a multivariate Gaussian distribution by a scalar $\beta$ gives
- Unchanged mean: $\mu_\beta = (\beta\Lambda)^{-1}(\beta\eta) = \Lambda^{-1}\eta = \mu$.
- Scaled covariance: $\Sigma_\beta = (\beta\Lambda)^{-1} = \Sigma / \beta$.