The tempered posterior
via the principle of maximum entropy
Most of the results in singular learning theory are developed in the context of a generalised form of Bayesian inference, in which the posterior distribution has an additional (inverse) temperature parameter. In this note, I introduce this so-called “tempered posterior” and contrast it with the usual Bayesian posterior. I also discuss various motivations for departing from the Bayesian suggestion, including a derivation of the tempered posterior using the principle of maximum entropy.
Thanks to Edmund Lau, Daniel Murfet, and Susan Wei for helpful conversations.
Contents:
- Inference, two ways
- Interpreting inverse temperature
- Motivating the tempered posterior
- Temperature as a constraint
- Conclusion
§Inference, two ways
The following describes an inference problem.
We begin with a family of statistical models indexed by a parameter space $W$, with distributions $p(x \mid w)$ for $w \in W$. Suppose one model, $w_0 \in W$, is specially designated as a "true model," but we don't know which one. We are given some prior belief distribution, $\varphi(w)$, representing how much credence we should initially have in each model in the family being the true model.
Suppose further we are given a data set, $D_n = (x_1, \ldots, x_n)$, where each sample is drawn independently from the true distribution $q(x) = p(x \mid w_0)$. How should we update our prior belief distribution, $\varphi(w)$, to form a posterior, $p(w \mid D_n)$, so as to incorporate the knowledge we have gained about $w_0$ by seeing these samples from $q$?
Let’s discuss two answers.
§§The Bayesian posterior
The Bayesian posterior is one classical answer to the question of how to update our beliefs in the face of new samples. First, treat the true model as a latent random variable distributed according to the prior, $w \sim \varphi$. The true model is drawn and fixed before any samples are drawn; each sample is then drawn according to the conditional probability distribution given by that model, $x_i \sim p(x \mid w)$.
We can then use as our posterior the converse conditional probability distribution $p(w \mid D_n)$. The laws of probability, in particular Bayes' rule, tell us how to define this conditional probability in terms of the other values we know:
$$p(w \mid D_n) = \frac{\varphi(w) \prod_{i=1}^{n} p(x_i \mid w)}{\int_W \varphi(w') \prod_{i=1}^{n} p(x_i \mid w') \, dw'}.$$
The resulting posterior is called the Bayesian posterior.
§§The general/tempered posterior
In singular learning theory, we often have cause to consider instead the so-called general posterior at inverse temperature $\beta > 0$, also called the tempered posterior, which is given by a different update rule:
$$p_\beta(w \mid D_n) = \frac{\varphi(w) \prod_{i=1}^{n} p(x_i \mid w)^{\beta}}{Z_n(\beta)}, \qquad Z_n(\beta) = \int_W \varphi(w') \prod_{i=1}^{n} p(x_i \mid w')^{\beta} \, dw'.$$
The tempered posterior differs from the Bayesian posterior by the inclusion of the inverse temperature parameter $\beta$.
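To make the update rule concrete, here is a minimal numerical sketch on a toy problem — a grid of Bernoulli (coin-bias) models with a uniform prior. The grid, prior, and data are illustrative choices, not from the note:

```python
import numpy as np

# Toy model family: Bernoulli models indexed by a grid of biases w,
# with a uniform prior and a small i.i.d. data set of coin flips.
ws = np.linspace(0.1, 0.9, 9)             # parameter grid for the family
prior = np.full_like(ws, 1.0 / len(ws))   # uniform prior phi(w)
data = np.array([1, 1, 0, 1])             # observed samples

def tempered_posterior(beta):
    """Tempered posterior p_beta(w | D_n) on the discrete grid."""
    # log L(w) = sum_i log p(x_i | w) for each grid point w
    log_lik = (data[:, None] * np.log(ws)
               + (1 - data[:, None]) * np.log(1 - ws)).sum(axis=0)
    unnorm = prior * np.exp(beta * log_lik)   # phi(w) * L(w)^beta
    return unnorm / unnorm.sum()              # divide by Z_n(beta)

bayes = tempered_posterior(1.0)   # beta = 1 recovers the Bayesian posterior
```

At `beta = 1.0` this reduces to the ordinary Bayesian update, and at `beta = 0.0` it returns the prior unchanged.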
In the remainder of this note, we'll explore the interpretation of $\beta$ and different motivations and derivations of this update rule.
§Interpreting inverse temperature
For different values of $\beta$, we recover some interesting forms of inference, which reveals the role of the inverse temperature parameter as controlling the "strength" of the update, in a similar way to the number of samples.
Of course, at inverse temperature $\beta = 1$, the update rule reduces to Bayes' rule, and we recover Bayesian inference. But there are also other interpretable examples.
For example, at inverse temperature $\beta = 0$ (infinite "temperature"), the tempered posterior reduces to the prior:
$$p_0(w \mid D_n) = \frac{\varphi(w) \prod_{i=1}^{n} p(x_i \mid w)^{0}}{\int_W \varphi(w') \prod_{i=1}^{n} p(x_i \mid w')^{0} \, dw'} = \frac{\varphi(w)}{\int_W \varphi(w') \, dw'} = \varphi(w).$$
More generally, if $\beta = k$ for some non-negative integer $k$, $p_k(w \mid D_n)$ is the Bayesian posterior we would get if each data point had been independently sampled $k$ times, since the product in the likelihood can be rearranged as follows:
$$\prod_{i=1}^{n} p(x_i \mid w)^{k} = \prod_{j=1}^{k} \prod_{i=1}^{n} p(x_i \mid w).$$
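This rearrangement can be checked numerically: a single tempered update with integer $\beta = k$ coincides with a Bayesian ($\beta = 1$) update on $k$ copies of the data set. The Bernoulli grid below is an illustrative toy family, not from the note:

```python
import numpy as np

# Toy Bernoulli family on a parameter grid with a uniform prior.
ws = np.linspace(0.1, 0.9, 9)
prior = np.full_like(ws, 1.0 / len(ws))

def posterior(data, beta):
    """Tempered posterior over the grid for a given data set."""
    log_lik = (data[:, None] * np.log(ws)
               + (1 - data[:, None]) * np.log(1 - ws)).sum(axis=0)
    unnorm = prior * np.exp(beta * log_lik)
    return unnorm / unnorm.sum()

data = np.array([1, 0, 1])
k = 3
tempered = posterior(data, beta=float(k))          # one update at beta = k
repeated = posterior(np.tile(data, k), beta=1.0)   # Bayes on k copies
```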
As inverse temperature $\beta$ is taken to infinity (approaching zero "temperature"), the tempered posterior concentrates around $\arg\max_{w \in W} \prod_{i=1}^{n} p(x_i \mid w)$ (that is, maximum likelihood models): let $L(w) = \prod_{i=1}^{n} p(x_i \mid w)$ and $L^\star = \sup_{w \in W} L(w)$. Observe:
$$p_\beta(w \mid D_n) = \frac{\varphi(w) \left( L(w)/L^\star \right)^{\beta}}{\int_W \varphi(w') \left( L(w')/L^\star \right)^{\beta} \, dw'}.$$
If $L(w) < L^\star$, then $\left( L(w)/L^\star \right)^{\beta} \to 0$ as $\beta \to \infty$, and the numerator vanishes. Otherwise (that is, if $L(w) = L^\star$), $\left( L(w)/L^\star \right)^{\beta} = 1$, and the numerator goes to $\varphi(w)$. (The limit of the integral in the denominator depends on the number of likelihood maximisers and their local geometry.)
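The concentration behaviour is easy to observe numerically. In this sketch (an illustrative Bernoulli-grid setup, not from the note), the posterior mass on the likelihood-maximising grid point grows towards $1$ as $\beta$ increases:

```python
import numpy as np

# Toy Bernoulli family on a parameter grid with a uniform prior.
ws = np.linspace(0.1, 0.9, 9)
prior = np.full_like(ws, 1.0 / len(ws))
data = np.array([1, 1, 0, 1])

# log L(w) for each grid point
log_lik = (data[:, None] * np.log(ws)
           + (1 - data[:, None]) * np.log(1 - ws)).sum(axis=0)

def tempered_posterior(beta):
    # Dividing L(w) by L* (i.e. subtracting the max log likelihood)
    # cancels in the normalisation and avoids numerical underflow.
    unnorm = prior * np.exp(beta * (log_lik - log_lik.max()))
    return unnorm / unnorm.sum()

mle_index = int(np.argmax(log_lik))
mass_at_mle = [tempered_posterior(b)[mle_index] for b in (1.0, 10.0, 5000.0)]
```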
The above gives rise to the interpretation of the inverse temperature parameter as describing how dramatically to update from the prior based on the data, from "do not update at all" ($\beta = 0$) to "update so as to only believe in the maximum likelihood models" ($\beta \to \infty$).
§Motivating the tempered posterior
You may be wondering, why should we consider an update rule other than Bayesian updating? What use are tempered posteriors with inverse temperatures other than $1$? Doesn't this fly in the face of all of the philosophical arguments in favour of using the laws of probability to govern our beliefs?!
Let me discuss a few motivations for why you might want to include an inverse temperature parameter in your inference (and set it to something other than $1$ in some cases).
§§Temperature and model misspecification
First, a pragmatic justification from the perspective of a practitioner. Bear in mind that the aforementioned philosophical arguments in favour of Bayesian inference involve assumptions. For example, they assume that the statement of the inference problem above is an accurate description of the situation we face. What if, in practice, our samples are not drawn i.i.d. from $q$?
The samples could be drawn from $q$, but non-independently. This would mean we do not have a full $n$ samples' worth of information about the true model in our data set of purported size $n$. In the limit, if we had a single sample repeated $n$ times, the appropriate Bayesian update to make is a tempered update with $\beta = 1/n$.
The samples could be drawn independently, but not purely from $q$. For example, they could have noise from an unrelated distribution mixed in, again leading to a dilution of the information about the true model in our sample. In the limit, if we see a sample drawn from a completely unrelated generative process, we should not update our beliefs about $w_0$ at all; that is, we should perform a tempered update with $\beta = 0$.
We see that in practical cases falling outside of the initial assumptions of our inference problem setting, the appropriate (even the Bayesian) thing to do might be to perform a “non-Bayesian,” fractional update on the given data.
If you have a real-world inference problem, perhaps you can even think of $\beta$ as a tunable hyper-parameter, which you can set based on what seems to give good performance or efficiency in your specific class of problems.
§§Temperature as a continuous variable
Second, a pragmatic justification from the perspective of a theoretician. Generalising the posterior with a new continuous temperature parameter can help to enable new kinds of mathematical analysis.
For example, the number of samples $n$ is a discrete variable, so you can't differentiate with respect to $n$. However, $\beta$ is continuous, and plays a similar role to $n$ (as discussed above). Therefore, you can achieve a similar effect to differentiating by $n$ by instead differentiating by $\beta$.
More generally, mathematicians sometimes find that "generalising" a mathematical object leads to a clearer understanding of the original object, within the broader, generalised context.
§§A connection to statistical mechanics
A similar (inverse) temperature parameter arises naturally in the context of statistical mechanics.
There, one studies an object similar to the tempered posterior, called the Boltzmann distribution (or Gibbs distribution); let's denote it $p(s)$. The Boltzmann distribution describes the distribution of system microstates $s$ (possible configurations) in terms of their energy levels (given by a function $E(s)$). In the discrete case, here is the formula:
$$p(s) = \frac{\exp\left(-\frac{E(s)}{k_B T}\right)}{\sum_{s'} \exp\left(-\frac{E(s')}{k_B T}\right)}.$$
Here, $T$ is the system temperature ($1/T$ is the inverse temperature), and $k_B$ is the Boltzmann constant (which allows for a conversion between units). (For the machine-learning minded, the Boltzmann distribution is a softmax over negative microstate energies scaled by temperature and the Boltzmann constant.)
We can compare this to the tempered posterior equation, which is, again:
$$p_\beta(w \mid D_n) = \frac{\varphi(w) \prod_{i=1}^{n} p(x_i \mid w)^{\beta}}{Z_n(\beta)}.$$
To connect these two distributions, first define an energy function as follows:
$$E_n(w) = -\sum_{i=1}^{n} \log p(x_i \mid w) - \frac{1}{\beta} \log \varphi(w).$$
The first term is the negative log likelihood of the data set. The second is a background potential given by the negative log density of the prior, scaled by temperature $1/\beta$. This energy function is designed such that $\varphi(w) \prod_{i=1}^{n} p(x_i \mid w)^{\beta} = \exp(-\beta E_n(w))$, and so we can rewrite the tempered posterior in terms of energy as
$$p_\beta(w \mid D_n) = \frac{\exp(-\beta E_n(w))}{\int_W \exp(-\beta E_n(w')) \, dw'}.$$
This closely matches the form of the Boltzmann distribution $p(s)$, revealing the motivation for calling $1/\beta$ a temperature parameter.
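The rewriting in terms of energy can be verified numerically: with $E_n(w) = -\sum_i \log p(x_i \mid w) - \tfrac{1}{\beta} \log \varphi(w)$, the Boltzmann-style weights $\exp(-\beta E_n(w))$ agree with the unnormalised tempered posterior $\varphi(w) L(w)^\beta$. The Bernoulli grid is an illustrative toy family, not from the note:

```python
import numpy as np

# Toy Bernoulli family on a parameter grid with a uniform prior.
ws = np.linspace(0.1, 0.9, 9)
prior = np.full_like(ws, 1.0 / len(ws))
data = np.array([1, 0, 1, 1])
beta = 0.7   # an arbitrary inverse temperature

log_lik = (data[:, None] * np.log(ws)
           + (1 - data[:, None]) * np.log(1 - ws)).sum(axis=0)

# Energy: negative log likelihood plus the prior potential scaled by 1/beta.
energy = -log_lik - np.log(prior) / beta
boltzmann_form = np.exp(-beta * energy)        # exp(-beta * E_n(w))
direct_form = prior * np.exp(beta * log_lik)   # phi(w) * L(w)^beta
```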
(Note that $k_B$ is missing from this story. In the statistical mechanics case it only plays the role of converting between the different units used for temperature and energy, whereas in the Bayesian case our quantities are dimensionless.)
§Temperature as a constraint
The above connection between statistical mechanics and inference is a perfect segue to the following, final attempt to motivate the tempered posterior. A similar analogy between statistical mechanics and Shannon's information theory led Jaynes to develop the principle of maximum entropy, which suggests choosing a posterior probability distribution by maximising information entropy subject to a constraint that the distribution is a good fit for the observed data.
In this section, we will derive the family of tempered posterior distributions with varying inverse temperature $\beta$ as the solutions suggested by the principle of maximum entropy with varying strengths of the requirement that the distribution is a good fit for the data.
§§The principle of maximum entropy
The principle of maximum entropy suggests that in the context of an inference problem, the distribution we should use to represent our updated beliefs is the one that satisfies the following:
1. The chosen distribution should be consistent with the data. Formally, let's say we want whatever distribution we select to be one under which the particular data we saw is highly plausible, in the sense that, in expectation over the distribution, the log likelihood of the data is sufficiently high.
2. The chosen distribution should otherwise be maximally non-committal as to which of the models in the model class explains the data. Formally, let's say that among all distributions that attribute a high expected log likelihood to the data, the chosen distribution should be the one with maximal entropy.
Like the principle of Bayesian inference, the principle of maximum entropy is an attempt at making a defensible choice for how to solve an inference problem. In particular, this principle can be defended on the grounds that maximising entropy amounts to making no unnecessary assumptions beyond what is required to be consistent with the data.
§§The constrained optimisation problem
Formally, we can view condition (2) as an optimisation objective and condition (1) as a constraint. Therefore, we can formalise the choice of posterior as a constrained optimisation problem over the space of probability distributions on $W$. Actually, it's simpler to optimise over the space of all functions $p : W \to \mathbb{R}$, and enforce normalisation via another constraint, as follows:
Choose $p$ so as to maximise the quantity
$$H(p) = -\int_W p(w) \log \frac{p(w)}{\varphi(w)} \, dw$$
subject to the constraints
$$\int_W p(w) \sum_{i=1}^{n} \log p(x_i \mid w) \, dw = C \qquad \text{and} \qquad \int_W p(w) \, dw = 1$$
for some hyper-parameter $C$.
Let’s break down each element of this problem statement:
First, the objective. If $p$ is a probability distribution, then $H(p)$ is the continuous entropy of $p$ with the prior distribution $\varphi$ as the invariant measure. This is a generalisation to the continuous case of the more familiar formula for the entropy of a discrete distribution,
$$H = -\sum_i p_i \log p_i.$$
Thus, this objective captures our goal of maximising the distribution's information entropy.
By maximising $H(p)$, we are effectively minimising the Kullback–Leibler divergence from the prior:
$$H(p) = -\int_W p(w) \log \frac{p(w)}{\varphi(w)} \, dw = -D_{\mathrm{KL}}(p \,\|\, \varphi).$$
We can therefore view the principle of maximum entropy as equivalent to finding the distribution that is consistent with the data while remaining as close as possible to the prior, in this sense "updating as little as is required to account for the data."
Next, the likelihood constraint. Observe that the left-hand side of this constraint is the expected log likelihood of the data under $p$:
$$\int_W p(w) \sum_{i=1}^{n} \log p(x_i \mid w) \, dw = \mathbb{E}_{w \sim p}\left[ \log \prod_{i=1}^{n} p(x_i \mid w) \right].$$
That is, this term measures how likely we expect this data set to be under a given distribution $p$.
By setting the hyper-parameter $C$, we calibrate how likely we want to expect the data to have been after updating. If we set a very high $C$, we restrict to distributions that concentrate most of their mass on models that find the data very likely. If we set a low $C$, we'll consider distributions that are less concentrated. As we will show below, the value of $C$ is going to end up determining the inverse temperature of the resulting tempered posterior.
Finally, the normalisation constraint. This is a simple constraint that ensures that is not just any function, but one that satisfies the normalisation requirement of a probability distribution.
To ensure that $p$ is a valid probability distribution, we should also enforce that $p(w)$ is non-negative. However, this turns out to be a non-binding constraint for this optimisation problem. We can verify later that the solution we find given the other constraints already turns out to be non-negative.
Now that we have formulated the problem, let’s solve it!
§§Solving for the maximum entropy posterior
The first step for solving this constrained optimisation problem is to convert it to an unconstrained optimisation problem. Formulate a Lagrangian using Lagrange multipliers $\beta$ and $\lambda$ to transform the constraints:
$$\mathcal{L}(p, \beta, \lambda) = -\int_W p(w) \log \frac{p(w)}{\varphi(w)} \, dw + \beta \left( \int_W p(w) \sum_{i=1}^{n} \log p(x_i \mid w) \, dw - C \right) + \lambda \left( \int_W p(w) \, dw - 1 \right).$$
Simplify the Lagrangian as follows:
$$\mathcal{L}(p, \beta, \lambda) = \int_W p(w) \left( -\log \frac{p(w)}{\varphi(w)} + \beta \sum_{i=1}^{n} \log p(x_i \mid w) + \lambda \right) dw - \beta C - \lambda.$$
To find the optimal $p$, first take the functional derivative. For $w \in W$,
$$\frac{\delta \mathcal{L}}{\delta p(w)} = -\log \frac{p(w)}{\varphi(w)} - 1 + \beta \sum_{i=1}^{n} \log p(x_i \mid w) + \lambda.$$
When $p$ solves our constrained optimisation problem we must have that $\frac{\delta \mathcal{L}}{\delta p(w)} = 0$, so we can solve for $p(w)$ as follows:
$$p(w) = \varphi(w) \prod_{i=1}^{n} p(x_i \mid w)^{\beta} \, e^{\lambda - 1}.$$
It is clear that this $p(w)$ is non-negative, since $\varphi(w)$, the likelihoods $p(x_i \mid w)$, and $e^{\lambda - 1}$ are. To make sure that $p$ is a distribution, we apply our normalisation constraint $\int_W p(w) \, dw = 1$. Solving for $e^{\lambda - 1}$ leads predictably to
$$e^{\lambda - 1} = \frac{1}{\int_W \varphi(w') \prod_{i=1}^{n} p(x_i \mid w')^{\beta} \, dw'} = \frac{1}{Z_n(\beta)}.$$
Thus we have $p(w) = p_\beta(w \mid D_n)$.
As for the value of $\beta$, this will be determined by our hyper-parameter $C$ through the likelihood constraint
$$\int_W p_\beta(w \mid D_n) \sum_{i=1}^{n} \log p(x_i \mid w) \, dw = C.$$
The exact relationship between $\beta$ and $C$ depends on the details of the inference problem, and may in general be hard to state explicitly. However, we can already see from the final form of $p(w)$ that $\beta$ plays the role of inverse temperature! Therefore, we should expect $\beta$ to range from $0$ towards $\infty$ as $C$ varies from the expected log likelihood under the prior to the maximum log likelihood realisable in this model class.
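On a discrete toy problem we can trace this relationship between $\beta$ and $C$ directly: the expected log likelihood under $p_\beta$ increases with $\beta$, so the $\beta$ meeting a given target $C$ can be found by bisection. The Bernoulli grid below is an illustrative choice, not from the note:

```python
import numpy as np

# Toy Bernoulli family on a parameter grid with a uniform prior.
ws = np.linspace(0.1, 0.9, 9)
prior = np.full_like(ws, 1.0 / len(ws))
data = np.array([1, 1, 0, 1])
log_lik = (data[:, None] * np.log(ws)
           + (1 - data[:, None]) * np.log(1 - ws)).sum(axis=0)

def tempered_posterior(beta):
    unnorm = prior * np.exp(beta * (log_lik - log_lik.max()))
    return unnorm / unnorm.sum()

def expected_log_lik(beta):
    """Expected log likelihood of the data under p_beta (the constraint)."""
    return float(tempered_posterior(beta) @ log_lik)

# Target C strictly between the prior's value (beta = 0) and the maximum.
C = 0.5 * (expected_log_lik(0.0) + log_lik.max())

lo, hi = 0.0, 100.0
for _ in range(60):   # bisection for the beta that meets the constraint
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if expected_log_lik(mid) < C else (lo, mid)
beta_star = 0.5 * (lo + hi)
```

Raising `C` towards the maximum log likelihood pushes `beta_star` towards infinity; lowering it towards the prior's expected log likelihood pushes `beta_star` towards zero.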
§Conclusion
We have shown that the tempered posterior arises as the posterior belief distribution chosen according to the principle of maximum entropy. The inverse temperature parameter $\beta$ specifies exactly how likely we want to have expected our data to be after our update. Alongside the other motivations we have considered, I hope this deepens our understanding of the tempered posterior and the inverse temperature parameter.