Variational Inference
The goal of variational inference is to approximate the conditional density of latent variables (I prefer the term hidden variables) given the observed variables. Instead of directly estimating this density, we would like to find its best approximation, the member of a family of candidate densities with the smallest KL divergence to it.
Let $x = x_{1:n}$ be a set of observed variables and $z = z_{1:m}$ be a set of hidden variables, with a joint density $p(z, x)$.
The inference problem is to compute the conditional density of the hidden variables given the observations, i.e. $p(z \mid x)$.
We can write the conditional density as

$$p(z \mid x) = \frac{p(z, x)}{p(x)}.$$
The denominator contains the marginal density of the observations, obtained by integrating the hidden variables out of the joint density,

$$p(x) = \int p(z, x) \, dz.$$

We also call $p(x)$ the evidence. In general, computing this integral is hard. We now introduce a practical example, the Gaussian mixture model.
Consider a mixture of $K$ unit-variance (variance equals 1) univariate (single variable) Gaussians. The mean of the $k$-th Gaussian is $\mu_k$, and we collect the means as $\mu = \{\mu_1, \dots, \mu_K\}$. Each mean parameter is sampled from a common prior distribution, which we assume to be $\mathcal{N}(0, \sigma^2)$. To generate an observation $x_i$ from the model, we first choose a cluster assignment $c_i$ (a one-hot indicator vector with a 1 at the assigned cluster's position) from a Categorical (uniform) distribution, which tells us which mixture component $x_i$ comes from. We then draw $x_i$ from that component, $\mathcal{N}(c_i^\top \mu, 1)$.
The full model is

$$\begin{aligned}
\mu_k &\sim \mathcal{N}(0, \sigma^2), && k = 1, \dots, K, \\
c_i &\sim \mathrm{Categorical}(1/K, \dots, 1/K), && i = 1, \dots, n, \\
x_i \mid c_i, \mu &\sim \mathcal{N}(c_i^\top \mu, 1), && i = 1, \dots, n.
\end{aligned}$$
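To make the generative process concrete, here is a minimal sketch in Python/NumPy that samples a dataset exactly as described above. The specific values of `K`, `n`, and `sigma`, and the one-hot encoding of `c`, are my own illustrative choices, not from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

K, n, sigma = 3, 500, 5.0          # number of components, observations, prior std

# mu_k ~ N(0, sigma^2) for each component
mu = rng.normal(0.0, sigma, size=K)

# c_i ~ Categorical(1/K, ..., 1/K), stored as one-hot indicator vectors
assignments = rng.integers(0, K, size=n)
c = np.eye(K)[assignments]

# x_i | c_i, mu ~ N(c_i^T mu, 1)
x = rng.normal(c @ mu, 1.0)

print(mu)       # true component means
print(x[:5])    # first few observations
```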
The joint density of the hidden variables $\{\mu, c\}$ and the observations $x$ is

$$p(\mu, c, x) = p(\mu) \prod_{i=1}^{n} p(c_i)\, p(x_i \mid c_i, \mu).$$
Given the observations $x = x_{1:n}$, our hidden variables are $\mu$ and $c = c_{1:n}$. Hence the evidence integral is

$$p(x) = \int p(\mu) \prod_{i=1}^{n} \sum_{c_i} p(c_i)\, p(x_i \mid c_i, \mu) \, d\mu.$$

Although each term inside the integral is easy to evaluate, the integral over the $K$-dimensional $\mu$ has no closed form, which is what makes the evidence hard to compute.
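For intuition, one crude way to approximate this integral is plain Monte Carlo: sample $\mu$ from its prior and average the resulting likelihoods. The sketch below (reusing `x`, `K`, and `sigma` from the sampling snippet above; the sample count `S` is arbitrary, and the estimator becomes very noisy as $n$ grows) is only meant to show what the integral is, not a practical algorithm.

```python
import numpy as np
from scipy.stats import norm

def log_evidence_mc(x, K, sigma, S=2_000, seed=1):
    """Crude Monte Carlo estimate of log p(x) for the unit-variance GMM:
    p(x) = E_{mu ~ N(0, sigma^2 I)} [ prod_i (1/K) sum_k N(x_i; mu_k, 1) ].
    """
    rng = np.random.default_rng(seed)
    mu_samples = rng.normal(0.0, sigma, size=(S, K))           # S draws of mu from the prior
    # log per-observation mixture density under each sampled mu: shape (S, n, K)
    log_comp = norm.logpdf(x[None, :, None], loc=mu_samples[:, None, :], scale=1.0)
    log_mix = np.logaddexp.reduce(log_comp, axis=-1) - np.log(K)   # (S, n)
    log_joint = log_mix.sum(axis=1)                                # log prod_i, per sample
    return np.logaddexp.reduce(log_joint) - np.log(S)              # log of the sample mean

print(log_evidence_mc(x, K=3, sigma=5.0))   # using x from the sampling sketch above
```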
In variational inference, we specify a family $\mathcal{Q}$ of densities $q(z)$ over the hidden variables. Our goal is to find the member with the smallest KL divergence to the exact posterior. Inference becomes an optimization problem:

$$q^*(z) = \arg\min_{q(z) \in \mathcal{Q}} \mathrm{KL}\big(q(z) \,\|\, p(z \mid x)\big).$$
$q^*(z)$ is the best approximation of $p(z \mid x)$ within the family. Based on the definition of KL divergence,

$$\mathrm{KL}\big(q(z) \,\|\, p(z \mid x)\big) = \mathbb{E}\left[\log q(z)\right] - \mathbb{E}\left[\log p(z \mid x)\right] = \mathbb{E}\left[\log q(z)\right] - \mathbb{E}\left[\log p(z, x)\right] + \log p(x).$$
Because all expectations are taken with respect to $q(z)$, the term $\log p(x)$ comes out as a constant. So this KL divergence requires computing the evidence $\log p(x)$ again, which is not tractable. Instead of computing the KL directly, we optimize an alternative objective that equals the negative KL up to that constant:

$$\mathrm{ELBO}(q) = \mathbb{E}\left[\log p(z, x)\right] - \mathbb{E}\left[\log q(z)\right].$$
This function is called the evidence lower bound (ELBO). It is clear that $\log p(x) = \mathrm{KL}\big(q(z) \,\|\, p(z \mid x)\big) + \mathrm{ELBO}(q) \geq \mathrm{ELBO}(q)$, since the KL divergence is non-negative. Since $\log p(x)$ is a constant with respect to $q$, maximizing the ELBO is equivalent to minimizing the KL.
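The identity $\log p(x) = \mathrm{KL} + \mathrm{ELBO}$ can be checked numerically in a model where everything is available in closed form. The sketch below assumes a toy conjugate model of my own choosing (prior $z \sim \mathcal{N}(0, 1)$, likelihood $x \mid z \sim \mathcal{N}(z, 1)$, variational family $q(z) = \mathcal{N}(m, s^2)$); the observed value and the $(m, s)$ pair are arbitrary.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Toy conjugate model: z ~ N(0, 1), x | z ~ N(z, 1). For an observed x,
# the posterior is N(x/2, 1/2) and the evidence is p(x) = N(x; 0, 2).
x_obs = 1.3
log_evidence = norm.logpdf(x_obs, loc=0.0, scale=np.sqrt(2.0))

# Variational density q(z) = N(m, s^2); deliberately not the exact posterior.
m, s = 0.4, 0.9

# Monte Carlo estimate of ELBO(q) = E_q[log p(z, x)] - E_q[log q(z)].
z = rng.normal(m, s, size=200_000)
elbo = np.mean(norm.logpdf(z, 0.0, 1.0)        # log p(z)
               + norm.logpdf(x_obs, z, 1.0)    # log p(x | z)
               - norm.logpdf(z, m, s))         # log q(z)

# Closed-form KL(q || p(z|x)) between two Gaussians.
post_m, post_s = x_obs / 2.0, np.sqrt(0.5)
kl = np.log(post_s / s) + (s**2 + (m - post_m)**2) / (2 * post_s**2) - 0.5

print(f"log p(x)  = {log_evidence:.4f}")
print(f"ELBO + KL = {elbo + kl:.4f}")   # matches log p(x) up to Monte Carlo noise
```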
We can rewrite the ELBO as a sum of the expected log-likelihood of the data and the negative KL divergence between $q(z)$ and the prior $p(z)$:

$$\mathrm{ELBO}(q) = \mathbb{E}\left[\log p(z)\right] + \mathbb{E}\left[\log p(x \mid z)\right] - \mathbb{E}\left[\log q(z)\right] = \mathbb{E}\left[\log p(x \mid z)\right] - \mathrm{KL}\big(q(z) \,\|\, p(z)\big).$$
The first term, $\mathbb{E}\left[\log p(x \mid z)\right]$, is the expected log-likelihood. It encourages the variational density over the hidden variables to explain the observed data.
The second term, $-\mathrm{KL}\big(q(z) \,\|\, p(z)\big)$, is the negative KL divergence between the variational density and the prior. It encourages the variational density to stay close to the prior. Thus the ELBO mirrors the usual balance between likelihood and prior.
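This balance can be seen numerically in the toy Gaussian model from the previous sketch (again, the particular `x_obs` and the candidate $(m, s)$ values are illustrative assumptions): the expected log-likelihood rewards explaining the data, the KL term penalizes drifting from the prior, and the exact posterior strikes the balance that maximizes the ELBO.

```python
import numpy as np

x_obs = 1.3   # same toy model as above: z ~ N(0, 1), x | z ~ N(z, 1)

def elbo_terms(m, s):
    """Closed-form terms of the ELBO for q(z) = N(m, s^2) in the toy model."""
    exp_loglik = -0.5 * np.log(2 * np.pi) - 0.5 * ((x_obs - m) ** 2 + s ** 2)
    kl_to_prior = -np.log(s) + (s ** 2 + m ** 2) / 2.0 - 0.5
    return exp_loglik, kl_to_prior, exp_loglik - kl_to_prior

for m, s in [(0.0, 1.0),               # q equal to the prior: zero KL term, poor data fit
             (1.3, 0.3),               # q hugging the data: better fit, larger KL penalty
             (0.65, np.sqrt(0.5))]:    # the exact posterior: maximizes the ELBO
    ll, kl, elbo = elbo_terms(m, s)
    print(f"m={m:.2f} s={s:.2f}  E[log p(x|z)]={ll:7.3f}  KL(q||prior)={kl:6.3f}  ELBO={elbo:7.3f}")
```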