The goal of variational inference is to approximate the conditional density of latent variables (I prefer the term hidden variables) given the observed variables. Instead of estimating this density directly, we look for its best approximation: the member of a family of candidate densities $\mathcal{L}$ with the smallest KL divergence to it.
Problem of approximate inference
Let $x = \{x_i\}_{i=1}^n$ be a set of observed variables and $z = \{z_i\}_{i=1}^n$ be a set of hidden variables, with joint density $p(x, z)$.
The inference problem is to compute the conditional density of the hidden variables given the observations, i.e. $p(z \mid x)$.
We can write the conditional density as
$$
p(z \mid x) = \frac{p(z, x)}{p(x)} = \frac{p(z, x)}{\int p(z, x)\, dz}
$$

The denominator contains the marginal density $p(x)$ of the observations, obtained by integrating the hidden variables out of the joint density. We also call $p(x)$ the evidence. In general, this integral is hard to compute. We now introduce a practical example, the Bayesian mixture of Gaussians.
Bayesian mixture of Gaussians
Consider a mixture of $K$ unit-variance (variance equals 1) univariate (single-variable) Gaussians. The mean of the $k$-th Gaussian is $\mu_k$, and we collect them as $\mu = \{\mu_1, \dots, \mu_K\}$. Each mean parameter is sampled from a common prior distribution $p(\mu_k)$, which we assume to be $N(0, \sigma^2)$. To generate an observation $x_i$ from the model, we first draw a cluster assignment $c_i = [0, \dots, 1, \dots, 0]$ (a one-hot indicator vector with a 1 at the chosen component) from a uniform Categorical distribution, which means that $x_i$ comes from that mixture component. We then draw $x_i$ from it: $x_i \sim N(c_i^\top \mu, 1)$.
The full model is
$$
\begin{aligned}
\mu_k &\sim N(0, \sigma^2), && k = 1, \dots, K \\
c_i &\sim \text{Categorical}\left(\tfrac{1}{K}, \dots, \tfrac{1}{K}\right), && i = 1, \dots, n \\
x_i \mid c_i, \mu &\sim N(c_i^\top \mu, 1), && i = 1, \dots, n
\end{aligned}
$$
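To make the generative process concrete, here is a minimal simulation sketch in NumPy. The choices of $K$, $n$, and $\sigma$ are purely illustrative, and the one-hot indicator $c_i$ is stored as an integer label for convenience.

```python
import numpy as np

rng = np.random.default_rng(0)

K, n, sigma = 3, 100, 5.0          # illustrative: 3 components, 100 points, prior std 5

# mu_k ~ N(0, sigma^2) for each of the K components
mu = rng.normal(loc=0.0, scale=sigma, size=K)

# c_i ~ Categorical(1/K, ..., 1/K); integer labels stand in for the one-hot vectors
c = rng.integers(low=0, high=K, size=n)

# x_i | c_i, mu ~ N(mu_{c_i}, 1), i.e. unit-variance noise around the assigned mean
x = rng.normal(loc=mu[c], scale=1.0, size=n)
```

Running this produces one draw of the hidden variables $(\mu, c)$ together with the observations $x$; variational inference works in the opposite direction, recovering a density over $(\mu, c)$ from $x$ alone.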
The joint density of the hidden and observed variables is

$$
p(\mu, c, x) = p(\mu)\, p(c \mid \mu)\, p(x \mid c, \mu) = p(\mu) \prod_{i=1}^{n} p(c_i)\, p(x_i \mid c_i, \mu)
$$

where we use the facts that the assignments are drawn independently of $\mu$ and of each other, and that each $x_i$ depends only on $\mu$ and its own assignment $c_i$. Given the observed $x$, our hidden variables are $z = \{\mu, c\}$. Hence the evidence integral is
$$
p(x) = \sum_{c} \int p(\mu) \prod_{i=1}^{n} p(x_i \mid c_i, \mu)\, p(c_i)\, d\mu = \sum_{c} p(c) \int p(\mu) \prod_{i=1}^{n} p(x_i \mid c_i, \mu)\, d\mu
$$

For any fixed assignment $c$ the inner integral over $\mu$ has a closed form, but the outer sum ranges over all $K^n$ possible assignments, so the evidence cannot be evaluated exactly for realistic $n$.
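To see this concretely, the brute-force sketch below computes $\log p(x)$ exactly for a handful of observations. It relies on the fact that, once $\mu_k$ is integrated out, the points assigned to component $k$ are jointly Gaussian with covariance $I + \sigma^2 \mathbf{1}\mathbf{1}^\top$; the function name and constants are illustrative, not from the original text.

```python
import itertools
import numpy as np
from scipy.stats import multivariate_normal

def log_evidence_brute_force(x, K, sigma):
    """Exact log p(x) by summing over all K**n cluster assignments."""
    n = len(x)
    log_terms = []
    for assignment in itertools.product(range(K), repeat=n):   # K**n assignments
        c = np.array(assignment)
        log_p = -n * np.log(K)                                  # p(c) = (1/K)^n
        for k in range(K):
            x_k = x[c == k]
            if x_k.size:                                        # empty components contribute a factor of 1
                cov = np.eye(x_k.size) + sigma**2 * np.ones((x_k.size, x_k.size))
                log_p += multivariate_normal(mean=np.zeros(x_k.size), cov=cov).logpdf(x_k)
        log_terms.append(log_p)
    return np.logaddexp.reduce(log_terms)                       # log of the sum of K**n terms

rng = np.random.default_rng(0)
x_small = rng.normal(size=6)                                    # already 3**6 = 729 terms
print(log_evidence_brute_force(x_small, K=3, sigma=5.0))
```

With $K = 3$, six observations need 729 terms, while thirty observations would already need about $2 \times 10^{14}$, which is why we turn to approximate inference.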
The evidence lower bound (ELBO)

In variational inference, we specify a family $\mathcal{L}$ of densities over the hidden variables. Our goal is to find the member $q(z) \in \mathcal{L}$ with the smallest KL divergence to the posterior density. Inference thus becomes an optimization problem:
$$
q^*(z) = \arg\min_{q(z) \in \mathcal{L}} \text{KL}\big(q(z)\,\|\,p(z \mid x)\big)
$$

$q^*(z)$ is the best approximation of $p(z \mid x)$ within the family. By the definition of KL divergence,
$$
\begin{aligned}
\text{KL}\big(q(z)\,\|\,p(z \mid x)\big) &= \mathbb{E}[\log q(z)] - \mathbb{E}[\log p(z \mid x)] \\
&= \mathbb{E}[\log q(z)] - \mathbb{E}\left[\log \frac{p(z, x)}{p(x)}\right] \\
&= \mathbb{E}[\log q(z)] - \mathbb{E}[\log p(z, x)] + \mathbb{E}[\log p(x)]
\end{aligned}
$$

Because all expectations are taken with respect to $q(z)$, we have $\mathbb{E}[\log p(x)] = \log p(x)$. So this KL divergence again requires computing $p(x)$, which is intractable.
Instead of computing the KL divergence directly, we optimize an alternative objective that differs from the negative KL divergence only by a constant:
$$
\text{ELBO}(q) = \mathbb{E}[\log p(z, x)] - \mathbb{E}[\log q(z)]
$$

This function is called the evidence lower bound (ELBO). From the expansion above, $\text{ELBO}(q) = -\text{KL}\big(q(z)\,\|\,p(z \mid x)\big) + \log p(x)$. Since $\log p(x)$ is a constant with respect to $q(z)$, maximizing the ELBO is equivalent to minimizing the KL divergence. And since the KL divergence is non-negative, $\text{ELBO}(q) \le \log p(x)$, which is where the name comes from.
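As a sanity check on the identity $\text{ELBO}(q) = \log p(x) - \text{KL}$, here is a small Monte Carlo sketch. It uses a toy conjugate model, not the mixture above, chosen only because $\log p(x)$ and the exact posterior are available in closed form; the observation and the variational parameters are arbitrary.

```python
import numpy as np
from scipy.stats import norm

# Toy conjugate model:  z ~ N(0, 1),  x | z ~ N(z, 1)
# =>  p(x) = N(x; 0, 2)  and  p(z | x) = N(x / 2, 1 / 2)
x_obs = 1.3                                    # an arbitrary observation

def elbo(m, s, num_samples=200_000, seed=0):
    """Monte Carlo estimate of the ELBO for q(z) = N(m, s^2)."""
    rng = np.random.default_rng(seed)
    z = rng.normal(m, s, size=num_samples)                          # z ~ q(z)
    log_joint = norm.logpdf(z, 0, 1) + norm.logpdf(x_obs, z, 1)     # log p(z, x)
    log_q = norm.logpdf(z, m, s)                                    # log q(z)
    return np.mean(log_joint - log_q)

log_evidence = norm.logpdf(x_obs, 0, np.sqrt(2))                    # exact log p(x)

print("log p(x)             :", log_evidence)
print("ELBO at the posterior :", elbo(x_obs / 2, np.sqrt(0.5)))     # matches log p(x), KL = 0
print("ELBO at a worse q     :", elbo(0.0, 1.0))                    # strictly smaller
```

When $q$ equals the exact posterior, the KL term vanishes and the ELBO matches $\log p(x)$ up to Monte Carlo error; any other $q$ gives a smaller value, which is the lower-bound property in action.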
We can rewrite the ELBO as the expected log-likelihood of the data minus the KL divergence between $q(z)$ and the prior $p(z)$:
$$
\begin{aligned}
\text{ELBO}(q) &= \mathbb{E}[\log p(z, x)] - \mathbb{E}[\log q(z)] \\
&= \mathbb{E}[\log p(x \mid z)\, p(z)] - \mathbb{E}[\log q(z)] \\
&= \mathbb{E}[\log p(x \mid z)] + \mathbb{E}[\log p(z)] - \mathbb{E}[\log q(z)] \\
&= \mathbb{E}[\log p(x \mid z)] - \text{KL}\big(q(z)\,\|\,p(z)\big)
\end{aligned}
$$

What does the ELBO mean?
$\mathbb{E}[\log p(x \mid z)]$ is the expected log-likelihood. It encourages the variational density to place its mass on configurations of the hidden variables that explain the observed data.
$-\text{KL}\big(q(z)\,\|\,p(z)\big)$ is the negative KL divergence between the variational density and the prior. It encourages the variational density to stay close to the prior.
Thus the ELBO mirrors the usual balance between likelihood and prior.
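The two forms of the ELBO can be checked against each other numerically, again on the toy conjugate model from the previous sketch (an illustrative assumption, not the mixture model): estimate $\mathbb{E}[\log p(z, x)] - \mathbb{E}[\log q(z)]$ by Monte Carlo and compare it with $\mathbb{E}[\log p(x \mid z)] - \text{KL}\big(q(z)\,\|\,p(z)\big)$, using the closed-form KL between two Gaussians.

```python
import numpy as np
from scipy.stats import norm

# Same toy model as before:  z ~ N(0, 1),  x | z ~ N(z, 1),  q(z) = N(m, s^2)
x_obs, m, s = 1.3, 0.4, 0.8                     # arbitrary observation and variational parameters
rng = np.random.default_rng(1)
z = rng.normal(m, s, size=200_000)              # z ~ q(z)

# Form 1:  E_q[log p(z, x)] - E_q[log q(z)]
form1 = np.mean(norm.logpdf(z, 0, 1) + norm.logpdf(x_obs, z, 1) - norm.logpdf(z, m, s))

# Form 2:  E_q[log p(x | z)] - KL(q(z) || p(z)), with the Gaussian KL in closed form:
# KL(N(m, s^2) || N(0, 1)) = -log(s) + (s^2 + m^2 - 1) / 2
kl_q_prior = -np.log(s) + (s**2 + m**2 - 1) / 2
form2 = np.mean(norm.logpdf(x_obs, z, 1)) - kl_q_prior

print(form1, form2)                             # agree up to Monte Carlo error
```

Both estimates agree up to Monte Carlo noise, confirming that the decomposition is just an algebraic rearrangement of the original objective.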