Questions tagged [evidence-lower-bound]
26 questions

For questions about the Evidence Lower Bound (ELBO) objective function, which is typically optimized when training variational autoencoders or variational Bayesian neural networks.
6
votes
1 answer
Why is the evidence equal to the KL divergence plus the loss?
Why is the equation $$\log p_{\theta}(x^1,...,x^N)=D_{KL}(q_{\theta}(z|x^i)||p_{\phi}(z|x^i))+\mathbb{L}(\phi,\theta;x^i)$$ true, where $x^i$ are data points and $z$ are latent variables?
I was reading the original variational autoencoder paper and I…
user8714896
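The identity asked about here follows in a few lines from Bayes' rule. A sketch of the decomposition (note the question excerpt swaps the usual subscripts; below, $q_\phi$ is the approximate posterior and $p_\theta$ the generative model, as in the original paper):

```latex
\begin{align*}
\log p_\theta(x^i)
  &= \mathbb{E}_{q_\phi(z|x^i)}\!\left[\log p_\theta(x^i)\right] \\
  &= \mathbb{E}_{q_\phi(z|x^i)}\!\left[\log \frac{p_\theta(x^i,z)}{p_\theta(z|x^i)}\right] \\
  &= \mathbb{E}_{q_\phi(z|x^i)}\!\left[\log \frac{q_\phi(z|x^i)}{p_\theta(z|x^i)}\right]
   + \mathbb{E}_{q_\phi(z|x^i)}\!\left[\log \frac{p_\theta(x^i,z)}{q_\phi(z|x^i)}\right] \\
  &= D_{KL}\!\left(q_\phi(z|x^i)\,||\,p_\theta(z|x^i)\right) + \mathbb{L}(\theta,\phi;x^i).
\end{align*}
```

The first step uses that $\log p_\theta(x^i)$ is constant with respect to $z$, so taking the expectation under $q_\phi$ changes nothing.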
4
votes
1 answer
Why does the variational auto-encoder use the reconstruction loss?
A VAE is trained to minimize the following two losses:
1. the KL divergence between the inferred latent distribution and a Gaussian prior;
2. the reconstruction loss.
I understand that the first one regularizes the VAE to produce a structured latent space. But why and how does the…
Jun
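The two losses in the question can be written down concretely. A minimal sketch in plain Python (function names are my own illustration, not from any particular VAE implementation), using the closed-form KL between a diagonal Gaussian $q(z|x)=\mathcal{N}(\mu,\sigma^2 I)$ and a standard normal prior:

```python
import math

def kl_gaussian_standard_normal(mu, log_var):
    # Closed-form KL( N(mu, diag(sigma^2)) || N(0, I) ),
    # summed over latent dimensions:
    #   0.5 * sum(mu_j^2 + sigma_j^2 - log sigma_j^2 - 1)
    return 0.5 * sum(m * m + math.exp(lv) - lv - 1.0
                     for m, lv in zip(mu, log_var))

def vae_loss(x, x_recon, mu, log_var):
    # Negative ELBO = reconstruction term + KL regulariser.
    # A sum-of-squares reconstruction term corresponds to a
    # fixed-variance Gaussian decoder (up to an additive constant).
    recon = sum((a - b) ** 2 for a, b in zip(x, x_recon))
    return recon + kl_gaussian_standard_normal(mu, log_var)
```

The KL term is what pulls the per-example posteriors toward the prior; with $\mu=0$ and $\log\sigma^2=0$ it vanishes, which is why an over-strong KL weight can collapse the latent space.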
4
votes
2 answers
What's going on in the equation of the variational lower bound?
I don't really understand what this equation is saying or what the purpose of the ELBO is. How does it help us find the true posterior distribution?
Gooby
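For readers landing on this question: the ELBO gets its name from Jensen's inequality, which shows it is a lower bound on the log-evidence,

```latex
\log p(x) = \log \mathbb{E}_{q(z)}\!\left[\frac{p(x,z)}{q(z)}\right]
\;\ge\; \mathbb{E}_{q(z)}\!\left[\log \frac{p(x,z)}{q(z)}\right] = \mathrm{ELBO},
```

with equality exactly when $q(z)=p(z|x)$; maximising the ELBO over $q$ therefore pushes $q$ toward the true posterior while also tightening the bound on the evidence.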
3
votes
1 answer
Clarification on the training objective of denoising diffusion models
I'm reading the Denoising Diffusion Probabilistic Models paper (Ho et al., 2020), and I am puzzled about the training objective. I understood (I think) the trick of reparametrizing the mean in terms of the noise:
$$\mu_\theta(x_t,…
user3903647
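For context, the trick in question (from Ho et al. 2020) rewrites the reverse-process mean $\mu_\theta$ in terms of a predicted noise term $\epsilon_\theta$:

```latex
\mu_\theta(x_t, t) = \frac{1}{\sqrt{\alpha_t}}\left(x_t - \frac{\beta_t}{\sqrt{1-\bar{\alpha}_t}}\,\epsilon_\theta(x_t, t)\right),
```

so the network only predicts the noise added at step $t$, and the per-step KL terms of the ELBO reduce to weighted squared errors between true and predicted noise.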
3
votes
1 answer
What does the approximate posterior on latent variables, $q_\phi(z|x)$, tend to when optimising VAEs
The ELBO objective is described as follows:
$$ \mathrm{ELBO}(\phi,\theta) = \mathbb{E}_{q_\phi(z|x)}[\log p_\theta (x|z)] - \mathrm{KL}[q_\phi (z|x)||p(z)] $$
This form of the ELBO includes a regularisation term in the form of the KL divergence, which drives $q_\phi(z|x)…
quest ions
3
votes
2 answers
In variational autoencoders, why do people use MSE for the loss?
In VAEs, we try to maximize the ELBO $= \mathbb{E}_q [\log p(x|z)] - D_{KL}(q(z \mid x)\,||\,p(z))$, but I see that many implement the first term as the MSE of the image and its reconstruction. Here's a paper (section 5) that seems to do that: Don't…
IttayD
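The usual answer is that an MSE reconstruction term corresponds to a Gaussian decoder $p(x|z)=\mathcal{N}(x;\hat{x},\sigma^2 I)$ with fixed variance: up to an additive constant, $-\log p(x|z)$ is the squared error scaled by $1/(2\sigma^2)$. A small sketch in plain Python (helper names are my own):

```python
import math

def gaussian_nll(x, mean, sigma=1.0):
    # -log N(x; mean, sigma^2 I) for a vector x, summed over dimensions:
    # squared error / (2 sigma^2) plus a constant that does not
    # depend on the reconstruction.
    d = len(x)
    sq_err = sum((a - b) ** 2 for a, b in zip(x, mean))
    return sq_err / (2.0 * sigma ** 2) + d * math.log(sigma * math.sqrt(2.0 * math.pi))

def sum_squared_error(x, mean):
    # The quantity implementations usually report as (unaveraged) MSE.
    return sum((a - b) ** 2 for a, b in zip(x, mean))
```

Since the constant is independent of the reconstruction, minimizing the Gaussian NLL over reconstructions is the same as minimizing the squared error; the choice of $\sigma$ only rescales the trade-off against the KL term.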
2
votes
1 answer
Why is the ELBO more tractable than computing the marginal likelihood in variational autoencoders
In the context of variational autoencoders, even after the model is trained, people seem to shy away from trying to compute
$$
p_{\theta}(x) = \int p_{\theta}(x|z)p(z)dz
$$
I understand that this can pose significant problems, especially when $z$ is high…
Sina
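The point can be made concrete in one dimension, where the marginal is available in closed form. A toy sketch of my own construction (not from the question): with $z\sim\mathcal{N}(0,1)$ and $x|z\sim\mathcal{N}(z,1)$, the marginal is exactly $\mathcal{N}(0,2)$, so we can compare a naive Monte Carlo estimate of $\log p(x)$ with the ELBO obtained by (crudely) using the prior as the variational distribution:

```python
import math
import random

def log_normal(x, mu, var):
    # Log density of N(mu, var) evaluated at x.
    return -0.5 * (math.log(2.0 * math.pi * var) + (x - mu) ** 2 / var)

def log_mean_exp(vals):
    # Numerically stable log of the average of exp(vals).
    m = max(vals)
    return m + math.log(sum(math.exp(v - m) for v in vals) / len(vals))

random.seed(0)
x = 1.5
exact_log_px = log_normal(x, 0.0, 2.0)  # closed-form marginal N(0, 2)

zs = [random.gauss(0.0, 1.0) for _ in range(100_000)]
# Monte Carlo estimate of log p(x) = log E_{p(z)}[p(x|z)]:
log_px_mc = log_mean_exp([log_normal(x, z, 1.0) for z in zs])
# ELBO with q(z) = p(z): the log moves inside the expectation.
elbo = sum(log_normal(x, z, 1.0) for z in zs) / len(zs)
```

By Jensen's inequality the ELBO sits below the log-marginal estimate on the same samples. In one dimension the Monte Carlo estimate is fine; the trouble is that in high-dimensional $z$ its variance blows up because almost no prior samples land where $p(x|z)$ is large, which is exactly why the ELBO is preferred as a training objective.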
2
votes
1 answer
Question regarding the ELBO decomposition proposed by Hoffman & Johnson
Recently I have been trying to read a paper by Hoffman and Johnson discussing an alternative decomposition of the ELBO in variational autoencoders. In formulas (9) and (10) of their original paper, they propose some prerequisite assumptions as…
Izzy Tse
2
votes
1 answer
Deriving ELBO for Diffusion Models
I am trying to read through the proof of ELBO for diffusion models on pg. 8 of this paper. However, I do not see how the author arrived at Eqn (45) from Eqn (44). Specifically, I do not know how they simplified the equation by rewriting it in terms…
Nikhil Sridhar
2
votes
1 answer
Derivation of the consistency term in the DDPM Evidence Lower Bound (ELBO)
I have been studying diffusion models from this tutorial: https://arxiv.org/abs/2403.18103 and trying to derive all results as I read it. Although this tutorial is very comprehensive, it skips many of the derivation steps. I am currently stuck in…
ahxmeds
2
votes
2 answers
What is the meaning of $\log p(x)$ in VAE math, and why is it constant
I was reading an article on Medium, where the author cites this equation for variational inference:
\begin{align*}
\text{KL}(q(z|x^{(i)})||p(z|x^{(i)})) &= \int_z q(z|x^{(i)})\log\frac{q(z|x^{(i)})}{p(z|x^{(i)})}\, dz \\
&=…
Kiran Manicka
2
votes
1 answer
How does the implementation of the VAE's objective function equate to ELBO?
For a lot of VAE implementations I've seen in code, it's not really obvious to me how it equates to ELBO.
$$L(X)=H(Q)-H(Q;P(X,Z))=\sum_Z Q(Z)\log P(Z,X)-\sum_Z Q(Z)\log Q(Z)$$
The above is the definition of ELBO, where $X$ is some input, $Z$ is a latent…
user8714896
2
votes
1 answer
In this VAE formula, why do $p$ and $q$ have the same parameters?
In $$\log p_{\theta}(x^1,...,x^N)=D_{KL}(q_{\theta}(z|x^i)||p_{\phi}(z|x^i))+\mathbb{L}(\phi,\theta;x^i),$$ why do $p(x^1,...,x^N)$ and $q(z|x^i)$ have the same parameter $\theta$?
Given that $p$ is just the probability of the observed data and…
user8714896
2
votes
0 answers
Why does the ELBO reach a steady state while the latent space shrinks?
I'm trying to train a VAE using a graph dataset. However, my latent space shrinks epoch by epoch. Meanwhile, my ELBO plot comes to a steady state after a few epochs.
I tried to play around with parameters and I realized, by increasing the batch size…
Blade
1
vote
1 answer
Why are these 2 terms canceled out in the ELBO derivation for diffusion models?
I was looking at the derivation of the ELBO for diffusion models from this website, and I'm a little confused about why these two terms cancel out.
henrysilver
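Without seeing the exact step on that website, the cancellation at this point in diffusion ELBO derivations is usually a telescoping sum: after rewriting each forward-process term with Bayes' rule as $q(x_t|x_{t-1}) = q(x_{t-1}|x_t,x_0)\,q(x_t|x_0)/q(x_{t-1}|x_0)$, the leftover ratios collapse,

```latex
\sum_{t=2}^{T} \log \frac{q(x_{t-1}\mid x_0)}{q(x_t\mid x_0)}
= \log \frac{q(x_1\mid x_0)}{q(x_T\mid x_0)},
```

because each numerator cancels the previous term's denominator; only the $t=1$ and $t=T$ endpoints survive, and they are absorbed into the reconstruction and prior-matching terms of the bound.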