6:10 PM - 6:30 PM
[2K6-ES-2-02] Sparsity enforcement on latent variables for better disentanglement in VAE
A Study on the Latent Space of VAE by Inducing Sparsity in the Encoder Network
Keywords: disentangled latent representations, latent space, sparse representation, variational autoencoder.
We address the problem of unsupervised latent factorization and reconstruction accuracy. Related work on
unsupervised representations focuses on constraining the second term of the Variational Autoencoder's loss function,
the Kullback-Leibler component (Beta-VAE, FactorVAE, Beta-TCVAE). Despite promising results, this comes with
a trade-off between disentanglement and reconstruction quality. Moreover, it is not clear why minimizing the KL divergence
leads to disentanglement.
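For reference, a minimal sketch of this family of objectives: the standard VAE ELBO with the Kullback-Leibler term re-weighted by a factor beta, as in Beta-VAE. The notation and the hyperparameter value (beta = 4.0) are illustrative assumptions, not figures taken from the cited papers or from this work.

import torch
import torch.nn.functional as F

def beta_vae_loss(x, x_recon, mu, logvar, beta=4.0):
    # Reconstruction term: the conventional pixel-wise quadratic (MSE) loss.
    recon = F.mse_loss(x_recon, x, reduction="sum")
    # Closed-form KL(q(z|x) || N(0, I)) for a diagonal Gaussian posterior.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    # beta = 1 recovers the plain VAE; beta > 1 up-weights the KL term,
    # trading reconstruction accuracy for a more factorized latent code.
    return recon + beta * kl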
In this paper, we propose to achieve disentangled representations by sampling from a sparse distribution. To
produce reconstructions that are visually appealing to humans, we replace the conventional pixel-wise quadratic loss with a perceptual
loss. We demonstrate the reconstruction quality and disentanglement on synthetic datasets.
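The abstract does not spell out how the sparse latent distribution is parameterized. One common construction, shown below purely as an assumption-laden sketch, is a spike-and-slab-style latent in which a relaxed Bernoulli gate zeroes out individual dimensions of the usual Gaussian sample; the function name sparse_reparameterize and the temperature value are hypothetical.

import torch

def sparse_reparameterize(mu, logvar, gate_logits, temperature=0.5):
    # Standard Gaussian reparameterization for the "slab" component of q(z|x).
    eps = torch.randn_like(mu)
    slab = mu + torch.exp(0.5 * logvar) * eps
    # Relaxed Bernoulli (Concrete) sample for the per-dimension gate:
    # gates near 0 switch a latent dimension off, inducing sparsity in z.
    u = torch.rand_like(gate_logits).clamp(1e-6, 1 - 1e-6)
    logistic_noise = torch.log(u) - torch.log1p(-u)
    gate = torch.sigmoid((gate_logits + logistic_noise) / temperature)
    return gate * slab

Under this reading, the pixel-wise MSE in the earlier sketch would additionally be replaced by a feature-space (perceptual) distance computed with a pretrained network, as the abstract describes.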