Disclaimer: this code was written and tested in a rush; please use it with caution.
Implementation of CSVAE from "Learning Latent Subspaces in Variational Autoencoders" (NeurIPS 2018).
CSVAE is an autoencoder built on the plain VAE: it minimizes the mutual information between a feature and a designated part of the latent space, constraining the structure of the latent mapping to obtain richer representations.
We implement the Conditional Subspace VAE (CSVAE) architecture along with the corresponding competing approaches (CondVAE, CondVAE-Info, VAE), and benchmark them in a number of setups covering reconstruction quality, latent space structuring capability, and attribute transfer.
Project report is here. Light CSVAE models on toy data with epoch-by-epoch visualization are here.
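The objective sketched above can be illustrated with a few NumPy helpers. This is a minimal sketch of the loss decomposition only, not the repo's actual API: the function names, shapes, and the squared-error reconstruction term are our illustrative assumptions.

```python
import numpy as np

def kl_diag_gaussians(mu_q, logvar_q, mu_p, logvar_p):
    """KL(q || p) between diagonal Gaussians, summed over dimensions."""
    return 0.5 * np.sum(
        logvar_p - logvar_q
        + (np.exp(logvar_q) + (mu_q - mu_p) ** 2) / np.exp(logvar_p)
        - 1.0
    )

def csvae_loss_terms(x, x_recon, mu_z, logvar_z, mu_w, logvar_w,
                     mu_w_prior, logvar_w_prior, y_probs_from_z):
    """Illustrative decomposition of the CSVAE objective.

    The latent space is split into z (meant to be label-independent) and w
    (meant to capture the attribute y). Besides the usual reconstruction
    and KL terms, an adversarial penalty pushes the label predictor applied
    to z toward maximum entropy, so that z carries no information about y.
    """
    recon = np.sum((x - x_recon) ** 2)  # illustrative reconstruction term
    kl_z = kl_diag_gaussians(mu_z, logvar_z,
                             np.zeros_like(mu_z), np.zeros_like(logvar_z))
    kl_w = kl_diag_gaussians(mu_w, logvar_w, mu_w_prior, logvar_w_prior)
    # Entropy of the adversary's label prediction from z; the encoder is
    # trained to maximize it (i.e., minimize its negative).
    neg_entropy = np.sum(y_probs_from_z * np.log(y_probs_from_z + 1e-12))
    return recon, kl_z, kl_w, neg_entropy
```

In training, the adversarial label predictor is updated to predict y from z while the encoder is updated against it; the sketch only shows how the terms combine, not the alternating optimization.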
- `csvae_toy/` — scripts with the CSVAE model definition and the training procedure on toy data, along with visualizations for easier interpretation and quantitative results:
  - `definitions.py` — definitions for a number of models, including CSVAE;
  - `train.py` — a parametrized script for easy training;
  - `csvae_toy.ipynb` — for playing around with representations and fast prototyping. The other notebooks are for CelebA training and evaluation.
- `condvae_celeba/` — scripts for training VAE and Conditional VAE on the CelebA dataset:
  - `model/` — architectures of VAE and Conditional VAE;
  - `weights/` — the best weights, training details, and a dataframe with the training log.
- `cvae_info/scripts` — a script for training CVAE_info on CelebA. Article
- `attr_classifier/` — a notebook for training and testing the attribute classification model (ResNet34), plus the weights for this model.
- `results/` — notebooks with CelebA benchmark results: generated samples, metrics, and examples of style transfer.
Contributors: Artem Shafarostov, Marina Pominova, Alexander Lyzhov, Elizaveta Lazareva