TOWARDS BUILDING A GROUP-BASED UNSUPERVISED REPRESENTATION DISENTANGLEMENT FRAMEWORK

Tao Yang, Xuanchi Ren, Yuwang Wang, Wenjun Zeng, Nanning Zheng
arXiv preprint arXiv:2102.10303
ICLR 2022

[Paper] [Arxiv] [Appendix]

Update:

🔲 Release code

Description:

Building on the group-based definition of disentanglement and inspired by the n-th dihedral group, we first propose a theoretical framework for achieving unsupervised representation disentanglement. We then propose a model, built on existing VAE-based methods, to tackle the unsupervised learning problem posed by the framework. An overview of our method is shown below:

[Figure: overview of the proposed method]

Implementation details

  • The Isomorphism Loss from our paper is implemented in "model/share.py"
  • The model constraint from our paper is implemented in the "forward" function of "model/XXX_VAE.py" (a hedged sketch of both ideas follows below)
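
For intuition only, here is a minimal sketch of what a cyclic (C_n) group action on a single latent dimension and an isomorphism-style consistency loss could look like. The function names, the additive-shift parameterisation, and the round-trip consistency term are illustrative assumptions, not the repository's implementation; refer to "model/share.py" and the "forward" functions for the actual code.

```python
import torch
import torch.nn.functional as F

def cyclic_action(z, dim, step, n=4):
    # Hypothetical C_n action: shift one latent dimension on a circle of
    # circumference 2*pi by step * (2*pi / n) and wrap around.
    z = z.clone()
    period = 2.0 * torch.pi
    z[:, dim] = torch.remainder(z[:, dim] + step * period / n, period)
    return z

def isomorphism_style_loss(encoder, decoder, z, dim, n=4):
    # Hypothetical consistency term: the decode/encode round trip should
    # commute with the group action, i.e. E(D(g.z)) should match g.E(D(z)).
    acted_then_roundtrip = encoder(decoder(cyclic_action(z, dim, step=1, n=n)))
    roundtrip_then_acted = cyclic_action(encoder(decoder(z)), dim, step=1, n=n)
    return F.mse_loss(acted_then_roundtrip, roundtrip_then_acted)
```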

Requirements

  • Build a Docker environment using "./dockerfile"; example commands are shown below
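
For example, a typical build-and-run sequence could look like the following; the image name "groupified-vae" and the mounted path are illustrative assumptions, not fixed by the repository:

docker build -t groupified-vae -f ./dockerfile .
docker run --gpus all -it -v "$(pwd)":/workspace groupified-vae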

Models

The VAE-based models:

Datasets

The evaluation datasets (the results below use dSprites, Cars3D, and Shapes3D):

Download them to "./dataset_folder/"
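
As a quick sanity check after downloading, a short script like the one below can confirm that an archive is readable. The dSprites file name shown is the one distributed upstream and is an assumption here; adjust the path to match your copy.

```python
import numpy as np

# Hypothetical sanity check for a downloaded dataset (dSprites shown here).
path = "./dataset_folder/dsprites_ndarray_co1sh3sc6or40x32y32_64x64.npz"
data = np.load(path, allow_pickle=True, encoding="latin1")
imgs = data["imgs"]                # binary 64x64 images
factors = data["latents_classes"]  # ground-truth factor indices per image
print(imgs.shape, factors.shape)   # expected: (737280, 64, 64) (737280, 6)
```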

Usage

The Arguments

usage: main.py [-h] --config_num CONFIG_NUM [--eval]

optional arguments:
-h, --help            show this help message and exit
--config_num CONFIG_NUM
                        the number of settings of hyperparameters and random
                        seeds
--eval                eval model or not (default: False)
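
For reference, the interface above corresponds to an argparse setup along these lines; this is a sketch of the expected behaviour, not a copy of main.py:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--config_num", type=int, required=True,
                    help="the number of the hyperparameter / random seed setting")
parser.add_argument("--eval", action="store_true",
                    help="evaluate the model instead of training (default: False)")
args = parser.parse_args()
```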

Config File

The hyperparameter and random seed settings are numbered in './config.csv'. Each row of './config.csv' corresponds to one setting.
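
Conceptually, looking up a setting amounts to reading one row of the CSV. A minimal sketch, assuming the row index is the setting number and the column names are whatever the file's header defines:

```python
import csv

def load_setting(config_num, path="./config.csv"):
    # Hypothetical helper: return the config_num-th row of config.csv as a
    # dict keyed by the CSV header (hyperparameters and random seed).
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    return rows[config_num]

# Example: inspect the 1000th setting before training with it.
# print(load_setting(1000))
```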

Examples:

Training

Train the model under the 1000th setting:

python main.py --config_num 1000

Evaluation

Evaluate the model we trained above:

python main.py --config_num 1000 --eval

Quantitative evaluation

dSprites
[Figures: BetaVAE, DCI, MIG, and FactorVAE metric results, Original vs. Groupified]

NOTE: Groupified VAEs achieve a better mean performance with lower variance.

Latent space visualization

dSprites, AnnealVAE
[Figures: latent space visualizations for C_max = 10 and C_max = 20, Original vs. Groupified]

Qualitative results

Cars3D and Shapes3D
[Figures: qualitative results, Original vs. Groupified]
Cyclic latent space
[Figures: Original vs. Groupified]

Controllable meaningful dimensions in Groupified VAEs

dSprites, AnnealVAE
[Figure: KL divergence of the latent dimensions and the corresponding traversal results]

Downstream Task Performance

[Figure: downstream task performance]

Citation

@inproceedings{Tao2022groupified,
  title     = {Towards Building a Group-based Unsupervised Representation Disentanglement Framework},
  author    = {Yang, Tao and Ren, Xuanchi and Wang, Yuwang and Zeng, Wenjun and Zheng, Nanning and Ren, Pengju},
  booktitle = {International Conference on Learning Representations (ICLR)},
  year      = {2022}
}
