Joint Reconstruction and Low-dimensional Latent Space Classification
WordNet's Perl Similarity Metric (see wordNet.md)
PyTorch 1.1.0 (may require a nightly build, depending on ongoing development)
To abstract away environment issues, Docker and its GPU-enabled counterpart, NVIDIA Docker, are used.
Download the Docker image: docker pull chapchaebytes/semantic_mtl
Build a Docker container with the required environment, GPU support, and access to all of our files. This only needs to be run once if a container does not already exist.
(sudo) nvidia-docker run -it -v {host path}:{docker custom path} -w {repeat custom path} --name "{insert name here}" chapchaebytes/semantic_mtl bash
One or two libraries and packages will still need to be installed, but this is trivial.
Replace {name/id of container} with pix2pixTF, or whatever you have named your container.
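For example, if the repository were cloned to /home/user/semantic_mtl on the host (a hypothetical path) and the container were named semantic_mtl, the command would look like:
(sudo) nvidia-docker run -it -v /home/user/semantic_mtl:/workspace/semantic_mtl -w /workspace/semantic_mtl --name "semantic_mtl" chapchaebytes/semantic_mtl bash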
Once you've built the container and wish to continue working:
(sudo) nvidia-docker start {name/id of container}
(sudo) nvidia-docker attach {name/id of container}
Stop a Container: (sudo) nvidia-docker stop {name/id of container}
Show all containers (including stopped ones): (sudo) nvidia-docker container ls -a
Permanently remove stopped containers: (sudo) nvidia-docker container prune
To run the training script, provide the following: python vae_train.py --Lambda=.9 --epochs=50 --batch_size=32 --arch="vae_gn"
Current --arch arguments are "vae", "vae_bn", and "vae_gn". Refer to models.py.
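The flags above are parsed inside vae_train.py. As a rough sketch of how they might be wired up with argparse (defaults and help strings below are assumptions; only the flag names and the three --arch choices come from this README):

```python
import argparse

parser = argparse.ArgumentParser(
    description="Joint reconstruction / latent-space classification training")
parser.add_argument("--Lambda", type=float, default=0.9,
                    help="weighting between the loss components (assumed meaning)")
parser.add_argument("--epochs", type=int, default=50)
parser.add_argument("--batch_size", type=int, default=32)
parser.add_argument("--arch", type=str, default="vae",
                    choices=["vae", "vae_bn", "vae_gn"],
                    help="model variant defined in models.py")
args = parser.parse_args()
```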
Review and form a deeper understanding:
- Disentangled VAE for Semi-supervised Learning
- Learning Structured Output Representation using Deep Conditional Generative Models
- A Tutorial on Information Maximizing Variational Autoencoders (InfoVAE)
- Conditional Variational Autoencoder: Intuition and Implementation
TODO:
- Update logs accordingly: loss = w_i * loss (i.e. log the weighted loss terms; see the weighted-loss sketch after this list)
- Loss function normalization. Dividing each loss by 1000 did not yield the desired results: the reconstruction loss has been decimated while the LSTD loss is still rather large. Two ideas:
- Training for multiple GPUs (a sketch follows this list)
- Joint loss function (Reconstruction + LSTD + Classifier); see the weighted-loss sketch after this list
- Implement two separate optimizers (see here); a sketch follows this list
- Add a proper validation dataset
- Implement testing and reconstruction result visualization
- Parallel data loading (see here); a DataLoader sketch follows this list
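A minimal sketch of the weighted joint loss and logging items above, assuming a torch.utils.tensorboard SummaryWriter (available as of PyTorch 1.1.0) is used for logging. The task names, the values in WEIGHTS, and the function name are hypothetical; only the Reconstruction + LSTD + Classifier breakdown comes from the list.

```python
from torch.utils.tensorboard import SummaryWriter

# Hypothetical per-task weights; tuning/normalizing these is the open TODO above.
WEIGHTS = {"recon": 1.0, "lstd": 1e-3, "clf": 1.0}

def joint_loss(losses, writer=None, step=0):
    """Weight and sum the task losses (Reconstruction + LSTD + Classifier).

    `losses` maps task names to scalar tensors, e.g. {"recon": ..., "lstd": ..., "clf": ...}.
    """
    weighted = {name: WEIGHTS[name] * value for name, value in losses.items()}
    total = sum(weighted.values())
    if writer is not None:
        # Log the weighted terms so the logs reflect what the optimizer actually sees
        # (the "loss = w_i * loss" item above).
        for name, value in weighted.items():
            writer.add_scalar(f"loss/{name}", value.item(), step)
        writer.add_scalar("loss/total", total.item(), step)
    return total

# Usage inside the training loop (the three loss tensors are stand-ins):
# writer = SummaryWriter(log_dir="runs/vae_gn")
# total = joint_loss({"recon": recon_loss, "lstd": lstd_loss, "clf": clf_loss}, writer, step)
# total.backward()
```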
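For the two-separate-optimizers idea, a sketch under the assumption that the model's parameters can be split into a VAE group and a classifier group. The helpers vae_parameters, classifier_parameters, recon_loss_fn, lstd_loss_fn and the forward signature are hypothetical stand-ins for whatever models.py actually exposes.

```python
import torch
import torch.nn.functional as F

def make_optimizers(model):
    # Hypothetical parameter grouping; models.py would need to expose these helpers.
    vae_optim = torch.optim.Adam(model.vae_parameters(), lr=1e-3)
    clf_optim = torch.optim.Adam(model.classifier_parameters(), lr=1e-4)
    return vae_optim, clf_optim

def train_step(model, x, y, vae_optim, clf_optim, recon_loss_fn, lstd_loss_fn):
    """One update with separate optimizers for the VAE and classifier parameter groups."""
    recon, z, logits = model(x)              # assumed to return (reconstruction, latent, logits)
    vae_loss = recon_loss_fn(recon, x) + lstd_loss_fn(z)
    clf_loss = F.cross_entropy(logits, y)

    vae_optim.zero_grad()
    clf_optim.zero_grad()
    vae_loss.backward(retain_graph=True)     # keep the graph so clf_loss can backprop too
    clf_loss.backward()                      # gradients accumulate per parameter
    vae_optim.step()                         # each optimizer updates only its own group
    clf_optim.step()
    return vae_loss.item(), clf_loss.item()
```

Backpropagating both losses before either optimizer steps avoids the in-place-modification error PyTorch raises when parameters are updated between backward passes that share a graph.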
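For the multi-GPU item, the simplest route in PyTorch 1.1.0 is nn.DataParallel (DistributedDataParallel scales better but needs more setup). A minimal sketch:

```python
import torch
import torch.nn as nn

def to_multi_gpu(model):
    """Move the model to the GPU and, if several GPUs are visible, split each batch across them."""
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    if torch.cuda.device_count() > 1:
        # DataParallel replicates the model and scatters each input batch across GPUs.
        model = nn.DataParallel(model)
    return model.to(device)
```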
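Parallel data loading is mostly a matter of passing num_workers to torch.utils.data.DataLoader; a minimal sketch, with the dataset object left as a stand-in:

```python
from torch.utils.data import DataLoader

def make_train_loader(train_dataset, batch_size=32):
    """Build a loader whose worker processes prepare batches in the background."""
    return DataLoader(
        train_dataset,
        batch_size=batch_size,
        shuffle=True,
        num_workers=4,     # parallel worker processes; tune to the available CPU cores
        pin_memory=True,   # faster host-to-GPU copies when training on CUDA
    )
```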