This is the training code for our paper on consistent semantic inpainting of faces in TensorFlow | Paper. We show that conditioning GANs on facial semantic maps helps in:
- Better image generation by the generator
- Better PSNR and visual quality during inpainting
- Decoupling of the pose and appearance of a face
- Consistency in inpainting
- Improving upon DIP (CVPR 2017)
Prerequisites:
- TensorFlow >= 1.0
- Dlib (for facial keypoint detection)
- pyamg (for Poisson blending)
- Download the CelebA dataset from here, move 2560 images to `data/test/images`, and the rest to `data/train/images`.
- Download and extract the trained facial shape predictor from here.
- Run the script to create tfrecords for training the model:
  `python preprocess_train_images.py shape_predictor_68_face_landmarks.dat`
- Run the script to generate keypoint maps for the test images:
  `python preprocess_test_images.py shape_predictor_68_face_landmarks.dat`
- To train the model:
  `python main.py --batch_size=64 --output_size=128`
- The generated samples, along with their facial keypoint maps, are saved in `samples/celebA`.
- To run the completeness experiment:
  `python complete.py --batch_size=64 --output_size=128`
- To run the consistency experiment:
  `python temporal.py --batch_size=64 --output_size=128`
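The preprocessing scripts above condition the model on facial keypoint maps built from detected landmarks. As a rough illustration of that idea, here is a minimal NumPy sketch that rasterizes landmark coordinates into a single-channel map; the coordinates and the `radius` parameter are hypothetical stand-ins (in the actual pipeline, the 68 landmarks come from dlib's shape predictor):

```python
import numpy as np

def keypoints_to_map(landmarks, size=128, radius=2):
    """Rasterize (x, y) landmark coordinates into a binary keypoint map.

    landmarks: iterable of (x, y) pixel coordinates.
    Returns a (size, size) float32 map with 1.0 in a small square
    around each keypoint and 0.0 elsewhere.
    """
    kp_map = np.zeros((size, size), dtype=np.float32)
    for x, y in np.asarray(landmarks, dtype=int):
        # Clip the square neighbourhood to the image bounds.
        x0, x1 = max(x - radius, 0), min(x + radius + 1, size)
        y0, y1 = max(y - radius, 0), min(y + radius + 1, size)
        kp_map[y0:y1, x0:x1] = 1.0
    return kp_map

# Hypothetical landmark positions standing in for dlib's 68-point output.
pts = [(40, 60), (88, 60), (64, 90)]
m = keypoints_to_map(pts)
```

Such a map can be concatenated with the image channels as the conditioning input to the generator.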
We show that conditioning GANs on facial maps helps in decoupling the appearance of a face (skin texture, gender) from its pose (scale, orientation, global facial expression).
We evaluated "consistency" on pseudo-corrupted sequences:
- Given an original starting image, corrupt it with different masks
- Inpaint the corrupted images
- Ideally, all reconstructions should be identical
- Pairwise MSE between the inpainted images gives a measure of consistency (refer to the paper for metrics)
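The pairwise MSE step above can be sketched in a few lines of NumPy; random arrays stand in for real inpainted reconstructions, and the exact metric used for the reported numbers is defined in the paper:

```python
import numpy as np
from itertools import combinations

def pairwise_mse(images):
    """Mean of MSE over all unordered pairs of reconstructions.

    Lower values mean the inpaintings agree more closely,
    i.e. the model is more consistent.
    """
    scores = [np.mean((a - b) ** 2) for a, b in combinations(images, 2)]
    return float(np.mean(scores))

# Toy check: identical reconstructions are perfectly consistent.
rng = np.random.default_rng(0)
img = rng.random((128, 128, 3))
print(pairwise_mse([img, img.copy(), img.copy()]))  # 0.0
```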
If you find our work useful in your research, please cite:
@article{lahiri2017improving,
title={Improving Consistency and Correctness of Sequence Inpainting using Semantically Guided Generative Adversarial Network},
author={Lahiri, Avisek and Jain, Arnav and Biswas, Prabir Kumar and Mitra, Pabitra},
journal={arXiv preprint arXiv:1711.06106},
year={2017}
}