
Consistent_Inpainting_GAN

This is the training code for our paper on consistent semantic inpainting of faces, in TensorFlow | Paper. We show that conditioning GANs with facial semantic maps helps in:

  • Better image generation capability of the generator
  • Better PSNR and visual performance during inpainting
  • Decoupling of pose and appearance of the face
  • Consistency in inpainting
  • Improvement over DIP, CVPR-2017

Dependencies

  • TensorFlow >= 1.0
  • Dlib (for facial keypoint detection)
  • pyamg (for Poisson Blending)

Preprocessing

  • Download the CelebA dataset from here, move 2,560 images to data/test/images, and move the remaining images to data/train/images.

  • Download and extract the trained facial shape predictor from here.

  • Run the script below to create TFRecords for training the model:

    python preprocess_train_images.py shape_predictor_68_face_landmarks.dat

  • Run the script below to generate keypoint maps for the test images (a minimal sketch of this keypoint-map step follows this list):

    python preprocess_test_images.py shape_predictor_68_face_landmarks.dat
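
The preprocessing scripts use dlib's 68-point shape predictor to build the keypoint maps. Below is a minimal sketch of that step, not the repository's exact code; the helper name keypoint_map, the use of scikit-image for image loading, and the binary single-channel map format are assumptions made for illustration.

    import dlib
    import numpy as np
    from skimage import io  # assumed here only for image loading

    detector = dlib.get_frontal_face_detector()
    predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

    def keypoint_map(image_path, map_size=128):
        """Return a (map_size, map_size) map marking the 68 facial landmarks."""
        img = io.imread(image_path)
        kp_map = np.zeros((map_size, map_size), dtype=np.float32)
        faces = detector(img, 1)          # upsample once to catch smaller faces
        if len(faces) == 0:
            return kp_map                 # no face detected; caller may skip the image
        shape = predictor(img, faces[0])
        h, w = img.shape[:2]
        for i in range(68):
            p = shape.part(i)
            # scale landmark coordinates to the output map resolution
            y = min(int(p.y * map_size / h), map_size - 1)
            x = min(int(p.x * map_size / w), map_size - 1)
            kp_map[y, x] = 1.0
        return kp_map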

Training

  • To train the model:

    python main.py --batch_size=64 --output_size=128

  • The generated samples with the facial keypoint maps are saved in samples/celebA.

  • To run the completeness experiment:

    python complete.py --batch_size=64 --output_size=128

  • To run the consistency experiment:

    python temporal.py --batch_size=64 --output_size=128

Independence of Pose and Appearance

We show that conditioning GANs with facial maps helps decouple the appearance of a face (skin texture, gender) from its pose (scale, orientation, global facial expression); a conditioning sketch follows the two cases below.

  • Different z vector but same facial maps

  • Different facial maps but same z vector
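
A minimal sketch (TensorFlow 1.x) of what such conditioning can look like: the generator receives both the noise vector z (appearance) and the facial keypoint map (pose), so one can be held fixed while the other is varied. The layer sizes, the pose encoder, and the function name generator are illustrative assumptions, not the repository's exact architecture.

    import tensorflow as tf

    def generator(z, kp_map, reuse=False):
        """z: [batch, z_dim] noise; kp_map: [batch, 128, 128, 1] keypoint map."""
        with tf.variable_scope("generator", reuse=reuse):
            # Encode the keypoint map into a compact pose code.
            pose = tf.layers.conv2d(kp_map, 32, 5, strides=2, padding="same",
                                    activation=tf.nn.relu)
            pose = tf.layers.conv2d(pose, 64, 5, strides=2, padding="same",
                                    activation=tf.nn.relu)
            pose = tf.reduce_mean(pose, axis=[1, 2])          # global average pool
            pose = tf.layers.dense(pose, 128, activation=tf.nn.relu)

            # Appearance (z) and pose (keypoint code) are concatenated, so fixing
            # the map while resampling z changes appearance but not pose, and
            # vice versa.
            code = tf.concat([z, pose], axis=1)
            h = tf.layers.dense(code, 8 * 8 * 256, activation=tf.nn.relu)
            h = tf.reshape(h, [-1, 8, 8, 256])
            for filters in (128, 64, 32, 16):                 # 8 -> 128 pixels
                h = tf.layers.conv2d_transpose(h, filters, 5, strides=2,
                                               padding="same",
                                               activation=tf.nn.relu)
            return tf.layers.conv2d(h, 3, 5, padding="same",
                                    activation=tf.nn.tanh)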

Visualization of Consistency

We evaluate "consistency" on pseudo-corrupted sequences:

  • Given an original starting image, corrupt it with different masks
  • Inpaint the corrupted images
  • Ideally all reconstructions should be identical
  • Pairwise MSE between the inpainted images gives a measure of consistency (see the sketch below; refer to the paper for metrics)
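
A minimal sketch of that measure, assuming square-hole corruptions and an inpaint callable that stands in for the model's completion step (complete.py / temporal.py); the helper names random_masks, consistency_mse, and inpaint are not from this repository.

    import itertools
    import numpy as np

    def random_masks(num_masks, size=128, hole=48, seed=0):
        """Binary masks (1 = kept pixel, 0 = corrupted) with random square holes."""
        rng = np.random.RandomState(seed)
        masks = []
        for _ in range(num_masks):
            m = np.ones((size, size, 1), dtype=np.float32)
            top, left = rng.randint(0, size - hole, size=2)
            m[top:top + hole, left:left + hole, :] = 0.0
            masks.append(m)
        return masks

    def consistency_mse(image, masks, inpaint):
        """Mean pairwise MSE between inpaintings of one image (lower = more consistent)."""
        recons = [inpaint(image * m, m) for m in masks]
        pairs = itertools.combinations(recons, 2)
        return float(np.mean([np.mean((a - b) ** 2) for a, b in pairs]))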

Citation

If you find our work useful in your research, please cite:

@article{lahiri2017improving,
    title={Improving Consistency and Correctness of Sequence Inpainting using Semantically Guided Generative Adversarial Network},
    author={Lahiri, Avisek and Jain, Arnav and Biswas, Prabir Kumar and Mitra, Pabitra},
    journal={arXiv preprint arXiv:1711.06106},
    year={2017}
}