Simple code to build and train a GAN model for photo style transfer.
This repo is an unofficial implementation of the paper AnimeGAN: A Novel Lightweight GAN for Photo Animation. I constructed the GAN structure based on my own understanding of the paper and am still working on it. In principle it can transfer images into any style, depending on the training and target data used; however, effective training techniques and careful dataset preparation are essential for good results.
- Python 3.6
- tensorflow-gpu >= 2.0 (tested on Ubuntu with a GPU 2080Ti)
- opencv
- numpy
- Download or clone this repo.
- Put your photos in the 'test_img' folder and run test.py; the script uses the pre-trained model to convert the photos to anime style.
- View the results in the 'results' folder.
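GAN generators of this kind usually take inputs normalized to [-1, 1] (tanh output range) rather than raw 8-bit pixels. The sketch below shows the pre- and post-processing that test.py presumably performs around the generator call; the function names and the exact value range are assumptions, not the repo's actual API.

```python
import numpy as np

def preprocess(img):
    """Map an 8-bit RGB image to float32 in [-1, 1],
    the input range assumed for the generator."""
    return img.astype(np.float32) / 127.5 - 1.0

def postprocess(out):
    """Map generator output in [-1, 1] back to an 8-bit image."""
    out = (np.clip(out, -1.0, 1.0) + 1.0) * 127.5
    return np.rint(out).astype(np.uint8)

# usage: feed preprocess(photo) to the generator,
# then save postprocess(generator_output) with OpenCV
```

The round trip `postprocess(preprocess(img))` reproduces the original image, which is a handy sanity check when wiring up the inference script.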
- Prepare the dataset. The directory structure should include train_photo, the target-style images, and the smoothed images. At least 4000 training photos and 2000 style images are recommended.
- Run edge_smooth.py to generate the edge-smoothed versions of the style images.
- Run train-init.py to initialize the weights of the G_net.
- Run train-gan.py to train the model.
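The edge-smoothing step blurs only the pixels around strong edges of each style image, so the discriminator learns to penalize the hard outlines typical of anime art. edge_smooth.py presumably implements this with OpenCV (Canny plus GaussianBlur); the following is a dependency-free NumPy sketch of the same idea, with illustrative thresholds and kernel sizes.

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Normalized 2-D Gaussian kernel."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def edge_smooth(gray, threshold=30):
    """Blur only the pixels near strong edges of a grayscale image."""
    g = gray.astype(np.float32)
    h, w = g.shape
    # crude gradient-magnitude edge detector (stand-in for Canny)
    gx = np.abs(np.diff(g, axis=1, prepend=g[:, :1]))
    gy = np.abs(np.diff(g, axis=0, prepend=g[:1, :]))
    edges = (gx + gy) > threshold
    # dilate the edge mask so the blur covers a small neighbourhood
    pad = np.pad(edges, 1)
    dilated = np.zeros_like(edges)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            dilated |= pad[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
    # Gaussian-blur the whole image, then copy blurred pixels onto edges
    k = gaussian_kernel()
    padded = np.pad(g, 2, mode="edge")
    blurred = np.zeros_like(g)
    for i in range(5):
        for j in range(5):
            blurred += k[i, j] * padded[i:i + h, j:j + w]
    out = g.copy()
    out[dilated] = blurred[dilated]
    return out.astype(np.uint8)
```

Flat regions are left untouched; only a thin band around each detected edge is softened, which is what distinguishes the smoothed style images from a plain Gaussian blur of the whole picture.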
Training with different datasets and different parameters yields different image styles.
Sample results of the model are shown below (photo converted to anime style).
Thanks to the contributors of AnimeGAN; this work is based on it.