This is the PyTorch implementation of Progressive Attentional Manifold Alignment (PAMA).
1/14/2022: Thanks to GitHub user AK391 for making a web demo for PAMA.
1/18/2022: New checkpoints are available (w/o color loss, 1.5x color loss weight, 1.5x content loss weight).
- Python 3.6
- PyTorch 1.2.0+
- Pillow (PIL), NumPy, Matplotlib
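If you are setting up a fresh environment, the dependencies can typically be installed with pip (the package names below are the usual PyPI names; pin the torch version to match your CUDA setup):

pip install torch pillow numpy matplotlib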
Please download the pre-trained checkpoints from Google Drive and put them in ./checkpoints.
We also provide checkpoints trained with different loss weights:
| Type | Loss | Download |
|---|---|---|
| high consistency | w/o color loss | PAMA_without_color.zip |
| high color | 1.5x color loss weight | PAMA_1.5x_color.zip |
| high content | 1.5x content loss weight | PAMA_1.5x_content.zip |
These checkpoints will be uploaded soon.
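Once downloaded and unpacked, a checkpoint can be sanity-checked with a few lines of PyTorch. This is a minimal sketch; the file name decoder.pth is an assumption and should be replaced with whatever the archive actually contains:

```python
import torch

# Hypothetical file name -- adjust to the file(s) actually contained
# in the downloaded archive.
state = torch.load("./checkpoints/decoder.pth", map_location="cpu")

# Checkpoints are typically state_dicts; listing a few keys is a quick
# sanity check that the download is intact.
print(type(state))
if isinstance(state, dict):
    print(list(state.keys())[:5])
```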
The training set consists of two parts: content images from COCO2014 and style images from Wikiart.
python main.py train --lr 1e-4 --content_folder ./COCO2014 --style_folder ./Wikiart
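Before launching a long training run, it can help to confirm that both datasets are in place. This is a hypothetical pre-flight check, not part of main.py; the folder names match the --content_folder and --style_folder arguments above:

```python
from pathlib import Path

# Confirm that both training folders exist and contain images
# before starting a long run.
for folder in ("./COCO2014", "./Wikiart"):
    root = Path(folder)
    assert root.is_dir(), f"Missing folder: {folder}"
    images = [p for p in root.rglob("*")
              if p.suffix.lower() in {".jpg", ".jpeg", ".png"}]
    print(f"{folder}: {len(images)} images found")
```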
To test the code, specify the paths of the content image and the style image.
python main.py eval --content ./content/1.jpg --style ./style/1.jpg
To process all images under the content and style folders in one batch, run the following command.
python main.py eval --run_folder True --content ./content/ --style ./style/
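For finer control over which content/style pairs are processed, the single-image command can also be wrapped in a short script. The sketch below is an assumption about how you might drive it, not part of main.py; note that it stylizes every content image with every style image, which may differ from what --run_folder does internally:

```python
import subprocess
from pathlib import Path

# Pair every content image with every style image and invoke the
# single-image eval command above for each pair.
contents = sorted(Path("./content").glob("*.jpg"))
styles = sorted(Path("./style").glob("*.jpg"))

for c in contents:
    for s in styles:
        subprocess.run(
            ["python", "main.py", "eval",
             "--content", str(c), "--style", str(s)],
            check=True,
        )
```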
The results demonstrate the quality of PAMA along three dimensions: Regional Consistency, Content Preservation, and Style Quality.