C2-Matching (CVPR2021)


This repository contains the implementation of the following paper:

Robust Reference-based Super-Resolution via C2-Matching
Yuming Jiang, Kelvin C.K. Chan, Xintao Wang, Chen Change Loy, Ziwei Liu
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2021

[Paper] [Project Page] [WR-SR Dataset]

Overview

(Figure: overall structure of the C2-Matching framework.)

Dependencies and Installation

  • Python == 3.9.7
  • PyTorch == 1.9.0
  • CUDA >= 10.2
  • GCC >= 7.5.0
  1. Clone Repo

    git clone git@github.com:yumingj/C2-Matching.git
  2. Create Conda Environment

    conda create --name c2_matching python=3.9.7
    conda activate c2_matching
  3. Install Dependencies

    cd C2-Matching
    pip install torch==1.9.0+cu111 torchvision==0.10.0+cu111 torchaudio==0.9.0 -f https://download.pytorch.org/whl/torch_stable.html
    pip install mmcv==0.4.4
    pip install -r requirements.txt
  4. Install MMSR and DCNv2

    python setup.py develop --user
    cd mmsr/models/archs/DCNv2
    python setup.py build develop --user
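
To confirm that the environment matches the versions listed above, here is a quick sanity check (this script is not part of the repo; it only uses standard PyTorch and torchvision calls):

    # sanity-check the installed versions against the requirements above
    import torch
    import torchvision

    print("PyTorch:", torch.__version__)            # expect 1.9.0+cu111
    print("torchvision:", torchvision.__version__)  # expect 0.10.0+cu111
    print("CUDA runtime:", torch.version.cuda)      # expect >= 10.2
    print("CUDA available:", torch.cuda.is_available())
    if torch.cuda.is_available():
        print("GPU:", torch.cuda.get_device_name(0))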

Dataset Preparation

Please refer to Datasets.md for pre-processing and more details.

Get Started

Pretrained Models

Download the pretrained models from this link and put them under the experiments/pretrained_models folder.
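
To verify that the checkpoints ended up in the expected place, a minimal check is sketched below (the exact file names depend on the download, so globbing for *.pth files is only an assumption):

    from pathlib import Path

    # list whatever checkpoints were placed under experiments/pretrained_models
    model_dir = Path("experiments/pretrained_models")
    assert model_dir.is_dir(), f"missing folder: {model_dir}"
    for ckpt in sorted(model_dir.glob("*.pth")):  # .pth extension is an assumption
        print(ckpt.name)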

Test

We provide quick test code with the pretrained model.

  1. Modify the paths to the dataset and the pretrained model in the following YAML configuration files.

    ./options/test/test_C2_matching.yml
    ./options/test/test_C2_matching_mse.yml
  2. Run test code for models trained using GAN loss.

    python mmsr/test.py -opt "options/test/test_C2_matching.yml"

    Check out the results in ./results.

  3. Run test code for models trained using only reconstruction loss.

    python mmsr/test.py -opt "options/test/test_C2_matching_mse.yml"

    Check out the results in ./results. (A sketch for running both test configurations in sequence follows these steps.)
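
To run both configurations back to back, here is a minimal sketch that simply wraps the two documented test commands (only the Python standard library is used; the list of option files mirrors the steps above):

    import subprocess

    # the two test configurations documented above
    option_files = [
        "options/test/test_C2_matching.yml",      # model trained with GAN loss
        "options/test/test_C2_matching_mse.yml",  # model trained with reconstruction (MSE) loss only
    ]

    for opt in option_files:
        # equivalent to: python mmsr/test.py -opt "<option file>"
        subprocess.run(["python", "mmsr/test.py", "-opt", opt], check=True)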

Train

All logging files produced during training, e.g., log messages, checkpoints, and snapshots, will be saved to the ./experiments and ./tb_logger directories.
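
Since event files are written to ./tb_logger, training can be monitored with a standard TensorBoard invocation; TensorBoard itself is assumed to be installed (it is not necessarily pulled in by requirements.txt):

    import subprocess

    # equivalent to running `tensorboard --logdir ./tb_logger` from the repo root;
    # then open http://localhost:6006 in a browser
    subprocess.run(["tensorboard", "--logdir", "./tb_logger"], check=True)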

  1. Modify the paths to the dataset in the following YAML configuration files.

    ./options/train/stage1_teacher_contras_network.yml
    ./options/train/stage2_student_contras_network.yml
    ./options/train/stage3_restoration_gan.yml
  2. Stage 1: Train teacher contrastive network.

    python mmsr/train.py -opt "options/train/stage1_teacher_contras_network.yml"
  3. Stage 2: Train student contrastive network.

    # add the path to *pretrain_model_teacher* in the following yaml
    # the path to *pretrain_model_teacher* is the model obtained in stage1
    ./options/train/stage2_student_contras_network.yml
    python mmsr/train.py -opt "options/train/stage2_student_contras_network.yml"
  4. Stage 3: Train restoration network.

    # add the path to *pretrain_model_feature_extractor* in the following yaml
    # the path to *pretrain_model_feature_extractor* is the model obtained in stage2
    ./options/train/stage3_restoration_gan.yml
    python mmsr/train.py -opt "options/train/stage3_restoration_gan.yml"
    
    # if you wish to train the restoration network with only mse loss
    # prepare the dataset path and pretrained model path in the following yaml
    ./options/train/stage3_restoration_mse.yml
    python mmsr/train.py -opt "options/train/stage3_restoration_mse.yml"
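
The three stages must run in order, since each one consumes a checkpoint produced by the previous stage (through the pretrain_model_teacher and pretrain_model_feature_extractor entries edited above). The sketch below simply launches the documented commands in sequence; in practice you will likely run the stages one at a time, editing the YAML files in between once the earlier checkpoints exist:

    import subprocess

    # the three training stages documented above, in order
    stage_options = [
        "options/train/stage1_teacher_contras_network.yml",  # stage 1: teacher contrastive network
        "options/train/stage2_student_contras_network.yml",  # stage 2: student contrastive network
        "options/train/stage3_restoration_gan.yml",          # stage 3: restoration network (GAN loss)
        # "options/train/stage3_restoration_mse.yml",        # stage 3 alternative: MSE loss only
    ]

    for opt in stage_options:
        # equivalent to: python mmsr/train.py -opt "<option file>"
        subprocess.run(["python", "mmsr/train.py", "-opt", opt], check=True)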

Visual Results

For more results on the benchmarks, you can directly download our C2-Matching results from here.

(Figure: qualitative results.)

Webly-Reference SR Dataset

Check out our Webly-Reference SR (WR-SR) dataset through this link! We also provide baseline results for a quick comparison via this link.

The Webly-Reference SR dataset is a test dataset for evaluating Ref-SR methods. It has the following advantages:

  • Collected in a more realistic way: reference images are retrieved with Google Image search.
  • More diverse than previous datasets.

(Figure: qualitative results on the WR-SR dataset.)

Citation

If you find our repo useful for your research, please consider citing our paper:

@inproceedings{jiang2021robust,
  title={Robust Reference-based Super-Resolution via C2-Matching},
  author={Jiang, Yuming and Chan, Kelvin CK and Wang, Xintao and Loy, Chen Change and Liu, Ziwei},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={2103--2112},
  year={2021}
}

License and Acknowledgement

This project is open-sourced under the MIT license. The code framework is mainly modified from BasicSR and MMSR (now reorganized as MMEditing). Please refer to the original repositories for more usage information and documentation.

Contact

If you have any questions, please feel free to contact us via [email protected].
