
$\mathcal{X}$-Change: Cross-Attention Transmits Relevance in Remote Sensing Change Detection

💬 Network Architecture

(Figure: overall architecture of $\mathcal{X}$-Change)
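The core idea named in the title is cross-attention between bi-temporal features, where tokens from one date attend to tokens from the other. As a rough illustration only, a generic PyTorch cross-attention block over two feature maps could look like the sketch below; this is not the exact relevance-transmitting module proposed in the paper, and the class name and shapes are assumptions.

```python
# Generic sketch of cross-attention between bi-temporal feature maps.
# Illustrative only; not the exact X-Change module from the paper.
import torch
import torch.nn as nn

class BiTemporalCrossAttention(nn.Module):
    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        # channels must be divisible by num_heads
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, feat_t1: torch.Tensor, feat_t2: torch.Tensor) -> torch.Tensor:
        # feat_t1, feat_t2: (B, C, H, W) features from the pre- and post-change images.
        b, c, h, w = feat_t1.shape
        q = feat_t1.flatten(2).transpose(1, 2)   # (B, HW, C): queries from time 1
        kv = feat_t2.flatten(2).transpose(1, 2)  # (B, HW, C): keys/values from time 2
        out, _ = self.attn(q, kv, kv)            # time-1 tokens attend to time-2 tokens
        return out.transpose(1, 2).reshape(b, c, h, w)
```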

💬 Quantitative & Qualitative Results on LEVIR-CD, DSIFN-CD and S2Looking

(Figure: quantitative results)

(Figure: qualitative results)
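For context, change-detection benchmarks such as LEVIR-CD, DSIFN-CD and S2Looking are commonly reported with pixel-wise precision, recall, F1 and IoU on the change class. The snippet below shows one way such metrics can be computed from binary masks; it is an illustration, not the repository's evaluation code.

```python
# Sketch: pixel-wise metrics for the change class from binary masks.
# Illustrative only; the repository's own evaluation code may differ.
import numpy as np

def change_metrics(pred: np.ndarray, label: np.ndarray, eps: float = 1e-7) -> dict:
    """pred/label: binary arrays of the same shape, where 1 marks change."""
    pred, label = pred.astype(bool), label.astype(bool)
    tp = (pred & label).sum()
    fp = (pred & ~label).sum()
    fn = (~pred & label).sum()
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    return {
        'precision': precision,
        'recall': recall,
        'f1': 2 * precision * recall / (precision + recall + eps),
        'iou': tp / (tp + fp + fn + eps),
    }
```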

💬 Requirements

Python==3.8
PyTorch==1.12
torchvision==0.13
apex==0.01

Please see requirements.txt for all the other requirements.

You can create a conda environment with the following commands:

conda create -n {env_name} --file {path_of_requirements.txt}
conda activate {env_name}

💬 Installation

Clone this repository:

git clone https://github.com/RSChangeDetection/X-Cross.git
cd X-Cross

💬 Download Dataset

You can download the datasets at the links below:

LEVIR-CD: click here to download

DSIFN-CD: click here to download

S2Looking: click here to download

You can also download the pre-processed datasets at the links below for training:

LEVIR-CD-256: click here to download

DSIFN-CD-256: click here to download

PS: The organization of the downloaded datasets differs from ours, so you cannot run the training procedure on them directly. Please refer to Dataset Preparation.

💬 Quick Start on LEVIR

Our $\mathcal{X}$-Change pre-trained model weights are available on Google Drive.

After downloading the pre-trained model, you should change --checkpoint_path so that it points to the downloaded weights.

Then, start by running testing on the LEVIR dataset as follows:

python test.py

After that, you can find the detection results in the result folder.

💬 Train on LEVIR

We initialize the parameters of the model's backbone with models pre-trained on ImageNet, i.e., ResNets.

You can download the pre-trained models here: ResNet 18, ResNet 34, ResNet 50, ResNet 101

Then, put the weights in checkpoints/saves.
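If you prefer not to download the weights from the links above, ImageNet-pretrained ResNets can also be fetched through torchvision, as in the sketch below; the output file name is a placeholder and the training scripts may expect a specific checkpoint format.

```python
# Sketch: fetch an ImageNet-pretrained ResNet with torchvision and save it
# under checkpoints/saves (the file name is a placeholder).
import os
import torch
import torchvision

# torchvision 0.13 uses the `weights` argument; `pretrained=True` still works
# but emits a deprecation warning.
backbone = torchvision.models.resnet18(
    weights=torchvision.models.ResNet18_Weights.IMAGENET1K_V1
)

os.makedirs('checkpoints/saves', exist_ok=True)
torch.save(backbone.state_dict(), 'checkpoints/saves/resnet18_imagenet.pth')
```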

You can find the training script train_pipeline.sh in the folder script. You can run it from the terminal with bash script/train_pipeline.sh.

Details of train_pipeline.sh are as follows:

#! /bin/bash
cd ..
# Stage 1: train the base model starting from the ImageNet-pretrained backbone.
python train.py --epoch={_epoch_num_} --lr={_learning_rate_} --root='data/{_dataset_}/'
# Stage 2: resume from the stage-1 checkpoint with the X-Cross module enabled.
python train.py --epoch={_epoch_num_} --lr={_learning_rate_} --enable_x_cross --resume --checkpoint_path='checkpoints/run/**.pth' --root='data/{_dataset_}/'
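For reference, the flags used in the script might be defined in train.py roughly as in the hypothetical sketch below; the defaults are placeholders and the repository's actual argument parser may differ.

```python
# Hypothetical sketch of the command-line flags used by train.py above.
# Defaults are placeholders; the real parser may differ.
import argparse

def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(description='X-Change training (illustrative)')
    parser.add_argument('--epoch', type=int, default=100)            # placeholder
    parser.add_argument('--lr', type=float, default=1e-3)            # placeholder
    parser.add_argument('--root', type=str, default='data/LEVIR-CD/')
    parser.add_argument('--enable_x_cross', action='store_true')     # second-stage switch
    parser.add_argument('--resume', action='store_true')             # resume from --checkpoint_path
    parser.add_argument('--checkpoint_path', type=str, default='')
    return parser

if __name__ == '__main__':
    print(build_parser().parse_args())
```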

💬 Train on DSIFN and S2Looking

Follow a procedure similar to the one for LEVIR. Run train_pipeline.sh to train on DSIFN or S2Looking after modifying lr and root; the values you are supposed to change can be found in the paper.

Note: There may also be other parameters in train_pipeline.sh that you need to modify. Please make sure you have set all of them properly.

💬 Evaluate on LEVIR

We provide some samples from the LEVIR-CD dataset in test_samples for a quick start.

You can find the evaluation script sample_test.sh in the folder script. You can run it from the terminal to test the samples we provide:

bash sample_test.sh

Note: You should download the weights first and modify sample_test.sh before you run it.

Details of sample_test.sh are as follows:

#! /bin/bash
cd ..
# Evaluate the provided LEVIR samples; save predicted masks and IoU maps.
python test.py --save_result --save_iou_map --root='test_samples/' --checkpoint_path='checkpoints/saves/**.pth'
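As an illustration of what --save_iou_map could produce, the sketch below color-codes true positives, false positives and false negatives of a binary change prediction and returns the IoU; this is an assumption for explanation, not necessarily how test.py implements the flag.

```python
# Sketch: save a TP/FP/FN error map for a binary change prediction and return IoU.
# Illustrative assumption; not necessarily the repository's implementation.
import numpy as np
from PIL import Image

def save_error_map(pred: np.ndarray, label: np.ndarray, path: str) -> float:
    """pred/label: binary (H, W) arrays; writes a color-coded PNG to `path`."""
    pred, label = pred.astype(bool), label.astype(bool)
    tp, fp, fn = pred & label, pred & ~label, ~pred & label
    iou = tp.sum() / max((pred | label).sum(), 1)

    vis = np.zeros((*pred.shape, 3), dtype=np.uint8)
    vis[tp] = (255, 255, 255)  # true positives: white
    vis[fp] = (255, 0, 0)      # false positives: red
    vis[fn] = (0, 255, 0)      # false negatives: green
    Image.fromarray(vis).save(path)
    return float(iou)
```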

💬 Evaluate on DSIFN and S2Looking

Follow a procedure similar to the one for LEVIR. Run sample_test.sh to evaluate on DSIFN or S2Looking after modifying checkpoint_path; the values you are supposed to change can be found in the paper.

Note: You must supply your own samples from the DSIFN and S2Looking datasets, as we do not provide weights or samples for them. For testing and evaluation, you must first train your own DSIFN and S2Looking weights.

The warnings raised during training should also be noted.

💬 Dataset Preparation

👉 Data Structure

"""
Datasets of Change Detection
├————train
|      ├———A  
|      ├———B
|      ├———label
|
├————val
|      ├————...
|
├————test
|      ├————...
"""

Your dataset should be organized like this. You can convert it to this organization according to the label file **.txt in list, or modify change_dataloader.py in change and train.py to adapt them to your dataset's organization style; a minimal loading sketch for this layout is given after the directory descriptions below.

A means the directory of pre-changed images

B means the directory of post-changed images

label means the directory of change masks
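
For illustration, a minimal PyTorch dataset for the A / B / label layout above might look like the sketch below, assuming the three folders contain files with matching names; it is not the repository's change_dataloader.py.

```python
# Minimal sketch of a bi-temporal change-detection dataset for the layout above.
# Assumes matching file names across A, B and label; not the repository's loader.
import os
from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms

class ChangeDetectionDataset(Dataset):
    def __init__(self, root: str, split: str = 'train'):
        self.dir_a = os.path.join(root, split, 'A')           # pre-changed images
        self.dir_b = os.path.join(root, split, 'B')           # post-changed images
        self.dir_label = os.path.join(root, split, 'label')   # change masks
        self.names = sorted(os.listdir(self.dir_a))
        self.to_tensor = transforms.ToTensor()

    def __len__(self):
        return len(self.names)

    def __getitem__(self, idx):
        name = self.names[idx]
        img_a = self.to_tensor(Image.open(os.path.join(self.dir_a, name)).convert('RGB'))
        img_b = self.to_tensor(Image.open(os.path.join(self.dir_b, name)).convert('RGB'))
        mask = self.to_tensor(Image.open(os.path.join(self.dir_label, name)).convert('L'))
        return img_a, img_b, mask
```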

💬 License

Code is released for non-commercial and research purposes only. For commercial purposes, please contact the authors.
