Exploring the User Guidance for More Accurate Building Segmentation

This repo is the official PyTorch implementation of Exploring the User Guidance for More Accurate Building Segmentation from High-Resolution Remote Sensing Images (paper link).

1. Requirements

  • Hardware: 4-8 GPUs (preferably with >=11 GB of memory each)

  • Software: PyTorch >= 1.0.0, Python 3, tensorboardX, etc.

  • To install tensorboardX:

    pip install tensorboardX
    pip install tensorboard

  • Install the remaining Python packages: pip install -r requirements.txt

2. Dataset

Segmentation

Segmentation data structure
  DATA_ROOT
  ├── vegas
  │   ├── vegas_trainval_img_224_jpg
  │   ├── vegas_trainval_label_224_png_01
  │   ├── vegas_test_img_224_jpg
  │   ├── vegas_test_label_224_png_01
  ├── Inria_dataset
  │   ├── inria
  │   │   ├── img
  │   │   ├── inria_split
  │   │   ├── mask_all
  │   │   ├── mask_one
  │   │   ├── polygons.csv
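
Before training, it can help to confirm that the layout above is in place. The following is a minimal, hypothetical sketch (not part of this repo; check_data_root and the hard-coded sub-paths simply mirror the tree above):

  import os

  # Expected sub-directories under DATA_ROOT, mirroring the tree above.
  EXPECTED_DIRS = [
      "vegas/vegas_trainval_img_224_jpg",
      "vegas/vegas_trainval_label_224_png_01",
      "vegas/vegas_test_img_224_jpg",
      "vegas/vegas_test_label_224_png_01",
      "Inria_dataset/inria/img",
      "Inria_dataset/inria/inria_split",
      "Inria_dataset/inria/mask_all",
      "Inria_dataset/inria/mask_one",
  ]

  def check_data_root(data_root):
      """Report any directory from the expected layout that is missing."""
      missing = [d for d in EXPECTED_DIRS
                 if not os.path.isdir(os.path.join(data_root, d))]
      for d in missing:
          print("missing:", os.path.join(data_root, d))
      return not missing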
  

3. Model Zoo

Segmentation

Training set: SpaceNet (Las Vegas)

  Annotations             Models   IoU    BF-score   B-IoU   Notes
  Bounding box            gdrive   93.7   80.7       39.9
  Extreme points          gdrive   95.3   87.3       48.5
  Inside-outside points   gdrive   95.4   86.3       49.4

Training set: Inria building dataset

  Annotations             Models   IoU    WCov   B-Fscore   Dice   Notes
  Bounding box            gdrive   92.1   92.2   84.3       95.9
  Extreme points          gdrive   93.1   93.1   86.9       96.4
  Inside-outside points   gdrive   92.8   92.9   86.0       96.3

4. Training

You can set the training and inference configurations by editing the yaml files under ./configs/{dataset}/{task_name}.

Train on a single machine:

Distributed:
  sync_bn: False # adopt sync_bn or not
  dist_url: tcp://127.0.0.1:12345
  dist_backend: 'nccl'
  multiprocessing_distributed: False
  ...

Train with distributed setting:

Distributed:
  sync_bn: True # adopt sync_bn or not
  dist_url: tcp://127.0.0.1:12345
  dist_backend: 'nccl'
  multiprocessing_distributed: True
  world_size: 1
  rank: 0
  use_apex: False # use apex for distributed training or not
  opt_level: 'O0'
  keep_batchnorm_fp32:
  loss_scale:
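
For context, the sketch below shows how fields such as dist_url, dist_backend, world_size, rank, and multiprocessing_distributed are typically consumed by a standard PyTorch distributed launcher. The function names (main_worker, launch) are illustrative, not the repo's actual entry points:

  import torch
  import torch.distributed as dist
  import torch.multiprocessing as mp

  # Hypothetical illustration: how dist_url / dist_backend / world_size / rank
  # from the Distributed config are usually passed to init_process_group.
  def main_worker(gpu, ngpus_per_node, cfg):
      rank = cfg["rank"] * ngpus_per_node + gpu        # global rank of this process
      dist.init_process_group(backend=cfg["dist_backend"],
                              init_method=cfg["dist_url"],
                              world_size=cfg["world_size"] * ngpus_per_node,
                              rank=rank)
      torch.cuda.set_device(gpu)
      # ... build the model, wrap it in DistributedDataParallel, and train ...

  def launch(cfg):
      ngpus_per_node = torch.cuda.device_count()
      if cfg["multiprocessing_distributed"]:
          # one process per GPU
          mp.spawn(main_worker, nprocs=ngpus_per_node, args=(ngpus_per_node, cfg))
      else:
          main_worker(0, 1, cfg)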

Run the training job:

sh {tool_name}/train.sh {dataset} {task_name}

An example of training UGBS on Vegas:

sh seg_tool/train_banet.sh vegas banet_iog_res101

5. Inference

Run on one GPU to evaluate the model; examples are as follows:

Run the inference job:

sh {tool_name}/evaluate_banet.sh {dataset} {task_name}

An example of testing our method on Vegas:

sh seg_tool/evaluate_banet.sh vegas banet_dextr_res101

Note:

  1. To evaluate segmentation methods with the metrics of CVNet (IoU, WCov, BF-score and Dice), you should use seg_tool/evaluate_cvnet.sh (a generic sketch of the IoU and Dice definitions is shown below).
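
For reference, here is a minimal sketch of the generic IoU and Dice definitions on binary masks; this is not the exact CVNet evaluation code:

  import numpy as np

  def iou_and_dice(pred, gt):
      """IoU and Dice for two binary masks of the same shape (generic definitions)."""
      pred, gt = pred.astype(bool), gt.astype(bool)
      inter = np.logical_and(pred, gt).sum()
      union = np.logical_or(pred, gt).sum()
      iou = inter / union if union > 0 else 1.0
      dice = 2.0 * inter / (pred.sum() + gt.sum()) if (pred.sum() + gt.sum()) > 0 else 1.0
      return iou, dice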

6. License

This project is under the MIT license.

7. Citation

@article{yang2024exploring,
  title={Exploring the user guidance for more accurate building segmentation from high-resolution remote sensing images},
  author={Yang, Dinghao and Wang, Bin and Li, Weijia and He, Conghui},
  journal={International Journal of Applied Earth Observation and Geoinformation},
  volume={126},
  pages={103609},
  year={2024},
  publisher={Elsevier}
}
