
Learning Anatomically Consistent Embedding for Chest Radiography

This is the official code for our BMVC 2023 Oral paper:
Learning Anatomically Consistent Embedding for Chest Radiography

We introduce a new self-supervised learning (SSL) method called PEAC (patch embedding of anatomical consistency). Compared with photographic images, medical images acquired with the same imaging protocol exhibit high anatomical consistency. To exploit this consistency, we propose to learn global and local consistencies via stable grid-based matching and to transfer the pretrained PEAC model to diverse downstream tasks. PEAC (1) achieves significantly better performance than existing state-of-the-art fully supervised and self-supervised methods, and (2) effectively captures the anatomical structure consistency between patients of different genders and weights and between different views of the same patient, which enhances the interpretability of our method for medical image analysis.
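As intuition for the local objective, below is a minimal sketch, written for this README rather than taken from the PEAC code, of a grid-based local consistency loss: two augmented views of the same radiograph share an overlapping region, and patch embeddings at matched grid cells in that overlap are pulled together.

```python
import torch
import torch.nn.functional as F

def local_consistency_loss(emb_a, emb_b, coords):
    """Pull matched patch embeddings of two views together.

    emb_a, emb_b: (N, C, H, W) patch-embedding maps of two augmented views.
    coords: (N, K, 2) long tensor of (row, col) grid cells inside the
            overlap, assumed here to index the same anatomy in both views.
    """
    n, c, h, w = emb_a.shape
    idx = coords[..., 0] * w + coords[..., 1]      # (N, K) flat grid indices
    idx = idx.unsqueeze(1).expand(-1, c, -1)       # (N, C, K) for gather
    pa = torch.gather(emb_a.flatten(2), 2, idx)    # matched patches, view A
    pb = torch.gather(emb_b.flatten(2), 2, idx)    # matched patches, view B
    # negative cosine similarity, averaged over all matched patches
    return -F.cosine_similarity(pa, pb, dim=1).mean()
```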

Image of framework

Publication

Learning Anatomically Consistent Embedding for Chest Radiography
Ziyu Zhou1,2, Haozhe Luo2, Jiaxuan Pang2, Xiaowei Ding1, Michael B. Gotway3, Jianming Liang2
1 Shanghai Jiao Tong University, 2 Arizona State University, 3 Mayo Clinic
(Ziyu Zhou and Haozhe Luo contributed equally to this paper.)

Paper (BMVC official publication, full version with appendix) | Poster | Presentation

Dataset

  1. ChestX-ray14
  2. CheXpert
  3. RSNA Pneumonia
  4. Shenzhen
  5. JSRT

Pretrain PEAC models

Set the image path and other hyper-parameters in ./utils/configs.py (an illustrative config sketch follows the commands below), then start pretraining the full version of PEAC:

git clone [email protected]:jlianglab/PEAC.git
python global_local_popar_swin.py
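The exact option names live in utils/configs.py; purely for orientation, such a config typically looks like the sketch below, where every field name and value is hypothetical.

```python
# utils/configs.py -- illustrative sketch only; check the real file
# for the actual option names and defaults.
class Config:
    data_dir = "/path/to/chestxray14/images"  # root of the pretraining images
    input_size = 448                          # PEAC (Swin-B) is pretrained at 448x448
    batch_size = 64                           # hypothetical value
    lr = 1e-4                                 # hypothetical value
    epochs = 300                              # hypothetical value
```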

Using DDP to pretrain PEAC:

CUDA_VISIBLE_DEVICES="4,5,6,7" python -m torch.distributed.launch --nproc_per_node 4 global_local_popar_swin_ddp.py
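The launcher spawns one process per GPU. For reference, the DDP wiring inside such a script typically follows the standard PyTorch pattern below; this is a generic sketch with a stand-in model and dataset, not the exact PEAC code.

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.distributed import DistributedSampler

def main():
    # recent launchers export LOCAL_RANK; older torch.distributed.launch
    # versions pass a --local_rank argument instead
    local_rank = int(os.environ.get("LOCAL_RANK", 0))
    dist.init_process_group(backend="nccl")
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(128, 128).cuda(local_rank)  # stand-in for the PEAC network
    model = DDP(model, device_ids=[local_rank])

    dataset = TensorDataset(torch.randn(1024, 128))     # stand-in for the image dataset
    sampler = DistributedSampler(dataset)               # shards the data across ranks
    loader = DataLoader(dataset, batch_size=32, sampler=sampler)
    # training loop goes here; call sampler.set_epoch(epoch) every epoch

if __name__ == "__main__":
    main()
```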

Pretrain the downgraded version of PEAC (without the local loss):

python global_popar_swin.py

Pretrained weights:

| Model name | Backbone | Input resolution | Model | Log |
| --- | --- | --- | --- | --- |
| PEAC-3 | ViT-B | 224x224 | download | |
| PEAC | Swin-B | 448x448 | download | training log |
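A downloaded checkpoint can usually be loaded along the lines below. This is a generic PyTorch sketch: the filename and the checkpoint's key layout are assumptions, and the torchvision Swin-B merely stands in for the actual PEAC backbone, so inspect the reported key mismatches.

```python
import torch
from torchvision.models import swin_b

model = swin_b(weights=None)                                 # stand-in for the PEAC Swin-B backbone
ckpt = torch.load("peac_swinb_448.pth", map_location="cpu")  # hypothetical filename
state_dict = ckpt.get("state_dict", ckpt)                    # weights are often nested
# strip a possible "module." prefix left over from DDP training
state_dict = {k.removeprefix("module."): v for k, v in state_dict.items()}
missing, unexpected = model.load_state_dict(state_dict, strict=False)
print(f"missing: {len(missing)}, unexpected: {len(unexpected)}")
```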

Fine-tune from the PEAC model

Please refer to this repository. All classification and segmentation code is available there.

Visualizations for testing the pretrained model

t-SNE

We labeled 54 landmarks on 1K images from the ChestX-ray14 dataset and picked 7 of them (the dot labels in the image below) for the t-SNE visualization of our pretrained model:

tsne

cd visualizations
python tsne.py
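Conceptually, the visualization reduces to running a standard scikit-learn t-SNE over the embeddings extracted at the labeled landmarks. The sketch below illustrates that step with random placeholder features in place of real embeddings (700 points, 7 landmarks, and a feature dimension of 1024 are all assumed here).

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

# placeholder embeddings: one feature vector per landmark occurrence
feats = np.random.randn(700, 1024).astype(np.float32)
landmark_ids = np.repeat(np.arange(7), 100)   # 7 landmarks, 100 points each

xy = TSNE(n_components=2, perplexity=30, init="pca").fit_transform(feats)
plt.scatter(xy[:, 0], xy[:, 1], c=landmark_ids, cmap="tab10", s=8)
plt.title("t-SNE of landmark patch embeddings")
plt.show()
```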

Correspondence

We establish anatomical correspondences across views, body weights, genders, and health statuses.

Correspondence

cd visualizations
python correspondence.py
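At its core, such a correspondence can be computed as a nearest-neighbour search in patch-embedding space: each patch of the source image is matched to the most similar patch of the target image. A minimal sketch of that matching step (ours, not the exact script):

```python
import torch
import torch.nn.functional as F

def match_patches(emb_src, emb_tgt):
    """Match every source patch to its most similar target patch.

    emb_src, emb_tgt: (C, H, W) patch-embedding maps of two images.
    Returns (H*W,) indices into the flattened target grid.
    """
    c = emb_src.shape[0]
    src = F.normalize(emb_src.reshape(c, -1).t(), dim=1)  # (H*W, C), unit norm
    tgt = F.normalize(emb_tgt.reshape(c, -1).t(), dim=1)
    sim = src @ tgt.t()                                   # cosine similarity matrix
    return sim.argmax(dim=1)
```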

Co-segmentation (Zero-shot)

We semantically co-segment the common structures of images in a zero-shot setting. The cervico-scapular region, the upper and lower lobes of the lungs, the mediastinum, and the abdominal cavity are clearly segmented as common structures.

Co-segmentation

cd visualizations
python cosegmentation.py
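One standard way to obtain such a zero-shot co-segmentation, sketched below under the assumption that this is roughly what the script does, is to jointly cluster the patch embeddings of several images (e.g. with k-means) so that each cluster marks the same structure in every image.

```python
import numpy as np
from sklearn.cluster import KMeans

def cosegment(emb_maps, n_clusters=5):
    """Jointly cluster patch embeddings of several images.

    emb_maps: list of (C, H, W) arrays, one per image.
    Returns one (H, W) label map per image; equal labels mark the
    same common structure across the images.
    """
    c, h, w = emb_maps[0].shape
    stacked = np.concatenate([m.reshape(c, -1).T for m in emb_maps])  # (n*H*W, C)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(stacked)
    return [lab.reshape(h, w) for lab in np.split(labels, len(emb_maps))]
```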

Citation

Please give us a star 🌟 on GitHub to support us!

If you use this code or our pretrained weights in your research, please cite our paper:

@inproceedings{zhou2023learning,
  title={Learning Anatomically Consistent Embedding for Chest Radiography},
  author={Zhou, Ziyu and Luo, Haozhe and Pang, Jiaxuan and Ding, Xiaowei and Gotway, Michael and Liang, Jianming},
  booktitle={British Machine Vision Conference (BMVC)},
  year={2023}
}

Acknowledgement

This research has been supported in part by ASU and Mayo Clinic through a Seed Grant and an Innovation Grant, and in part by the NIH under Award Number R01HL128785. The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH. This work utilized GPUs provided in part by ASU Research Computing and in part by Bridges-2 at the Pittsburgh Supercomputing Center through allocation BCS190015 and Anvil at Purdue University through allocation MED220025 from the Advanced Cyberinfrastructure Coordination Ecosystem: Services & Support (ACCESS) program, which is supported by National Science Foundation grants #2138259, #2138286, #2138307, #2137603, and #2138296. The content of this paper is covered by patents pending.

License

Released under the ASU GitHub Project License.
