You can also read the Chinese version.
Run the following command to get the code:

```bash
git clone --recursive https://github.com/xiaolanshu/CSCA-U-Net.git
```
This repository provides the code for our paper "CSCA U-Net: A Channel and Space Compound Attention CNN for Medical Image Segmentation", accepted for publication in *Artificial Intelligence in Medicine* ([paper]).
Image segmentation is one of the vital steps in medical image analysis. A large number of methods based on convolutional neural networks have emerged, which can extract abstract features from multi-modality medical images, learn valuable information that is difficult for humans to recognize, and obtain more reliable results than traditional image segmentation approaches. U-Net, due to its simple structure and excellent performance, is widely used in medical image segmentation. In this paper, to further improve the performance of U-Net, we propose a channel and space compound attention (CSCA) convolutional neural network, abbreviated as CSCA U-Net, which increases the network depth and employs a double squeeze-and-excitation (DSE) block in the bottleneck layer to enhance feature extraction and obtain more high-level semantic features. The characteristics of the proposed method are three-fold: (1) a channel and space compound attention (CSCA) block, (2) cross-layer feature fusion (CLFF), and (3) deep supervision (DS). Extensive experiments on several public medical image datasets, including Kvasir-SEG, CVC-ClinicDB, CVC-ColonDB, ETIS, CVC-T, 2018 Data Science Bowl (2018 DSB), ISIC 2018 Challenge, and JSUAH-Cerebellum, show that CSCA U-Net achieves competitive results and significantly improves generalization performance.
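For intuition about the channel-attention part, here is a minimal PyTorch sketch of a squeeze-and-excitation (SE) block, plus one plausible reading of "double SE" as two stacked SE blocks. The class names and layout are illustrative assumptions, not the paper's actual CSCA/DSE implementation; see the repository code for that.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation channel attention (Hu et al.); illustrative only."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)       # squeeze: global spatial average
        self.fc = nn.Sequential(                  # excitation: per-channel gate in (0, 1)
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                              # reweight channels

class DSEBlock(nn.Module):
    """Hypothetical 'double SE': two stacked SE blocks, one reading of DSE."""
    def __init__(self, channels: int):
        super().__init__()
        self.se1 = SEBlock(channels)
        self.se2 = SEBlock(channels)

    def forward(self, x):
        return self.se2(self.se1(x))

x = torch.randn(2, 64, 32, 32)
print(DSEBlock(64)(x).shape)  # torch.Size([2, 64, 32, 32])
```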
This section lists the public datasets used in the paper:
- Polyp Datasets (including Kvasir-SEG, CVC-ClinicDB, CVC-ColonDB, ETIS, and CVC-300) [from PraNet]:
  - Total: [Aliyun], [Baidu]
  - TrainDataset: [Google Drive]
  - TestDataset: [Google Drive]
- 2018 Data Science Bowl: [Aliyun], [Baidu], [Google Drive]
- ISIC 2018 (original from [kaggle]; I converted the images from `.tiff` format to `.png` format, as sketched after this list): [Aliyun], [Baidu], [Google Drive]
- JSUAH-Cerebellum: [Aliyun], [Baidu], [Google Drive]
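If you want to reproduce the `.tiff` → `.png` conversion yourself, a minimal sketch with Pillow could look like the following; the directory names are placeholders, and this is not necessarily the exact script the conversion was done with.

```python
from pathlib import Path
from PIL import Image

# Placeholder directories; point these at your own ISIC 2018 copies.
src, dst = Path("isic2018_tiff"), Path("isic2018_png")
dst.mkdir(exist_ok=True)

for tif in src.glob("*.tiff"):
    img = Image.open(tif)
    img.save(dst / (tif.stem + ".png"))  # lossless PNG keeps masks intact
```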
First of all, you need a PyTorch environment. I use `pytorch 1.10`, but a lower version should also work, so you can decide for yourself. You can also create a virtual environment with the following command (note: this virtual environment is named `pytorch`; if you already have a virtual environment with that name on your system, you will need to edit `docs/enviroment.yml` manually).
```bash
conda env create -f docs/enviroment.yml
```
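After creating and activating the environment (`conda activate pytorch`), you can sanity-check the install with a couple of lines of Python:

```python
import torch

print(torch.__version__)          # e.g. 1.10.x
print(torch.cuda.is_available())  # True if a usable CUDA GPU is visible
```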
You can run the following commands directly:
```bash
sh run.sh      # use StepLR
sh run_cos.sh  # use CosineAnnealingLR
```
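The two scripts differ in the learning-rate schedule. As a rough illustration of the difference (the optimizer and the hyperparameter values below are assumptions, not the scripts' exact settings):

```python
import torch

params = [torch.nn.Parameter(torch.zeros(1))]
opt = torch.optim.Adam(params, lr=1e-4)

# run.sh style: multiply the LR by gamma every step_size epochs
sched = torch.optim.lr_scheduler.StepLR(opt, step_size=50, gamma=0.1)
# run_cos.sh style (swap in instead of StepLR):
# sched = torch.optim.lr_scheduler.CosineAnnealingLR(opt, T_max=120)

for epoch in range(120):
    # ... one training epoch (forward/backward, opt.step()) ...
    sched.step()
    print(epoch, opt.param_groups[0]["lr"])
```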
If you only want to run a single dataset, you can comment out the irrelevant parts of the `sh` file, or run a command like the following from the command line:

```bash
python Train.py --model_name CSCAUNet --epoch 121 --batchsize 16 --trainsize 352 --train_save CSCAUNet_Kvasir_1e4_bs16_e120_s352 --lr 0.0001 --train_path $dir/data/TrainDataset --test_path $dir/data/TestDataset/Kvasir/  # replace $dir with your actual data path
```
If you train with an `sh` file, testing runs automatically after training completes.
If you trained with the `python` command, you can likewise comment out the training part of the `sh` file, or run a command like the following from the command line:

```bash
python Test.py --train_save CSCAUNet_Kvasir_1e4_bs16_e120_s352 --testsize 352 --test_path $dir/data/TestDataset
```
- For evaluating the polyp datasets, you can use the `matlab` code in `eval`, or the evaluation code provided by [UACANet].
- For the other datasets, you can use the code in `evaldata`.
- We use different evaluation code per dataset so that our evaluation methodology matches that of the other papers reporting experiments on the same dataset.
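For orientation, the common core of these protocols is per-image Dice and IoU averaged over the test set. The NumPy sketch below illustrates that; the 0.5 threshold and the averaging convention are assumptions and may differ from the MATLAB/UACANet code.

```python
import numpy as np

def dice_iou(pred: np.ndarray, gt: np.ndarray, thr: float = 0.5):
    """Per-image Dice and IoU for binary masks (pred in [0, 1], gt in {0, 1})."""
    p = pred >= thr
    g = gt >= 0.5
    inter = np.logical_and(p, g).sum()
    union = np.logical_or(p, g).sum()
    dice = 2.0 * inter / (p.sum() + g.sum() + 1e-8)
    iou = inter / (union + 1e-8)
    return dice, iou

# Mean over the test set (random arrays stand in for real masks here).
scores = [dice_iou(np.random.rand(352, 352), np.random.rand(352, 352) > 0.5)
          for _ in range(3)]
print(np.mean(scores, axis=0))  # [mean Dice, mean IoU]
```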
Qualitative Results
Please cite our paper if you find the work useful:
```bibtex
@article{shu2024102800,
  title = {CSCA U-Net: A channel and space compound attention CNN for medical image segmentation},
  journal = {Artificial Intelligence in Medicine},
  volume = {150},
  pages = {102800},
  year = {2024},
  issn = {0933-3657},
  doi = {10.1016/j.artmed.2024.102800},
  url = {https://www.sciencedirect.com/science/article/pii/S0933365724000423},
  author = {Xin Shu and Jiashu Wang and Aoping Zhang and Jinlong Shi and Xiao-Jun Wu},
  keywords = {U-net, Channel and spatial compound attention, Cross-layer feature fusion, Deep supervision, Medical image segmentation}
}
```
- Many of the training strategies, datasets, and evaluation methods in this paper are based on PraNet. I admire the open-source spirit of Dr. Deng-Ping Fan and the other authors, and am very grateful for the help that `PraNet` provided for this work.
- 📫 Reach me by email: [email protected]