The proposed method is implemented using PyTorch. All experiments were conducted on a tower workstation equipped with an Intel Core i5-11400KF and an NVIDIA GeForce RTX 3070.
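A minimal sketch of the assumed runtime setup (PyTorch with a CUDA-capable GPU); no specific package versions are stated here, so none are pinned below.

```python
import torch

# Prefer the CUDA device (e.g. the RTX 3070 used in the experiments),
# falling back to CPU if no GPU is available.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f"PyTorch {torch.__version__}, running on {device}")
if device.type == "cuda":
    print(f"GPU: {torch.cuda.get_device_name(0)}")
```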
- Table: Evaluation metrics on the HAM10000 (Augment), compared with other methods.
2 Evaluation metrics on the HAM10000.
1). Evaluation metrics and LKC
Table: Evaluation metrics for LKC (large-kernel convolution) with different kernel sizes.
The N in the label “kernel-N” indicates the size of the convolution kernel.
For instance, kernel-21 means using an LKC with a 21×21 convolution kernel.
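To illustrate the kernel-N notation, the sketch below builds a 21×21 large-kernel convolution in PyTorch. The depthwise grouping, channel count, and padding are assumptions made for the example, not the exact configuration used in the paper.

```python
import torch
import torch.nn as nn

class LKC(nn.Module):
    """Illustrative large-kernel convolution block.

    kernel_size=21 corresponds to the "kernel-21" label: a 21x21 kernel.
    """
    def __init__(self, channels: int, kernel_size: int = 21):
        super().__init__()
        # Depthwise convolution keeps the parameter count manageable for
        # large kernels; "same"-style padding preserves spatial resolution.
        self.conv = nn.Conv2d(
            channels, channels,
            kernel_size=kernel_size,
            padding=kernel_size // 2,
            groups=channels,
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.conv(x)

x = torch.randn(1, 32, 64, 64)
print(LKC(32, kernel_size=21)(x).shape)  # torch.Size([1, 32, 64, 64])
```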
2). Attention.
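The attention block used in this work is CBAM (see the citation below). The following is a generic CBAM-style module given only as a sketch; the channel count and reduction ratio are assumptions, not the exact values used in the paper.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Shared MLP applied to both the average- and max-pooled descriptors.
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        w = torch.sigmoid(avg + mx).view(b, c, 1, 1)
        return x * w

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        # 7x7 conv over the concatenated channel-wise mean and max maps.
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)
        mx = x.amax(dim=1, keepdim=True)
        w = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * w

class CBAM(nn.Module):
    """Channel attention followed by spatial attention (CBAM ordering)."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.ca = ChannelAttention(channels, reduction)
        self.sa = SpatialAttention()

    def forward(self, x):
        return self.sa(self.ca(x))

x = torch.randn(2, 64, 28, 28)
print(CBAM(64)(x).shape)  # torch.Size([2, 64, 28, 28])
```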
3 Generalization Performance
Dataset: https://www.kaggle.com/datasets/tawsifurrahman/covid19-radiography-database
The COVID-19 Radiography Database consists of 21,165 images: COVID (3,616), Normal (10,192), Lung Opacity (6,012), and Viral Pneumonia (1,345).
Table: Evaluation metrics and class distribution of the COVID-19 Radiography Dataset.
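For reference, one straightforward way to load this dataset is torchvision's ImageFolder, assuming the images have been arranged into one sub-folder per class (COVID, Lung Opacity, Normal, Viral Pneumonia). The root path, input size, and batch size below are placeholders, not values taken from the experiments.

```python
from torchvision import datasets, transforms
from torch.utils.data import DataLoader

# Placeholder path; assumes one sub-folder per class containing the images.
DATA_ROOT = "COVID-19_Radiography_Dataset"

transform = transforms.Compose([
    transforms.Resize((224, 224)),   # illustrative input size
    transforms.ToTensor(),
])

dataset = datasets.ImageFolder(DATA_ROOT, transform=transform)
print(dataset.classes, len(dataset))  # 4 classes, 21,165 images in total

loader = DataLoader(dataset, batch_size=32, shuffle=True)
```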
Source Data: http://dx.doi.org/10.5281/zenodo.1214456
J. N. Kather, J. Krisam, et al., “Predicting survival from colorectal cancer histology slides using deep learning: A retrospective multicenter study,” PLOS Medicine, vol. 16, no. 1, pp. 1–22, 2019.
This is a slightly different version of the "NCT-CRC-HE-100K" image set: it contains 100,000 images in 9 tissue classes at 0.5 MPP and was created from the same raw data as "NCT-CRC-HE-100K".
However, no color normalization was applied to these images, so staining intensity and color vary slightly between images. Note that although this image set was created from the same data as "NCT-CRC-HE-100K", the image regions are not completely identical, because the selection of non-overlapping tiles from the raw images was a stochastic process.
Table: Evaluation metrics on NCT-CRC-HE-100K-NONORM.
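Because NCT-CRC-HE-100K-NONORM is not color-normalized, staining varies between images. One common way to make training robust to this, shown purely as an illustrative option and not as the preprocessing used in the paper, is mild color jitter during augmentation; the jitter strengths below are assumptions.

```python
from torchvision import transforms

# Illustrative training transform for stain/color variability.
train_transform = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomVerticalFlip(),
    transforms.ColorJitter(brightness=0.1, contrast=0.1,
                           saturation=0.1, hue=0.02),
    transforms.ToTensor(),
])
```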
The distribution of the seven disease types before and after data augmentation. In each cluster of same-colored bars, the left bar shows the sample distribution after data augmentation, and the right bar shows the initial distribution of the dataset.
Examples of skin lesions in the HAM10000 dataset.
Among them, BKL, DF, NV, and VASC are benign tumors, whereas AKIEC, BCC, and MEL are malignant tumors.
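The exact augmentation pipeline behind the distribution figure is not listed here; the sketch below shows a typical offline pass (flips and small rotations) that oversamples minority classes such as DF and VASC. The function name, paths, and target counts are hypothetical placeholders.

```python
import random
from PIL import Image
from torchvision import transforms

# Illustrative geometric augmentation; the exact operations used for the
# HAM10000 (Augment) split are assumptions for this sketch.
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomVerticalFlip(),
    transforms.RandomRotation(degrees=15),
])

def oversample(paths, target_count, out_dir):
    """Write augmented copies until a minority class reaches target_count."""
    for i in range(target_count - len(paths)):
        img = Image.open(random.choice(paths)).convert("RGB")
        augment(img).save(f"{out_dir}/aug_{i}.jpg")
```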
Available:
https://challenge.isic-archive.com/data/#2018
https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/DBW86T
https://aistudio.baidu.com/aistudio/datasetdetail/218024 (ours)
Citation:
P. Tschandl, C. Rosendahl, and H. Kittler, “The HAM10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions,” Scientific Data, vol. 5, no. 1, pp. 1–9, 2018.
The dataset is released under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/ .
a.
@article{WangIMCC,
author={Wang, Sutong and Yin, Yunqiang and Wang, Dujuan and Wang, Yanzhang and Jin, Yaochu},
journal={IEEE Transactions on Cybernetics},
title={Interpretability-Based Multimodal Convolutional Neural Networks for Skin Lesion Diagnosis},
year={2022},
volume={52},
number={12},
pages={12623-12637},
doi={10.1109/TCYB.2021.3069920}
}
b.
@article{xia2017exploring,
title={Exploring Web images to enhance skin disease analysis under a computer vision framework},
author={Xia, Yingjie and Zhang, Luming and Meng, Lei and Yan, Yan and Nie, Liqiang and Li, Xuelong},
journal={IEEE Transactions on Cybernetics},
volume={48},
number={11},
pages={3080--3091},
year={2017},
publisher={IEEE}
}
If you use our method in your research or application, please consider citing:
@ARTICLE{LanCapsNets,
author={Lan, Zhangli and Cai, Songbai and Zhu, Jiqiang and Xu, Yuantong},
journal={XXX on XXX},
title={A Novel Skin Cancer Assisted Diagnosis Method based on Capsule Networks with CBAM},
year={},
volume={},
number={},
pages={},
doi={10.36227/techrxiv.23291003},
}
@ARTICLE{9791221,
author={Lan, Zhangli and Cai, Songbai and He, Xu and Wen, Xinpeng},
journal={IEEE Access},
title={FixCaps: An Improved Capsules Network for Diagnosis of Skin Cancer},
year={2022},
volume={10},
number={},
pages={76261-76267},
doi={10.1109/ACCESS.2022.3181225}
}