18-12-2023 | Gallbladder Cancer | Original Article

Deep-learning models for differentiation of xanthogranulomatous cholecystitis and gallbladder cancer on ultrasound

Published in: Indian Journal of Gastroenterology | Issue 4/2024

Abstract

Background

The radiological differentiation of xanthogranulomatous cholecystitis (XGC) and gallbladder cancer (GBC) is challenging yet critical. We aimed to utilize a deep learning (DL)-based approach for differentiating XGC from GBC on ultrasound (US).

Methods

This single-center study comprised consecutive patients with XGC and GBC from a prospectively acquired database who underwent pre-operative US evaluation of gallbladder lesions. The performance of state-of-the-art (SOTA) DL models (GBCNet, a convolutional neural network [CNN], and RadFormer, a transformer) for XGC vs. GBC classification on US images was tested and compared with that of popular DL models and a radiologist.

Results

Twenty-five patients with XGC (mean age, 57 ± 12.3 years; 17 females) and 55 patients with GBC (mean age, 54.6 ± 11.9 years; 38 females) were included. The performance of GBCNet and RadFormer was comparable (sensitivity 89.1% vs. 87.3%, p = 0.738; specificity 72% vs. 84%, p = 0.563; AUC 0.744 vs. 0.751, p = 0.514). The AUCs of DenseNet-121, the vision transformer (ViT) and the data-efficient image transformer (DeiT) were significantly smaller than those of GBCNet (p = 0.015, 0.046, 0.013, respectively) and RadFormer (p = 0.012, 0.027, 0.007, respectively). The radiologist labeled the US images of 24 (30%) patients as non-diagnostic. In the remaining patients, the radiologist's sensitivity, specificity and AUC for GBC detection were 92.7%, 35.7% and 0.642, respectively. The specificity of the radiologist was significantly lower than that of GBCNet and RadFormer (p = 0.001).
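The sensitivity, specificity and AUC figures reported above can be illustrated with a minimal, dependency-free sketch. The data below are toy numbers, not the study's data, and the threshold of 0.5 is an illustrative assumption; the AUC is computed via the Mann-Whitney formulation (the probability that a randomly chosen positive case scores higher than a randomly chosen negative case).

```python
# Sketch: sensitivity, specificity and AUC for a binary XGC-vs-GBC
# classifier. Label convention: 1 = GBC (positive), 0 = XGC (negative).

def sensitivity_specificity(y_true, y_pred):
    """Return (sensitivity, specificity) from binary labels and predictions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

def auc(y_true, scores):
    """AUC via the Mann-Whitney U statistic: fraction of positive-negative
    pairs where the positive case receives the higher score (ties count 0.5)."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example with hypothetical model scores (illustrative only).
y_true = [1, 1, 1, 0, 0, 1, 0, 1]
scores = [0.9, 0.8, 0.4, 0.3, 0.6, 0.7, 0.2, 0.95]
y_pred = [1 if s >= 0.5 else 0 for s in scores]  # assumed 0.5 cut-off

sens, spec = sensitivity_specificity(y_true, y_pred)
print(round(sens, 3), round(spec, 3), round(auc(y_true, scores), 3))
# -> 0.8 0.667 0.933
```

Note that AUC is threshold-free while sensitivity and specificity depend on the chosen operating point, which is why a model pair can have similar AUCs yet different sensitivity/specificity trade-offs, as seen for GBCNet and RadFormer above.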

Conclusion

SOTA DL models perform better than radiologists in differentiating XGC and GBC on US.

Literature
11. He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition; 2016. pp. 770–8.
12. Huang G, Liu Z, Van Der Maaten L, Weinberger KQ. Densely connected convolutional networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition; 2017. pp. 4700–8.
13. Dosovitskiy A, Beyer L, Kolesnikov A, et al. An image is worth 16x16 words: transformers for image recognition at scale. arXiv preprint arXiv:2010.11929; 2020.
14. Touvron H, Cord M, Douze M, Massa F, Sablayrolles A, Jégou H. Training data-efficient image transformers & distillation through attention. In: International conference on machine learning; 2021. pp. 10347–57. PMLR.
15. Liu Z, Lin Y, Cao Y, et al. Swin transformer: hierarchical vision transformer using shifted windows. In: Proceedings of the IEEE/CVF international conference on computer vision; 2021. pp. 10012–22.
20. Basu S, Singla S, Gupta M, Rana P, Gupta P, Arora C. Unsupervised contrastive learning of image representations from ultrasound videos with hard negative mining. In: International Conference on Medical Image Computing and Computer-Assisted Intervention; 2022. pp. 423–33. Cham: Springer Nature Switzerland.
Metadata
Title
Deep-learning models for differentiation of xanthogranulomatous cholecystitis and gallbladder cancer on ultrasound
Publication date
18-12-2023
Published in
Indian Journal of Gastroenterology / Issue 4/2024
Print ISSN: 0254-8860
Electronic ISSN: 0975-0711
DOI
https://doi.org/10.1007/s12664-023-01483-0
