18-12-2023 | Gallbladder Cancer | Original Article

Deep-learning models for differentiation of xanthogranulomatous cholecystitis and gallbladder cancer on ultrasound

Authors: Pankaj Gupta, Soumen Basu, Thakur Deen Yadav, Lileswar Kaman, Santosh Irrinki, Harjeet Singh, Gaurav Prakash, Parikshaa Gupta, Ritambhra Nada, Usha Dutta, Manavjit Singh Sandhu, Chetan Arora

Published in: Indian Journal of Gastroenterology

Abstract

Background

The radiological differentiation of xanthogranulomatous cholecystitis (XGC) and gallbladder cancer (GBC) is challenging yet critical. We aimed to utilize a deep learning (DL)-based approach to differentiate XGC and GBC on ultrasound (US).

Methods

This single-center study comprised consecutive patients with XGC and GBC from a prospectively acquired database who underwent pre-operative US evaluation of gallbladder lesions. The performance of two state-of-the-art (SOTA) DL models, GBCNet (a convolutional neural network [CNN]) and RadFormer (a transformer), for XGC vs. GBC classification of US images was tested and compared with that of popular DL models and a radiologist.
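As an illustration of the general approach only (the authors' GBCNet and RadFormer implementations are not reproduced here), the following Python sketch shows how a pretrained CNN such as DenseNet-121, one of the comparison models, can be adapted for binary XGC vs. GBC classification of US images; the preprocessing and hyperparameters are assumptions for demonstration, not the study's settings.

import torch
import torch.nn as nn
from torchvision import models, transforms

# Illustrative transfer-learning baseline only; not the study's GBCNet or RadFormer.
model = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)
model.classifier = nn.Linear(model.classifier.in_features, 2)  # two classes: XGC, GBC

# Typical preprocessing for ImageNet-pretrained backbones (assumed, not from the paper).
preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # replicate single-channel US frames to 3 channels
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, labels):
    # One optimization step; images: (N, 3, 224, 224), labels: (N,) with 0 = XGC, 1 = GBC.
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()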

Results

Twenty-five patients with XGC (mean age 57 ± 12.3 years; 17 females) and 55 patients with GBC (mean age 54.6 ± 11.9 years; 38 females) were included. The performance of GBCNet and RadFormer was comparable (sensitivity 89.1% vs. 87.3%, p = 0.738; specificity 72% vs. 84%, p = 0.563; AUC 0.744 vs. 0.751, p = 0.514). The AUCs of DenseNet-121, the vision transformer (ViT) and the data-efficient image transformer (DeiT) were significantly smaller than those of GBCNet (p = 0.015, 0.046 and 0.013, respectively) and RadFormer (p = 0.012, 0.027 and 0.007, respectively). The radiologist labeled the US images of 24 (30%) patients as non-diagnostic. In the remaining patients, the radiologist's sensitivity, specificity and AUC for GBC detection were 92.7%, 35.7% and 0.642, respectively. The radiologist's specificity was significantly lower than that of GBCNet and RadFormer (p = 0.001).
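For readers unfamiliar with how these figures are derived, the short Python sketch below shows a standard way to compute sensitivity, specificity and AUC from per-patient predictions, with GBC treated as the positive class; the toy labels and scores are hypothetical and are not the study data. The pairwise p-values for AUC comparisons quoted above are typically obtained with a DeLong-type test, which is not shown here.

import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

# Hypothetical toy predictions for illustration only (1 = GBC, 0 = XGC); not study data.
y_true = np.array([1, 1, 1, 0, 0, 1, 0, 1])
y_prob = np.array([0.91, 0.78, 0.40, 0.35, 0.62, 0.74, 0.18, 0.86])
y_pred = (y_prob >= 0.5).astype(int)  # decision threshold assumed at 0.5

tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
sensitivity = tp / (tp + fn)          # proportion of GBC correctly detected
specificity = tn / (tn + fp)          # proportion of XGC correctly identified
auc = roc_auc_score(y_true, y_prob)   # area under the ROC curve
print(f"sensitivity={sensitivity:.3f} specificity={specificity:.3f} AUC={auc:.3f}")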

Conclusion

SOTA DL models perform better than radiologists in differentiating XGC and GBC on US.

Metadata
Publication date
18-12-2023
Publisher
Springer India
Published in
Indian Journal of Gastroenterology
Print ISSN: 0254-8860
Electronic ISSN: 0975-0711
DOI
https://doi.org/10.1007/s12664-023-01483-0