
01-03-2019 | Original Article

A dataset of laryngeal endoscopic images with comparative study on convolution neural network-based semantic segmentation

Authors: Max-Heinrich Laves, Jens Bicker, Lüder A. Kahrs, Tobias Ortmaier

Published in: International Journal of Computer Assisted Radiology and Surgery | Issue 3/2019


Abstract

Purpose

Automated segmentation of anatomical structures in medical image analysis is a prerequisite for autonomous diagnosis as well as various computer- and robot-aided interventions. Recent methods based on deep convolutional neural networks (CNNs) have outperformed former heuristic methods. However, those methods have primarily been evaluated on rigid, real-world scenes rather than deformable soft tissue. In this study, existing segmentation methods were evaluated for their use on a new dataset of transoral endoscopic images of the human larynx.

Methods

Four machine learning-based methods (SegNet, UNet, ENet, and ErfNet) were trained with supervision on a novel 7-class dataset of the human larynx. The dataset contains 536 manually segmented images from two patients, acquired during laser incisions. The Intersection-over-Union (IoU) metric was used to measure the segmentation accuracy of each method. Data augmentation and network ensembling were employed to increase accuracy. Stochastic inference was used to estimate the uncertainties of the individual models. Patient-to-patient transfer was investigated using patient-specific fine-tuning.
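As a concrete illustration of the IoU metric used here, the sketch below computes per-class IoU from integer label maps in PyTorch. The class count of 7 matches the dataset described above; the function name, the handling of absent classes, and the optional ignore index are illustrative assumptions, not the authors' evaluation code.

```python
import torch

def per_class_iou(pred, target, num_classes=7, ignore_index=None):
    """Per-class Intersection-over-Union for integer label maps.

    pred, target: tensors of shape (H, W) or (N, H, W) with class indices.
    Classes absent from both prediction and ground truth stay NaN so they
    can be excluded when averaging to a mean IoU.
    """
    ious = torch.full((num_classes,), float('nan'))
    for c in range(num_classes):
        if ignore_index is not None and c == ignore_index:
            continue
        pred_c = pred == c
        target_c = target == c
        union = (pred_c | target_c).sum()
        if union > 0:
            intersection = (pred_c & target_c).sum()
            ious[c] = intersection.float() / union.float()
    return ious

# Mean IoU over the classes that actually occur:
# miou = torch.nanmean(per_class_iou(prediction, ground_truth))
```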

Results

In this study, a weighted average ensemble network of UNet and ErfNet was best suited for the segmentation of laryngeal soft tissue with a mean IoU of 84.7%. The highest efficiency was achieved by ENet with a mean inference time of 9.22 ms per image. It is shown that 10 additional images from a new patient are sufficient for patient-specific fine-tuning.
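The weighted average ensemble reported above can be realized by averaging the per-pixel class probabilities (softmax outputs) of the member networks before taking the argmax. The following is a minimal PyTorch sketch under that assumption; the member weights shown in the usage comment are placeholders, not the weights determined in the study.

```python
import torch
import torch.nn.functional as F

def ensemble_predict(models, weights, images):
    """Weighted average of per-pixel class probabilities from several models.

    models:  segmentation networks returning raw logits of shape (N, C, H, W)
    weights: non-negative floats, e.g. chosen from validation IoU
    images:  input batch of shape (N, 3, H, W)
    Returns the per-pixel label map of shape (N, H, W).
    """
    assert len(models) == len(weights)
    total = float(sum(weights))
    probs = None
    with torch.no_grad():
        for model, w in zip(models, weights):
            model.eval()
            p = F.softmax(model(images), dim=1) * (w / total)
            probs = p if probs is None else probs + p
    return probs.argmax(dim=1)

# e.g. labels = ensemble_predict([unet, erfnet], [0.6, 0.4], batch)
```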

Conclusion

CNN-based methods for semantic segmentation are applicable to endoscopic images of laryngeal soft tissue. The segmentation can be used for active constraints or to monitor morphological changes and autonomously detect pathologies. Further improvements could be achieved by using a larger dataset or training the models in a self-supervised manner on additional unlabeled data.
Metadata
Title
A dataset of laryngeal endoscopic images with comparative study on convolution neural network-based semantic segmentation
Authors
Max-Heinrich Laves
Jens Bicker
Lüder A. Kahrs
Tobias Ortmaier
Publication date
01-03-2019
Publisher
Springer International Publishing
Published in
International Journal of Computer Assisted Radiology and Surgery / Issue 3/2019
Print ISSN: 1861-6410
Electronic ISSN: 1861-6429
DOI
https://doi.org/10.1007/s11548-018-01910-0
