Published in: Journal of Digital Imaging 4/2019

Open Access 01-08-2019 | Meningioma

Eye Tracking for Deep Learning Segmentation Using Convolutional Neural Networks

Authors: J. N. Stember, H. Celik, E. Krupinski, P. D. Chang, S. Mutasa, B. J. Wood, A. Lignelli, G. Moonis, L. H. Schwartz, S. Jambawalikar, U. Bagci


Abstract

Deep learning with convolutional neural networks (CNNs) has grown rapidly across healthcare applications and has achieved high accuracy in semantic segmentation of medical (e.g., radiology and pathology) images. A key barrier to training CNNs, however, is obtaining large-scale, precisely annotated imaging data. We sought to address this lack of annotated data with eye-tracking technology. As a proof of principle, our hypothesis was that segmentation masks generated with the help of eye tracking (ET) would closely match those rendered by hand annotation (HA). We further aimed to show that a CNN trained on ET masks performs equivalently to one trained on HA masks, the latter being the current standard approach.

Step 1: Screen captures of 19 publicly available radiologic images of assorted structures across various modalities were analyzed, and ET and HA masks were generated for all regions of interest (ROIs). Step 2: Using a similar approach, ET and HA masks were generated for 356 publicly available T1-weighted postcontrast meningioma images. Three hundred six of these image + mask pairs were used to train a CNN with a U-Net-based architecture; the remaining 50 images served as the independent test set.

Step 1: ET and HA masks for the nonneurological images had an average Dice similarity coefficient (DSC) of 0.86 between each other. Step 2: Meningioma ET and HA masks had an average DSC of 0.85 between each other. After separate training with each approach, the ET-trained network performed virtually identically to the HA-trained network on the 50-image test set: the former had an area under the curve (AUC) of 0.88, the latter an AUC of 0.87. Relative to the original HA masks, the ET and HA predictions had trimmed mean DSCs of 0.73 and 0.74, respectively; these trimmed DSCs were found to be statistically equivalent (p = 0.015).
We have demonstrated that ET can create segmentation masks suitable for deep learning semantic segmentation. Future work will integrate ET to produce masks in a faster, more natural manner that distracts less from the typical radiology clinical workflow.
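The Dice similarity coefficient reported above measures the overlap between two binary masks. A minimal sketch of how such a comparison between an ET mask and an HA mask might be computed is given below; this is an illustrative implementation on hypothetical toy data, not the authors' code.

```python
# Dice similarity coefficient (DSC) between two binary segmentation masks.
# Masks are flat sequences of 0/1 values of equal length (hypothetical data).

def dice(mask_a, mask_b):
    """DSC = 2 * |A intersect B| / (|A| + |B|)."""
    if len(mask_a) != len(mask_b):
        raise ValueError("masks must have the same size")
    intersection = sum(a & b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    # Both masks empty: define as perfect agreement.
    return 2.0 * intersection / total if total else 1.0

# Toy example: a hypothetical eye-tracking mask vs. a hand-annotated mask.
et_mask = [1, 1, 1, 0, 0, 1]
ha_mask = [1, 1, 0, 0, 1, 1]
print(dice(et_mask, ha_mask))  # → 0.75
```

A DSC of 1.0 indicates identical masks and 0.0 indicates no overlap; the study's average ET-vs-HA values of 0.85–0.86 thus reflect substantial agreement.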
Metadata
Publication date
01-08-2019
Publisher
Springer International Publishing
Published in
Journal of Imaging Informatics in Medicine / Issue 4/2019
Print ISSN: 2948-2925
Electronic ISSN: 2948-2933
DOI
https://doi.org/10.1007/s10278-019-00220-4
