Published in: Journal of Digital Imaging 4/2019

Open Access 01-08-2019

RIL-Contour: a Medical Imaging Dataset Annotation Tool for and with Deep Learning

Authors: Kenneth A. Philbrick, Alexander D. Weston, Zeynettin Akkus, Timothy L. Kline, Panagiotis Korfiatis, Tomas Sakinis, Petro Kostandy, Arunnit Boonrod, Atefeh Zeinoddini, Naoki Takahashi, Bradley J. Erickson


Abstract

Deep-learning algorithms typically fall within the domain of supervised artificial intelligence and are designed to “learn” from annotated data. Deep-learning models require large, diverse training datasets for optimal model convergence. The effort to curate these datasets is widely regarded as a barrier to the development of deep-learning systems. We developed RIL-Contour to accelerate medical image annotation for and with deep learning. A major goal driving the development of the software was to create an environment that enables clinically oriented users to utilize deep-learning models to rapidly annotate medical imaging. RIL-Contour supports fully automated deep-learning methods, semi-automated methods, and manual methods to annotate medical imaging with voxel and/or text annotations. To reduce annotation error, RIL-Contour promotes the standardization of image annotations across a dataset. RIL-Contour accelerates medical imaging annotation through the process of annotation by iterative deep learning (AID). The underlying concept of AID is to iteratively annotate, train, and utilize deep-learning models during the process of dataset annotation and model development. To enable this, RIL-Contour supports workflows in which multiple image analysts annotate medical images, radiologists approve the annotations, and data scientists utilize these annotations to train deep-learning models. To automate the feedback loop between data scientists and image analysts, RIL-Contour provides mechanisms that enable data scientists to push newly trained deep-learning models to other users of the software. RIL-Contour and the AID methodology accelerate dataset annotation and model development by facilitating rapid collaboration between analysts, radiologists, and engineers.
Metadata
Title
RIL-Contour: a Medical Imaging Dataset Annotation Tool for and with Deep Learning
Authors
Kenneth A. Philbrick
Alexander D. Weston
Zeynettin Akkus
Timothy L. Kline
Panagiotis Korfiatis
Tomas Sakinis
Petro Kostandy
Arunnit Boonrod
Atefeh Zeinoddini
Naoki Takahashi
Bradley J. Erickson
Publication date
01-08-2019
Publisher
Springer International Publishing
Published in
Journal of Imaging Informatics in Medicine / Issue 4/2019
Print ISSN: 2948-2925
Electronic ISSN: 2948-2933
DOI
https://doi.org/10.1007/s10278-019-00232-0
