Published in: European Radiology 6/2021

Open Access 01-06-2021 | Artificial Intelligence | Imaging Informatics and Artificial Intelligence

To buy or not to buy—evaluating commercial AI solutions in radiology (the ECLAIR guidelines)

Authors: Patrick Omoumi, Alexis Ducarouge, Antoine Tournier, Hugh Harvey, Charles E. Kahn Jr, Fanny Louvet-de Verchère, Daniel Pinto Dos Santos, Tobias Kober, Jonas Richiardi


Abstract

Artificial intelligence (AI) has made impressive progress over the past few years, including many applications in medical imaging. Numerous commercial solutions based on AI techniques are now available for sale, forcing radiology practices to learn how to properly assess these tools. While several guidelines describing good practices for conducting and reporting AI-based research in medicine and radiology have been published, fewer efforts have focused on recommendations addressing the key questions to consider when critically assessing AI solutions before purchase. Commercial AI solutions are typically complex software products whose evaluation must take many factors into account. In this work, authors from academia and industry have joined forces to propose a practical framework that will help stakeholders evaluate commercial AI solutions in radiology (the ECLAIR guidelines) and reach an informed decision. Topics to consider in the evaluation include the relevance of the solution from the point of view of each stakeholder, issues regarding performance and validation, usability and integration, regulatory and legal aspects, and financial and support services.

Key Points

• Numerous commercial solutions based on artificial intelligence techniques are now available for sale, and radiology practices have to learn how to properly assess these tools.
• We propose a framework focusing on practical points to consider when assessing an AI solution in medical imaging, allowing all stakeholders to conduct relevant discussions with manufacturers and reach an informed decision as to whether to purchase a commercial AI solution for imaging applications.
• Topics to consider in the evaluation include the relevance of the solution from the point of view of each stakeholder, issues regarding performance and validation, usability and integration, regulatory and legal aspects, and financial and support services.
Metadata
Publisher
Springer Berlin Heidelberg
Print ISSN: 0938-7994
Electronic ISSN: 1432-1084
DOI
https://doi.org/10.1007/s00330-020-07684-x
