Open Access 01-12-2020 | Artificial Intelligence | Research article

A qualitative research framework for the design of user-centered displays of explanations for machine learning model predictions in healthcare

Authors: Amie J. Barda, Christopher M. Horvat, Harry Hochheiser

Published in: BMC Medical Informatics and Decision Making | Issue 1/2020
Abstract

Background

There is an increasing interest in clinical prediction tools that can achieve high prediction accuracy and provide explanations of the factors leading to increased risk of adverse outcomes. However, approaches to explaining complex machine learning (ML) models are rarely informed by end-user needs, and user evaluations of model interpretability are lacking in the healthcare domain. We used extended revisions of previously published theoretical frameworks to propose a framework for the design of user-centered displays of explanations. This new framework served as the basis for qualitative inquiries and design review sessions with critical care nurses and physicians that informed the design of a user-centered explanation display for an ML-based prediction tool.

Methods

We used our framework to propose explanation displays for predictions from a pediatric intensive care unit (PICU) in-hospital mortality risk model. Proposed displays were based on a model-agnostic, instance-level explanation approach based on feature influence, as determined by Shapley values. Focus group sessions solicited critical care provider feedback on the proposed displays, which were then revised accordingly.
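For readers unfamiliar with the approach, the sketch below illustrates what a model-agnostic, instance-level explanation of feature influence based on Shapley values can look like in code. It is a minimal illustration only: the classifier, feature names, and data are hypothetical stand-ins, not the authors' PICU in-hospital mortality risk model.

```python
# Minimal sketch of an instance-level, model-agnostic Shapley-value explanation.
# All model and feature choices here are illustrative placeholders.
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification

# Stand-in training data and risk model (binary outcome, e.g. in-hospital mortality).
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
feature_names = ["age_months", "heart_rate", "lactate", "gcs", "fio2", "platelets"]
model = RandomForestClassifier(random_state=0).fit(X, y)

# KernelExplainer treats the model as a black box (model-agnostic); a background
# sample of training rows approximates the data distribution.
background = X[:100]
explainer = shap.KernelExplainer(lambda d: model.predict_proba(d)[:, 1], background)

# Instance-level explanation: Shapley values for a single patient encounter.
patient = X[:1]
shap_values = explainer.shap_values(patient)

# Rank features by influence on the predicted risk, as an explanation display might.
for name, value in sorted(zip(feature_names, shap_values[0]),
                          key=lambda pair: abs(pair[1]), reverse=True):
    print(f"{name}: {value:+.3f}")
```

In a display of this kind, each signed value indicates how much a feature pushed the predicted risk above or below the baseline rate for that individual patient.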

Results

The proposed displays were perceived as useful tools in assessing model predictions. However, specific explanation goals and information needs varied by clinical role and level of predictive modeling knowledge. Providers preferred explanation displays that required less information processing effort and could support the information needs of a variety of users. Providing supporting information to assist in interpretation was seen as critical for fostering provider understanding and acceptance of the predictions and explanations. The user-centered explanation display for the PICU in-hospital mortality risk model incorporated elements from the initial displays along with enhancements suggested by providers.

Conclusions

We proposed a framework for the design of user-centered displays of explanations for ML models. We used the proposed framework to motivate the design of a user-centered display of an explanation for predictions from a PICU in-hospital mortality risk model. Positive feedback from focus group participants provides preliminary support for the use of model-agnostic, instance-level explanations of feature influence as an approach to understand ML model predictions in healthcare and advances the discussion on how to effectively communicate ML model information to healthcare providers.
Metadata
Title
A qualitative research framework for the design of user-centered displays of explanations for machine learning model predictions in healthcare
Authors
Amie J. Barda
Christopher M. Horvat
Harry Hochheiser
Publication date
01-12-2020
Publisher
BioMed Central
Published in
BMC Medical Informatics and Decision Making / Issue 1/2020
Electronic ISSN: 1472-6947
DOI
https://doi.org/10.1186/s12911-020-01276-x
