
01-05-2021 | Artificial Intelligence | Systems-Level Quality Improvement

An Explainable Artificial Intelligence Framework for the Deterioration Risk Prediction of Hepatitis Patients

Authors: Junfeng Peng, Kaiqiang Zou, Mi Zhou, Yi Teng, Xiongyong Zhu, Feifei Zhang, Jun Xu

Published in: Journal of Medical Systems | Issue 5/2021


Abstract

In recent years, artificial intelligence-based computer-aided diagnosis (CAD) systems for hepatitis have made great progress. In particular, complex models such as deep learning achieve better performance than simple ones because they capture the nonlinear structure of real-world clinical data. However, a complex model acts as a black box that does not reveal why it makes a certain decision, which causes clinicians to distrust it. To address these issues, an explainable artificial intelligence (XAI) framework is proposed in this paper to provide global and local interpretations of the auxiliary diagnosis of hepatitis while retaining good prediction performance. First, a public hepatitis classification benchmark from UCI is used to test the feasibility of the framework. Then, both transparent and black-box machine learning models are employed to forecast hepatitis deterioration. The transparent models include logistic regression (LR), decision tree (DT), and k-nearest neighbor (KNN), while the black-box models include eXtreme Gradient Boosting (XGBoost), support vector machine (SVM), and random forest (RF). Finally, SHapley Additive exPlanations (SHAP), Local Interpretable Model-agnostic Explanations (LIME), and Partial Dependence Plots (PDP) are used to improve the interpretability of the liver-disease models. The experimental results show that the complex models outperform the simple ones, with the RF achieving the highest accuracy (91.9%) among all the models. The proposed framework, which combines global and local interpretation methods, improves the transparency of complex models and gives insight into their judgments, thereby guiding treatment strategy and improving the prognosis of hepatitis patients. In addition, the framework can assist clinical data scientists in designing a more appropriate CAD structure.
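
The sketch below is a minimal, illustrative Python example (not the authors' code) of the workflow the abstract describes: training a random-forest classifier on the public UCI Hepatitis benchmark and interpreting it globally with SHAP and PDP and locally with LIME. The file path, column names, class encoding, preprocessing, and hyperparameters are assumptions made for illustration only.

```python
# Hedged sketch of the abstract's workflow: RF on the UCI Hepatitis data,
# global interpretation via SHAP summary and partial dependence, local via LIME.
import pandas as pd
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.inspection import PartialDependenceDisplay
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# UCI Hepatitis data: 19 attributes plus a live/die class label; '?' marks missing values.
columns = ["class", "age", "sex", "steroid", "antivirals", "fatigue", "malaise",
           "anorexia", "liver_big", "liver_firm", "spleen_palpable", "spiders",
           "ascites", "varices", "bilirubin", "alk_phosphate", "sgot", "albumin",
           "protime", "histology"]
df = pd.read_csv("hepatitis.data", names=columns, na_values="?")  # assumed local copy

X = df.drop(columns="class")
y = (df["class"] == 1).astype(int)  # assumption: class 1 = "die" in the original encoding
X = pd.DataFrame(SimpleImputer(strategy="median").fit_transform(X), columns=X.columns)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, rf.predict(X_test)))

# Global interpretation: SHAP feature contributions across the test set.
explainer = shap.TreeExplainer(rf)
shap_values = explainer.shap_values(X_test)
# Depending on the SHAP version, classifier output is a per-class list or a 3-D array;
# keep the contributions toward the positive ("die") class either way.
sv = shap_values[1] if isinstance(shap_values, list) else shap_values[..., 1]
shap.summary_plot(sv, X_test)

# Global interpretation: partial dependence of the predicted risk on selected labs.
PartialDependenceDisplay.from_estimator(rf, X_test, ["bilirubin", "albumin"])

# Local interpretation: LIME explanation for a single test patient.
lime_exp = LimeTabularExplainer(X_train.values, feature_names=list(X.columns),
                                class_names=["live", "die"], mode="classification")
explanation = lime_exp.explain_instance(X_test.iloc[0].values, rf.predict_proba)
print(explanation.as_list())
```

TreeExplainer is used here because it is tailored to tree ensembles such as RF and XGBoost; LIME complements the global SHAP and PDP views by explaining one patient's prediction at a time, which mirrors the global-plus-local interpretation combination described in the abstract.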
Metadata
Title
An Explainable Artificial Intelligence Framework for the Deterioration Risk Prediction of Hepatitis Patients
Authors
Junfeng Peng
Kaiqiang Zou
Mi Zhou
Yi Teng
Xiongyong Zhu
Feifei Zhang
Jun Xu
Publication date
01-05-2021
Publisher
Springer US
Published in
Journal of Medical Systems / Issue 5/2021
Print ISSN: 0148-5598
Electronic ISSN: 1573-689X
DOI
https://doi.org/10.1007/s10916-021-01736-5
