13-11-2024 | Artificial Intelligence | Scientific Contribution

Why we should talk about institutional (dis)trustworthiness and medical machine learning

Authors: Michiel De Proost, Giorgia Pozzi

Published in: Medicine, Health Care and Philosophy | Issue 1/2025


Abstract

Trust has been placed at the centre of debates as the attitude with which to engage with clinical machine learning systems. However, the notions of trust and distrust remain fiercely contested in the philosophical and ethical literature. In this article, we proceed at a structural level and ex negativo: we analyse the concept of “institutional distrustworthiness” in order to diagnose how we should not engage with medical machine learning. First, we present several examples that point to the emergence of a climate of distrust in the context of medical machine learning. Second, we introduce the concept of institutional trustworthiness, building on an expansion of Hawley’s commitment account. Third, we argue that institutional opacity can undermine the trustworthiness of medical institutions and lead to new forms of testimonial injustice. Finally, we consider possible building blocks for repairing institutional distrustworthiness.
Footnotes
1
Of course, our analysis of institutional distrustworthiness cannot be decoupled from understanding how (dis)trust mechanisms arise since these two concepts are necessarily intertwined.
 
2
The philosophical literature on trust is vast; McLeod (2015) provides an overview. Durán and Pozzi (under review) offer a review of the literature on trustworthy AI, organised around the analytic distinction, widely adopted in the standard philosophical literature on trust, between mere reliance and some “extra factor”.
 
3
It is worth noting that Hawley’s account differs from motive-based accounts of trust in that the trustee’s motivation to uphold the trust relationship is not based on goodwill (Jones 1996) or on a desire to maintain or strengthen their relationship with the trustor (Hardin 2002). On Hawley’s account, the trustee’s motivation to fulfil the trust relation comes from the commitment itself (Hawley 2014).
 
Literature
Baker, R. 2013. Before bioethics: A history of American medical ethics from the colonial period to the bioethics revolution. Oxford: Oxford University Press.
Benjamin, R. 2019. Assessing risk, automating racism. Science 366(6464): 421–422.
Bjerring, J. C., and J. Busch. 2021. Artificial intelligence and patient-centered decision-making. Philosophy & Technology 34: 349–371.
Braun, M., H. Bleher, and P. Hummel. 2021. A leap of faith: Is there a formula for trustworthy AI? Hastings Center Report 51(3): 17–22.
Carel, H., and I. J. Kidd. 2021. Institutional opacity, epistemic vulnerability, and institutional testimonial justice. International Journal of Philosophical Studies 29(4): 473–496.
Cirillo, D., S. Catuara-Solarz, C. Morey, E. Guney, L. Subirats, S. Mellino, and N. Mavridis. 2020. Sex and gender differences and biases in artificial intelligence for biomedicine and healthcare. NPJ Digital Medicine 3(1): 1–11.
Coeckelbergh, M. 2020. Artificial intelligence, responsibility attribution, and a relational justification of explainability. Science and Engineering Ethics 26: 2051–2068.
Curry, T. J. 2020. Conditioned for death: Analysing black mortalities from Covid-19 and police killings in the United States as a syndemic interaction. Comparative American Studies: An International Journal 17(3–4): 257–270.
Davidson, L. J., and M. Satta. 2021. Justified social distrust. In Social trust, eds. Kevin Vallier and Michael Weber, 122–148. New York: Routledge.
Demir-Doğuoğlu, H., and C. McLeod. 2023. Toward a feminist theory of distrust. In The moral psychology of trust, eds. David Collins, Iris Vidmar Jovanović, and Mark Alfano, 125–143. London: Lexington Books.
Durán, J. M., and K. R. Jongsma. 2021. Who is afraid of black box algorithms? On the epistemological and ethical basis of trust in medical AI. Journal of Medical Ethics 47(5): 329–335.
Durán, J. M., and G. Pozzi. (under review). What is trustworthy AI?
Freiman, O. 2023. Making sense of the conceptual nonsense ‘trustworthy AI’. AI and Ethics 3(4): 1351–1360.
Fricker, M. 2023. Diagnosing institutionalized ‘distrustworthiness’. The Philosophical Quarterly 73(3): 722–742.
Graham, S. S. 2022. The doctor and the algorithm: Promise, peril, and the future of health AI. Oxford: Oxford University Press.
Hardin, R. 2002. Trust and trustworthiness. New York: Russell Sage Foundation.
Hatherley, J. J. 2020. Limits of trust in medical AI. Journal of Medical Ethics 46(7): 478–481.
Hawley, K. J. 2017. Trust, distrust and epistemic injustice. In The Routledge handbook of epistemic injustice, eds. Ian James Kidd, José Medina, and Gaile Pohlhaus Jr, 69–78. New York: Routledge.
Ho, A. 2008. The individualist model of autonomy and the challenge of disability. Journal of Bioethical Inquiry 5: 193–207.
Holland, S., J. Cawthra, T. Schloemer, and P. Schröder-Bäck. 2022. Trust and the acquisition and use of public health information. Health Care Analysis 30: 1–17.
Hull, G. 2023. Dirty data labeled dirt cheap: Epistemic injustice in machine learning systems. Ethics and Information Technology 25(3): 38.
Krishnamurthy, M. 2015. (White) tyranny and the democratic value of distrust. The Monist 98(4): 391–406.
Ledford, H. 2019. Millions affected by racial bias in health-care algorithm. Nature 574(31): 2.
Medina, J. 2013. The epistemology of resistance: Gender and racial oppression, epistemic injustice, and the social imagination. Oxford: Oxford University Press.
Medina, J. 2020. Trust and epistemic injustice. In The Routledge handbook of trust and philosophy, ed. Judith Simon, 52–63. New York: Routledge.
Newman, A. M. 2022. Moving beyond mistrust: Centering institutional change by decentering the white analytical lens. Bioethics 36(3): 267–273.
Nickel, P. J. 2022. Trust in medical artificial intelligence: A discretionary account. Ethics and Information Technology 24(1): 7.
Obermeyer, Z., B. Powers, C. Vogeli, and S. Mullainathan. 2019. Dissecting racial bias in an algorithm used to manage the health of populations. Science 366(6464): 447–453.
Pellegrino, E. D., and D. C. Thomasma. 1993. The virtues in medical practice. New York: Oxford University Press.
Pozzi, G. 2023. Testimonial injustice in medical machine learning. Journal of Medical Ethics 49(8): 536–540.
Robertson, C., A. Woods, K. Bergstrand, J. Findley, C. Balser, and M. J. Slepian. 2023. Diverse patients’ attitudes towards artificial intelligence (AI) in diagnosis. PLOS Digital Health 2(5): e0000237.
Segers, S., and H. Mertes. 2022. The curious case of trust in the light of changing doctor–patient relationships. Bioethics 36(8): 849–857.
Sherlock, R. 1986. Reasonable men and sick human beings. The American Journal of Medicine 80(1): 2–4.
Smith, H. 2021. Clinical AI: Opacity, accountability, responsibility and liability. AI & Society 36(2): 535–545.
Specker Sullivan, L. 2023. Climates of distrust in medicine. Hastings Center Report 53: S33–S38.
Walker, M. U. 2006. Moral repair: Reconstructing moral relations after wrongdoing. New York: Cambridge University Press.
Wilson, Y. 2022. Is trust enough? Anti-black racism and the perception of black vaccine hesitancy. Hastings Center Report 52: S12–S17.
Metadata
Publisher: Springer Netherlands
Print ISSN: 1386-7423
Electronic ISSN: 1572-8633
DOI: https://doi.org/10.1007/s11019-024-10235-6