07-12-2024 | Artificial Intelligence | Scientific Contribution

Clouds on the horizon: clinical decision support systems, the control problem, and physician-patient dialogue

Author: Mahmut Alpertunga Kara

Published in: Medicine, Health Care and Philosophy | Issue 1/2025


Abstract

Artificial intelligence-based clinical decision support systems have the potential to improve clinical practice, but they may have a negative impact on the physician-patient dialogue because of the control problem. Physician-patient dialogue depends on human qualities such as compassion, trust, and empathy, which are shared by both parties. These qualities are necessary for the parties to reach a shared understanding, a merging of horizons, about clinical decisions. The patient attends the clinical encounter not only with a malfunctioning body, but also with an ‘unhomelike’ experience of illness that is related to a world of values and meanings, a life-world. Making wise individual decisions in accordance with the patient’s life-world requires not only scientific analysis of causal relationships, but also listening with empathy to the patient’s concerns. For a decision to be made, clinical information must be interpreted in the light of the patient’s life-world. This side of clinical practice is not a job for computers, and they cannot be final decision-makers. In the control problem, by contrast, users blindly accept system output out of over-reliance rather than evaluating it with their own judgement. Over-reliant parties thereby cede their place in the dialogue to the system; the dialogue may be disrupted and mutual trust may be lost. Therefore, it is necessary to design decision support systems to avoid the control problem, and to limit their use when this is not possible, in order to protect the physician-patient dialogue.
Footnotes
1
I am grateful to an anonymous reviewer for suggesting further elaboration and pointing out useful resources on this topic.
 
2
Pursuing the same idea, Mecacci and Santoni de Sio argue that the MHC approach requires that the functioning of a multi-element system be responsive to the user’s plans, the designers’ intentions, and the norms of society. Thus, even with less human intervention and more autonomy for the AI system, effective human control can be achieved through technical and institutional infrastructure. For this to happen, the technical system must be designed to respond to the relevant plans and intentions (Mecacci and Santoni de Sio 2020). It is fair to suggest that, insofar as the system can be designed to be more responsive to human reasons, more automation would help to better fulfil the tracking condition. Nevertheless, in the absence of explicit instructions for every conceivable concrete scenario, it may be impossible to be sure that the system will respond in a manner similar to that of a human being. Sometimes rules simply do not apply, or conflict with one another, yet the agent must decide anyway, relying on judgement and shouldering the responsibility for a wrong decision. As the social part of the socio-technical environment becomes more complex, the likelihood of encountering such situations may increase; this may mean that the space of autonomy for the technical elements of the system needs to be reduced. Even outside the complex context of medicine, it is possible to imagine situations where more automation does not mean more human-like reason-responsiveness. If the design produces counter-intuitive results, it is questionable in what sense it is meaningful, even if effective control has been achieved. For example, if a car has a system that prevents it from exceeding the speed limit, it could be said that the tracking condition is achieved according to the goals of policymakers and legislators, even though the system restricts the driver. However, if you have a passenger in your car who you fear will die unless you exceed the speed limit, and you judge the road to be clear enough that exceeding the limit will not endanger the lives of others, you might decide to take responsibility for exceeding it. The tracing condition might have been satisfied by placing the responsibility for the speed limit system on human legislators, but you cannot expect legislators to write the rule around the possibility that your passenger’s aneurysm might burst at the very moment your car is the only vehicle on the motorway. Of course, this would be a rare exception within an acceptable margin of error compared to the lives the restrictive rule would save, but the gap between general rules and specific cases should not be underestimated. The attempt to satisfy the tracking condition through the non-human elements of the system can be a serious design challenge and, if unsuccessful, can create a judgement gap in the interpretation of general rules for specific cases.
 
3
I would like to thank an anonymous reviewer for suggesting elaboration on this part.
 
4
I would like to thank an anonymous reviewer for suggesting this point.
 
5
I am grateful to an anonymous reviewer for suggesting further elaboration of this part.
 
Metadata
Title
Clouds on the horizon: clinical decision support systems, the control problem, and physician-patient dialogue
Author
Mahmut Alpertunga Kara
Publication date
07-12-2024
Publisher
Springer Netherlands
Published in
Medicine, Health Care and Philosophy / Issue 1/2025
Print ISSN: 1386-7423
Electronic ISSN: 1572-8633
DOI
https://doi.org/10.1007/s11019-024-10241-8