Would it be a gift or a curse to know when you will die? It is a real question being asked at Stanford Hospital, where a deep-learning algorithm in a pilot run estimates the risk that a patient will die within the next twelve months. Developed by the Stanford ML Group at Stanford University, the algorithm uses Electronic Health Record (EHR) data to predict a patient’s risk of death within a 3-12 month window and to identify the need for palliative care [1].
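For readers curious about the mechanics, the sketch below shows the general shape of such a system: a classifier trained on per-patient counts of EHR codes that outputs a probability of death within the prediction window. Everything here is hypothetical and synthetic, a minimal stand-in rather than the Stanford model itself, which is a much larger deep network trained on real patient records [1].

```python
# Hypothetical sketch of an EHR-based mortality-risk classifier.
# Feature columns, data, and network size are invented for illustration.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Each row is one patient: counts of diagnosis/procedure/medication
# codes observed in the past year (here, 500 synthetic code columns).
n_patients, n_codes = 5000, 500
X = rng.poisson(0.3, size=(n_patients, n_codes)).astype(float)

# Synthetic label: death within the prediction window, loosely tied
# to a handful of the synthetic codes plus noise.
risk = X[:, :10].sum(axis=1)
y = (risk + rng.normal(0, 1, n_patients) > 4).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# A small feed-forward network standing in for the deep model.
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300, random_state=0)
clf.fit(X_train, y_train)

# The model outputs a risk score per patient, not a date of death.
proba = clf.predict_proba(X_test)[:, 1]
print("AUROC:", round(roc_auc_score(y_test, proba), 3))

# Only patients whose score crosses a cutoff are flagged for review.
flagged = proba > 0.5
print("Share of patients flagged:", round(flagged.mean(), 3))
```

The key design choice is the operating threshold: the system does not announce a date of death, it surfaces patients whose estimated risk crosses a cutoff so that a clinician can consider a palliative care referral.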
This advance care planning (ACP) model began its trial run in July 2020 and has been run on the data of more than 11,000 patients [2]. Of those patients, the algorithm flagged roughly 20% as being at high risk of death, and about 60% of the flagged patients died within 12 months [2]. The model evaluates a patient’s medical record in depth, including data on age, gender, race, disease classifications, medical history, and doctors’ notes.
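Those two figures, taken together, describe the model’s operating point. A back-of-the-envelope calculation using only the numbers reported in [2]:

```python
# Rough arithmetic on the pilot figures reported in [2].
n_patients = 11_000         # patients the model has been run on
flag_rate = 0.20            # share flagged as at risk of death
death_rate_flagged = 0.60   # share of flagged patients who died within 12 months

n_flagged = n_patients * flag_rate                        # ~2,200 flagged
n_deaths_among_flagged = n_flagged * death_rate_flagged   # ~1,320 deaths

print(f"Flagged: ~{n_flagged:,.0f} patients")
print(f"Died within 12 months: ~{n_deaths_among_flagged:,.0f}")
print(f"Precision at this threshold: {death_rate_flagged:.0%}")
```

Read the other way, roughly 40% of flagged patients outlived the twelve-month window, which is precisely the kind of error the questions below turn on.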
At face value, the model’s predictions could be a valuable first step in conversations with patients about their care. However, it must be asked whether a patient should be told that the prediction rests on a complex calculation over many variables, and how that prediction would be justified to them. For a patient with no prior understanding of artificial intelligence or deep learning in particular, would it even be necessary, or feasible, to argue for the model’s validity? Stanford has yet to run a randomized controlled trial, and it goes without saying that the protocol around the model must be vastly improved and carefully decided.
This type of technology could be the face of a medical breakthrough, yet it also carries several ethical implications. The matter at hand revolves around death, a sensitive and personal topic. In the United States, palliative care services are difficult to access and insufficient for the general demand. With a death-predicting algorithm, accessing palliative care on time, when a patient truly needs it, would become much easier. However, how would a patient, caregiver, or physician handle such information with care? More importantly, what if the prediction is wrong? While this algorithm is an extraordinary product of innovation and technology, it, like many other biotechnology breakthroughs, raises a series of ethical issues that must be properly addressed; otherwise, the line between development and maldevelopment blurs beyond recognition.
Read about Stanford ML Group’s work and their award-winning paper here: https://stanfordmlgroup.github.io/projects/improving-palliative-care/
About the Author:
This article was written by Mirvat Chowdhury, a chemical engineering student at the University of Toronto. She is interested in environmental engineering and enjoys learning about biotechnology developments related to sustainability.
References:
[1] Avati, A., Jung, K., Harman, S. et al. Improving palliative care with deep learning. BMC Med Inform Decis Mak 18, 122 (2018). https://doi.org/10.1186/s12911-018-0677-8
[2] Scheimer, D. et al. Smarter health: The ethics of AI in health care. WBUR On Point (3 June 2022). https://www.wbur.org/onpoint/2022/06/03/smarter-health-the-ethics-of-ai-death-predictor. Accessed 21 June 2022.