—Jessica Hamzelou
This week, I’m writing about AI-based tools that can help guide end-of-life care. We’re talking about life-or-death decisions for people who are very ill.
Often, the patient is unable to make this decision themselves, so the task falls to a surrogate. It can be a deeply difficult and distressing experience.
A group of ethicists has an idea for an AI tool that they believe could make things easier. The tool would be trained on personal information extracted from sources like a patient's emails, social media activity, and search history, and it would use that information to predict what the patient might choose. The team describes the tool, which has not yet been built, as a “digital psychological twin.”
There are many questions that need to be answered before introducing something like this into a hospital or treatment facility. How accurate will it be? How can we prevent it from being misused? But perhaps the biggest question is: Who would want to use it? Read the full article.
This article first appeared in The Checkup, our weekly newsletter bringing you the inside scoop on all things health and biotechnology. Sign up to receive it by email every Thursday.
If you’re interested in AI and human mortality, check out:
+ The Messy Morality of Letting AI Make Life and Death Decisions. Automation can help us make difficult choices, but it can’t do it alone. Read the full article.
+ … but AI systems reflect the humans who build them, and they are riddled with biases, so we should carefully question how much decision-making we really want to hand over.