Is AI an appropriate source of moral guidance on which patients should receive kidney transplants?
That's a question pondered by boffins from Duke University, Carnegie Mellon, Oxford, and Yale.
In a preprint paper released this month, titled “Can AI model the complexity of human moral decision-making? A qualitative study of kidney allocation decisions,” authors Vijay Keswani, Vincent Conitzer, Walter Sinnott-Armstrong, Breanna K. Nguyen, Hoda Heidari, and Jana Schaich Borg spend more than 18,000 words exploring that question.
The paper is worth a wander through, as it underscores the complexity of moral decision-making, of translating beliefs into actions, and of replicating that process in software.
According to the US National Institutes of Health, more than 800,000 people in the US suffer from end-stage renal disease, meaning their survival depends on either regular dialysis or a kidney transplant.
The National Kidney Foundation, meanwhile, estimates that 12 people in the US die every day for want of a kidney transplant, even as roughly one in five deceased donor kidneys goes unused.
So there's reason to believe kidney allocation could be handled better.
The study's authors acknowledge from the outset that prior research in psychology has shown just how complex human moral decision-making is.
“So it's not surprising that AI fails to capture all the nuances involved,” they explain. “… Even so, despite such idiosyncrasies (e.g., noisy responses to the same question), the question remains whether AI can capture the normative character of human moral decision-making, namely identifying the morally relevant factors, developing informed preferences over moral attributes and values, and weighing the available information to reach a final decision.
“In other words, can AI model the important elements of human moral decision-making?”
If you've already answered “No,” you can skip to the end. But if you'd like a little more nuance, it's worth assessing how people make moral decisions, and whether AI can emulate that process to any degree.
“The promise of AI in the moral domain relates primarily to its scalability and to the possibility of addressing human cognitive biases (e.g., correcting ‘errors’ in decisions caused by fatigue),” the researchers tell The Register.
“At the same time, these benefits can only be realized if AI can robustly model the way people would ideally make moral decisions.”
The researchers conducted what they describe as semi-structured interviews with 20 participants, each of whom was paid at least $20.
Respondents were laypeople, not medical professionals. They were asked general questions about the best way to determine which patients should get kidneys, along with questions asking which of two hypothetical patients they would choose to receive a transplant, weighing attributes including the following (a toy encoding of one such choice appears after the list):
- Expected life years gained from the transplant;
- Number of dependents;
- Obesity level;
- Hours worked per week after the transplant;
- Years spent on the transplant waiting list;
- Number of serious crimes committed in the past.
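To picture the task, here's a minimal sketch of how one such pairwise choice might be represented. It's our illustration rather than the study's actual instrument: the field names mirror the attributes above, while the class, values, and prompt are invented.

```python
# Toy representation (ours, not the study's instrument) of one pairwise
# choice between two hypothetical transplant candidates. Values invented.
from dataclasses import dataclass

@dataclass
class Patient:
    life_years_gained: int    # expected life years gained from the transplant
    dependents: int           # number of dependents
    obesity_level: int        # e.g., 0 = none, 2 = severe
    weekly_work_hours: int    # hours worked per week after the transplant
    years_on_waitlist: int    # years on the transplant waiting list
    past_serious_crimes: int  # serious crimes committed in the past

patient_a = Patient(life_years_gained=20, dependents=2, obesity_level=0,
                    weekly_work_hours=40, years_on_waitlist=1,
                    past_serious_crimes=0)
patient_b = Patient(life_years_gained=8, dependents=0, obesity_level=2,
                    weekly_work_hours=10, years_on_waitlist=5,
                    past_serious_crimes=1)

# A participant (or a model trained on their answers) must pick one.
print("Which patient should receive the kidney: A or B?")
print("A:", patient_a)
print("B:", patient_b)
```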
Participants were then asked to assess how well selected decision-making strategies matched their own process, and to share their views on the potential benefits of, and concerns about, including AI in the kidney allocation process.
Naturally, respondents weighed some criteria more heavily than others. Some favored younger patients, while others worried about discriminating against the elderly. Some considered lifestyle choices (such as smoking and drinking) relevant, while others felt they weren't important.
Participants' views sometimes changed as they contemplated their decisions.
That's not surprising, as people's moral frameworks can be fluid. That fluidity would seem to doom efforts to model them in AI, since someone will always disagree with what the AI decides.
Yet this process of moral drift, which the authors describe as a “dynamic learning process,” is one of the factors that would need to be incorporated into any AI model tasked with moral judgment.
The authors therefore considered the mathematics used to build such models, since different approaches suit different decision-making strategies. Linear and decision-rule models have the advantage of being interpretable, but they don't necessarily match human decision-making processes. Other approaches, such as neural networks or random forests, produce models that can't readily be interpreted, the authors point out.
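To make that interpretability trade-off concrete, here's a brief sketch, assuming scikit-learn and synthetic pairwise choices over the attributes listed above; the feature names, weights, and data are all invented, and neither model is the paper's. The point is simply that a linear model's coefficients read directly as attribute weights, while a random forest offers no comparably direct account of its reasoning.

```python
# Minimal sketch (ours, not the paper's): contrast an interpretable linear
# model with a harder-to-interpret random forest on synthetic pairwise
# kidney-allocation choices. All names, weights, and data are invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
FEATURES = ["life_years_gained", "dependents", "obesity_level",
            "weekly_work_hours", "years_on_waitlist", "past_serious_crimes"]

# Each row encodes the difference between patient A's and patient B's
# attributes for one hypothetical pairwise comparison.
X = rng.normal(size=(500, len(FEATURES)))

# A hypothetical "ground truth" preference: weigh life years gained and
# waiting time positively, past crimes negatively, with some noise.
true_weights = np.array([1.5, 0.5, -0.3, 0.2, 1.0, -1.2])
y = (X @ true_weights + rng.normal(scale=0.5, size=500) > 0).astype(int)

linear = LogisticRegression().fit(X, y)   # label 1 means "choose patient A"
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# The linear model's coefficients read directly as attribute weights...
for name, coef in zip(FEATURES, linear.coef_[0]):
    print(f"{name:>20}: {coef:+.2f}")

# ...while the forest predicts comparably well but yields hundreds of trees
# rather than a human-readable decision strategy.
print("forest training accuracy:", forest.score(X, y))
```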
Hence the paper's conclusion that current methods of building AI models are poorly suited to modeling human moral reasoning.
As for how respondents felt about AI, there was a sense that it could help counter human bias and serve as a form of clinical decision support. But the bottom line was that people didn't want AI deciding who gets a kidney.
“Many participants were optimistic about AI recognizing cognitive flaws in human moral decision-making and mitigating those flaws,” the authors conclude. “However, they still expressed belief in the qualifications of human experts, and preferred that AI defer the final decision to those experts.”
Keswani was asked whether better procedural systems could deliver some of the presumed benefits of AI involvement, such as countering human bias.
“Some decision biases (e.g. biases toward a particular group) can be handled through better procedures, such as group-blind decisions,” he said.
“However, I don't think all decision errors can be addressed through explicit top-down procedures, particularly given the heterogeneity of people's decision-making processes. Hence the growing interest in alternative bottom-up approaches, where we learn the ‘ideal’ preferences of individuals and communities and align AI to those preferences.
“Of course, once again, achieving alignment with moral preferences requires a solid computational foundation for moral decision-making, and we don't have that right now (as a field).” ®