Tuesday, November 5, 2019

Study Highlights Racial Bias in Optum Risk-Prediction Algorithm

To improve care for patients with the most complex health needs, many providers and payers turn to risk-prediction tools that use an algorithm to determine which patients need more intensive care management. But a recent study, published in the journal Science, found that one such widely used algorithm exhibits significant racial bias: it assigns black patients the same level of risk as white patients even when they are far sicker.
This study garnered significant media attention, and at least one state's regulators launched an investigation into UnitedHealth Group, whose Optum subsidiary sells Impact Pro, the data analytics program that researchers studied.
Brian Powers, M.D., one of the study's authors and a researcher at Brigham and Women's Hospital, says that "the algorithm did a great job of what it was specifically designed to do, which was predict future health care costs." The problem is that the organizations deploying the tool often "use health care costs as a proxy for health care need," he says, and black patients tend to cost the health system less because of a "lack of access to care due to structural inequalities, and a variety of other issues that have been well documented." So while there is a correlation between high-risk patients and high health care spending, just looking at expenditures doesn't paint a truly accurate picture of patients' health care needs.
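The proxy problem Powers describes can be illustrated with a toy simulation (all numbers and group labels below are hypothetical, not drawn from the study or from Impact Pro). Two groups have identical distributions of true illness, but one faces access barriers, so the same illness generates less observed spending. A "risk score" built on cost then under-flags the under-served group, and the members of that group who do get flagged are sicker than their flagged counterparts.

```python
# Toy sketch of cost-as-proxy bias. All parameters are illustrative.
import random

random.seed(0)

def simulate(group, n=1000):
    patients = []
    for _ in range(n):
        illness = random.uniform(0, 10)        # true health need
        access = 1.0 if group == "A" else 0.7  # group B faces access barriers
        cost = illness * access * 1000         # observed spending
        patients.append({"group": group, "illness": illness, "cost": cost})
    return patients

patients = simulate("A") + simulate("B")

# A cost-trained "risk score" flags the top 20% of spenders for care management.
cutoff = sorted(p["cost"] for p in patients)[int(0.8 * len(patients))]
flagged = [p for p in patients if p["cost"] >= cutoff]

share_b = sum(p["group"] == "B" for p in flagged) / len(flagged)
mean_illness = {
    g: sum(p["illness"] for p in flagged if p["group"] == g)
       / sum(p["group"] == g for p in flagged)
    for g in ("A", "B")
}

print(f"Group B share of flagged patients: {share_b:.2f}")
print(f"Mean illness among flagged: A={mean_illness['A']:.1f}, "
      f"B={mean_illness['B']:.1f}")
```

Despite identical underlying illness in both groups, group B ends up well under half of the flagged population, and its flagged members are sicker on average, which is the pattern the researchers reported.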
Rich Caruana, a Microsoft Corp. senior researcher who studies machine learning in health care, says he was "not at all surprised" to learn that researchers uncovered hidden bias in a predictive algorithm.
"Most of what machine learning is doing is right, but in addition to these things it's doing really right, roughly 5% of what it's learning are these sort of silly, wrong things," he continues. "These are known as treatment effects: we're seeing patients' risk as higher or lower based on the treatment that they receive."
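The "treatment effect" Caruana describes can also be sketched with a toy simulation (every number here is hypothetical). The sickest patients receive an intervention that sharply improves their outcomes, so in the resulting data the intervention marker is associated with fewer bad outcomes, and a naive model trained on that data would read the marker as a sign of low risk rather than high risk.

```python
# Toy sketch of a treatment effect confounding a risk model.
# All parameters are illustrative.
import random

random.seed(1)

rows = []
for _ in range(2000):
    severity = random.uniform(0, 1)
    treated = severity > 0.7                    # only the sickest get treated
    p_bad = severity * (0.2 if treated else 1.0)  # treatment cuts bad outcomes
    bad_outcome = random.random() < p_bad
    rows.append((treated, bad_outcome))

def rate(t):
    # observed bad-outcome rate among treated (t=True) or untreated patients
    return sum(b for tr, b in rows if tr == t) / sum(tr == t for tr, b in rows)

print(f"bad-outcome rate, treated:   {rate(True):.2f}")
print(f"bad-outcome rate, untreated: {rate(False):.2f}")
```

Here the treated patients, who are by construction the sickest, show a lower observed bad-outcome rate than the untreated, so a model learning from outcomes or costs alone would score them as the safer group.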
