Over 50 years ago, the US Congress passed the Civil Rights Act of 1964, marking a landmark achievement for the civil rights activists of the 60s. Put simply, this official US document bars discrimination on the basis of race, sex, religion, color, or national origin. It explicitly bars discrimination in “federally assisted programs,” of which health care is a part (Our Documents - Transcript of Civil Rights Act (1964)). In the 60s, there was no doubt that this document was protecting humans from other biased humans. As such, there was no need to specify the source of the discrimination.
Now, though, with AI algorithms becoming prominent in privately controlled business sectors, the once self-evident phrasing of the Civil Rights Act of 1964 might need a second glance. Does the non-discriminatory blanket of the Civil Rights Act of 1964 cover human-trained algorithms too? The answer: It should! But has AI been making decisions in an unprejudiced manner? Well, not always, and especially not in the healthcare industry. Just take, for instance, Optum’s high-risk patient identification algorithm. Glancing at the diverse portrayal of patients on the company’s website, you would probably think twice if I told you that this health services innovation company had created a racially biased algorithm. Yet in 2019, researchers discovered a significant disparity in how the algorithm classified black and white patients for high-level health risks. In fact, the researchers estimated that over 50% of black individuals who truly needed extra care were not being classified properly by the algorithm.
The algorithm wasn’t intentionally biased, though, nor were the people who made it (a conclusion that can be easy to jump to). Rather, the fault behind its discriminatory results lay in an unequal relationship between healthcare expenses and race. Because of poverty disparities, geographical differences, and several other societal factors, black patients who need the same level of care as white patients incur lower expenses than white patients (direct discrimination from physicians might also play a role in this statistic). Under the false assumption that the amount of money spent on a patient directly reflects the severity of the patient’s illness, the algorithm incorrectly designated many black patients as healthier than they really were (check out Study finds racial bias in Optum algorithm | Healthcare Finance News for more information; to see the full scientific article detailing the study of Optum’s algorithm, check out Dissecting racial bias in an algorithm used to manage the health of populations | Science (sciencemag.org)).
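To make the cost-as-proxy failure concrete, here is a small simulation sketch in Python. Everything in it is made up for illustration (the group names, the 0.6 spending factor, the top-20% cutoff, and the “truly sick” threshold are all assumptions, not values from Optum’s actual algorithm or the study); it simply shows how ranking patients by cost under-flags a group that spends less for the same level of illness.

```python
import random

random.seed(0)

# Hypothetical setup: two groups with IDENTICAL distributions of true
# health need, but Group B systematically incurs lower healthcare costs
# for the same level of illness (access barriers, poverty, geography).
def simulate_patient(group):
    true_need = random.uniform(0, 10)            # actual severity of illness
    cost_factor = 1.0 if group == "A" else 0.6   # assumed spending gap
    cost = true_need * cost_factor + random.uniform(-0.5, 0.5)
    return {"group": group, "need": true_need, "cost": cost}

patients = [simulate_patient(g) for g in ("A", "B") for _ in range(5000)]

# A cost-based "risk score": flag the top 20% of spenders for extra care,
# mirroring the false assumption that higher cost means sicker patient.
threshold = sorted(p["cost"] for p in patients)[int(0.8 * len(patients))]
for p in patients:
    p["flagged"] = p["cost"] >= threshold

# Among patients who truly need extra care (need >= 8), compare how often
# each group actually gets flagged by the cost-based score.
def flag_rate(group):
    sick = [p for p in patients if p["group"] == group and p["need"] >= 8]
    return sum(p["flagged"] for p in sick) / len(sick)

print(f"Truly sick Group A patients flagged: {flag_rate('A'):.0%}")
print(f"Truly sick Group B patients flagged: {flag_rate('B'):.0%}")
```

Even though both groups are equally sick by construction, nearly all of Group A’s truly sick patients clear the cost threshold while only a small fraction of Group B’s do, which is the same shape of disparity the researchers found in the real algorithm.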
Because Optum’s algorithm was widely used by health systems, this “meager monetary misconception” affected millions of patients! If there is one lesson to take from the algorithm’s unfortunate results, it is the importance of validating the assumptions behind AI algorithms.
- Jon Cili