Predicting race and ethnicity to ensure fair algorithms for healthcare decision-making

Algorithms are currently used to inform a wide range of healthcare decisions. Despite their general utility, there is growing recognition that these healthcare algorithms can lead to unintended racially discriminatory practices, raising concerns about algorithmic bias. An intuitive precaution against such bias is to remove race and ethnicity as inputs to healthcare algorithms, mimicking the idea of "race-blind" decisions. We argue, however, that this approach is flawed: knowledge, not ignorance, of race and ethnicity is necessary to combat algorithmic bias. When race and ethnicity are observed, many methodological approaches can be used to enforce fair algorithmic performance. When race and ethnicity information is not available, which is often the case, imputing it expands the possibilities not only to identify and assess algorithmic biases but also to combat them in both clinical and nonclinical contexts. A valid imputation method, such as Bayesian Improved Surname Geocoding (BISG), can be applied to standard data collected by public and private payers and provider entities. We describe two applications in which imputing race and ethnicity can help mitigate potential algorithmic bias: fair machine-learning-based disease screening algorithms and fair performance incentives.
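
To make the BISG step concrete, the following is a minimal sketch of the underlying Bayes update. Everything in it is illustrative: the surname ("GARCIA"), the block-group ID, and all probability values are placeholders, not real Census figures. In practice, P(race | surname) is taken from the Census surname list and P(race | block group) from decennial Census block-group counts, and the method produces a probability distribution over race/ethnicity groups rather than a single hard label.

```python
# Minimal sketch of a BISG-style Bayes update (illustrative placeholder data only).
# Assumption (standard in BISG): surname and geography are conditionally
# independent given race/ethnicity, so the posterior is proportional to
#   P(race | surname) * P(race | block group) / P(race).

RACE_GROUPS = ("white", "black", "hispanic", "asian_pi", "other")

# National race/ethnicity prior P(race) -- placeholder values.
P_RACE = {"white": 0.60, "black": 0.13, "hispanic": 0.18, "asian_pi": 0.06, "other": 0.03}

# P(race | surname), keyed by a hypothetical surname -- placeholder values.
P_RACE_GIVEN_SURNAME = {
    "GARCIA": {"white": 0.05, "black": 0.01, "hispanic": 0.92, "asian_pi": 0.01, "other": 0.01},
}

# P(race | census block group), keyed by a hypothetical block-group ID -- placeholder values.
P_RACE_GIVEN_GEO = {
    "060371234001": {"white": 0.40, "black": 0.10, "hispanic": 0.35, "asian_pi": 0.10, "other": 0.05},
}


def bisg_posterior(surname: str, block_group: str) -> dict[str, float]:
    """Return the posterior P(race | surname, geography) for one individual.

    Combines the surname signal and the geography signal, divides out the
    national prior, and normalizes so the probabilities sum to one.
    """
    unnormalized = {
        r: P_RACE_GIVEN_SURNAME[surname][r] * P_RACE_GIVEN_GEO[block_group][r] / P_RACE[r]
        for r in RACE_GROUPS
    }
    total = sum(unnormalized.values())
    return {r: v / total for r, v in unnormalized.items()}


if __name__ == "__main__":
    probs = bisg_posterior("GARCIA", "060371234001")
    for race, p in sorted(probs.items(), key=lambda kv: -kv[1]):
        print(f"{race}: {p:.3f}")
```

Because the output is a probability vector rather than a single assigned category, downstream fairness checks (for example, estimating a screening algorithm's error rates by group) can weight each individual by these posterior probabilities instead of forcing a hard classification.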

Sharon D. Cole