California attorney general probes bias in healthcare algorithms

A batch of letters from California Attorney General Rob Bonta to the leaders of hospitals and other healthcare facilities, dispatched August 31, 2022, signaled the launch of a government inquiry into biases in healthcare algorithms that contribute to important healthcare decisions. The inquiry is part of an initiative by the California Attorney General’s (AG) office to address disparities in healthcare access, quality, and outcomes and to ensure compliance with state non-discrimination laws. Responses are due no later than October 15, 2022 and should include a list of all decision-making tools used that contribute to clinical decision support, population health management, operational optimization, or payment management; the purposes for which the tools are used; and the name and contact details of those responsible for “evaluating the purpose and use of these tools and ensuring that they do not have a disparate impact based on race or other protected characteristics”.

The press release announcing the probe describes healthcare algorithms as a rapidly growing class of tools used to perform various functions in the healthcare industry. According to the California AG, if software is used to determine a patient’s medical needs, hospitals and healthcare facilities must incorporate proper review, training, and usage guidelines to prevent the algorithms from having unintended consequences for vulnerable patient groups. One example cited in the AG press release is an artificial intelligence (AI) algorithm created to predict patient outcomes that is trained on a population that does not accurately represent the patient population to which the tool is applied. Similarly, an AI algorithm created to predict future healthcare needs based on past healthcare costs may distort the needs of Black patients, who often face greater barriers to accessing care, giving the impression that their needs are lower because their recorded healthcare costs are lower.
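The cost-as-proxy problem described above can be made concrete with a minimal sketch. The numbers, function name, and scoring formula below are hypothetical illustrations (they do not come from the AG press release or any real tool): if a model scores "need" from past spending, two patients with identical illness burden receive different scores whenever one patient's access barriers suppress their recorded spending.

```python
# Hypothetical illustration: a toy risk score that uses past spending as a
# proxy for healthcare need. The formula and figures are invented for this
# sketch, not taken from any actual vendor algorithm.

def risk_score(past_cost, avg_cost=5000):
    """Score 0-100 based on spending relative to a population average."""
    return min(100, round(100 * past_cost / (2 * avg_cost)))

# Two hypothetical patients with the same illness burden. Patient B's
# recorded spending is lower only because of barriers to accessing care,
# so the proxy-based score understates Patient B's actual need.
patient_a_cost = 6000   # full access to care
patient_b_cost = 3600   # same condition, less recorded spending

print(risk_score(patient_a_cost))  # flagged as higher risk
print(risk_score(patient_b_cost))  # scored lower despite equal need
```

The point of the sketch is that the bias enters through the label choice (spending), not through any explicitly race-aware logic, which is why the AG's questions focus on how tools were evaluated rather than only on what inputs they use.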

Unsurprisingly, the announcement of the AG inquiry follows research summarized in a Pew Charitable Trusts blog post highlighting biases in AI-enabled products, as well as a series of discussions between the Food and Drug Administration (FDA) and software-as-a-medical-device stakeholders (including patients, providers, health plans, and software developers) concerning the elimination of bias in artificial intelligence and machine learning technologies. As discussed in more detail in our series on FDA’s Artificial Intelligence/Machine Learning Medical Devices Workshop, the FDA is currently grappling with how to address data quality, bias, and health equity when it comes to the use of AI algorithms in the software it regulates.

Stepping back to consider the practical constraints on hospitals and healthcare facilities, the AG’s inquiry could put these entities in a difficult position. Algorithms used in commercially available software may be proprietary, and in any event hospitals may not have the resources to independently assess software for bias. Moreover, if the FDA is still figuring out how to address these issues, it seems unlikely that hospitals are in a better position to fix them.

Nonetheless, the AG’s letter suggests that failure to “properly assess” the use of AI tools by hospitals and other healthcare facilities could violate state non-discrimination laws and related federal authorities, and it indicates that further investigation will follow these requests for information. Therefore, before responding, hospitals should carefully take stock of the AI tools currently in use, the purposes for which they are used, and the safeguards currently in place to counter any bias an algorithm may introduce. For instance:

  • When does an individual review AI-generated recommendations and then make a decision based on their own judgment?
  • What type of non-discrimination and anti-bias training do people using AI tools receive each year?
  • What type of review is done on software vendors and features before purchasing the software?
  • Is any of the software certified by, or used in, a government program?
  • What type of testing was performed by the software vendor to address issues of data quality, bias, and health equity?

On the other hand, software vendors whose AI tools are used in California healthcare facilities should be prepared to respond to customer inquiries about their AI algorithms and how data quality and bias have been evaluated, for example:

  • Is the technology locked, or does it involve continuous learning?
  • How does the algorithm work and how was it trained?
  • How accurate is the algorithm for different patient groups, including vulnerable populations?


Sharon D. Cole