Racial and other biases in AI algorithms for healthcare can be tackled with public support

Members of the public are invited to help eliminate biases against racial and other disadvantaged groups in artificial intelligence algorithms for healthcare.

Health researchers are asking for help in ensuring that “minoritized” groups, who are actively disadvantaged by social structures, are not excluded from the future benefits of using AI in healthcare. The team, led by the University of Birmingham and University Hospitals Birmingham, writes in Nature Medicine today as it launches a consultation on a set of standards it hopes will reduce biases known to exist in AI algorithms.

There is growing evidence that some AI algorithms work less well for certain groups of people, particularly those from racial and ethnic minority groups. This is partly due to biases in the datasets used to develop AI algorithms, and it means that patients from Black and minority ethnic groups may receive inaccurate predictions, leading to misdiagnosis and inappropriate treatment.

STANDING Together is an international collaboration that will develop best-practice standards for healthcare datasets used in artificial intelligence, ensuring that they are diverse and inclusive and that underserved groups are not under-represented or misrepresented. The project is funded by the NHS AI Lab and the Health Foundation, with funding administered by the National Institute for Health and Care Research, the research partner of the NHS, public health and social care, as part of the NHS AI Lab’s AI Ethics Initiative.

Dr Xiaoxuan Liu from the Institute of Inflammation and Aging at the University of Birmingham and co-lead of the STANDING Together project said:

“By setting the right standards for datasets, STANDING Together will ensure that ‘no one is left behind’ as we seek to harness the benefits of data-driven technologies such as AI. We have opened our Delphi study to the public to maximize our reach. This will help us ensure that the recommendations made by STANDING Together truly represent what matters to our diverse community.”

Professor Alastair Denniston, Consultant Ophthalmologist at University Hospitals Birmingham and Professor at the Institute of Inflammation and Aging at the University of Birmingham, is co-lead on the project. Professor Denniston said:

“As an NHS doctor, I welcome the arrival of AI technologies that can help us improve the healthcare we provide – faster and more accurate diagnosis, more personalized treatment, and technology interfaces that give patients greater control. But we also need to make sure these technologies are inclusive. We need to make sure they work effectively and safely for everyone who needs them.”

“This is one of the most rewarding projects I have worked on, as it draws together not only my great interest in using accurate, validated data and my interest in good documentation to aid discovery, but also the pressing need to involve minoritized and underserved groups in research that benefits them. And this last group, of course, includes women.”

Jacqui Gath, patient partner, STANDING Together project

The STANDING Together project is now open for public consultation as part of a Delphi consensus study. The researchers invite members of the public, healthcare professionals, researchers, AI developers, data scientists, policymakers and regulators to help refine these standards to ensure they work for everyone.


Journal reference:

Ganapathi, S., et al. (2022) Tackling bias in AI health datasets through the STANDING Together initiative. Nature Medicine. doi.org/10.1038/s41591-022-01987-w.

Sharon D. Cole