In recent years, the British government (the “Government”) has come to recognize that the growing prevalence of AI in the public and private sectors means its behavior inevitably affects the British public. In many cases, this has been welcomed, as AI offers a number of valuable applications and opportunities, such as identifying criminal financial behavior or tax evasion. However, as with many technologies, this is a double-edged sword. The integration of AI and technology into everyday life, especially in sensitive areas such as healthcare, has often been met with skepticism and distrust from members of the public.
This is not without reason. A cursory review of AI’s past application in these sectors points to a number of issues: inappropriate use of data, biased algorithms, and inaccurate results. In order to address these earlier, but by no means irrelevant, concerns, as well as to position itself as a world leader in the field of AI, the government has made a major effort to develop a sophisticated digital environment based on trust and transparency. A notable example is the creation of a roadmap and similar initiatives to establish an effective AI assurance ecosystem.
More recently, the government has focused on leading the charge in developing standards for the use of AI around the world. In January 2022, the government announced a pilot, in partnership with the British Standards Institution, the Alan Turing Institute, and the National Physical Laboratory, which would seek to shape how organizations and regulators set technical standards for AI globally. Following this announcement, the government announced that the National Health Service (the “NHS”) would launch another world-class pilot project in the field of AI.
The announcement stems from ongoing discussions between the NHS (and its NHS AI Lab) and the Ada Lovelace Institute (the “Institute”), aiming to create a framework for assessing the impact of medical AI. In this pilot, the NHS will act as the first healthcare organization to test algorithmic impact assessments (“AIAs”) within its organization. The main objective of this work, among others, is to combat health inequalities and biases in the systems that underpin health and care services, thereby dispelling some of the mistrust that surrounds these systems within the health sector.
What exactly are algorithmic impact assessments?
As the Institute describes them, AIAs are a “tool used to assess the possible societal impacts of an AI system before the system is used”. Their aim, among other things, is to create greater accountability and transparency around the deployment of AI systems. By doing so, it is hoped that trust in AI will be built, while mitigating the potential for harm to specific categories of people.
In many ways, AIAs resemble common impact assessment tools that exist today. A prime example is the Data Protection Impact Assessment, which assesses and strives to minimize the impact that data processing technologies and policies may have on an individual’s privacy rights. Similarly, an AIA allows organizations to assess the potential risks and outcomes that may arise from the data provided to a system, whether non-sensitive, such as hospital admission rates, or more sensitive, such as gender, ethnicity, or family history of illness.
By recognizing the potential risks of integrating certain AI programs, organizations can modify their systems in the early stages of development, before implementation at a broader level.
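To make the idea concrete, the following is a purely illustrative sketch of the kinds of fields an algorithmic impact assessment might record. It is a hypothetical structure, not the template published by the NHS AI Lab and the Institute; every field name and the review rule are assumptions made for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class AlgorithmicImpactAssessment:
    """Hypothetical AIA record. Illustrative only; not the
    NHS AI Lab / Ada Lovelace Institute template."""
    system_name: str
    intended_use: str
    data_categories: list    # e.g. ["hospital admission rates", "ethnicity"]
    sensitive_data: bool     # does the system process special-category data?
    affected_groups: list    # groups who may be disproportionately affected
    identified_risks: list = field(default_factory=list)
    mitigations: list = field(default_factory=list)

    def requires_review(self) -> bool:
        # Simple illustrative rule: sensitive data, or risks that
        # outnumber mitigations, trigger further human review.
        return self.sensitive_data or len(self.identified_risks) > len(self.mitigations)

# Example: a hypothetical imaging model assessed before deployment
aia = AlgorithmicImpactAssessment(
    system_name="chest-imaging-classifier",
    intended_use="triage support for radiologists",
    data_categories=["chest X-rays", "age", "ethnicity"],
    sensitive_data=True,
    affected_groups=["patients from under-represented ethnic groups"],
    identified_risks=["lower accuracy on under-represented groups"],
)
print(aia.requires_review())  # True: sensitive data and an unmitigated risk
```

The point of such a record is simply that risks are written down and reviewed before deployment, not that any particular rule is the right one.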
An example assessment and its user guide, implemented by the NHS AI Lab in partnership with the Institute, are accessible here.
Why is this such an important step?
Piloting AIAs in a setting such as the NHS is an important step because they are not yet widely used in the public or private sector. As noted above, the pilot is the first instance of a public health agency seeking to integrate them within its organization. There is therefore little consistency or uniformity of approach, and no guarantee that these AIAs will produce the desired result or prove effective in reducing the risk of bias or inadvertent harm to the individuals whose data is being processed. This pilot is an opportunity to give the framework created by the Institute a sandbox for rigorous testing and feedback, which can then be used to refine the main proposal in the future.
Despite the newness of AIAs within the health service, it should be noted that approved AIA models already exist and are being used in other settings. In 2020, the Treasury Board of Canada Secretariat’s Directive on Automated Decision-Making established a standard form aimed at helping Canadian public servants manage the standardization of AI in the public sector. Alongside the emergence of more rigid assessment tools, more flexible assessment frameworks have also begun to appear, such as the IEEE AI standards or the UN Guiding Principles on Business and Human Rights, which are intended to be used alongside an organization’s existing code of ethics.
The implementation of AIAs within the NHS therefore provides an invaluable opportunity to further determine their effectiveness and to continue to fill the gaps in knowledge and data that currently affect their use. If this pilot proves successful, other pilots in other public and private sector areas are likely to follow, continuing to drive the UK forward in creating a holistic approach to AI and its standards.
The NHS Pilot
The NHS is set to pilot this assessment across a number of initiatives; it will also be used as part of the data access process for the National Covid-19 Chest Imaging Database and the National Medical Imaging Platform.
The goal of this pilot project is to help researchers and developers assess the potential risks and biases of AI systems when processing data from patients and members of the public, before those researchers and developers can access these resources. As noted in the announcement, while artificial intelligence has the potential to help health and care workers deliver better care, it can also exacerbate existing health inequalities if certain biases are not properly addressed. For example, the Institute notes that AI systems, due to training biases and the data available, among other factors, have been less effective at diagnosing skin cancer in people of color. By introducing impact assessments at an early stage, patients and healthcare professionals can become involved early in the development and use of these technologies. It is expected that this will reduce instances of tainted or biased data and improve the overall patient experience.
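One way to see how such a bias might be surfaced during an assessment is to compare a model’s performance across demographic subgroups. The sketch below is an assumption-laden illustration, not the NHS’s or the Institute’s methodology: the toy records, the accuracy metric, and the disparity measure are all chosen for simplicity.

```python
from collections import defaultdict

def accuracy_by_group(records):
    """Per-subgroup accuracy from (group, prediction, label) records.

    Illustrative only: a real clinical assessment would use validated
    metrics (e.g. sensitivity and specificity) and statistical testing.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, pred, label in records:
        total[group] += 1
        correct[group] += int(pred == label)
    return {g: correct[g] / total[g] for g in total}

def max_disparity(acc):
    """Largest accuracy gap between any two subgroups."""
    vals = list(acc.values())
    return max(vals) - min(vals)

# Toy data: (subgroup, model prediction, ground-truth label)
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 1, 0),
    ("group_b", 1, 1), ("group_b", 0, 1), ("group_b", 0, 1), ("group_b", 0, 0),
]
acc = accuracy_by_group(records)
print(acc)                 # {'group_a': 0.75, 'group_b': 0.5}
print(max_disparity(acc))  # 0.25
```

A large disparity like the 0.25 gap here is exactly the kind of finding an AIA is meant to flag before a system reaches patients.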
The announcement goes on to note that this pilot project complements the ongoing work of the ethics team within the NHS AI Lab to ensure that training data and systems testing deliver results that reflect diversity and inclusiveness, creating a much more useful set of training data and an overall increase in public trust.
It is hoped that through the successful implementation of this pilot project, AIAs can be used more widely to increase transparency, accountability, and legitimacy in the use of AI in healthcare.
An innovative framework to assess the impact of medical AI
As perhaps best put by Lord Clement-Jones in a number of his discussions of the prospective Health and Care Bill, AI in healthcare (and more broadly in the public sphere) will not be successfully exploited unless the public is convinced that their health data will be used ethically, to its full value, and for the UK’s greatest healthcare benefit. Although the pilot project is certainly not the last step in achieving this goal, it is a positive step in establishing confidence that AI can operate for the benefit of patients and practitioners.
Although this particular pilot of the framework is to be carried out by the NHS, the Institute notes that its proposal has been developed to assist software developers, researchers, and policy makers in its creation and implementation across a number of health sector scenarios. One area that would benefit from the implementation of these protocols is medical devices. The use of AI in sophisticated surgical machinery, testing equipment, and diagnostic tools offers unparalleled potential for delivering accurate and timely healthcare. These devices, however, face the same skepticism and lack of trust that technology encounters in a service we have come to accept requires a human touch. If the pilot is deemed successful in its isolated environment, the expansion of AIA pilots into medical device procedures may well help increase overall support for their use and allow members of the public to see that their data and care are fully taken into account.
It should be noted, however, that given the broad applicability of the framework created by the Institute, its application does not stop with healthcare. Instead, the framework can form the basis for a number of sectors and organizations. It therefore serves as a useful resource for all industry participants to determine how to create their own AIAs to implement throughout the design and incorporation stages of AI.