It’s time to shed light on secret government algorithms

Photo: Guy Corbishley / Alamy Stock Photo

Government by algorithm is on the rise. Automation is used in settings ranging from policing to welfare to immigration. So why don’t people know more about it? The answer is simple: government departments keep us in the dark.

When human officials make decisions that affect our rights and entitlements, the process they must follow is clearly defined by legislation, case law, and published policies and guidance. But the same transparent rules and constraints do not apply when ministers and civil servants delegate their decision-making to robots.

Standards of procedural fairness have developed in this country over centuries and are now well established in public law: a person affected by a government decision should know in advance how the process will unfold, and therefore how to prepare and participate. Once the decision is made, the person affected should be able to find out what it is. Courts have also recognised strong arguments for requiring decision-makers to give reasons. And where a decision affects a person’s rights, the duty to explain it is particularly strong.

Giving reasons helps decision-makers show that they are acting fairly, rationally and for legitimate purposes. So, at the Public Law Project (PLP), we are calling on the government to ensure that the principles of procedural fairness are not abandoned as algorithmic decision-making is increasingly embraced.

PLP has examined many algorithms used by government departments, in particular the Home Office and the Department for Work and Pensions. The algorithms we studied do not follow the government’s own rules for how decisions should be made. In our experience, most people do not know how these systems work: the rules or criteria applied by the algorithm are not published, unlike the equivalent policies or guidance used in the analogue world. Nor do people know what information was taken into account or the reasons for the decision. Most do not even know that an algorithm was used to decide their case. In our view, this contravenes fundamental rules of procedural fairness and undermines good governance.

One such algorithm is the Home Office’s tool for detecting alleged “sham marriages”, meaning marriages entered into to circumvent immigration rules. The information we have gathered suggests that when a foreign national intends to marry, their details are processed by the Home Office and assessed against eight risk factors unknown to the individual. Couples who match these risk factors are flagged for investigation. Home Office investigations are unpleasant and intrusive. Those involved do not know they were identified by an algorithm, let alone what criteria they were assessed against or why they are being investigated.

What is most troubling is that the Home Office refuses to provide clear information about its sham marriage algorithm, including in response to our freedom of information requests. Keeping the workings of the algorithm secret is not sustainable. The opacity around this tool, and many others currently in operation, violates fundamental norms of public law. We are pursuing access through an appeal to the Information Commissioner’s Office. But given the proliferation of government algorithms, transparency obligations must urgently apply across the board.

The Cabinet Office is currently piloting an optional algorithmic transparency standard. Lifting the veil on automated systems is essential to ensure that robots operate in a fair and non-discriminatory way. But PLP is seriously concerned that the standard does not go far enough.

Why? There are two major problems. First, participation is not compulsory. Under the Cabinet Office plan, the Home Office could choose to keep details of its sham marriage algorithm out of public view. PLP is calling for transparency obligations to be enshrined in law. Unless they are, we believe the use of algorithms will remain largely opaque.

Second, even if the Home Office chooses to participate, the standard does not require sufficient detail. We would get high-level information, but not enough about the nuts and bolts. And people really need to understand the nuts and bolts if they are to challenge automated systems when they go wrong. That is simple, standard procedural fairness.

Procedural fairness is a key element of the rule of law. There is no logical reason to abandon it now, and a high democratic price will be paid if we do. Failing to apply the same high standards to robots that we apply to human decision-makers will lead us into a legal mess.

Sharon D. Cole