You are fired! Can we trust algorithms to decide who gets fired and who doesn’t?

(Image by Gerd Altmann from Pixabay)

In a new twist on the now infamous 2013 Oxford University study, which predicted that nearly half of all American jobs would be automated over the next two decades, 60 Facebook contractors learned last month that they had been fired “randomly by an algorithm”.

Reports claimed that an executive from Accenture, which provides recruitment services to Meta, briefed Austin-based workers on the situation via video call.

Russian company Xsolla, which provides payment processing for the gaming industry, also laid off 150 employees in August 2021, apparently due to slowing growth, after an algorithm identified them as “unengaged and unproductive”.

Estée Lauder, meanwhile, reached an out-of-court settlement with three makeup artists in March this year after firing them following a video interview to reapply for their jobs. An algorithm evaluated the video and judged that the women’s performance did not compare favorably with that of other employees.

So, how common is this type of behavior among employers, and could it be a sign of things to come? According to Kate Benefer, a partner in the employment team at law firm RWK Goodman:

Using technology to fire people is a relatively new idea and it’s not common. This type of approach is still in its infancy, with AI tending to be more commonly used on the recruiting side to filter candidates.

Understand the legal risks

But using technology in this way carries a number of legal risks. The first, and well documented, concerns the potential for discrimination based on unconscious biases programmed into systems by developers and data scientists. A classic example of this is the internal AI-based recruiting tool built by Amazon, which proved irreparably biased against female candidates and was subsequently scrapped.

Another legal risk, in Europe at least, relates to the idea of fairness. In this context, holding meaningful, two-way consultations during a dismissal process and being clear about the criteria used for decision-making is essential. Benefer explains:

If disciplinary allegations are made against individuals, it means they can respond to them appropriately. But if, as in the Estée Lauder case, decisions are based on their reactions and facial expressions, it’s much harder to consult about it. So it becomes more one-sided, which is problematic from a fair dismissal perspective.

This means it is important for employers to clearly state their reason for laying off staff, as it makes such decisions ‘harder to challenge’. However, Benefer adds:

If you say you’re firing someone because the computer said so, it’s hard to defend claims based on meaningful consultation or fairness, especially in a pooling situation where some people are selected to be fired and some are not – and especially if you end up getting rid of more women than men, for example. The question is, how can you say “I’m not biased” if you don’t know whether the technology is or not?

A third risk, still in a European context, is that of complying with the European Union’s General Data Protection Regulation. As Benefer points out:

Individuals have the right to object if they do not want decisions to be made about them without human intervention and if they prefer to have their data processed by a person. So you might find individuals who start telling employers that they don’t want their data interpreted by machines.

But it’s equally imperative that organizations be open with staff about whether or not they’ve chosen to automate their decision-making. Benefer explains:

I guess when many companies wrote their privacy policies they said they would not process personal data using algorithms, which means that if they did so now they would be acting contrary to those policies. It’s a relatively easy situation to rectify – just be transparent about the changes – but I suspect many organizations haven’t focused on the data angle yet.

Ethical and reputational challenges

The algorithmic firing of workers also creates a series of ethical and reputational challenges. At the heart of the matter is how, and for what, the technology is used, according to Bill Mitchell, director of policy at BCS – The Chartered Institute for IT. He says:

You need to verify the provenance and integrity of the data, and that the system is measuring what needs to be measured. From an ethical point of view, there are two ways to use AI systems. The first is to treat employees in a way that means they feel helped and supported to do their job better – coaching apps are a great example of this. The second is to measure how well employees are doing their work – for example, whether they type or answer calls quickly enough. That’s more of a micromanagement approach, exerting more control over them.

The point here is that automating bad management practices won’t make them better or make employees feel more valued. As a result, he says, it is important to:

Think carefully before using AI and don’t make the mistake of just throwing IT at the problem. For example, if you plan to use it to monitor whether staff are at their desks, it might be a good idea to reassess your management approach instead.

But there are also the optics of the situation to consider. As Benefer points out:

Legal issues aside, it doesn’t seem nice to be fired by a computer with no apparent human involvement. Dismissal law is important, but from an individual’s perspective, a lack of human intervention makes it look like things weren’t handled properly. It is therefore more likely that there will be complaints and grievances as it seems totally impersonal, and it is not good for the reputation of companies known to treat their employees in this way. If you are focused on engaging, attracting and retaining people, this type of treatment will deter them from joining, so it is important to take this into account when deciding how much of this type of approach should be used.

But Alistair Sergeant, chief executive of digital transformation consultancy Equantiis, believes there are both pros and cons to using algorithms for decision-making in this context. On the upside, he says:

It is quite common for organizations to be asked to hand back an overall percentage of their budget, which includes headcount. That is not a moral exercise that weighs the impact on people; the focus is on what can be saved from the bottom line. So you could argue that layoffs would be a lot cruder without the use of algorithms, and that organizations can use data more effectively to make informed decisions.

The importance of context

On the other hand, Sergeant acknowledges that:

There are always three sides to an opinion – my point of view, your point of view, and the piece in the middle that brings things together. Xsolla, for example, made its decisions based on people’s engagement. But if they are not engaged, whose fault is it? The staff member’s or the manager’s? This kind of context is not taken into account by algorithms, but there are huge moral implications if decisions are made on data without getting all sides of the story. That breaks trust quite considerably, without people understanding why, so it comes down to the human values you want to demonstrate in your organization.

A key consideration here is to look at the root causes of any challenge, performance-related or otherwise, rather than just the symptoms. But it’s also about using the data “to paint a picture rather than to decide the future,” says Sergeant. Benefer agrees:

AI can sit behind a decision, but it would help on the employee engagement side, and wouldn’t seem as negative, if you ran the program to make a provisional selection and then people made the final decision. It is certainly less risky.

Unfortunately, there is currently no non-technical guidance or ethical best practice to help here when introducing AI systems. In legal terms, there has also been no test case to establish what should be considered appropriate behavior or not. And as Benefer says:

The problem will not go away. Everyone is looking to technology to make things easier, so people won’t stop using it – although I think it will remain quite rare for some time due to competition in the job market, which means companies won’t want to be seen as abusive. But the more normal this approach becomes, the more the risks begin to change as it simply becomes the way things are done.

My take

Ethical, moral, and reputational issues aside, this kind of situation is something the AI industry needs to assess, explore, and begin to take some responsibility for, for its own good as well as that of society in general. As Mitchell says:

What worries me is what this kind of practice will do to public confidence if it becomes widespread. If AI starts to be viewed as a hostile micro-management device and mistrust builds up around its use, this will undermine any attempt at adoption.

Sharon D. Cole