Workplace Algorithms – People Not Processes

Increasing use of algorithms by employers

The BBC Three documentary “Computer Says No” recently explored the use of facial analysis in automated recruitment software and highlighted some of the potential risk areas for employers in using algorithms (also known as people analytics) in the workplace. Although automated recruitment has been around for a decade, its prevalence has increased sharply during the pandemic. The avid interest in this software is understandable: algorithmic management presents new opportunities to improve workplace outcomes, but, as the documentary illustrates, it also brings new concerns and risks for workers. The term “algorithmic management” was first used to describe on-demand economy platforms such as Uber using software algorithms to assign and rate workers. “People analytics” is the use of statistical tools such as algorithms, big data and artificial intelligence (AI) to manage workforce planning, talent management and operational management.

The opportunities and concerns presented by the use of these statistical tools were discussed in a recent report titled “My boss the algorithm: an ethical look at algorithms in business”, prepared by Patrick Briône for ACAS. This report examines how algorithms can be used in the workplace, their unintended consequences, and how a responsible employer should approach their use.

Main uses of algorithms at work

There are three main areas of use of algorithms in the workplace. The first is algorithmic recruitment, which may include CV screening, psychometric testing, automated interviewing, and facial recognition and expression analysis. The second is algorithmic distribution of tasks, which automates the assignment of tasks or shifts and is most commonly used in the retail and hospitality industries. The third is algorithmic performance monitoring and evaluation of the workforce, where the software collects employee data and passes it on to managers, who can use it to make decisions about pay, promotion or dismissal.
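To make the first category concrete, below is a minimal, hypothetical sketch of keyword-based CV screening, the simplest form of algorithmic recruitment. The function names, keyword list and threshold are illustrative assumptions, not a description of any vendor’s actual product:

```python
def score_cv(cv_text: str, keywords: list[str]) -> int:
    """Count how many target keywords appear in a CV (case-insensitive)."""
    text = cv_text.lower()
    return sum(1 for kw in keywords if kw.lower() in text)


def screen_candidates(cvs: dict[str, str],
                      keywords: list[str],
                      threshold: int) -> list[str]:
    """Return the names of candidates whose keyword score meets the threshold."""
    return [name for name, text in cvs.items()
            if score_cv(text, keywords) >= threshold]


# Illustrative data only.
keywords = ["python", "sql"]
cvs = {
    "Candidate A": "Experienced analyst skilled in Python and SQL.",
    "Candidate B": "Retail management background, strong people skills.",
}
shortlist = screen_candidates(cvs, keywords, threshold=2)
```

Even this toy filter shows two of the risks discussed later in this article: candidates who learn the keyword list can stuff their CVs to “cheat” the system, and if the chosen keywords correlate with a protected characteristic, the filter can quietly embed indirect discrimination.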

Benefits and risks of algorithmic management tools

The ACAS report identifies two clear benefits. The main appeal of algorithmic management is speed; the second is a new understanding of work behavior that enables new solutions to workplace problems. However, a number of risks must be weighed against these benefits, such as increased surveillance and scrutiny, a potential lack of transparency, the risk of concealing and embedding bias and discrimination, and a lack of accountability. The two most obvious areas of risk are the threat of increased management control without corresponding agreement from the workforce (particularly in the areas of performance monitoring and control) and the danger of creating a dehumanized system run by a machine.

Many UK companies are now buying algorithmic management tools as “off-the-shelf” packages from US-based companies, which operate under much looser regulations around monitoring, data protection and consent. This means that some of these tools may not be legally suitable for a UK workplace, where automated decisions about data give rise to various data privacy considerations, including the requirement to bring them to the attention of employees (and other data subjects) in a data privacy notice. As for unintended consequences, we heard that some automated, algorithm-based recruiting technologies have built-in racial biases and can also unfairly disadvantage neurodivergent candidates. The BBC Three documentary also showed that candidates can learn how to optimize their CVs in order to “cheat” the system.

How should a responsible employer approach the use of algorithms in the workplace?

Recommendations made by the ACAS report include the need for agreed standards on the ethical use of algorithms around bias, fairness, oversight and accuracy. Employers are advised to engage in early communication and consultation with employees on the implementation of new technologies to ensure they improve workplace outcomes. The report also recommends that the benefits of algorithms be shared with workers (as well as employers).

There are three key tips for employers:

  • First, algorithms should work alongside human managers, not replace them, and human managers should always have final responsibility for all workplace decisions.
  • Second, line managers should receive training on how to understand the algorithm in question and how to use it correctly.
  • Finally, there should be greater transparency for employees (and applicants) about when algorithms are used and how they can be challenged.

It will be interesting to see the government’s response to the recent Report of the Commission on Racial and Ethnic Disparities, which addresses the use of AI in recruitment processes and automated decision-making. A white paper is also expected to be released later this year, discussing how to combat potential racial bias in algorithmic decision-making. The Equality and Human Rights Commission (EHRC) is expected to advise on the safeguards needed to ensure that technological advances do not have a disproportionate impact on ethnic minority groups, and to issue guidance explaining how the Equality Act 2010 applies to algorithmic decision-making.

In the end, it’s the data and how you use it that matters. With increased access to data, we need an increased commitment to transparency to show it is not being misused. To avoid dehumanizing their workforce, employers should focus on people, not processes.

Sharon D. Cole