EEOC Releases Guidance on Algorithms, AI, and Disability Discrimination in Hiring

LawFlash






May 20, 2022

The US Equal Employment Opportunity Commission (EEOC) released guidance on May 12 discussing the application of the Americans with Disabilities Act (ADA) to employers’ use of algorithms and artificial intelligence (AI) in the hiring process. Produced as part of the Artificial Intelligence and Algorithmic Fairness Initiative launched in October 2021, the guidance reflects the agency’s growing interest in employers’ use of AI, including machine learning, natural language processing, and other emerging technologies, in employment decisions.

The EEOC’s AI initiative is a key part of the agency’s efforts to advance its systemic work, according to April 2022 testimony Chair Charlotte Burrows gave to the House Education and Labor Subcommittee on Civil Rights and Human Services. The goal of the initiative is to educate applicants, employees, employers, and technology vendors on the legal requirements in this area and to ensure that new hiring tools do not perpetuate discrimination.

The guidance is the first substantial output of the initiative. It provides key insight into the EEOC’s thinking about these tools and its potential enforcement priorities in this area going forward.

DEFINING AI AND ALGORITHMIC DECISION TOOLS

Definitions are crucial in this area due to the growth of various technologies and their increasing use at different stages of the employment process. The EEOC’s guidance provides expanded definitions of three key terms – software, algorithms and artificial intelligence – along with a discussion of how they can be used in the workplace. Although this document focuses on the ADA, it is expected that the EEOC will apply these definitions when analyzing the impact of the tools in other areas of employment discrimination, such as racial or gender bias.

The definitions used by the EEOC are quite broad. The guidance defines “software” as information technology programs that tell computers how to perform a given task or function. Examples of “software” used in hiring include resume-screening software, hiring software, workflow and analytics software, video interviewing software, and chatbot software.

“Algorithms” encompass any set of instructions followed by a computer to accomplish an identified end. This may include any formula used by employers to rank, assess, score or make other decisions about applicants and employees.

“Artificial intelligence” refers to a “machine-based system that can, for a given set of human-defined goals, make predictions, recommendations or decisions influencing real or virtual environments.” This covers machine learning, computer vision, natural language processing and understanding, intelligent decision support systems, and other autonomous systems used to make employment decisions or to set criteria that allow a human to make employment decisions.

The guidance states that employers may use tools that involve any combination of these three general terms. For example, an employer may use resume-screening software that relies on an algorithm created by human design, or on an algorithm that is supplemented or refined by AI analysis of data.

POTENTIAL ADA VIOLATIONS FROM EMPLOYER USE OF AI AND ALGORITHMIC TOOLS

The guidance addresses three areas where an employer’s use of algorithmic decision-making tools or other technologies could violate the ADA:

  • Use of tools that unlawfully eliminate candidates or employees on the basis of a disability
  • Failure to provide reasonable accommodation with respect to tools
  • Use of tools that violate ADA restrictions on disability-related medical inquiries and examinations

Tools that unlawfully screen out individuals with disabilities

The guidance explains that a tool may “screen out” an individual on the basis of disability if the individual’s disability prevents them from meeting the selection criteria the tool applies, or results in a negative rating from the tool based on those criteria. If the individual loses an employment opportunity as a result, an ADA violation may occur. Examples include screens that automatically reject candidates with significant gaps in their work history (which may be the result of a disability), or that measure and assess physical or mental traits, such as speech patterns or the ability to solve certain games, which can be affected by a disability.
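To make the screen-out mechanism concrete, below is a minimal, hypothetical sketch of the first kind of screen described above: an automated resume filter that rejects any candidate with a long gap in work history. The rule, the six-month threshold, and the sample data are illustrative assumptions, not any actual vendor’s logic; the point is that a facially neutral rule can silently eliminate a candidate whose gap is disability-related.

```python
from datetime import date

# Hypothetical resume screen: reject any candidate whose work history
# contains a gap longer than roughly six months. A gap like this may be
# the result of a disability (e.g., time off for treatment), so a rule
# of this kind can unlawfully "screen out" qualified candidates.
MAX_GAP_DAYS = 183  # illustrative threshold, not from the guidance

def has_long_gap(jobs: list[tuple[date, date]]) -> bool:
    """jobs is a list of (start, end) date ranges, sorted by start date."""
    for (_, prev_end), (next_start, _) in zip(jobs, jobs[1:]):
        if (next_start - prev_end).days > MAX_GAP_DAYS:
            return True
    return False

def screen(candidates: dict[str, list[tuple[date, date]]]) -> list[str]:
    # Advances only candidates with no long gap -- the automated decision
    # that, per the guidance, may require accommodation or waiver.
    return [name for name, jobs in candidates.items() if not has_long_gap(jobs)]

candidates = {
    "A": [(date(2018, 1, 1), date(2020, 1, 1)),
          (date(2020, 2, 1), date(2022, 1, 1))],   # one-month gap: passes
    "B": [(date(2017, 1, 1), date(2019, 6, 1)),
          (date(2021, 1, 1), date(2022, 1, 1))],   # 18-month gap: rejected
}
print(screen(candidates))  # only "A" advances; "B" is screened out
```

Nothing in the filter asks about disability, yet candidate B is automatically eliminated; that is precisely the scenario in which the EEOC says an employer must consider accommodation, including waiving the screen.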

Importantly, the guidance emphasizes that employers may not rely on a vendor’s assessment that a tool is “unbiased” for validation purposes. Such assessments may focus only on other protected characteristics, such as race or gender, and fail to properly assess impact on the basis of disability. Moreover, unlike other protected characteristics, each disability is unique in the limitations it imposes; a general assessment of a tool is unlikely to cover all of the potential ways a disability may interact with that tool. Finally, a vendor’s assessment may itself be invalid or poorly designed. As the ultimate decisionmaker, the employer is responsible for the results the tool produces and bears the responsibility to ensure legal compliance.

Duty to provide reasonable accommodation

The guidance reminds employers to consider reasonable accommodations for applicants or employees who require them in order to be assessed fairly or accurately by an assessment tool. This may include accessibility accommodations for individuals who have difficulty taking tests or using tools due to dexterity limitations, or who require adaptive technologies, such as screen readers or closed captioning, to apply effectively. This obligation applies even if the employer has outsourced the evaluation or the operation of the tool to a third party or vendor.

The guidance further explains that the ADA’s reasonable accommodation requirement may require forgoing the use of these tools in certain situations. AI and algorithmic tools are designed to measure an individual’s suitability for a particular position. Employers will need to consider requests for accommodation, including waiver, from candidates who are unable to meet the criteria a particular tool uses to measure suitability but who can otherwise show that they are able to perform the essential functions of the job. This is true even when a tool has been validated for certain traits. As noted above, the EEOC believes the unique nature of each disability allows an individual to show that a generally validated screen still unlawfully excludes that individual on the basis of their particular limitations.

Disability-related medical inquiries and examinations

The guidance also reaffirms that AI or algorithmic tools cannot involve unlawful disability-related inquiries or medical examinations. The ADA prohibits employers from making disability-related inquiries of, or requiring medical examinations of, applicants prior to an offer of employment. Once an offer is made, the employer may make such inquiries or require such examinations only if they are “job-related and consistent with business necessity.” The guidance reminds employers that an assessment or algorithmic decision-making tool that explicitly requests medical information from applicants, or that can be used to identify an applicant’s medical condition, could violate the ADA. However, tools that assess general personal traits (such as personality tests) will generally not violate this prohibition if they are not designed to reveal a specific diagnosis or condition.

“PROMISING PRACTICES” TO PREVENT DISCRIMINATION

The guidance recommends several practices employers can use to reduce the risk of an AI or algorithmic tool violating the ADA. These include:

  • Continually assess whether a tool may screen out individuals with disabilities
  • Ensure tools are accessible to people with visual, hearing, speech or dexterity disabilities
  • Provide robust explanations to candidates or employees regarding the traits or characteristics a particular tool measures, the methods it uses to measure them, and the disabilities, if any, that could potentially lower an assessment or eliminate an individual
  • Clearly announce the availability of reasonable accommodations, including alternative formats and waivers, for individuals with disabilities, and provide clear instructions for requesting such accommodations

These “promising practices” reinforce that the key to ADA compliance in this area will be gathering enough information to identify potential areas of bias and giving applicants the resources to request an alternative form of assessment if they believe a disability may prevent a fair or accurate evaluation.

The EEOC does not, however, provide much guidance on how employers can assess these tools for possible disability-related bias.

LOOKING FORWARD

The EEOC is focused on the use of AI in employment, particularly in hiring, and further guidance on this topic is expected as its AI initiative continues. Increased EEOC attention to discrimination charges based on the use of these tools, along with a renewed systemic focus on this area, is also anticipated.

The EEOC is just one of many regulators interested in these tools. The California Fair Employment and Housing Council published draft regulations addressing employers’ use of “automated decision systems” in March 2022. In November 2021, New York City passed a law requiring annual “bias audits” of automated decision systems used in hiring, and several other jurisdictions are actively considering similar measures.

Employers will need to closely monitor developments in this area given the increased regulatory activity. They should also carefully evaluate the ways in which existing or proposed tools may create a risk of ADA violations under the EEOC guidance. Doing so will only grow in importance as employers rely more on these tools to find and select the best candidates for positions in a tight labor market.

CONTACTS

If you have any questions or would like more information about the issues discussed in this LawFlash, please contact one of the following Morgan Lewis attorneys:

Washington, DC
E. Pierce Blue
Jocelyne R. Cuttino
Sharon Perley Masling

Philadelphia
Michael S. Burkhardt
W. John Lee
Larry L. Turner

Silicon Valley
Melinda S. Riechert
Kannan Narayanan

New York
Ashley J. Hale
Douglas T. Schwarz
Samuel S. Shaulson
Kenneth J. Turnbull

Chicago
Jonathan D. Lotsoff

Sharon D. Cole