DC AG Introduces “Stop Discrimination by Algorithms Act” | Troutman Pepper
On December 8, District of Columbia Attorney General Karl A. Racine transmitted the “Stop Discrimination by Algorithms Act of 2021” (Act) for consideration and enactment by the Council of the District of Columbia. Although various federal and DC laws already prohibit discrimination of various types, this bill would apply to a wider range of industries; impose affirmative requirements, including annual self-audit and reporting obligations; and grant enforcement authority to the Office of the Attorney General for the District of Columbia (AG). The bill also includes a private right of action, penalties of up to $10,000 per violation, and punitive damages. In his press release, AG Racine made pointed comments on algorithms and artificial intelligence:
“Not surprisingly, algorithmic decision-making computer programs have been convincingly shown to replicate and, worse, exacerbate racial and other unlawful biases in essential services that all residents of the United States need to function in our capitalist society. This includes obtaining a mortgage, car financing, student loans, credit of any kind, health care, and admissions assessments at educational institutions, from elementary school through the most elite vocational training, as well as other essential points of access to opportunities for a better life. So-called artificial intelligence is the engine of algorithms that are, in fact, far less intelligent than claimed, and more discriminatory and unfair than big data wants you to know. Our legislation would put an end to the myth of the inherently egalitarian nature of AI.”
The bill, if passed, would prohibit covered entities from making an algorithmic eligibility determination or an algorithmic information availability determination on the basis of an individual’s or class of individuals’ actual or perceived race, color, religion, national origin, sex, gender identity or expression, sexual orientation, marital status, source of income, or disability, in a manner that segregates, discriminates against, or otherwise makes important life opportunities unavailable to an individual or class of individuals. In addition, any practice that has the effect or consequence of violating this prohibition would be deemed an unlawful discriminatory practice.
The bill would also require each covered entity to:
- Audit its algorithmic eligibility determination and algorithmic information availability determination practices to determine, among other things, whether these practices are discriminatory;
- Submit an annual report on that audit to the AG’s office;
- Send an adverse action notice to affected individuals if the adverse action is based in whole or in part on the results of an algorithmic eligibility determination;
- Develop a notice detailing how it uses personal information in algorithmic eligibility determinations and algorithmic information availability determinations;
- Provide that notice to individuals before making its first algorithmic information availability determination, and make the notice continuously and prominently available; and
- Require service providers, by written agreement, to implement and maintain measures to comply with the Act if the covered entity relies in whole or in part on the service provider to make an algorithmic eligibility determination or an algorithmic information availability determination.
The AG’s office would have the power to enforce the Act, including the ability to impose civil monetary penalties of up to $10,000 for each violation. The bill also includes a private right of action, under which injured parties could recover up to $10,000 per violation. In either type of action, the offending party could additionally be liable for punitive damages and/or attorneys’ fees.
The definitions in the Act are important:
Covered entity captures almost any individual or entity that makes algorithmic eligibility determinations or algorithmic information availability determinations, or relies on such determinations provided by a service provider, and that meets one of the following criteria:
- Owns or controls personal information about more than 25,000 DC residents;
- Has over $15 million in average annualized gross revenue for the three years preceding the most recent fiscal year;
- Is a data broker, or other entity, that derives 50% or more of its annual revenue from collecting, collating, selling, distributing, providing access to, or maintaining personal information, where some of the personal information concerns a DC resident who is not a customer or employee of that entity; or
- Is a service provider.
Important life opportunities means access to, approval for, or offer of:
- credit;
- education;
- employment;
- housing;
- a place of public accommodation; or
- insurance.
Algorithmic eligibility determination means a determination based in whole or in significant part on an algorithmic process that uses machine learning, artificial intelligence, or similar techniques to determine an individual’s eligibility for, or opportunity to access, important life opportunities.
Algorithmic information availability determination means a determination based in whole or in significant part on an algorithmic process that uses machine learning, artificial intelligence, or similar techniques to determine an individual’s receipt of advertising, marketing, solicitations, or offers for an important life opportunity.
In its current form, the bill provides no grace period after enactment; it would take effect upon publication in the DC Register.
The AG’s proposed bill is another example of regulators seeking to combat the potential for discrimination in algorithms. In November 2021, House Financial Services Committee Chair Maxine Waters sent a letter to the heads of several federal regulators, asking them to monitor technology developments in the financial services sector to ensure that algorithmic bias does not occur (see our blog post here). In addition, CFPB Director Rohit Chopra has remarked that “black box algorithms relying on personal data can reinforce societal biases, rather than eliminate them.”
We will continue to monitor developments related to the regulation of algorithmic models at the state and federal level.