Bill to regulate use of algorithms under consideration by DC Council

A bill currently under consideration by the Council of the District of Columbia, titled the “Stop Discrimination by Algorithms Act of 2021” (the “Bill”), would impose limits and requirements on companies that use algorithms to make credit and eligibility decisions, including decisions about to whom a company sends advertising or marketing solicitations. The bill was introduced in December 2021, and a public hearing on it was held on September 22, 2022.

The bill would apply to a “covered entity,” which includes any legal entity that either makes algorithmic eligibility determinations or algorithmic information availability determinations, or relies on such determinations provided by a service provider, and meets one of the following criteria:

  • Owns or controls personal information about more than 25,000 District residents;
  • Has over $15 million in average annualized gross revenue for the three years preceding the most recent fiscal year;
  • Is a data broker or other entity that derives at least 50% of its annual revenue from collecting, collating, selling, distributing, providing access to, or maintaining personal information, where some of that personal information relates to a District resident who is not a customer or employee of the entity; or
  • Is a service provider (i.e., an entity that makes algorithmic eligibility determinations or algorithmic information availability determinations on behalf of another entity).

The bill defines an “algorithmic eligibility determination” as “a determination based wholly or substantially on an algorithmic process that uses machine learning, artificial intelligence, or similar techniques” to determine an individual’s eligibility for, or ability to access, important life opportunities. An “algorithmic information availability determination” is defined as “a determination based in whole or in part on an algorithmic process that uses machine learning, artificial intelligence, or similar techniques” to determine an individual’s receipt of advertising, marketing, solicitations, or offers for an important life opportunity. The term “important life opportunities” is defined to cover accessing, being approved for, or being offered credit, education, employment, housing, a place of public accommodation [as defined by D.C. law], or insurance.

The bill would prohibit a covered entity from making an algorithmic eligibility determination or an algorithmic information availability determination on the basis of an individual’s or class of individuals’ actual or perceived race, color, religion, national origin, sex, gender identity or expression, sexual orientation, familial status, source of income, or disability, in a manner that segregates, discriminates against, or otherwise makes important life opportunities unavailable to an individual or class of individuals. A practice that has the effect of violating this prohibition would be considered an unlawful discriminatory practice.

The requirements the bill would impose on covered entities include:

  • Provide notice to an individual before making the first algorithmic information availability determination regarding that individual, with the notice to include certain specified information about how the covered entity uses personal information to make algorithmic eligibility determinations or algorithmic information availability determinations;
  • Provide a disclosure containing specified information to an individual against whom the covered entity takes adverse action based in whole or in part on the results of an algorithmic eligibility determination;
  • Perform an annual audit of the covered entity’s algorithmic eligibility determination and algorithmic information availability determination practices to determine whether those practices result in unlawful discrimination and to analyze risks of disparate impact; and
  • Submit an annual report containing the results of the audit to the DC Attorney General, including certain specified information such as “the data and methodologies the covered entity uses to establish the algorithms.”

The bill provides for enforcement by the DC Attorney General, with a civil penalty of up to $10,000 for each violation. It also creates a private right of action and authorizes a court to award at least $100 and up to $10,000 per violation, or actual damages, whichever is greater.

The bill has drawn criticism from credit industry trade groups such as the American Financial Services Association. Among other criticisms, trade groups say the bill would impose difficult, if not impossible, compliance burdens on lenders, resulting in less access to credit and more expensive loans. They also argue that the bill is unnecessary and duplicates existing laws and regulations such as the Equal Credit Opportunity Act and the Gramm-Leach-Bliley Act. (In May 2022, the CFPB issued a circular regarding adverse action requirements in algorithm-based credit decisions.)

The White House Office of Science and Technology Policy recently released a “Blueprint for an AI Bill of Rights,” identifying five principles “that should guide the design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence.” DC Attorney General Karl Racine praised the Blueprint, saying it incorporates a large part of the bill. However, like the bill, the Blueprint has drawn criticism from industry trade groups that fear it could result in industry-wide mandates. Politico reported that the U.S. Chamber of Commerce’s AI policy officer raised the possibility of many federal agencies issuing regulations based on the Blueprint, and of state and local governments enacting “imitator” laws. The Chamber also sent a letter to the Director of the Office of Science and Technology Policy expressing concerns about the Blueprint, including that it was developed without sufficient stakeholder input and conflates artificial intelligence with data privacy.

Sharon D. Cole