Algorithms, AI and Proxy Discrimination: Insurance Regulators Continue to Examine Private Passenger Auto, Homeowners and Life Insurance

As insurers and Managing General Agents (MGAs) increasingly use artificial intelligence (AI), machine learning (ML) and various forms of big data in product design, decision making, marketing, pricing/underwriting, fraud detection and claims processing, the National Association of Insurance Commissioners (NAIC) and state regulators continue to take a closer look at how insurers and other regulated entities are using new technologies and data sources, as well as how regulators should establish a framework to monitor these rapidly evolving tools.

Insurers and licensees should also keep tabs on federal developments, including the settlement agreement between the US Department of Justice (DOJ) and Meta resolving allegations that Meta's algorithms discriminate against Facebook users based on characteristics protected by the Fair Housing Act; the Federal Trade Commission (FTC) report to the US Congress warning about the use of AI to combat online problems; and the FTC's rulemaking considerations to "ensure that algorithmic decision-making does not result in unlawful discrimination". (See also federal legislative proposals, including the American Data Privacy and Protection Act (HR 8152) and the Health Equity and Accountability Act of 2022 (HR 7585).)

BIG DATA AND ARTIFICIAL INTELLIGENCE WORKING GROUP (H)

The NAIC Innovation, Cybersecurity, and Technology (H) Committee, the first new "letters committee" created by the NAIC in approximately 20 years, consists of several working groups, including the above-titled Big Data and Artificial Intelligence (H) Working Group (the Working Group). The Working Group met on July 14, 2022 to relay a preliminary analysis of industry responses to its survey on artificial intelligence and machine learning (AI/ML). The initial survey, which focused on private passenger auto insurance, was conducted under the auspices of nine states and was limited to large auto insurers (more than $75 million in gross written premiums per year). It found that nearly 90% of respondents apply AI/ML to one or more business functions. Insurers reported that AI/ML is most frequently used in claims, followed by fraud detection, marketing, rating, underwriting and loss prevention, in that order. Full survey results are expected to be released at the NAIC's summer national meeting in Portland, Oregon, when the Working Group meets on August 10, 2022. Regulators plan to build on the initial AI/ML survey and will conduct surveys on homeowners and life insurance lines later this year. In addition to its survey work, the Working Group is also assessing the activities of third-party data and model providers and developing a recommended regulatory framework to monitor and oversee the industry's use of these providers.

ACTIVITY OF STATE REGULATORS

Washington, DC, Hearing

On June 29, 2022, the District of Columbia's Department of Insurance, Securities, and Banking (DISB) held a hearing with stakeholders to discuss the development of a process by which the DISB will collect and assess data related to unintentional bias in private passenger auto insurance. The data will be used to inform the DISB's Diversity, Equity and Inclusion initiative on insurers' use of non-driving factors in underwriting and pricing. DISB Commissioner Karima Woods explained that 27 insurance groups write private passenger auto insurance in the District of Columbia, representing $387 million in premiums, with the top five groups writing 85% of premiums.

California Bulletin 2022-5

On June 30, 2022, Insurance Commissioner Ricardo Lara issued Bulletin 2022-5 (the Bulletin) "to remind all insurance companies [including nonadmitted/surplus lines insurers] and licensees of their obligation to market and issue insurance, charge premiums, investigate suspected fraud and pay insurance claims in a manner that treats all similarly situated people alike". The Bulletin discusses the need "to avoid bias or discrimination, conscious and unconscious, that can and often does result from the use of artificial intelligence, as well as other forms of 'big data'", including the use of purportedly neutral individual characteristics as a proxy for prohibited characteristics that results in racial bias, unfair discrimination or disparate impact.

The Bulletin explains that "before using any data collection method, fraud algorithm, pricing/underwriting or marketing tool, insurers and licensees should exercise due diligence to ensure full compliance with all applicable laws" and should "provide transparency to Californians by informing consumers of the specific reasons for any adverse underwriting decisions". The Bulletin also directs "all persons engaged in the insurance industry in California", in the broadest terms possible, "to review all applicable laws and to train its personnel in the proper application of all laws applicable to insurance". Finally, the Bulletin asserts that the California Department of Insurance "reserves the right to audit and review all business practices of Insurers", including an insurer's marketing criteria, programs, algorithms and models, rating, claims and underwriting practices.

Connecticut Requires Insurers to Certify Compliance Annually

As previously reported, the Connecticut Insurance Department (CID) has also issued a reminder (the Notice) stating that the state expects insurers and other licensees to fully comply with applicable federal and state anti-discrimination laws. Notably, the CID requires annual certification by domestic insurers; the first such certification is due by September 1, 2022. The Notice reminds insurers that, similar to the California Bulletin discussed above, the CID will include anti-discrimination compliance in insurers' periodic examinations. The Notice also reminds insurers that, with respect to AI and ML, the CID will review the technology whether developed in-house or purchased from third parties. Regarding the scope of the CID's reviews of the use of big data in particular, the entire big data ecosystem — from social media to the Internet of Things — will be covered.

KEY TAKEAWAYS

The above activity provides examples of increased scrutiny by insurance regulators regarding the use of new technologies, including AI, ML and big data, and regulators' determination to confirm compliance with applicable state and federal anti-discrimination standards. Industry will continue to be invited (if not required) to provide survey responses as regulators examine the use of AI, ML and big data across various business lines and for various purposes. Regulators are beginning to signal that periodic financial and market conduct examinations will include these tools, just as they will likely scrutinize filings that use algorithms developed with these tools. From an overall compliance perspective, all insurers and licensees should consider whether staff training is appropriate, especially in states like California and Connecticut that will require the filing of periodic compliance certifications.

McDermott’s fully integrated Insurance Transactions and Regulation team represents a broad and diverse base of innovative clients in the insurance industry. Please feel free to contact the authors of this article or your usual McDermott contact with any questions.

Sharon D. Cole