Algorithms, AI and Proxy Discrimination: Insurance Regulators Continue to Examine Private Passenger Auto Insurance; Homeowners and Life Insurance Up Next

Overview


As insurers and managing general agents (MGAs) increasingly use artificial intelligence (AI), machine learning (ML) and various forms of Big Data in decision-making related to product design, marketing, rating/underwriting, fraud detection and claims handling, the National Association of Insurance Commissioners (NAIC) and state regulators are taking a closer look at how insurers and other regulated entities use these new technologies and data sources, as well as at how regulators should establish a framework to monitor such rapidly evolving tools.

Insurers and licensees should also keep an eye on federal developments, including the US Department of Justice (DOJ) and Meta’s settlement agreement resolving allegations that Meta’s algorithms discriminated against Facebook users based on characteristics protected under the Fair Housing Act; the Federal Trade Commission’s (FTC) report to US Congress warning about the use of AI to combat online problems; and the FTC’s consideration of rulemaking to “ensure that algorithmic decision-making does not result in unlawful discrimination.” (See also federal legislative proposals including the American Data Privacy and Protection Act (HR 8152) and the Health Equity and Accountability Act of 2022 (HR 7585).)

In Depth


BIG DATA AND ARTIFICIAL INTELLIGENCE (H) WORKING GROUP

The NAIC’s Innovation, Cybersecurity and Technology (H) Committee—the first new “letter committee” established by the NAIC in roughly 20 years—consists of several working groups, including the Big Data and Artificial Intelligence (H) Working Group (the Working Group). The Working Group met on July 14, 2022, to present a preliminary analysis of industry responses to its Artificial Intelligence and Machine Learning (AI/ML) Survey. The initial survey, which focused on private passenger auto insurance, was conducted under the auspices of nine states and was limited to larger auto insurance writers (those exceeding $75 million in annual gross written premium). It indicated that almost 90% of respondents are applying AI/ML to one or more business functions. Insurers reported that AI/ML is used most frequently in claims, followed by fraud detection, marketing, rating, underwriting and loss prevention, in that order. Full survey results may be released during the upcoming NAIC Summer National Meeting in Portland, Oregon, when the Working Group meets on August 10, 2022. Regulators plan to leverage the initial AI/ML survey and will conduct surveys on homeowners and life insurance lines later this year. In addition to its survey work, the Working Group is evaluating the activities of third-party data and model vendors and is developing a recommended regulatory framework for monitoring and overseeing the industry’s use of those vendors.

STATE REGULATORS’ ACTIVITY

Washington, DC, Hearing

On June 29, 2022, the District of Columbia Department of Insurance, Securities & Banking (DISB) held a hearing with stakeholders to discuss developing a process through which DISB will gather and evaluate data related to unintentional bias in private passenger auto insurance. The data will be used to inform DISB’s diversity, equity and inclusion initiative on insurers’ use of non-driving factors in underwriting and rating. DISB Commissioner Karima Woods explained that 27 insurance groups are writing private passenger auto insurance, comprising $387 million in premiums, in the District of Columbia, with the top five groups writing 85% of the premiums.

California Bulletin 2022-5

On June 30, 2022, Insurance Commissioner Ricardo Lara issued Bulletin 2022-5 (the Bulletin) “to remind all insurance companies [including nonadmitted/surplus lines insurers] and licensees of their obligation to market and issue insurance, charge premiums, investigate suspected fraud, and pay insurance claims in a manner that treats all similarly-situated persons alike.” The Bulletin discusses the need to “avoid both conscious and unconscious bias or discrimination that can and often does result from the use of artificial intelligence, as well as other forms of ‘Big Data.’” The Bulletin also highlights growing concern with “the use of purportedly neutral individual characteristics as a proxy for prohibited characteristics that results in racial bias, unfair discrimination, or disparate impact.”

The Bulletin explains that “before utilizing any data collection method, fraud algorithm, rating/underwriting or marketing tool, insurers and licensees must conduct their own due diligence to ensure full compliance with all applicable laws” and “provide transparency to Californians by informing consumers of the specific reasons for any adverse underwriting decisions.” The Bulletin also directs “all persons engaged in the business of insurance in California,” in the broadest terms possible, “to review all applicable laws and train their staffs on the proper application of all laws applicable to insurance.” Finally, the Bulletin asserts that the California Department of Insurance “reserves the right to audit and examine all insurer business practices including an insurer’s marketing, rating, claim, and underwriting criteria, programs, algorithms, and models.”

Connecticut Requires Insurers to Certify Compliance Annually

As previously reported, the Connecticut Insurance Department (CID) also published a reminder (the Notice) that the state expects insurers and other licensees to be in full compliance with applicable state and federal anti-discrimination laws. Although the Notice is not explicit on this point, the CID requires an annual certification by domestic insurers; the first such certification is due by September 1, 2022. The Notice reminds insurers that, much as the California Bulletin discussed above provides, the CID will include anti-discrimination compliance in its periodic examinations of insurers. The Notice also makes clear that, with respect to AI and ML, the CID will examine technology whether developed internally or purchased from third parties. As to the CID’s reviews of the use of Big Data in particular, the entire Big Data ecosystem—from social media to the Internet of Things—will be within scope.

KEY TAKEAWAYS

The above activity illustrates insurance regulators’ increased scrutiny of new technology, including AI, ML and Big Data, and their determination to confirm compliance with applicable state and federal anti-discrimination standards. The industry will continue to be asked (if not required) to provide survey responses as regulators review the use of AI, ML and Big Data across various classes of business and for various other purposes. Regulators are beginning to serve notice that periodic financial and market conduct examinations will cover these tools, as presumably will rate filings that use algorithms developed with them. From an overall compliance perspective, all insurers and licensees should consider staff training, particularly in states such as California, which directs licensees to train staff on applicable laws, and Connecticut, which requires annual compliance certifications.

McDermott’s fully integrated Insurance Transactions and Regulation team represents a wide and diverse group of clients innovating in the insurance industry. Please do not hesitate to contact the authors of this article or your regular McDermott contact with any questions.