In a recent joint statement, several federal agencies committed to enforcing their existing regulations against developers, deployers and users of AI systems, specifically citing civil rights, fair competition, consumer protection and equal opportunity concerns. Federal Trade Commission (FTC) Chair Lina Khan and officials from the US Department of Justice (DOJ), the Consumer Financial Protection Bureau (CFPB) and the US Equal Employment Opportunity Commission (EEOC) each reinforced their concerns about automated systems. Their pointed language, joint public commitment and previous enforcement actions in this area make clear that this statement is no mere theater.
Recent attention to, and increasingly widespread use of, AI has led the FTC to issue a series of warnings this year about AI advertising claims. These warnings follow the FTC’s April 2021 guidance on fairness and equity and its June 2022 report examining “how artificial intelligence (AI) may be used to identify, remove, or take any other appropriate action necessary to address a wide variety of specified online harms.” The FTC’s new Office of Technology, designed to “strengthen the FTC’s ability to keep pace with the technological challenges in the digital marketplace by supporting the agency’s law enforcement and policy work,” is expected to take the lead in this area.
The other federal agencies involved have each issued separate guidance in their respective fields. The use of automated decision-making has also been a topic of legislative action by a variety of state and municipal actors, including California and New York City. The DOJ and EEOC have repeatedly warned about disability and employment discrimination arising from the use of AI tools.
Companies considering the use of automated decision-making tools in their hiring or employment practices are advised to pay careful attention to these regulations and legislation, which may affect existing hiring practices previously considered industry-standard.
FTC Chair Khan put the point bluntly:
“We already see how AI tools can turbocharge fraud and automate discrimination, and we won’t hesitate to use the full scope of our legal authorities to protect Americans from these threats[.] Technological advances can deliver critical innovation—but claims of innovation must not be cover for lawbreaking. There is no AI exemption to the laws on the books, and the FTC will vigorously enforce the law to combat unfair or deceptive practices or unfair methods of competition.”
Notably, the joint statement includes explicit warnings about deploying AI in a variety of contexts, noting:
“AI tools can be inaccurate, biased, and discriminatory by design and incentivize relying on increasingly invasive forms of commercial surveillance. The FTC has [ ] warned market participants that it may violate the FTC Act to use automated tools that have discriminatory impacts, to make claims about AI that are not substantiated, or to deploy AI before taking steps to assess and mitigate risks. Finally, the FTC has required firms to destroy algorithms or other work product that were trained on data that should not have been collected.”
In a related comment, CFPB Director Rohit Chopra argued that “Unchecked ‘AI’ poses threats to fairness and to our civil rights in ways that are already being felt.”
This federal warning is a shot across the bow for a variety of industries that may be considering using AI in a broad set of circumstances. It increases compliance uncertainty in an area already fraught with concerns regarding copyright, professional ethics and basic questions about the effectiveness of products that may be prone to producing falsehoods.
The EEOC issued a technical assistance document in May 2022 “explaining how the Americans with Disabilities Act applies to the use of software, algorithms, and AI to make employment-related decisions about job applicants and employees.”
As mentioned above, the FTC has been active in this space both historically and recently, publishing:
Emphasis on the importance of consumer trust in AI technology and concerns about how companies deploy AI technology, including generative AI tools;