Overview
Generative artificial intelligence (GenAI) is becoming more prevalent in the workplace, including as a tool for human resources (HR) leaders to use in their employment practices. At the same time, close to a dozen states have enacted (or are considering) legislation regulating the use of GenAI in employment practices. Companies’ legal, HR, and IT teams should work together to review the technology they are deploying in the workplace for compliance with employment, privacy, and AI laws at the federal and state levels.
Using these burgeoning technologies without understanding how their algorithms work or what data they rely on to produce their outputs can expose employers to potential class actions based on privacy laws, AI regulations, and employment claims – specifically, alleged disparate impact discrimination and wage and hour violations. This article examines areas of focus for employers, recent legal developments governing the use of GenAI in the workplace, potential approaches to compliance, and next steps to mitigate legal risk.
In Depth
GENERATIVE AI AT A GLANCE
AI can primarily be classified into two categories: “predictive” and “generative.” Predictive AI performs statistical analysis to forecast an outcome. GenAI, while premised on similar predictive techniques, generates new content rather than merely predicting a result.
Most AI programs contemplated for deployment in human resources and workforce management today rely on large language models (LLMs). A subset of generative AI, LLMs are learning models that are pretrained on vast amounts of data from myriad sources, including the public internet. LLMs take text inputs, analyze them, and produce an output based on the pretrained model. Many LLM-based tools are then refined through “supervised learning,” a category of machine learning that uses pre-labeled datasets to train the model to recognize patterns and predict outcomes.
To illustrate, a company might be contemplating a software system to significantly reduce the human effort spent on reviewing and selecting candidates to interview for an open role from thousands of resume submissions. By using GenAI, the company may be able to sift through thousands of resumes in seconds based on criteria it specifies. The desired outcome (i.e., selection of only candidates with a certain skillset or expertise) is called the “target.”
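To make the “supervised learning” and “target” concepts concrete, the following is a minimal, hypothetical Python sketch of training a text classifier on pre-labeled resume snippets using the open-source scikit-learn library. The example resumes, labels, and skill criteria are invented for illustration and do not reflect any particular vendor’s system.

```python
# Minimal, hypothetical sketch of supervised learning for resume screening.
# The labeled examples and skill criteria below are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Pre-labeled dataset: 1 = matched the desired skillset, 0 = did not.
resumes = [
    "5 years Python, built ML pipelines for payroll analytics",
    "Retail associate, customer service, cash handling",
    "Data engineer: SQL, Spark, ETL, cloud infrastructure",
    "Line cook with food safety certification",
]
labels = [1, 0, 1, 0]  # the "target": candidates with the desired expertise

# Train a simple text classifier on the labeled examples.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(resumes, labels)

# Score a new, unseen resume; a human reviewer should make the final call.
new_resume = ["Software developer, Python and SQL, data pipeline experience"]
print(model.predict_proba(new_resume)[0][1])  # estimated match probability
```

With only four training examples the model itself is meaningless; the point is that its output is entirely a function of the labeled data it was trained on, which is why the composition and diversity of that training data matter.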
In a growing movement to maximize efficiencies in HR management, many companies are considering and utilizing GenAI for one or more purposes, such as:
- Recruitment (e.g., Greenhouse, Workable)
- Workforce management (e.g., Workday, BambooHR)
- Performance management (e.g., Talent Signal by Rippling)
- Payroll (e.g., Rippling, Gusto)
However, the vast universe of data an LLM might rely on – one of its key advantages – can also lead to pitfalls when not properly controlled. For instance, the LLM might output a nonsensical or wholly inaccurate response, known as a “hallucination.”
For this reason, it is crucial to leverage GenAI not to make employment decisions but rather to create efficiencies in human-led processes. More than ever, companies need to learn from their developers and vendors precisely how the supervised learning phase of a contemplated technology works and whether diverse datasets were used to train the model, which would increase the fidelity of its outputs.
LEGAL LANDSCAPE
Given the relative novelty of GenAI and LLMs, state legislators have sought over the past few years to place parameters on the development and use of these technologies. These parameters focus on addressing bias, imposing obligations on employers to self-audit the GenAI technologies they implement, and providing job applicants and employees with greater transparency by requiring notice when GenAI is used in job application and workplace management processes. The following are a few notable examples:
- Colorado: Senate Bill 205 (eff. 2026). Colorado has enacted the most comprehensive legislation in the United States to date, requiring (1) notice to job applicants if AI assisted in an adverse decision and (2) an opportunity for applicants to challenge the decision and seek human review of their application. The state attorney general recently stated that the legislation is aimed at “flagrant” violations.
- Illinois: House Bill (HB) 3773 (enacted 2024; eff. January 1, 2026). Under Illinois’ earlier Artificial Intelligence Video Interview Act, employers who analyze video interviews using AI must (1) notify each applicant in advance, (2) explain how the AI works and what characteristics it evaluates, and (3) obtain the applicant’s consent. Employers who rely solely upon AI analysis must also collect and report the race and ethnicity of job applicants to the Illinois Department of Commerce and Economic Opportunity. HB 3773 amends the Illinois Human Rights Act (IHRA) to prohibit employers from using AI in a way that causes a discriminatory effect on any class protected under the IHRA.
- There are also several similar pending bills in California, Connecticut, Massachusetts, New Jersey, New York, Utah, and Vermont.
- The California Civil Rights Department has proposed regulations that, if approved, would take effect on either July 1, 2025, or October 1, 2025, and would provide, among other things, that the absence of anti-bias testing of an automated decision-making system constitutes relevant evidence in discrimination claims. Presumably, this guidance will be leveraged to support future disparate impact discrimination claims against businesses using automated processes in workplace management.
At the same time, the Trump administration is moving toward governmental deregulation of AI development and, accordingly, issued Executive Order (EO) 14179, Removing Barriers to American Leadership in Artificial Intelligence, on January 23, 2025. The EO encourages the development of AI systems that are “free from ideological bias or engineered social agendas” and revokes certain Biden administration AI policies (EO 14110, issued October 30, 2023) to promote US global leadership in AI development. Although EO 14179 is largely directed at governmental agencies, it nevertheless signals the Trump administration’s position as favorable toward the use and development of AI.
Courts are also seeing their fair share of activity in the GenAI workplace landscape. For instance, in Mobley v. Workday, Inc., 740 F. Supp. 3d 796 (N.D. Cal. 2024), a job applicant filed a putative class action against Workday alleging that its AI-driven applicant screening tools discriminated against him on the basis of his race, age, and disability. Mobley further alleged that he submitted more than 100 job applications to different employers that utilized Workday’s candidate-skills-match software and was rejected by every one. His theory of liability is that Workday’s algorithms led to disparate impact discrimination. The court denied Workday’s motion to dismiss and is presently considering arguments to conditionally certify a collective action. The court recently noted that “people were getting rejected ‘across the board’ from jobs for which they were generally qualified” and that the one common component was Workday’s machine learning job recommendations. The court suggested that if Workday was “creating what is effectively a common test, it is the common question . . .”
NEXT STEPS
Given the potential risks and pitfalls associated with implementing GenAI technologies in HR and workplace management processes, companies should carefully vet and address the following issues when adopting GenAI-based HR and employment management software:
- Comply with notice requirements. Although only a handful of states have passed employment-specific AI regulations, more are likely to follow. These laws generally share a few common principles, including obligations to alert applicants that an AI system is in use, explain what the system evaluates, and disclose how the business uses the resulting data.
- Maintain records and implement strict privacy safeguards. Because AI systems rely on large volumes of applicant data, implement strong data security measures to guard against breaches and safeguard applicants’ sensitive personal information.
- Review AI results to assess potential bias. Conduct a privileged internal review of the AI software being used to determine whether bias has developed and, if so, promptly address it. This includes reviewing inputs, how the technology undergoes the supervised learning process, and the defined targets; a minimal illustration of one such statistical check appears after this list.
- Understand from AI vendors how their systems process data. Find out how the AI technology being contemplated or used creates criteria, applies controls, and reaches final decisions. For example, if your company is working with a GenAI vendor to screen resumes, tailor the program (or supply broad skills-matching terms) so that it casts a wide net, and provide in writing (perhaps on the job posting itself) an alternative submission method for candidates who believe they are qualified but do not receive a follow-up.
- Collaborate with counsel to review vendor contracts. Assess how the vendor will share responsibility for, and help remediate, issues that arise when implementing its AI technology.
- Develop proprietary AI. If business considerations permit, develop in-house AI tools to ensure compliance with federal, state, and local laws and to tailor their usage to business needs.
- Provide training on the AI system to staff involved in hiring procedures. Ensure that your staff knows what to look for when implementing AI systems, and train them to explain the AI systems to applicants who ask about the process.
- Allow individuals with disabilities to request reasonable accommodations. If implementing technology that could disparately impact individuals with disabilities, provide an easy-to-access alternative for those who request accommodations.
- Engage meaningfully with the workforce. New technology, concerns about job loss, and highly publicized worries about AI may cause unrest, which can decrease morale. Take time to gauge workplace sentiment about AI and, when business needs allow, address employee concerns.
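As a concrete illustration of the privileged bias review described above, the following is a minimal, hypothetical Python sketch of the “four-fifths rule” screen drawn from the EEOC’s Uniform Guidelines, which flags any group whose selection rate falls below 80% of the highest group’s rate. The group labels and counts are invented, and a real audit should be designed with counsel and statistical experts.

```python
# Hypothetical four-fifths (80%) rule check on AI screening outcomes.
# Group labels and counts below are invented for illustration only.

selected = {"group_a": 48, "group_b": 21}   # applicants the tool advanced
applied = {"group_a": 100, "group_b": 80}   # total applicants per group

# Selection rate for each group, and the highest rate as the benchmark.
rates = {group: selected[group] / applied[group] for group in applied}
benchmark = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / benchmark
    status = "REVIEW" if impact_ratio < 0.8 else "ok"
    print(f"{group}: rate {rate:.2f}, impact ratio {impact_ratio:.2f} ({status})")
```

In this invented example, group_b’s selection rate is roughly 55% of group_a’s, well below the 80% threshold, which would warrant closer review of the tool’s inputs and targets.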
McDermott’s cross-practice team is at the forefront of the evolution and continued development of AI, including its legal implications and business impacts. For questions about, or guidance in navigating, the use and implementation of GenAI in the workplace for increased employee satisfaction and litigation avoidance, contact the authors of this article or your regular McDermott lawyer. Learn about our AI Toolkit.