AI: Understanding What's Ahead in Employment - McDermott Will & Emery

Key Takeaways | AI and the Next Frontier: Understanding What’s Ahead in Employment


During this webinar, Employment Partner Joseph Mulherin and Emily Underwood, Associate Clinical Professor of Law and Bluhm-Helfand Director of the Innovation Clinic at the University of Chicago Law School, discussed how artificial intelligence's (AI) rapidly growing role in the workplace is catching the attention of the US Equal Employment Opportunity Commission (EEOC) and other regulators. They shared proactive compliance tactics and best practices for maximizing AI's benefits while minimizing legal risks.

Top takeaways included:

  1. Vet the AI vendor and product. Employers are increasingly adopting employment-related AI tools for recruiting/interviewing; onboarding; performance management; payroll and benefits; timekeeping; scheduling; and employment records management. When appropriately designed and applied, these tools can deliver significant efficiencies while lowering costs. However, if these tools—many of which are sold off the shelf—cause the employer to run afoul of the law, the employer (and not the vendor) will in most circumstances be responsible for any unlawful conduct and damages. Many vendor contracts include language disclaiming liability, holding the vendor harmless and requiring the employer to indemnify the vendor for any legal exposure. Before acquiring and/or using off-the-shelf AI products, employers should ensure that the tools have been adequately vetted and tested for compliance with the law. Employers should also confirm that their vendors have adopted certified programs designed to comply with the various employment laws.
  2. Don’t trust that all AI products are compliant. When properly designed and tested, AI products should help employers become even more compliant with the law. Indeed, a properly designed recruiting and interviewing tool that screens resumes based on non-discriminatory, job-related keywords can reduce the probability of discrimination that might otherwise occur through human selection bias. However, the design of some AI products can inherently expose employers to unlawful conduct.
    1. For example, some AI timekeeping tools offer to take time-recording responsibilities away from employees and automatically record their time worked during the workday. These technologies determine whether an employee is working by tracking the employee's movements and/or computer activity. The obvious concern is that an employee could perform work away from the watchful eye of the computer or video camera, leaving that time unrecorded and exposing the employer to an unpaid overtime lawsuit.
    2. As another example, some AI tools allow companies to modify employee schedules based on customer demand and/or workflow. However, such a tool could run afoul of fair scheduling laws (like Chicago's Fair Workweek Ordinance), which require advance notice of schedule changes and impose penalties when changes are made without adequate notice. Employers should involve their legal counsel when considering a new AI tool.
  3. Consider whether AI monitoring tools will impact employee morale. Employers are increasingly using AI tools to monitor employee activity and conduct during the workday. These monitoring records are being used, for example, to assess whether employees are meeting productivity standards, committing time fraud or engaging in misconduct on the job. While employers have the right to monitor employees' performance and conduct in the workplace (subject to state privacy laws), they should remain mindful that employees may not appreciate Big Brother watching their every move. Relatedly, employers should understand that unhappy employees are more likely to file claims with governmental agencies like the National Labor Relations Board (NLRB). The NLRB has threatened to bring enforcement actions against intrusive and/or abusive employer electronic monitoring and automated management practices that chill union activities or interfere with Section 7 rights.
  4. Monitor AI closely. Employers can use AI to generate information that a human then considers when making a decision; however, AI should never stand in for human judgment or be allowed to operate unsupervised. Take care to ensure that AI tools are being used appropriately, and audit results frequently to confirm that the AI is still achieving the desired effect. Never blindly trust that an AI tool is functioning exactly as intended or in a way that reduces liability.

Explore AI’s implications in more industries and service areas, and how you can seize opportunities and mitigate risk across the legal, business and operational areas of your organization, through our webinar series.

