Artificial Intelligence Law Center - McDermott Will & Emery



Given the hype around artificial intelligence and the expectations for it to be widely used, it is important to understand the legal implications of these new technologies. Our cross-practice team is closely monitoring the evolution and continued development of AI, including the legal implications and business impacts. This resource center will remain updated with the latest information and insights from our team.

Subscribe now to receive updates, and please get in touch with us to discuss any issues facing your business.





Healthcare policy leaders from McDermott+Consulting share their insights on the state of generative AI oversight by Congress and federal agencies and how companies can actively participate in the burgeoning AI policymaking process and the development of regulations governing AI in healthcare. They also provide tips on securing Medicare coverage for such innovative offerings.

Learn how healthcare providers and health services companies can seize the opportunities presented by generative AI and large language models while navigating the industry’s fragmented and complex regulatory landscape. We’ll explore which US, EU and UK regulations you should be watching, examine the types of liability raised by health AI, and offer practical steps your organization can take to develop, implement and acquire AI solutions.

Discover how AI’s rapidly growing role in the workplace—including using AI tools to enhance deliverables, streamline operations, assist with recruitment and supervise employees—is catching the attention of the EEOC and other regulators. We’ll share proactive compliance tactics and best practices to maximize AI’s benefits while minimizing legal risks.

Ensure your business thrives at the intersection of innovation and regulatory compliance by understanding how AI impacts advertising and competition. This webinar will explore developments in AI-related enforcement and identify potential regulatory landmines your company may face while navigating the evolving AI landscape.

Learn why corporate governing boards should leverage emerging AI guidance and regulatory frameworks to design their own framework for responsible oversight of AI development and deployment. Join AI subject-matter specialists who will share experiences and practical insights for designing and implementing governance oversight frameworks that strike the right balance between harnessing AI’s enormous potential for transforming healthcare, keeping pace with the warp-speed evolution of AI opportunities, and managing the many complex enterprise risks they present.

Explore the world of generative AI and related legal and business risks. We’ll address practical considerations for using AI tools such as ChatGPT, GitHub Copilot, Bard, DALL-E and Stable Diffusion, and provide you with a baseline understanding of the technology, risks involved and how organizations can manage such risks.

Prepare your business for the rapidly evolving risks and challenges posed by AI with best practices from our experienced privacy and cybersecurity lawyers. We’ll lay the foundation for shaping a secure AI-driven future—which includes developing an adaptable compliance strategy—so your company can avoid risk, capitalize on opportunity and preserve the integrity of the data you hold.

Learn how the surge in generative AI sparks new challenges for companies focused on protecting, enforcing and monetizing their intellectual property. We’ll set the stage for the emerging regulatory and legal considerations surrounding AI-assisted works and address potential hurdles to keep your company and its IP ahead of the AI curve.


United States

    • Summary: Senators Elizabeth Warren (D-MA), Michael Bennet (D-CO), Lindsey Graham (R-SC), and Peter Welch (D-VT) sent a letter to Senate Majority Leader Chuck Schumer (D-NY) expressing support for a new independent federal agency to oversee and regulate large technology firms. The letter comes on the heels of the Schumer-led AI Insight Forum series, where the Senators say participants made clear Congress must regulate AI. Further, the letter cites the need for a single agency acting across sectors, as opposed to a potentially fragmented approach spread across numerous federal agencies.
      As mentioned in the press release, Warren, Bennet, Graham, and Welch have all introduced legislation to create a dedicated agency to regulate dominant digital platforms on behalf of the American people. Last year, Graham and Warren introduced the Digital Consumer Protection Commission Act to establish a new commission to regulate online platforms, promote competition, protect privacy, protect consumers and strengthen national security. In 2022, Bennet and Welch introduced the Digital Platform Commission Act to create an expert federal agency able to regulate digital platforms to protect consumers, promote competition and defend the public interest.
    • Summary: The House Energy and Commerce Committee held a hearing titled, “Leveraging Agency Expertise to Foster American AI Leadership and Innovation,” to explore concerns and opportunities related to the development and use of AI, emphasizing the need for federal oversight and safeguards. Members discussed the implications of AI deployment in various sectors, including healthcare, privacy, and national security. The hearing focused on evaluating existing executive orders, legislative proposals, and strategies to strike a balance between harnessing AI’s transformative power and ensuring responsible, secure, and ethical AI development and implementation. Key health-related issues discussed included the importance of a comprehensive data privacy standard that addresses health information outside of traditional health records protected by HIPAA; concerns regarding the integration of AI in healthcare; and FDA’s active role in approving AI applications in healthcare, including FDA’s close coordination with ONC.
    • Summary: On November 15, 2023, Senators John Thune (R-SD), Amy Klobuchar (D-MN), Roger Wicker (R-MS), John Hickenlooper (D-CO), Shelley Moore Capito (R-WV), and Ben Ray Luján (D-NM), members of the Senate Committee on Commerce, Science, and Transportation, introduced the Artificial Intelligence (AI) Research, Innovation, and Accountability Act of 2023. The bill requires the National Institute of Standards and Technology (NIST) to facilitate new standards development and develop recommendations for risk-based guardrails. The bill also provides for increased transparency notifications regarding generative AI to users by large internet platforms, the performance of detailed risk assessments by companies deploying critical-impact AI, and certain certification frameworks for critical-impact AI systems. The bill also requires the Commerce Department to establish a working group to provide recommendations for the development of voluntary, industry-led consumer education efforts for AI systems. Additional information on the bill is available here.
    • Summary: In response to President Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, OMB announced it is releasing for comment a new draft policy on Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence. The draft policy focuses on three main pillars: (1) strengthening AI governance structures in federal agencies; (2) advancing responsible AI innovation; and (3) managing risks from the use of AI by directing agencies to adopt mandatory safeguards for the development and use of AI that impacts the rights and safety of the public. The comment period closed on December 5, 2023.
    • Summary: Senator Ron Wyden, chair of the Senate Finance Committee, Senator Cory Booker, and Representative Yvette Clarke introduced the Algorithmic Accountability Act of 2023, which regulates AI systems making “critical decisions,” defined to include decisions regarding health care. The bill requires companies to conduct impact assessments for effectiveness, bias and other factors when using AI to make critical decisions, and to report select impact assessment documentation to the FTC. The bill also tasks the FTC with creating regulations that provide assessment instructions and procedures for ongoing evaluation, requires the FTC to annually publish an anonymized report creating a public repository of automated critical decision data, and provides the FTC with resources to hire 75 staff and establish a Bureau of Technology to enforce the law. The sponsors stated the bill is intended to be “a targeted response to problems already being created by AI and automated systems.” A summary of the bill is here. The full bill text is here.
    • Summary: On September 12, 2023, the Senate Judiciary Subcommittee on Privacy, Technology, and the Law held a hearing titled “Oversight of A.I.: Legislating on Artificial Intelligence.” The purpose of the hearing was to discuss oversight of AI, including the harms and benefits of AI both domestically and internationally. Committee members examined potential safeguards and oversight efforts that will balance innovation and privacy protections. Contact us to request a full summary of the hearing. Key takeaways include:
      1. AI is expanding in use and, with that, poses a potential harm to national security.
      2. Concerns were expressed about AI misinformation in the upcoming 2024 election, the negative impacts of AI on children, and the harm of data collection and AI use overseas with different regulations than in the U.S.
      3. Concerns were expressed regarding loss of jobs to AI, though there was also recognition of the economic benefits of AI.
      4. Committee members emphasized the need for bipartisan support for carefully crafted legislation.


United States

    • Summary: The White House released a fact sheet detailing actions taken by the Biden-Harris administration to strengthen AI safety and security, consistent with several directives in the administration’s AI Executive Order (EO) issued in October 2023. The fact sheet highlights that the administration has completed all of the 90-day actions outlined in the EO and advanced other directives that the EO tasked over a longer timeframe. The progress update highlights that the administration has used Defense Production Act authorities to compel certain AI developers to report information, including AI safety test results, to the Department of Commerce. In addition, the Department of Commerce has proposed a draft rule that would compel US cloud companies that provide computing power for foreign AI training to report these activities. The administration has also completed risk assessments covering AI’s use in every critical infrastructure sector. The White House AI Council, consisting of top officials from a wide range of federal departments and agencies, has been convened to oversee these efforts. You can read more about the AI EO’s impact on healthcare and future implementation deadlines in our On the Subject.
    • Summary: The White House announced a voluntary “commitment” from 28 healthcare provider and payor organizations that develop, purchase and implement AI-enabled technology for their own use in healthcare activities to ensure AI is deployed safely and responsibly in healthcare. The fact sheet is available here. Specifically, these companies are committing to:
        1. Developing AI solutions to optimize healthcare delivery and payment by advancing health equity, expanding access, making healthcare more affordable, improving outcomes through more coordinated care, improving patient experience and reducing clinician burnout.
        2. Working with their peers and partners to ensure outcomes are aligned with fair, appropriate, valid, effective and safe (FAVES) AI principles, as established and referenced in HHS’s Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing (HTI-1) rule.
        3. Deploying trust mechanisms that inform users if content is largely AI-generated and not reviewed or edited by a human.
        4. Adhering to a risk management framework for applications powered by foundation models.
        5. Researching, investigating and developing AI swiftly but responsibly.


    • Summary: The White House announced a voluntary “commitment” from seven leading AI companies to help advance the safe, secure and transparent development of AI technology. Specifically, the companies are committing to:
      1. Internal and external security testing of their AI systems prior to release
      2. Sharing information on managing AI risks with industry, government, civil society and academia
      3. Investing in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights
      4. Facilitating third-party discovery and reporting of vulnerabilities in their AI systems
      5. Developing robust technical mechanisms, such as watermarking systems, to ensure that users know when content is AI generated
      6. Publicly reporting the capabilities, limitations and areas of appropriate and inappropriate use of their AI systems
      7. Prioritizing research on the societal risks posed by AI technology, including avoiding harmful bias and discrimination, and protecting privacy; and
      8. Developing and deploying advanced AI systems to help address important societal challenges.

      The White House also announced that the Office of Management and Budget (OMB) will soon release draft policy guidance for federal agencies to ensure that the development, procurement and use of AI systems centers around safeguarding the American people’s rights and safety.

    • Summary: Representatives Ted Lieu (D-CA), Ken Buck (R-CO), and Anna Eshoo (D-CA), along with Senator Brian Schatz (D-HI), introduced a bill that would create a National Commission on Artificial Intelligence (AI). The National AI Commission Act would create a national commission focused on the question of regulating AI, tasked with reviewing the United States’ current approach to AI regulation, making recommendations on any new office or governmental structure that may be necessary, and developing a risk-based framework for AI. The group would be composed of experts from civil society, government, industry and labor, along with those with technical expertise, who would come together to develop a comprehensive framework for AI regulation. It would include 20 commissioners, of whom 10 would be appointed by Democrats and 10 by Republicans. A one-pager is here. The full bill text is here. The bill would also require three reports:
      • Interim report: At the six-month mark, the commission will submit to Congress and the President an interim report, which will include proposals for any urgent regulatory or enforcement actions.
      • Final report: At the year mark, the commission will submit to Congress and the President its final report, which will include findings and recommendations for a comprehensive, binding regulatory framework.
      • Follow-up report: One year after the final report, the commission will submit to Congress and the President a follow-up report, which will include any new findings and revised recommendations. The report will include necessary adjustments pertaining to further developments since the final report’s publication.
    • Summary: The White House announced new actions to advance the research, development and deployment of responsible AI. The announced actions include an updated National AI Research and Development Strategic Plan, updated for the first time since 2019 to provide a roadmap that outlines key priorities and goals for federal investments in AI research and development. The announcements also include a new Request for Information (RFI) on National Priorities for Artificial Intelligence to inform the Administration’s ongoing AI efforts, as well as a new report from the US Department of Education’s Office of Educational Technology on AI and the Future of Teaching and Learning: Insights and Recommendations, summarizing the risks (including algorithmic bias) and opportunities related to AI in teaching, learning, research and assessment. OSTP sought public comment on a variety of topics in the RFI, including questions relating to protecting rights, safety, and national security; advancing equity and strengthening civil rights; bolstering democracy and civic participation; promoting economic growth and jobs; and innovating in public services. Comments were due on June 7, 2023.
    • Summary: The President’s Council of Advisors on Science and Technology launched a working group on generative AI to help assess key opportunities and risks, and provide input on how best to ensure that these technologies are developed and deployed as equitably, responsibly, and safely as possible. The working group, which held its most recent public meeting on Friday, May 19, 2023, invites submissions from the public on how to identify and promote the beneficial deployment of generative AI, and on how best to mitigate risks. The call for submissions outlines five specific questions for which the working group is seeking responses. Submissions were due August 1, 2023.
    • Summary: The White House announced new actions to further promote responsible American innovation in AI and protect people’s rights and safety. The actions include announcing $140 million in funding to launch seven new National AI Research Institutes, an independent commitment from leading AI developers to participate in a public evaluation of AI systems, and draft policy guidance by the Office of Management and Budget on the use of AI systems by the US government for public comment. The White House noted that these steps build on the administration’s previous efforts to promote responsible innovation, including the Blueprint for an AI Bill of Rights and related executive actions announced in Fall 2022, as well as the AI Risk Management Framework and a roadmap for standing up a National AI Research Resource released earlier in 2023.
    • Summary: The Civil Rights Division of the United States Department of Justice (DOJ), the Consumer Financial Protection Bureau (CFPB), the Federal Trade Commission (FTC) and the US Equal Employment Opportunity Commission (EEOC) released their “Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems,” which reiterates each agency’s commitment to applying existing legal authorities to the use of automated systems and innovative new technologies.
    • Summary: NTIA issued a request for comment on an AI accountability ecosystem. Comments were due on June 12, 2023. Specifically, NTIA sought feedback on what policies can support the development of AI audits, assessments, certifications and other mechanisms to create earned trust in AI systems.
    • Summary: This document establishes five principles and associated practices to support the development of policies and procedures to protect civil rights and promote democratic values in the design, use and deployment of AI systems.