White House executive order moves to restrict state AI legislation

Overview


On December 11, 2025, the White House issued an executive order (EO) attempting to restrict state-level artificial intelligence (AI) laws. The EO follows bipartisan decisions in Congress to strip state AI law preemption provisions from two separate bills in 2025.

In the past several years, the number of state AI-related laws has increased significantly. In 2025 alone, 38 states adopted more than 100 laws relating to AI. Existing laws span consumer protection, employment, healthcare, election interference, and AI governance, among other areas. The administration’s stated goal is to maintain “global AI dominance” through a “minimally burdensome” framework. To that end, the EO sets out several measures intended to avoid a patchwork of state laws and regulations, reduce barriers to innovation, and ensure consistent oversight of interstate commerce.

In Depth


Implementing the EO

The EO’s first three measures for implementing a “minimally burdensome” AI framework focus largely on states with existing AI laws:

  1. AI litigation task force: The EO instructs the attorney general to establish, within 30 days, an “AI Litigation Task Force” whose “sole responsibility shall be to challenge State AI laws” that are inconsistent with the EO, including on grounds that such laws unconstitutionally regulate interstate commerce, are preempted by existing federal regulations, or are otherwise unlawful in the attorney general’s judgment.
  2. Evaluation of existing state laws: The EO directs the secretary of commerce to publish, within 90 days, an evaluation identifying state AI laws that do not meet the standard of “minimally burdensome.” The secretary of commerce must do so in consultation with the special advisor for AI and crypto, the assistant to the president for economic policy, the assistant to the president for science and technology, and the assistant to the president and counsel to the president. This evaluation must identify “onerous laws that conflict with” the EO and laws that should be referred to the task force. The evaluation must, at a minimum, identify laws that “require AI models to alter their truthful outputs” or that “compel AI developers or deployers to disclose” information in violation of the First Amendment or other provisions of the Constitution.
  3. Conditions on federal funding: Executive agencies are directed to assess their discretionary grant programs to determine whether agencies may condition their grants on states’ decisions to not enact conflicting AI laws or to enter into agreements to not enforce existing AI laws. The secretary of commerce is directed to issue a policy notice within 90 days outlining states’ eligibility for leftover funds once a state fulfills its obligations under the Broadband Equity, Access, and Deployment (BEAD) program. Specifically, the notice will clarify that states with “onerous” AI laws are ineligible for leftover federal funding for broadband access.

Next, the EO directs federal agencies and the administration to publish, or to initiate proceedings to consider issuing, standards and policy statements addressing potential preemption of state AI laws in three key areas:

  1. Federal Communications Commission (FCC) standard: The EO directs the FCC chair, within 90 days and in consultation with the special advisor for AI and crypto, to initiate a proceeding to assess whether to adopt a federal reporting and disclosure standard that preempts existing state laws.
  2. Federal Trade Commission (FTC) policy statement: The FTC chair is directed, within 90 days, to issue a policy statement that addresses the applicability of the FTC Act’s “prohibition on unfair and deceptive acts or practices” to AI models. This policy statement must explain the extent to which state laws that “require alterations to the truthful outputs of AI models” are preempted by the FTC Act.
  3. Legislative recommendation: The special advisor for AI and crypto and the assistant to the president for science and technology will prepare a “legislative recommendation establishing a uniform Federal policy framework for AI that preempts State AI laws that conflict” with the EO.

Exceptions to the legislative recommendation

The EO provides three explicit exceptions from preemption by the legislative recommendation and leaves room for the administration to determine future carve-outs. The EO clarifies that the legislative recommendation would not preempt laws involving child safety protections, AI compute and data center infrastructure, and state procurement and use of AI.

State-level laws that may be affected

A wide range of AI laws may be targeted for preemption. However, laws targeting algorithmic discrimination may be prioritized under this EO. For example, the EO references the Colorado Artificial Intelligence Act (SB24-205), which includes requirements designed to protect against “algorithmic discrimination,” as an example of a law that “may even force AI models to produce false results.” Other US state and local laws that contain specific provisions intended to address the risks posed by algorithmic bias include:

  • California’s automated decision-making technology (ADMT) regulations: Under the California Consumer Privacy Act, businesses are required to perform risk assessments, provide notices and opt-out rights, and conduct cybersecurity audits when they use ADMT to make a “significant decision” about housing, education, employment, healthcare, or financial services. Parts of the ADMT regulations are scheduled to become effective on January 1, 2026.
  • California’s Fair Employment and Housing Act (FEHA) AI regulations: Effective October 1, 2025, these regulations extend the FEHA to cover automated decision systems (ADS) in employment contexts and prohibit discriminatory ADS, primarily targeting three compliance areas: bias testing, recordkeeping, and vendor liability.
  • Colorado Division of Insurance, 3 C.C.R. § 702-10: Colorado requires covered insurers to conduct prescribed testing before using predictive models, external consumer data, or algorithms to underwrite certain personal or small commercial lines, to ensure they do not result in unfair discrimination or disparate impact, and to adopt a governance and risk management framework.
  • Illinois’ Human Rights Act amendments: Effective January 1, 2026, these amendments will prohibit employers from using AI that discriminates against employees on the basis of a protected class.
  • New York City’s Local Law 144: Effective July 5, 2023, this law requires employers and employment agencies to conduct bias audits of automated employment decision tools (AEDTs) that are used to screen candidates or to substantially assist employers at any point in the hiring or promotion process. They must also provide notice to job applicants of the use of AEDTs and their right to request alternative evaluation processes, and they must publish audit results.
  • Texas’ Responsible Artificial Intelligence Governance Act (TRAIGA): Effective January 1, 2026, the TRAIGA prohibits AI systems from being developed or deployed to unlawfully discriminate against a protected class.
  • Utah’s Artificial Intelligence Consumer Protection amendments: Effective May 7, 2025, these amendments require businesses to make certain disclosures when providing a “high-risk” AI system that can be used to make “significant personal decisions” involving financial, legal, medical, or mental health services.

What now?

The EO seeks to restrict further state legislative efforts involving AI that do not meet a standard of “minimally burdensome” and to provide a framework for agencies to craft relevant standards. The scope and authority of the EO, however, are expected to face constitutional challenges concerning states’ rights in the coming months. While the EO could prompt Congress to act, consensus on the terms and duration of any federal preemption framework or moratorium is unlikely to emerge quickly. This presents challenges for businesses that are investing time and resources in complying with, or preparing to comply with, potentially affected laws.

The introduction of the EO is ultimately one factor among several that businesses will have to incorporate into their risk calculation when determining the best approach for their organizations. We will be closely monitoring this space for developments.

If you have questions or would like to discuss any issues related to this EO, contact your regular McDermott Will & Schulte lawyer or one of the authors.

Olivia Andrews, a law clerk in the New York office, also contributed to this client alert.