Overview
On May 22, 2025, the US House of Representatives passed the budget reconciliation bill, known as the “One Big Beautiful Bill Act,” which includes language that would prohibit any state from enforcing any law or regulation governing artificial intelligence (AI) models, AI systems, or automated decision systems for a 10-year period after the bill’s enactment.
This article examines the potential impact of this moratorium on AI regulation at the state level and what it would mean for developers and deployers.
In Depth
AI REGULATION: THE CURRENT LANDSCAPE
The AI landscape is fast evolving, and the technology has a wide range of use cases that touch almost every aspect of individuals’ lives. The federal government’s response to this new technology’s implications, on the other hand, has been mixed. Different administrations, from Obama through this second Trump administration, have been largely circumspect, relying on broad policies slanted more or less toward principles of “responsible AI.” Instead of coming to a consensus on a set of requirements to provide guardrails for developers and deployers, Congress and the executive branch have limited their legislative and regulatory activities, called for research into AI, and generally directed regulatory bodies to consider ways to advance and regulate the technology. Even former US President Joe Biden’s executive actions, including the Blueprint for an AI Bill of Rights, were little more than unenforceable guiding principles.
Absent clear federal guidance, state legislatures have filled the void through piecemeal legislation. As a result, the current AI regulation landscape involves different legal requirements from one state to the next. While the existing regulation is limited to a handful of states, continuing down the path of overlapping state legal requirements will make it difficult for AI developers and deployers to efficiently comply with AI regulation across the country.
HOUSE REPUBLICANS LOOK TO DELAY REGULATION
In the absence of federal laws or regulations and in light of the piecemeal approach that state legislatures have taken, the House of Representatives passed the Artificial Intelligence and Information Technology Modernization Initiative as part of the House reconciliation bill. The initiative would place a 10-year moratorium on state enforcement of any law or regulation governing “artificial intelligence models,” “artificial intelligence systems,” or “automated decision systems.” The initiative explicitly states that its primary purpose is to remove legal impediments to facilitate the deployment and operation of AI.
The moratorium imposed by the initiative does not apply to state laws or regulations:
- The primary purpose and effect of which is to make it easier to deploy and operate AI.
- That do not impose certain “substantive” requirements on AI models, unless the requirements are imposed under federal law or are generally applicable to other models and systems that perform similar functions.
- That impose only reasonable fees and bonds on AI models and treat other similar models the same.
The moratorium’s carve-outs are vague and broad. The bill does not define which requirements would be considered “substantive.” This could make it easy for AI developers and deployers to argue that a state law or regulation is subject to the moratorium because it imposes a “substantive requirement.” However, it could also prompt states to enact more sweeping laws and regulations that cover far more technology than AI models alone, so that they are “generally applicable to other models” and fall outside the moratorium.
The bill, including the initiative, passed the House of Representatives mostly along party lines with Republicans in support.
WHAT DOES IT MEAN FOR AI DEVELOPERS AND DEPLOYERS?
If signed into law, the initiative would fundamentally alter the regulatory landscape for AI developers and deployers. Any law purporting to regulate AI specifically would clearly be unenforceable under this bill (assuming it survives legal challenge). What is less clear is whether laws that may be used to restrict the development or deployment of AI tools, but which are not written specifically in relation to AI, would similarly be unenforceable. Some states have previously hinted at regulatory levers that may be utilized even absent state legislation that explicitly regulates AI.
On January 13, 2025, California Attorney General Rob Bonta released two legal advisories reflecting the view that AI may be regulated under existing laws, even if those laws do not directly reference AI. For example, Attorney General Bonta noted that the California Consumer Privacy Act prevents AI developers from processing personal information for undisclosed purposes. He also explained that the California Invasion of Privacy Act restricts recording communications without the consent of the parties, which could impact AI training efforts. (For more information on the California attorney general advisories, see our On the Subject.)
Similarly, the Oregon Attorney General released a guidance document on December 24, 2024, applying Oregon’s Consumer Privacy Act to AI. The attorney general explained that AI developers that use personal data to train AI systems must clearly disclose this use in an accessible and clear privacy notice, and they must obtain affirmative consent to use such personal data and provide an opportunity to revoke that consent.
These statements from the Oregon and California attorneys general underscore the ambiguity in the bill’s language. A more recent letter from 40 state attorneys general to the majority and minority leaders of the House and the US Senate highlights the likelihood of state efforts to enforce state laws, and the ongoing need for AI developers and deployers to be mindful of the range of implicated state laws. In the letter, the attorneys general noted that “[i]mposing a broad moratorium on all state action while Congress fails to act in this area is irresponsible and deprives consumers of reasonable protections.”
If enacted, the moratorium would also be subject to challenges in courts, which could interpret its provisions more broadly or, more likely because the moratorium would preempt state law, more narrowly. States may test the boundaries of the moratorium by enacting laws that, for example, on their terms apply to an industry broadly but which practically apply more narrowly, or by attempting to skirt the definitions used in the moratorium. Regardless of how states respond legislatively, however, AI developers and deployers would be wise to expect states to attempt to continue enforcing their existing laws, which could significantly restrict the application of the moratorium. Developers and deployers should focus on governance and risk assessment mechanisms that will help prevent algorithmic discrimination, the acquisition and use of private information without appropriate authorization, and other violations of consumer protection laws and principles.
WHAT’S NEXT?
The initiative must now get through the Senate. It remains unclear whether the moratorium on state AI regulation is permissibly included in the bill because it is potentially “extraneous” to the federal budget, and it faces opposition, as indicated by the letter from most state attorneys general. Additionally, since the bill passed the House, some members of Congress who supported it have begun speaking out against this provision. Therefore, AI developers and deployers should not celebrate yet and should pay close attention to the initiative over the coming weeks as the “One Big Beautiful Bill Act” works its way through the inevitable legislative process.
* * *
To discuss the initiative’s potential legal implications for your business, reach out to one of the authors of this article or your regular McDermott lawyer.