New AI Executive Order Seeks to Preempt State AI Laws
On December 11, 2025, the Trump Administration issued a new Executive Order, "Ensuring a National Policy Framework for Artificial Intelligence" (EO), intended to encourage AI development and ensure the U.S. "wins the AI race" by preempting or penalizing burdensome state regulation. The EO follows the release earlier this year of the Administration's AI Action Plan ("Winning the Race") and accompanying Executive Orders purporting to prevent "Woke" AI in the federal government. (DWT's discussion of these earlier actions is here.)
States are certain to challenge any concrete steps the Administration takes to implement this EO.
Concerns about State AI Regulation and Steps to Address Them
The EO notes three concerns with state AI regulation: (1) a "patchwork" of different state regimes would make compliance inherently difficult; (2) certain regulations may require AI developers to "embed ideological bias within models"; and (3) state regulation impinges on interstate commerce.
The EO itself does not preempt any state laws. Instead, it directs four federal agencies (DOJ, Commerce, the FTC, and the FCC) to attack those laws, directs senior White House officials to prepare a legislative recommendation that would preempt conflicting state AI laws, and directs all federal agencies to use funding as leverage to push states toward federal policy.
- DOJ "AI Litigation Task Force." Within 30 days, the Attorney General must create a task force to challenge state laws deemed inconsistent with the EO's "minimally burdensome national policy framework for AI." Potential grounds for challenge include interference with interstate commerce, preemption by federal regulation, and First Amendment and other constitutional violations.
- Commerce Evaluation of State Laws. Within 90 days, the Secretary of Commerce must identify "onerous" state AI laws that conflict with the EO's framework and flag them for referral to the DOJ task force. At a minimum, Commerce must identify laws requiring AI models to alter "truthful outputs" or compel disclosures or reporting "that would violate the First Amendment or any other provision of the Constitution."
- FTC policy statement. Within 90 days, the FTC must issue a policy statement describing how Section 5 of the FTC Act applies to AI, including when the ban on unfair or deceptive acts or practices (UDAP) would preempt state laws that "require" alterations to "truthful outputs" of AI models.
- FCC preemption-oriented proceedings. Within 90 days after Commerce publishes its evaluation, the FCC must initiate proceedings to consider a federal reporting and disclosure standard for AI models that would preempt conflicting state laws.
- Legislative recommendation for a federal AI framework. The EO directs senior White House officials to prepare a legislative recommendation establishing a uniform federal policy framework that would preempt conflicting state AI laws while ensuring "that children are protected, censorship is prevented, copyrights are respected, and communities are safeguarded." The recommendation thus would not preempt state laws on "child safety protections," "AI computing/data center infrastructure," "state government procurement and use of AI," and "other topics as shall be determined."
- Federal funding as leverage. All federal agencies must use federal funding as leverage to encourage state compliance with the EO's policies. The NTIA specifically must, "to the maximum extent permitted by Federal law," withhold certain federal broadband funding from states with laws out of sync with the EO's policies. All other agencies are to "assess their discretionary grant programs" to see if they can condition new or continued funding on compliance with the EO's policies.
Challenges with Preemption
Congress presumably has the power to pass nationally binding AI legislation that would preempt conflicting state law (although, to the extent that AI regulation might conflict with the First Amendment, as the EO suggests, federal legislation would be subject to the same constraint). But Congress has not passed any AI regulatory legislation, and it cannot do so without reaching consensus on what a national AI legislative framework should look like. Such consensus seems unlikely in the near term.
The result is that the success of the Administration's efforts to strike down state AI laws will depend on existing federal laws and regulations. While we can't predict the outcome of every potential attack on state AI laws, there are reasons to think the Administration will face an uphill battle, at least on some fronts.
Scope of FCC Authority
The EO looks to the FCC to promulgate binding, preemptive regulations establishing a "reporting and disclosure standard for AI models." Broadly speaking, the FCC has regulatory authority over telecommunications services, spectrum allocation and usage (including broadcast and media ownership), and cable/video services. AI, by contrast, would seem most naturally to be treated as an "information service," over which the agency lacks regulatory authority. So the statutory basis for, and possible scope of, the preemptive "reporting and disclosure standards" the EO envisions are unclear.
Using Federal Funding as Leverage
The EO directs all federal agencies to investigate using discretionary funding as leverage to encourage states to act in accordance with the EO's policies. Whether it is lawful to condition federal funding to a state under a given program—including non-deployment funding under the BEAD program—based on the state's AI laws will depend to some extent on the specific statutes and regulations governing the funding program at issue. Disputes between universities and the Administration about research funding illustrate the challenges the Administration will face in using federal funding as leverage.
"Truthful Output" and Deceptive Practices
The EO directs the FTC to issue a policy statement laying out when a state requirement that an AI developer alter its model's "truthful outputs" might be an unfair or deceptive act or practice. At the outset, note that while the FTC's policy statement would not itself preempt state law, FTC policy statements can lead to investigations of, and sanctions against, non-compliant businesses.
On the merits, although the EO isn't explicit on this point, its concern that some state laws "embed ideological bias within models" by requiring alterations to "truthful outputs" appears to target state requirements that AI models not be "biased."
One aspect of this concern relates to "machine learning" (ML) models, as well as generative AI models, that analyze large datasets for patterns people might not see and generate output reflecting those patterns. Bias in this type of AI arises, for example, when the dataset the model trains on is itself full of biased information (such as, potentially, historical decisions about hiring or lending, or other material that evidences discrimination). When that happens, relying on the AI for decisions and content going forward will simply perpetuate the historical bias.
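To make this mechanism concrete, below is a minimal, hypothetical Python sketch (invented data, not drawn from any actual system or case) of how a model that learns approval rates from biased historical decisions simply reproduces that bias in new decisions:

```python
from collections import defaultdict

# Hypothetical historical lending decisions: (group, approved).
# Group A was approved 3 of 4 times; Group B only 1 of 4.
historical_decisions = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def train(records):
    """Learn each group's historical approval rate -- the 'pattern' in the data."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in records:
        totals[group] += 1
        approvals[group] += int(approved)
    return {group: approvals[group] / totals[group] for group in totals}

def predict(model, group, threshold=0.5):
    """Approve whenever the learned approval rate clears the threshold."""
    return model[group] >= threshold

model = train(historical_decisions)
print(model)                      # {'group_a': 0.75, 'group_b': 0.25}
print(predict(model, "group_a"))  # True:  the historical pattern favored group_a
print(predict(model, "group_b"))  # False: the historical bias is carried forward
```

The dispute the EO sets up is over how to characterize a correction to this carry-forward effect: as an alteration of "truthful outputs" or as a safeguard against perpetuating unlawful discrimination.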
As a potential example of this concern, the EO asserts that Colorado's law banning unlawful differential treatment or impact by means of an AI may "force AI models to produce false results in order to avoid a 'differential treatment or impact' on protected groups." Colorado, of course, will argue that its law prevents the perpetuation of unlawful differential treatment or impact that disfavors an individual or group of individuals on the basis of their race, religion, or other classification protected under Colorado or federal law.
If this type of law indeed is the underlying concern of the EO, the FTC may find it challenging to explain how state laws designed to address the issue of bias lead to "unfair" or "deceptive" acts or practices.
A First Amendment Conundrum
The EO calls out "laws that require AI models to alter their truthful outputs, or that may compel AI developers or deployers to disclose or report information," as raising First Amendment concerns. This call-out frames these requirements as compelled speech. In defending their laws, states will argue that disclosure requirements are product-safety and consumer-protection measures regulating conduct and preventing deception. Early litigation will likely hinge on whether AI outputs and transparency duties are treated as protected expression or regulable conduct/factual disclosure, and whether mandated disclosures can be characterized as "factual and uncontroversial."
First Amendment concerns about "alter[ing] truthful outputs" are legally much more complex. For starters, it isn't clear that AI outputs are First-Amendment-protected speech at all, as indicated by the comments of several justices in their concurrences in Moody v. NetChoice, 603 U.S. 707 (2024). Second, if AI outputs are speech, whose speech are they? LLM outputs, for example, arise from the probabilistic interaction of a user's prompt and the billions of "parameters" embedded in the model. So, are those outputs the speech of the user? The developer? The AI model itself? Third, AI models, particularly GenAI, are notorious for generating "hallucinations"—false or nonsensical statements—which is why AI users are reminded that the outputs may be false. Steps to avoid hallucinations would seem unlikely to be "unfair" or "deceptive" acts for purposes of the Federal Trade Commission Act.
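The "probabilistic" point is easy to see in a toy sketch (invented vocabulary and probabilities, standing in for the billions of parameters in a real model): the same prompt can yield different outputs on different runs, which complicates treating any single output as anyone's deliberate speech.

```python
import random

# Toy next-word distribution conditioned on a prompt -- invented numbers,
# not any real model's parameters.
NEXT_WORD_PROBS = {
    "the capital of france is": [("Paris", 0.90), ("Lyon", 0.07), ("Berlin", 0.03)],
}

def generate(prompt: str, seed=None) -> str:
    """Sample one continuation from the toy distribution for this prompt."""
    rng = random.Random(seed)
    words, weights = zip(*NEXT_WORD_PROBS[prompt.lower()])
    return rng.choices(words, weights=weights, k=1)[0]

# Identical prompts, different samples: most runs say "Paris," but the toy
# model occasionally emits a confident-sounding wrong answer -- the same
# dynamic behind generative AI "hallucinations."
for seed in range(10):
    print(generate("The capital of France is", seed=seed))
```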
Dormant Commerce Clause?
The EO calls out the potential for a "patchwork" of state AI laws "impinging" on interstate commerce. This seems to be a reference to the so-called "dormant" Commerce Clause, under which a state law that on its face relates only to intrastate activity can be unconstitutional if it has the effect of unduly burdening interstate commerce. The claim would be that, in practical terms, complying with a single state's AI law forces developers to follow that law everywhere.
The viability of this kind of claim against a state's AI laws will be technical and fact-specific. It is straightforward (using IP addresses and other techniques) to closely estimate the location of a user of an online system or mobile app, so AI developers will know the state from which a user is accessing the system. A key question, and a highly technical one, will be whether the AI developer can configure its system to comply with that one state's requirements separately or whether, instead, the only practical choice is to incorporate that state's requirements in the model made available to all users. If complying with the state's requirements requires model configurations that affect all users, a court would need to weigh the purported burden on interstate commerce against local benefits. It is impossible to answer these questions in the abstract, but they are sure to be hotly litigated if and when the Administration brings a challenge on this ground against any specific state law.
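For illustration, here is a minimal sketch, assuming a hypothetical geo-IP lookup (resolve_state is invented) and serving-time policy overlays, of what the "separable" compliance case might look like:

```python
from dataclasses import dataclass, field

@dataclass
class StatePolicy:
    """Hypothetical per-state compliance overlay applied at serving time."""
    required_disclosures: list = field(default_factory=list)

POLICIES = {
    # Invented example: one state requires an AI-use disclosure.
    "CO": StatePolicy(required_disclosures=["This response was generated by an AI system."]),
}
DEFAULT_POLICY = StatePolicy()

def resolve_state(ip_address: str) -> str:
    """Hypothetical geolocation lookup; real systems call a geo-IP service."""
    return "CO" if ip_address.startswith("198.51.") else "XX"

def apply_policy(ip_address: str, model_output: str) -> str:
    """Attach whatever state-specific disclosures apply to this user's location."""
    policy = POLICIES.get(resolve_state(ip_address), DEFAULT_POLICY)
    return "\n".join([model_output, *policy.required_disclosures])

print(apply_policy("198.51.100.7", "Here is the model's answer."))  # CO overlay added
print(apply_policy("203.0.113.9", "Here is the model's answer."))   # no overlay
```

A serving-time overlay like this can be confined to one state's users; a requirement that reaches into training data or model weights generally cannot, and that is where the burden-on-interstate-commerce argument gains force.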
How Legal, Policy, and Compliance Teams Can Prepare
Both AI developers and AI users should plan for a period of regulatory ambiguity and multi-front disputes. Practical steps include:
- Map state-law exposure to the EO's "onerous law" criteria. To anticipate litigation that may affect operations, AI developers and users should build a jurisdiction-by-jurisdiction inventory of state-level AI obligations, focused on those the EO flags as suspect (a minimal inventory sketch follows this list). Note that process-based governance obligations (testing, risk assessments, documentation) may be more defensible against compelled-speech arguments than output/content-adjacent obligations.
- Prepare for uneven enforcement and uncertain preemption. The most immediate impact may be enforcement chill rather than invalidation of any specific state law. Some states may continue to pursue aggressive enforcement, while others may slow enactment of laws or rules to reduce litigation exposure. Some states may even amend their laws or rules to avoid Dormant Commerce Clause or compelled-speech challenges.
- Engage early in FCC and FTC processes. Treat the required FCC and FTC proceedings as "front door" levers that could shape the federal government's approach and thus shape future litigation. At the FCC, the consequences of finding statutory authority over AI in the Communications Act will be profound and should be considered carefully in framing comments. At the FTC, monitor whether the policy statement becomes a platform for enforcement and coordination with DOJ, or a predicate for implied preemption arguments.
- Track BEAD guidance and broader grant conditions closely. The BEAD policy notice is the EO's most immediate lever. Monitor how "non-deployment funds" and "onerous" are ultimately defined, whether the approach is narrow (targeting a limited set of output/content mandates) or expansive (capturing broader governance regimes), and whether any guidance would impose retroactive leverage on states already deep into BEAD approvals.
- Anticipate multi-front litigation. Expect overlapping suits rather than a single test case, including: (1) state-led challenges to grant conditioning (Spending Clause and statutory authority), (2) DOJ challenges to specific state laws (with companies participating as amici or intervenors), and (3) APA challenges to FCC actions or grant-policy implementation brought by states, industry, or civil society depending on breadth and burden.
- Maintain AI governance discipline. Even if some state requirements are delayed or contested, companies should remain focused on AI governance, compliance infrastructure, and risk management. Durable governance is important regardless of how preemption litigation evolves.
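To make the first item above concrete, here is a minimal sketch, with invented field names and a single illustrative entry, of a jurisdiction-by-jurisdiction inventory keyed to the EO's "onerous law" criteria:

```python
from dataclasses import dataclass, field

@dataclass
class StateObligation:
    """One row in a hypothetical state-AI-law inventory."""
    state: str
    law: str               # citation or short name
    obligation_type: str   # "process" (testing, assessments, docs) vs. "output/content"
    eo_risk_flags: list = field(default_factory=list)  # EO criteria the obligation might trigger

inventory = [
    StateObligation(
        state="CO",
        law="Colorado AI law (differential treatment/impact provision)",
        obligation_type="process",
        eo_risk_flags=["cited in the EO as potentially forcing 'false results'"],
    ),
    # ... one entry per obligation, per state
]

# Triage: output/content-adjacent duties are the likeliest litigation targets,
# while process-based duties may be more defensible against compelled-speech claims.
watchlist = [o for o in inventory if o.obligation_type == "output/content"]
print(f"{len(watchlist)} obligation(s) on the litigation watchlist")
```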
Conclusion
This EO does not immediately preempt state laws; it is instead a pressure-and-positioning instrument that seeks to narrow the practical space for state AI regulation through litigation risk, funding leverage, and administrative signaling, while laying the groundwork for possible congressional action. Any concrete legal action the Administration takes, whether challenging specific state laws, promulgating purportedly preemptive AI reporting and disclosure obligations, or issuing FTC policy statements on unfair or deceptive acts or practices, is sure to be challenged in court by affected states. Even so, the EO's near-term effects on state behavior, regulatory timelines, and compliance strategy are likely to be significant. AI developers and users should plan for prolonged ambiguity: chilled enforcement in some states, accelerated litigation in others, and heightened importance in how AI obligations are framed, whether as conduct regulation, consumer protection, or speech.
Chris Savage and John Seiver have decades of experience advising clients on complex Internet, telecommunications, wireless, and cable regulatory issues. K.C. Halm, co-lead of Davis Wright Tremaine's AI Team, advises communications and emerging tech clients on emerging AI regulatory, compliance, and governance issues. Stacey Sprenkel leads DWT's Compliance, Ethics, Risk & Governance practice and regularly assists clients with the development and implementation of AI governance and compliance programs. Shannon McNeal helps tech and digital media companies navigate evolving legal landscapes. Sarah Wood advises clients on the commercialization of technology, with an emphasis on generative AI development and deployment.
Together, our multidisciplinary AI Team helps clients address the regulatory, compliance, governance, transactional, policy, product counseling, and IP issues stemming from the adoption of AI/ML applications. For more insights, contact the co-authors or another member of our Technology, Communications, and Privacy and Security practice and sign up for our alerts.