On March 20, 2026, the Trump Administration released a "National Policy Framework" (Framework) for AI, containing more than two dozen separate bullet-point recommendations for what Congress should, and should not, do. Building off the December 2025 Executive Order (EO) outlining a vision for preempting state AI laws, and the Administration's earlier "AI Action Plan," the Framework outlines policy principles that focus on what White House AI "Czar" David Sacks has called the "5 Cs": (1) child safety, (2) communities, (3) creators, (4) censorship, and (5) competitiveness.

The Framework sets forth proposals in these areas, but does not offer details on how to translate its proposals into legislation. Even so, it offers some insights into the Administration's evolving AI policy priorities, and stands in stark contrast to the comprehensive national regulatory scheme reflected in the EU AI Act.

The Framework organizes its recommendations into seven categories, five of which track the "5 Cs" noted above.

Broadly speaking, the recommendations seek to encourage AI development—and to preempt state laws that would inhibit it—while preserving traditional state power to protect children from exploitation and consumers from fraud. At the same time, the recommendations addressing censorship and free speech echo the Administration's longstanding concern that states might impose constraints on AI outputs to meet objectives that the Administration views as "woke" and therefore inappropriate.

The Framework does not propose any legislative language and does not indicate that any will be forthcoming. Considering the large number of issues in play, the lack of consensus on how to resolve the policy conflicts the Framework raises, and the fact that 2026 is an election year, the likelihood of near-term Congressional adoption of the Framework, or even any of its specific proposals, is remote.

Key Takeaways

  • The Framework contains dozens of suggestions for Congressional action but no specific legislative language.
  • Its proposals are wide-ranging, spanning everything from protecting children and ensuring that national security personnel have sufficient AI expertise to avoiding censorship and preserving traditional state zoning authority, but they do not resolve, or sometimes even acknowledge, the conflicting interests that they would affect.
  • The scope of preemption of state AI laws seems modest in comparison to earlier Administration positions regarding the role of states in regulating AI development and deployment.
  • Legislative action on the Framework's proposals is highly unlikely in the near term.

Section I: Protecting Children and Empowering Parents

For an administration that has made removal of regulations burdening AI development a priority, it is notable that the Framework leads by urging Congress to "affirm" that existing child privacy regulations apply to AI services and to limit the collection and processing of children's personal information for model training and targeted advertising. This is largely consistent with existing federal law and with the trend of state general privacy laws that specifically regulate the collection and processing of children's personal information and, in some cases relevant to AI technology, prohibit it altogether.

The Framework also asks Congress to give parents tools to manage their children's privacy and exposure to online content, and to establish privacy-protective age assurance requirements for online services, which the Framework does not limit to AI services. It is unclear how Congress—as opposed to entities in the AI and tech sectors—could provide such tools, which raises the question whether the Framework contemplates a new regulatory obligation on AI providers. Moreover, the Framework does not define "child," leaving it unclear whether the Administration intends these protections to cover everyone under the age of 18, or instead is considering relying on the definition of "child" from the Children's Online Privacy Protection Act, which is limited to those under 13.

In addition to children's privacy, the Framework urges legislation that would require platforms "likely to be accessed by minors" to reduce risks of sexual exploitation and self-harm to minors and notes the passage of the "Take It Down Act" directed at protecting children and adult victims from deepfake abuse. At the same time, it asks Congress to avoid preempting states from enforcing their own laws directed at the above practices, indicating that this is an area that the Administration views as appropriate for state-level AI regulation.

Perhaps in recognition of the ambiguities in its own proposals, and on a deregulatory note, the Framework seeks to avoid "ambiguous standards about permissible content," noting that such standards would likely lead to litigation. This raises a perennial issue in this space: how to define content that can be banned as to children but not as to adults. Many state and federal laws associated with past initiatives of this kind, such as age-appropriate design, age-assurance, and age-gating laws, have been litigated and, in some cases, enjoined. Several of the Framework's proposals in this area would likely face significant legal challenges if they result in legislation.

Section II: Protecting Communities

Section II of the Framework groups several disparate issues broadly related to AI development and deployment under the "Communities" heading.

First, it asks Congress to ensure that new data centers do not lead to increased residential electricity rates, noting that the White House has asked major deployers to voluntarily pledge to avoid that result. The challenge here is that regulation of residential electricity rates has traditionally been an intrastate activity, handled by state-level public utility commissions. Achieving this objective, even if possible, might require some preemption of state regulatory authority, which could be controversial.

Second, the Framework requests that Congress "streamline" any federal permitting requirements applicable to the construction and operation of data centers (including on-site electrical power generation) and other AI infrastructure. While some federal permitting requirements may apply to on-site power generation (e.g., for facilities affecting certain waterways), most land-use and permitting issues surrounding AI data centers are handled at the state and local level. As we have noted recently, many states (including Washington) are considering restrictions on the placement and construction of new data centers. Notably, the Framework specifically does not propose to preempt state and local zoning decisions.

Third, this section of the Framework calls on Congress to "augment" law enforcement efforts to fight AI-enabled impersonation scams and other types of AI-enabled fraud targeting vulnerable populations. Where such bad acts are conducted via the internet or telephone, existing federal laws governing (for example) wire fraud would already seem to provide a legal basis for federal law enforcement action. This aspect of the Framework is thus likely suggesting additional funding and other resources for federal law enforcement efforts on this score.

Fourth, this section asks Congress to ensure that agencies in the "national security enterprise" have adequate technical capability to understand frontier AI models and to assess national security risks those models create. This recommendation is likely to be entirely uncontroversial.

Finally, this section asks Congress to "provide AI resources to small businesses," including grants, tax incentives, and technical assistance to support wider deployment of AI tools across American industry. Existing AI developers are working hard to market their models, and use of AI is already exploding across industries and businesses large and small. Even if such help is not strictly needed, this recommendation is likely to be uncontroversial because, broadly speaking, there is bipartisan consensus for supporting small businesses and their use of AI to keep up with this major technological shift.

Section III: Respecting Intellectual Property Rights and Supporting Creators

The Framework states the Administration's view that training on copyrighted materials is not infringement. Even so, its proposals regarding copyright issues largely punt to the courts, asking that Congress let the courts sort out whether training on copyrighted materials constitutes fair use. At the same time, it suggests that Congress consider legislation that would enable the creation of voluntary licensing frameworks or collective rights systems without fear of antitrust liability. Carefully stating that such legislation should not dictate when and whether licensing is required, the Framework maintains consistency with its stance that courts should decide questions of fair use. In essence, this proposes legislation authorizing an optional business solution for content creators to obtain financial value from use of their works and for AI developers to avoid risk when using copyrighted content.

The Framework's "hands off," "let the courts decide" approach to copyright issues starkly differs from the comprehensive regulatory approach in the EU AI Act, which requires transparency concerning training data and an opt-out mechanism allowing rightsholders to exclude their works from AI training. In effect, the Framework acknowledges the differing interests of AI developers and content creators, but offers no real resolution of their many disputes.

The Administration's ambivalence about copyright and AI training does not extend to deepfakes. While acknowledging potential First Amendment issues, it suggests that Congress consider a "national framework" restricting unauthorized distribution or commercial use of digital replicas of an individual's voice, likeness, or other identifiable attributes. There is a reasonably robust body of state statutory and tort law on this topic, and the Administration does not suggest that this state-level law should be preempted (although that might be implicit in its suggestion of a "national framework"). The Framework also offers no specifics on how to strike the balance it seeks between protecting individuals from exploitation via deepfakes and protecting free speech online in the form of parody, satire, news reporting, and other First Amendment-protected expression.

Section IV: Preventing Censorship and Protecting Free Speech

The concern about deepfakes interfering with online speech reflects concerns the Administration has previously expressed. At the same time, the Administration has also suggested—though perhaps without intending to—that AI platforms may limit political speech, precisely in order to avoid AI outputs that the Administration would regard as too "woke."

The Framework has two anti-censorship prongs: (1) preventing the federal government from coercing AI providers to require or prohibit content based on political agendas, and (2) asking Congress to create a remedy for Americans against federal government agencies that attempt to censor or require speech on AI platforms. Unlike prior administration efforts to address perceived bias in content moderation—which attempted to interpret or limit the protections that Section 230 of the Communications Act provides to online platforms—the suggestion in the Framework would protect private AI developers from speech-based regulation.

The Framework thus treats censorship risk primarily as a problem arising from government coercion. It suggests that the Administration will oppose regulatory regimes that it believes will push AI-output moderation in particular (ideological) directions, while challenging perceived government influence over platform speech.

Section V: Enabling Innovation and Ensuring American AI Dominance

The Framework calls for policies that remove barriers to innovation in order to accelerate deployment of AI applications across key industry sectors. To do so, the Administration proposes that Congress establish regulatory sandboxes (supervised AI deployments with limited regulatory constraints) to permit innovation without raising other regulatory risks. It also calls for Congress to provide resources to make federal datasets accessible to industry and academia to support training and development of AI models. Finally, it opposes creating any new federal rulemaking body to regulate AI, instead proposing that existing, sector-specific regulators provide any necessary oversight of AI deployments in their respective sectors.

President Trump has consistently framed AI policy priorities in terms of the nation's competitiveness in the global AI market, often citing the need for U.S. companies to "win the AI race" or risk domination by AI developed in China or elsewhere. Notably, this argument is relegated to a limited portion of the Framework, which instead tracks policies favored by some Republican governors who seek to preserve state authority over AI for child safety and parental control rights. This approach envisions a distributed regulatory model that prioritizes experimentation and sector expertise rather than top-down AI governance, and in this regard echoes the July 2025 Action Plan, which called for agencies to create regulatory sandboxes and national standards in key sectors.

This aspect of the Framework marks another clear distinction between the EU's approach to regulating AI and the Administration's preferred approach. The EU AI Act is a comprehensive national policy that generally avoids sector-specific regulation; by contrast, the Framework expressly rejects an overarching national regulatory approach. On the other hand, the EU AI Act was one of the first AI regulatory regimes to articulate a role for regulatory sandboxes, which the Administration has now embraced.

For firms developing AI systems, the Framework signals a policy environment that may favor innovation, flexibility, pilot programs, and collaborative standards development. For clients building or integrating AI systems, regulatory sandboxes and expanded access to training databases could significantly accelerate product development and testing. At the same time, the Framework suggests that those entities should anticipate heightened engagement with existing regulators and standards organizations, rather than a single federal AI authority, when addressing compliance and governance questions, which may in practice complicate development and deployment efforts to a greater degree than a system with a single AI regulator might have done.

Section VI: Educating Americans and Developing an AI-Ready Workforce

The Framework calls for policies to ensure that American workers benefit from AI through workforce development, new job creation, and expanded opportunities across sectors, not just from the outputs of AI innovation. Specifically, it calls for the use of non-regulatory methods to ensure that both educational and professional environments benefit from AI-related training. The Framework does not articulate what these "non-regulatory methods" might be, but federal subsidies for education and training, as well as national guidelines or standards for what such education and training should cover, would appear reasonable, if in more than a little tension with the Administration's overall stance on federal involvement in educational activities.

The Framework also encourages Congress to broaden federal initiatives examining AI-driven, task-level changes in the workforce to help guide policies that support and strengthen American workers. It further encourages Congress to use land-grant institutions (which often have longstanding agriculture, engineering, and technical programs) to provide technical support and create youth-targeted AI programs.

For regulators and policymakers, these recommendations underscore a recognition that AI governance is not limited to risk mitigation but also encompasses the economy's transition to AI and the associated concerns of workforce readiness. Agencies may play a larger role in shaping AI policy through funding programs, research initiatives, and public-private partnerships. For technology companies and other firms adopting AI, these initiatives could expand the pipeline of AI-skilled workers and encourage deeper collaboration with universities and workforce programs.

Clients deploying AI internally should anticipate growing expectations—from policymakers and stakeholders—that they invest in workforce reskilling and responsible adoption practices to ensure employees can participate in, rather than be displaced by, AI-driven productivity gains.

Section VII: Preempting State Laws Within a Federal Framework

The Framework expressly disclaims preemption of zoning (where to place data centers), state AI procurement decisions, and "traditional police powers"—which include "laws to protect children, prevent fraud, and protect consumers." These carve-outs broadly follow Section 8 of the December 2025 Executive Order, which stated that child safety, zoning, procurement, and "other topics as shall be determined" should not be preempted.

Recognizing those carve-outs, the Framework nevertheless does propose preemption of state laws that: (1) would regulate "AI development"; (2) would "unduly burden" the use of AI "for activity that would be lawful if performed without AI"; and (3) would "penalize AI developers for a third party's unlawful conduct involving their models."

The Framework gives a reason for point (1): AI "is an inherently interstate phenomenon." That is fair enough as far as it goes, so state laws trying to prevent developers from creating and training AI models would be preempted. That said, there may be some debate in individual cases about what, specifically, would count as AI "development."

By contrast, the scope of intended preemption with points (2) and (3) is far from clear. As to point (2)—unduly burdening the use of AI for otherwise lawful activities—it is lawful, for example, for businesses to communicate with customers without AI; does the Framework mean to suggest that it would be an undue burden for a state to require a business to disclose that a customer is speaking with an AI chatbot, as California requires? As another example, there is nothing inherently unlawful about bank employees accepting or declining a loan application. Is point (2) targeting state and local regulation of automated decisionmaking technology? Sorting this out will have to await specific proposed legislative language.

Perhaps most interesting is point (3), which would preempt state laws that would subject AI developers to liability for the "unlawful conduct" of third parties (presumably, AI users) involving the developer's AI model. This seems to conflict with the Framework's disavowal of an intention to preempt "traditional state police powers." The conflict arises because traditional state consumer protection law, part of a state's "traditional police powers," includes product liability law, which generally holds manufacturers liable for harm from design defects that occur in the context of "foreseeable misuse."

For example, speeding is illegal, but everyone knows that people speed. If a manufacturer designed brakes that didn't work when the car was going faster than the speed limit, that would hardly be a valid defense against a design defect claim arising from the nonfunctional brakes. While there aren't yet any decided cases imposing liability on AI developers on a design defect theory, some prominent (but settled) cases—notably Garcia v. Character Technologies Inc., where a mother sued an AI chatbot platform when her teenage son took his own life after interacting with an AI—have included design defect claims. Does the Framework intend to preempt applying state product liability law to AI? As another example, it is a violation of federal criminal law to create, possess, or distribute child sex abuse material (CSAM). Is the "unlawful conduct of third parties" proposal intended to protect AI developers from liability if they fail to put guardrails into their models to prevent the creation of CSAM? Converting this proposed preemption into workable legislative language will likely be controversial.

Anticipated Next Steps

Release of the Framework satisfies a directive in the December EO that senior White House officials develop legislative recommendations, but it is unlikely to lead to near-term action.

The December EO also directed the Department of Commerce to publish an analysis of "onerous" state laws that would be the basis for potential preemption litigation; it directed the FTC to issue guidance on when state AI laws affecting AI outputs might be preempted by the federal ban on unfair or deceptive practices; and it directed the FCC to initiate a proceeding to preempt state laws regarding AI reporting and disclosures. None of those actions have yet occurred, although they may in the near future. If and when they do, we will provide further updates.

Conclusion

The Framework reflects a collection of Administration ideas about how Congress and the States should regulate—and not regulate—AI. It acknowledges both harms and benefits from AI development and deployment. It avoids taking a position on the disputes between AI developers and copyright holders—arguably the most actively contested issue in this sector. And, with its high-level presentation and coming in an election year, nothing in the Framework is likely to lead to actual legislation any time soon.

All that said, the Framework is a useful snapshot of the Administration's current thinking on AI regulation, and some points are clear: the Administration wants to encourage AI development; it does not want an overarching national regulatory system; it wants to find ways to protect children from AI-enabled dangers; and it wants to respect traditional state powers as much as it can. If little of this is new as compared to what the Administration said last summer, at least we know that little has changed—which is itself noteworthy.

+++

Our multidisciplinary AI team helps clients address the regulatory, compliance, governance, privacy, security, transactional, policy, product counseling, and IP issues stemming from the adoption of AI/ML applications. For more insights, contact the co-authors or another member of our artificial intelligence team and sign up for our alerts.

Explore all of our New Administration Outlook updates and webinars