New York Governor Kathy Hochul has signed the Responsible AI Safety and Education Act ("RAISE Act"), establishing the nation's first comprehensive reporting and safety governance regime for developers of "frontier" AI models. The bill originally passed in June, but the version signed on December 19, 2025 was modified to align more closely with California's AI law addressing the same issues. Governor Hochul is reported to have signed the original June version, with lawmakers agreeing to approve her negotiated changes when they return to session in Albany after the first of the new year.

The Act's passage highlights the burgeoning conflict between state and federal AI regulation. It follows the Executive Order titled "Ensuring a National Policy Framework for Artificial Intelligence" ("AI Executive Order"), signed by President Trump on December 11, 2025, which directs federal agencies to challenge state AI laws deemed to impede a "minimally burdensome national standard" for AI so there will not be "50 discordant State ones." (See DWT's alert on this EO here.) The RAISE Act is expected to take effect January 1, 2027.

To Whom the RAISE Act Applies: The "Large Developer" Threshold and "Frontier Models"

The RAISE Act does not apply to all developers of AI technology. It targets large developers ("Large Developers") that create frontier models ("Frontier Models"), defined by measures of computational power and financial investment during a model's training phase. An entity is a Large Developer if it trains a Frontier Model meeting the following criteria (a rough screening sketch follows the exclusions below):

  • Financial Threshold: The developer has spent more than $100 million in aggregate computing costs to train frontier models.
  • Computational Threshold: The developer trained the model using more than 10^26 computational operations (floating-point operations, or "FLOPs").
  • Knowledge Distillation: The developer spent at least $5 million to "distill" or extract capabilities from an existing frontier model and place those capabilities into a smaller one. Such models are also considered Frontier Models.
  • Market Presence: The developer makes these models available to residents of New York.

Exclusions: The Act provides narrow carve-outs for state agencies and for academic institutions engaged in academic research that is not subsequently assigned to another party.
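For compliance teams building a first-pass coverage screen, the threshold logic above reduces to a few comparisons. The Python sketch below is a minimal illustration only: the function and field names are ours, the inputs are hypothetical, and how the statute combines the cost, compute, and distillation criteria should be confirmed against the Act's text with counsel.

```python
from dataclasses import dataclass

@dataclass
class ModelProfile:
    """Hypothetical inputs for a first-pass RAISE Act coverage screen."""
    aggregate_training_cost_usd: float  # cumulative compute spend training frontier models
    training_flops: float               # total computational operations used in training
    distillation_cost_usd: float        # spend to distill an existing frontier model
    offered_in_new_york: bool           # model made available to New York residents

FLOP_THRESHOLD = 1e26           # computational threshold (10^26 FLOPs)
COST_THRESHOLD = 100_000_000    # $100M aggregate compute-cost threshold
DISTILL_THRESHOLD = 5_000_000   # $5M knowledge-distillation threshold

def warrants_coverage_review(m: ModelProfile) -> bool:
    """Flag profiles that may trip the Large Developer thresholds.

    A True result means "get a legal review," not "covered." This sketch
    treats the cost and compute thresholds as conjunctive and distillation
    as an alternative path; the statutory text controls.
    """
    if not m.offered_in_new_york:
        return False  # the Act targets models made available to NY residents
    trained_frontier = (m.aggregate_training_cost_usd > COST_THRESHOLD
                        and m.training_flops > FLOP_THRESHOLD)
    distilled_frontier = m.distillation_cost_usd >= DISTILL_THRESHOLD
    return trained_frontier or distilled_frontier

# Example: a distilled model offered in New York.
profile = ModelProfile(2e7, 1e24, 6_000_000, True)
print(warrants_coverage_review(profile))  # True (distillation path)
```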

Summary of Key Requirements

  • Transparency Requirements and Public Safety Protocols. Large Developers must develop, maintain, and publicly disclose a comprehensive safety and security protocol. Among other things, the protocol must include reasonable administrative, technical, and physical cybersecurity protections; mitigation strategies for "critical harms" (those resulting in the death or serious injury of 100 or more people, or at least $1 billion in damage) that the Frontier Model poses known or unacceptable risks of causing; and internal testing and risk-assessment procedures.
  • 72-Hour Incident Reporting. Large Developers must report to the State within 72 hours after determining that a "safety incident" has occurred. A "safety incident" includes events that create a demonstrable increased risk of "critical harm."
  • Annual Review. Large Developers must conduct an annual review of the safety and security protocol, accounting for any changes to the Frontier Models or to industry best practices.
  • Document Retention and Government Audit. Large Developers must maintain documentation about the safety and security protocol for five years and make unredacted versions available to the Division of Homeland Security and Emergency Services and the Attorney General.
  • Independent Audit. Large Developers must retain an independent third party to conduct an annual audit of their safety and security protocols. The audit must include a summary report that is (1) retained by the Large Developer for the life of the Frontier Model plus five years, (2) published publicly (with appropriate redactions), and (3) submitted to the Division of Homeland Security and Emergency Services.
  • Regulatory Oversight and Penalties. The Act establishes a new oversight office within the New York Department of Financial Services ("NYDFS") that will "ensure AI frontier model transparency," monitor compliance, and exercise rule-making authority. (NYDFS already oversees cybersecurity requirements for financial institutions subject to the NYDFS Cybersecurity Regulation; see DWT's analysis of that regulation here.) The Act authorizes significant civil penalties for failures to report or for providing inaccurate information: fines start at $1 million for initial violations and escalate to $3 million for repeat offenses. Prior versions set those fines at $10 million and $30 million, respectively, and Governor Hochul reportedly preferred a flat $1 million penalty.
  • Comparison with California. While the RAISE Act was significantly amended at the last minute to align more closely with California's recently enacted Transparency in Frontier Artificial Intelligence Act (SB 53), critical differences remain, particularly regarding reporting speed and definitions of "harm," as summarized in the table below.

| Feature | New York RAISE Act (S6953B/A6453B) | California SB 53 (TFAIA) |
| --- | --- | --- |
| Primary Scope | Large Developers ($100M+ compute spending) | Large Frontier Developers ($500M+ annual revenue) |
| Compute Threshold | 10^26 FLOPs | 10^26 FLOPs (includes fine-tuning) |
| Incident Reporting | 72 hours after determination | 15 days after discovery (24 hours if imminent death) |
| Definition of Harm | Critical Harm: 100+ deaths or $1B in damages | Catastrophic Risk: 50+ deaths or $1B in damages |
| Enforcement Body | NYDFS (new Office of AI Transparency) | Cal OES (Office of Emergency Services) |
| Max Civil Penalties | Up to $1M per violation, escalating to $3M for repeat violations | Up to $1M per violation |
| Infrastructure | Supports Empire AI consortium | Creates CalCompute public cloud |
| Academia | Explicitly exempts universities for academic research | No broad explicit university exemption |

The Preemption Conundrum: New York vs. the Federal Government

The RAISE Act is the first state AI law signed since President Trump issued the AI Executive Order, which outlines several avenues for attacking laws like the RAISE Act, including tasking the Federal Trade Commission ("FTC") with challenging state laws as interfering with interstate commerce, as preempted by federal regulation, or as violating the First Amendment and other constitutional provisions. These avenues are as follows:

  • The "Truthful Output" Argument: The AI Executive Order directs the FTC to investigate whether state laws—specifically those banning "differential treatment" or bias—force AI models to produce "false results" to avoid disparate impacts on protected groups.
  • First Amendment Challenges: The federal government may argue that the RAISE Act's disclosure and framework requirements constitute "compelled speech," forcing developers to adopt specific "ideological" safety standards set forth by a state.
  • Dormant Commerce Clause: Federal agencies are directed to challenge state laws that create a "patchwork" of AI regulations, arguing that complying with New York's unique standards effectively dictates how a developer must manage its model globally.

How Legal and Compliance Teams Can Prepare

  • Conduct a Coverage Assessment: Determine whether your current AI model training or fine-tuning investments exceed New York's $100M / 10^26 FLOPs thresholds.
  • Map State-Law Exposure to the AI Executive Order Criteria: Inventory your state-level obligations—particularly those regarding bias and reporting—to anticipate which laws may be targeted by the Department of Justice's "AI Litigation Task Force".
  • Refine Incident Response: Ensure internal workflows can distinguish a technical bug from a "safety incident" to meet New York's 72-hour filing deadline while maintaining a consistent narrative for federal regulators (see the deadline sketch after this list).
  • Monitor Federal Rulemaking: Monitor upcoming Federal Communications Commission ("FCC") and FTC proceedings regarding AI reporting standards, as these agencies will likely be at the forefront with DOJ in seeking to preempt state laws.
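Because New York's 72-hour clock runs from determination while California's 15-day clock runs from discovery, incident-response runbooks may need both deadlines computed from different trigger timestamps. The sketch below is illustrative only; the trigger definitions, any shorter windows (such as SB 53's 24-hour imminent-harm path), and the proper recipients should be confirmed against the statutes.

```python
from datetime import datetime, timedelta, timezone

# Illustrative deadline math for dual NY/CA incident reporting.
# Window lengths reflect the comparison table above; the trigger events
# ("determination" vs. "discovery") are modeled as logged timestamps.
NY_WINDOW = timedelta(hours=72)  # RAISE Act: 72 hours after determination
CA_WINDOW = timedelta(days=15)   # SB 53: 15 days after discovery

def reporting_deadlines(ny_determined_at: datetime,
                        ca_discovered_at: datetime) -> dict[str, datetime]:
    """Return each regime's filing deadline from its own trigger time."""
    return {
        "New York (RAISE Act)": ny_determined_at + NY_WINDOW,
        "California (SB 53)": ca_discovered_at + CA_WINDOW,
    }

# Example: an incident determined and discovered at the same moment.
t0 = datetime(2027, 3, 1, 9, 0, tzinfo=timezone.utc)
for regime, deadline in reporting_deadlines(t0, t0).items():
    print(f"{regime}: file by {deadline.isoformat()}")
```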

The RAISE Act is a significant expansion of state power over AI, but its long-term viability is uncertain. While New York asserts its right to protect New Yorkers from "frontier" risks, the federal government is attempting to preempt all state authority over AI regulation. Clients should prepare for a period of prolonged regulatory ambiguity, characterized by multi-front litigation, potential fines, and the prospect that the federal government may withhold funding as leverage against states that adopt or enforce their own AI regulations.

Wendy Kearns, Apurva Dharia, and Andrew Lewis are legal thought leaders who work on technology, privacy, corporate, and other matters, offering forward-thinking strategies for today's legal challenges. For more insights, contact Wendy, Apurva, Andrew, or another member of Davis Wright Tremaine's Technology + Privacy & Security team or sign up for our alerts.