NY Overhauls Transparency and Governance Requirements for Frontier AI Developers
On March 27, 2026, New York Governor Kathy Hochul signed Senate Bill 8828, which overhauls the Responsible AI Safety and Education Act (RAISE Act). When Gov. Hochul signed the RAISE Act in December 2025, she issued an accompanying memorandum stating that "[t]he bill, as drafted, would impose broad compliance obligations on large-scale models without adequate specificity," and that she had reached an agreement with the legislature to make certain clarifications to the bill. The amendments, which take effect on January 1, 2027, address Gov. Hochul's concerns by aligning the RAISE Act with California's Transparency in Frontier Artificial Intelligence Act (CA TFAIA), which we discussed in a prior blog post.
The amended RAISE Act grants broad rulemaking and enforcement authority to the New York Department of Financial Services (NYDFS). NYDFS has been at the forefront of cybersecurity regulation and enforcement for almost a decade, and it may seek to take a similar lead role in the AI space using its RAISE Act authorities.
The amended RAISE Act, as well as CA TFAIA, could run afoul of federal efforts to limit state regulation of AI. As we discussed in a prior blog post, President Trump's Executive Order titled "Ensuring a National Policy Framework for Artificial Intelligence" directs federal agencies to challenge state AI laws deemed to impede a "minimally burdensome national standard" for AI regulation. The New York and California laws could be prime targets for such a challenge. President Trump's recent National Policy Framework for AI (covered in more detail by DWT here) similarly calls for preemption of state AI laws, including those that regulate "AI development."
We analyze the RAISE Act amendments and their implications below.
Applicability of the RAISE Act Amendments
Subject to limited exceptions, the RAISE Act amendments borrow from CA TFAIA by defining two types of covered AI developers:
- "Frontier Developers" – those that have trained or initiated the training of a Frontier Model.
- "Large Frontier Developers" – Frontier Developers with revenue in the previous year of $500 million or greater.
Companies that only use, deploy, license, host, or build applications on top of AI models developed by others—including those accessing models through APIs or third‑party services—are not Frontier Developers.
A "Frontier Model" is a foundation model that was trained using a quantity of computing power greater than 1026 integer or floating-point operations (FLOPs). The law applies to Frontier Developers and Large Frontier Developers who make their models available in New York.
Key Provisions Applicable to All Frontier Developers
All Frontier Developers, including Large Frontier Developers, must comply with the following under the amended RAISE Act:
- Transparency Report: Before or at the time of deploying a new Frontier Model or a substantially modified version of an existing Frontier Model, a Frontier Developer must clearly publish a transparency report on its website. The report must include: (a) the internet website of the Frontier Developer; (b) a mechanism enabling a natural person to communicate with the Frontier Developer; (c) the Frontier Model's release date; (d) supported languages; (e) supported output modalities; (f) intended uses; and (g) any generally applicable restrictions or conditions on use.
- Prohibition on False or Misleading Statements: A Frontier Developer is prohibited from making a materially false or misleading statement about "Catastrophic Risk" from its Frontier Models or about its management of Catastrophic Risk. A Large Frontier Developer is prohibited from making a materially false or misleading statement about its implementation of, or compliance with, its Frontier AI Framework (described below).
- "Catastrophic Risk" means a foreseeable and material risk that a Frontier Developer's development, storage, use, or deployment of a Frontier Model will materially contribute to the death of, or serious injury to, more than 50 people or more than one billion dollars ($1,000,000,000) in damage to, or loss of, property arising from a single incident involving a Frontier Model doing any of the following: (i) providing expert-level assistance in the creation or release of a chemical, biological, radiological, or nuclear (CBRN) weapon; (ii) engaging in conduct with no meaningful human oversight, intervention, or supervision that is either a cyberattack or, if the conduct had been committed by a human, would constitute the crime of murder, assault, extortion, or theft, including theft by false pretense; or (iii) evading the control of its Frontier Developer or user. Catastrophic Risk does not include a foreseeable and material risk from: information that a Frontier Model outputs if the information is otherwise publicly accessible in a substantially similar form from a source other than a foundation model; lawful activity of the federal government; or harm caused by a Frontier Model in combination with other software if the Frontier Model did not materially contribute to the harm.
- Redactions: Frontier Developers may redact information from documents published to comply with these transparency requirements when necessary to protect trade secrets, cybersecurity, public safety, or national security, or to comply with law, but the developer must describe the nature and justification of the redactions where possible and retain the unredacted information for five years.
- Reporting: Frontier Developers must report to a newly created office within NYDFS within 72 hours after determining that a "Critical Safety Incident" pertaining to one or more of their Frontier Models has occurred, or within 72 hours of learning facts sufficient to establish a reasonable belief that a Critical Safety Incident has occurred. Note that this 72-hour window differs significantly from CA TFAIA's 15-day window. NYDFS is required to create a reporting mechanism through which Frontier Developers or members of the public can report Critical Safety Incidents.
- "Critical Safety Incident" means any of the following: (a) unauthorized access to, modification of, or exfiltration of, the model weights of a Frontier Model that results in death or bodily injury; (b) harm resulting from the materialization of a catastrophic risk; (c) loss of control of a Frontier Model causing death or bodily injury; or (d) a Frontier Model that uses deceptive techniques against the Frontier Developer to subvert the controls or monitoring of its Frontier Developer outside of the context of an evaluation designed to elicit this behavior and in a manner that demonstrates materially increased catastrophic risk.
- A Frontier Developer that discovers a Critical Safety Incident posing an imminent risk of death or serious physical injury must disclose the incident within 24 hours to an appropriate authority, including a law enforcement or public safety agency with jurisdiction, as required by law. If the Frontier Developer later discovers additional information about the Critical Safety Incident after filing the initial report, it may submit an amended report. The amended RAISE Act authorizes NYDFS to transmit reports of Critical Safety Incidents to other governmental entities at its discretion, including to the attorney general, as appropriate.
- Beginning January 1, 2028, and annually thereafter, NYDFS will produce a report to be shared with the governor and leadership within the state legislature containing (a) anonymized and aggregated information about Critical Safety Incidents reviewed since the prior report; (b) any information NYDFS considers relevant to Frontier Model safety; (c) any recommended updates to the article; and (d) other developments relevant to the article's purposes.
- Compliance With Federal Law or Guidance: Similar to CA TFAIA, the amended RAISE Act permits NYDFS to designate certain federal laws, regulations, or guidance that impose Critical Safety Incident reporting standards that are substantially equivalent to or stricter than those required under the RAISE Act and are intended to assess, detect, or mitigate Catastrophic Risk. A Frontier Developer may elect to comply with these designated federal standards instead of the state reporting requirements by notifying NYDFS and will be deemed in compliance so long as it meets those federal requirements, though it must concurrently provide NYDFS with copies of any Critical Safety Incident reports submitted to federal authorities. Failure to comply with the designated federal standards will constitute a violation of the RAISE Act.
- Regulatory Oversight and Penalties: As discussed, the RAISE Act amendments establish a new oversight office within NYDFS that will "ensure AI Frontier Model transparency," monitor compliance, and exercise rulemaking authority. The amendments broadly authorize NYDFS to adopt rules and regulations to implement the provisions of the RAISE Act "as needed." This stands in contrast to CA TFAIA, which does not grant broad rulemaking authority to the state agency responsible for overseeing the law. The RAISE Act amendments authorize significant civil penalties for failures to report or for providing inaccurate information, with fines starting at $1 million for an initial violation and reaching up to $3 million for subsequent violations.
Additional Requirements for Large Frontier Developers
The following additional requirements apply only to Large Frontier Developers (not to all Frontier Developers):
- Publish a Frontier AI Framework: A Large Frontier Developer must create, implement, comply with, and clearly and conspicuously publish on its website a Frontier AI Framework describing how it manages, assesses, mitigates, and safeguards against "Catastrophic Risks" and potential "Critical Safety Incidents" associated with its Frontier Models. The framework must explain how the Large Frontier Developer: incorporates national and international standards and industry best practices; defines and assesses thresholds for identifying capabilities that could pose Catastrophic Risks; applies and reviews mitigations before deployment or extensive internal use; and uses third parties to evaluate Catastrophic Risks and the effectiveness of safeguards. It must also address cybersecurity protections for unreleased model weights, processes for identifying and responding to Critical Safety Incidents, internal governance practices to ensure implementation, criteria for updating the framework and determining when Frontier Models are substantially modified, and procedures for assessing and managing the potential for Catastrophic Risks arising from the Large Frontier Developer's internal use of its Frontier Models.
- Annual Review: A Large Frontier Developer must review and update its Frontier AI Framework, as appropriate, at least annually. If it makes a material modification to its Frontier AI Framework, it must clearly and conspicuously publish the modified Frontier AI Framework and a justification for that modification within 30 days.
- Additional Transparency Report Requirements: In addition to the transparency report contents required of all Frontier Developers, a Large Frontier Developer's transparency report must include summaries of: (a) assessments of Catastrophic Risks conducted pursuant to its Frontier AI Framework; (b) the results of those assessments; (c) the extent of third‑party evaluator involvement; and (d) other steps taken to fulfill the requirements of the Frontier AI Framework with respect to the Frontier Model.
- Additional Reporting Requirements: A Large Frontier Developer must submit to NYDFS every three months, or on another mutually agreed written schedule, a summary of any assessment of Catastrophic Risk resulting from the internal use of its Frontier Models. NYDFS is charged with establishing a secure mechanism for Large Frontier Developers to confidentially submit these summaries.
- Large Frontier Developer Disclosure: Large Frontier Developers must file and maintain a current disclosure statement with NYDFS before developing, deploying, or operating a Frontier Model, in whole or in part, in New York State, and must pay a pro rata share of the operating expenses of the NYDFS oversight office. The disclosure statement must be filed in a form prescribed by NYDFS and renewed every two years or upon ownership transfer or material changes. It must identify the Large Frontier Developer's business names, principal and New York office addresses, certain beneficial owners depending on whether the company is privately held or publicly traded, and designated points of contact for government inquiries. NYDFS may impose civil penalties, including $1,000 per day for failing to file or to correct false information, and may recover unpaid assessments. NYDFS will maintain and publish a list of Large Frontier Developers that have filed disclosure statements, excluding their contact information.
Key Actions for Legal and Compliance Teams
Legal and compliance teams should consider the following steps to assess the amended RAISE Act’s applicability to their organizations and to prepare to meet the law’s requirements.
- Conduct an Applicability Assessment: Many organizations use Frontier Models but are not Frontier Developers. An organization that uses Frontier Models should determine whether it is a Frontier Developer (i.e., whether it has trained, or intends to train, a model exceeding New York's 10^26 FLOPs threshold) and, if so, whether it is a Large Frontier Developer (i.e., whether its gross annual revenue, inclusive of affiliates, met or exceeded five hundred million dollars ($500,000,000) in the preceding calendar year).
- Conduct Risk Assessments: Frontier Developers should implement risk assessment procedures to identify whether model capabilities could materially contribute to Catastrophic Risks (e.g., enabling CBRN weapons creation, autonomous criminal activity, or loss of developer control).
- Prepare for Transparency and Framework Reports: Frontier Developers should consider developing required documentation and record‑retention procedures for transparency materials, including for: (i) justifiable redactions for trade secrets, cybersecurity, public safety, national security, or legal compliance; and (ii) retaining unredacted versions of documents for five years. For Large Frontier Developers, consider gathering materials to publish the required Frontier AI Framework and additional transparency report requirements.
- Consider an Agreement for Reduced Frequency of Catastrophic Risk Assessment Summaries: If your organization will be deemed a Large Frontier Developer, consider whether it is in its interest to negotiate a mutually agreed written schedule with NYDFS that reduces how frequently it must submit summaries of assessments of Catastrophic Risk resulting from the internal use of its Frontier Models.
- Establish Incident Detection and Response Systems: Establish systems to monitor for unauthorized access to model weights, loss of model control, or model behavior that attempts to circumvent safeguards.
- Refine Incident Response Processes: Ensure internal workflows can differentiate between a technical bug and a "Critical Safety Incident" to meet New York's 72-hour deadline for filing with the new NYDFS office (and, potentially, the 24-hour deadline for notifying a law enforcement or public safety agency); a simple illustration of these windows appears after this list.
- Monitor Federal Rulemaking: As discussed, the federal government has announced efforts to limit state laws regulating AI. The RAISE Act may be a prime target of such efforts. Frontier Developers should monitor any federal efforts to preempt or strike down the RAISE Act and other state AI laws.
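For incident response planning, the sketch below shows one way the two reporting windows described above might be tracked from the moment a Critical Safety Incident is determined, or reasonably believed, to have occurred. It is a minimal illustration under our own assumptions; the function and field names are hypothetical, and actual workflows should be designed with counsel in light of the statute and any NYDFS rules.

```python
# Hypothetical deadline tracker; illustrative only, not a statutory or NYDFS tool.
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class IncidentDeadlines:
    nydfs_report_due: datetime                 # 72 hours from determination or reasonable belief
    authority_notice_due: Optional[datetime]   # 24 hours if imminent risk of death or serious injury

def reporting_deadlines(determined_at: datetime, imminent_risk: bool) -> IncidentDeadlines:
    """Compute the 72-hour NYDFS window and, if applicable, the 24-hour authority window."""
    return IncidentDeadlines(
        nydfs_report_due=determined_at + timedelta(hours=72),
        authority_notice_due=determined_at + timedelta(hours=24) if imminent_risk else None,
    )

# Example: incident determined at 9:00 a.m. on June 1, 2027, with an imminent-risk finding
deadlines = reporting_deadlines(datetime(2027, 6, 1, 9, 0), imminent_risk=True)
print(deadlines.nydfs_report_due)      # 2027-06-04 09:00:00
print(deadlines.authority_notice_due)  # 2027-06-02 09:00:00
```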
+++
Michael Borgia and Nancy Libin are partners in the Washington, D.C. office, Andrew Lewis is counsel in the San Francisco office, and Apurva Dharia is an associate in the Washington, D.C. office of DWT. For questions or more insights, reach out to the authors or another member of our artificial intelligence and technology + privacy & security teams and sign up for our alerts.