Connecticut Adopts AI Transparency, Safety, and Consumer Protection Law
On May 1, 2026, the Connecticut legislature passed SB 5 (the Act), a wide-ranging law that imposes obligations on developers, deployers, and providers of artificial intelligence (AI) technologies and establishes several distinct regulatory frameworks. Governor Ned Lamont intends to sign the Act. When he does, Connecticut will join California, Colorado, and other U.S. states that have enacted AI-specific laws, notwithstanding President Trump's executive order intended to encourage AI development and ensure the U.S. "wins the AI race" by preempting or penalizing burdensome state regulation.
Unless otherwise specified, the provisions of Connecticut's new AI law become effective on October 1, 2026. While most of the provisions are enforceable exclusively by the Connecticut Attorney General under the Connecticut Unfair Trade Practices Act (CUTPA), the Act allows a private right of action for violations of the provisions regarding minors' use of AI companions.
Disclosure Obligations for Subscription-Based Providers
Entities that offer AI technologies to consumers on a subscription basis—broadly defined to include any computer system, application, or other product using or incorporating AI provided in exchange for any fee or remuneration—must provide written precontract disclosures to consumers before the execution or renewal of any such subscription agreement. Required disclosures include the key terms and conditions of the subscription, including the quantitative and qualitative limitations that the provider may impose based on consumer conduct. Violations are enforceable by the Connecticut Attorney General as unfair or deceptive trade practices under CUTPA, with implementing regulations to be adopted by the Commissioner of Consumer Protection (the Commissioner).
Frontier Model Developers: Whistleblower Protections and Catastrophic Risk
Frontier developers and large frontier developers are prohibited from making, adopting, or enforcing any rule, policy, or contract that allows the discharge, discipline, or penalization of (1) a whistleblower, as defined in Connecticut law, or (2) an employee responsible for assessing, managing, or addressing "catastrophic risks" and who has reasonable cause to believe that the frontier developer has engaged in any activities posing a specific and substantial danger to public health or safety due to such risk. Frontier developers must affirmatively disclose employees' and the developers' rights and responsibilities in this regard.
Definitions
The Act defines "frontier developer" as any person doing business in Connecticut who intends to train, initiate the training of, or trains a frontier model using a quantity of computer power that is greater than 10²⁶ floating-point operations (FLOPs), inclusive of compute used for original training, fine-tuning, reinforcement learning, and material modifications to preceding frontier models. A "large frontier developer" is a frontier developer whose annual gross revenues, aggregated with those of affiliates under common control, exceeded $500 million during the preceding calendar year. "Catastrophic risks" are foreseeable and material risks that the development, storage, use, or deployment of a foundation model will materially contribute to the death or serious injury of more than 50 individuals or more than $1 billion in property damage from a single incident, where the model either (1) provides expert-level assistance in the creation or release of a chemical, biological, radiological, or nuclear weapon, or (2) autonomously engages, without meaningful human oversight, in malicious cyberattacks, certain enumerated crimes, or conduct that evades developer or user control. The definition expressly excludes risks posed by outputs publicly accessible in substantially similar form elsewhere, lawful government activity, and risks arising from a combination of the model and other software where the model did not materially increase the risk.
Internal Reporting Mechanisms
By January 1, 2027, large frontier developers must establish an anonymous internal reporting channel through which "covered employees"—those responsible for assessing, managing, or addressing relevant risks—may report good-faith beliefs that the developer has engaged in any activity posing a danger of catastrophic risk. Upon receipt of such report, the developer must take immediate remedial action and provide monthly status reports to the reporting employee while preserving anonymity.
Quarterly Reporting to Officers and Directors
Beginning May 1, 2027, and every three months thereafter, each large frontier developer must prepare and submit quarterly reports to its officers and directors (excluding those allegedly implicated in the reported risks) disclosing all covered employee reports and the status of investigations and remedial actions.
Enforcement
The Commissioner may impose civil penalties of up to $1,000 per violation and may promulgate implementing regulations. The Connecticut Attorney General may bring enforcement actions seeking penalties, injunctive relief, and other equitable remedies.
New Obligations Regarding Automated Employment-Related Decisions
Beginning October 1, 2027, deployers of automated employment-related decision processes (AERDPs) in Connecticut must provide written notices to employees or job applicants that they are interacting with an AERDP (where not obvious to a reasonable person), and before any such employment-related decision is made, inform them of: (1) the purpose of the AERDP and nature of the employment-related decision; (2) the right to opt out of personal data processing for profiling under the Connecticut Data Privacy Act; and (3) the deployer's contact information. Developers must provide deployers with all information necessary to comply with the foregoing obligations or may contractually assume those obligations directly.
Definitions
The Act defines AERDPs as automated employment-related computational processes used to generate outputs (including, but not limited to, a rank, score, classification, or recommendation) that affect the outcome of an employment-related decision and are more than a de minimis factor relied upon in making or determining the material terms of such decision. AERDPs include computer-based assessments, resume screening tools, interview analysis systems, and targeted job advertising systems. Standard office productivity software used incidentally in decision-making is excluded. An "employment-related decision" means a decision, made based on personal data, to recruit, hire, promote, discipline, or discharge an individual; to renew employment; to select an individual for training; or with respect to tenure or other conditions of employment. It does not include a decision that results in a minor change to job tasks, work responsibilities, work assignments, and the like. "Substantial factor" means a factor that assists in making, and is capable of altering the outcome of, an employment-related decision concerning an individual in Connecticut. A "developer" is any person who develops or intentionally and substantially modifies an AERDP.
Employees Who Are Subjects of Automated Adverse Employment-Related Decisions
Deployers must provide additional disclosures directly to employees or job applicants who are the subjects of adverse employment-related decisions in the deployer's usual language and in a format accessible to persons with disabilities. Specifically, deployers must provide a high-level statement disclosing the principal reasons for the adverse decision, including the degree to which the AERDP output contributed to the decision, the type and source of data processed, and—where the output was based on personal data not provided by the individual—information enabling the individual to examine and correct that data. Trade secrets need not be disclosed, but the deployer must identify what was withheld and the basis for withholding.
Enforcement
The Connecticut Attorney General may enforce the Act under CUTPA, subject to a discretionary sixty-day right to cure.
Anti-Discrimination Provisions
The Act also amends Connecticut's employment discrimination law to classify as a "discriminatory practice" (1) the use of an AERDP in a manner that causes an employer to refuse to hire, discharge, or otherwise discriminate in compensation or in terms or conditions of employment on the basis of protected characteristics; (2) the failure to provide required predecision AERDP notices; and (3) retaliation against a person for complaining or testifying about discriminatory practices. In any enforcement proceeding, the courts and the Commission for Consumer Protection must consider whether the employer conducted anti-bias training or otherwise took proactive measures to prevent discriminatory outcomes.
Prohibitions on Use of AI Technology to Modify or Impair Collective Bargaining Agreements
Employers are prohibited from using—or permitting the use of on their behalf—any AI technology in a manner that modifies or impairs collective bargaining agreements (including reducing wages or benefits), the role of a designated employee organization thereunder, or the employer-union relationship.
AI Companions: Operator Obligations
Beginning January 1, 2027, operators of AI companions are prohibited from providing or operating an AI companion unless it incorporates a protocol designed to detect and address user expressions of self-harm, suicidal ideation, or imminent violence, and to refer users to appropriate mental health resources, including the National Suicide Prevention Lifeline. Operators must also provide clear and conspicuous audible or written notice at the beginning of each interaction each day (and at least hourly during continuous interactions) disclosing that the user is communicating with an AI companion and not a human. The Connecticut Attorney General has authority to enforce the law and seek civil penalties of up to $15,000 per day per violation, plus injunctive and equitable relief.
Definitions
"AI companions" are AI models that communicate with users in natural language and simulate human conversation and interaction via text, audio, or video for personal use. They do not include chatbots used solely for internal business purposes, customer service, employee productivity, or providing information about a business's own commercial products or services, nor do they include systems primarily designed and marketed for efficiency improvements, research, or technical assistance.
Special Restrictions Related to Minors
Operators are prohibited from providing AI companions to users under 18 years of age when it is reasonably foreseeable that the companion is capable of any of the following:
- Encouraging self-harm, suicide, violence, disordered eating, or unlawful substance abuse;
- Offering unauthorized mental health services;
- Discouraging engagement with licensed mental health professionals or trusted adults;
- Encouraging harm to others or any illegal conduct;
- Engaging in any romantic, erotic, or sexually explicit interaction;
- Prioritizing user validation over factual accuracy or safety; or
- Implementing engagement time maximization reward systems.
Operators that reasonably verify a user is 18 or older before providing access are afforded a safe harbor from liability. The Connecticut Attorney General may seek civil penalties of up to $25,000 per violation, and affected users—or their parents or guardians—may bring private civil actions within three years of any violation for actual and punitive damages. As noted above, this provision protecting minors is the only provision under the Act allowing for a private right of action.
Watermarking Requirements for Synthetic Digital Content
Beginning October 1, 2027, developers of AI systems or general-purpose AI models capable of producing or manipulating synthetic digital content—defined as any digital content produced or manipulated by such a system, including any audio, image, text, or video—must ensure that outputs are marked and detectable as synthetic by the time consumers first encounter or interact with them, in a manner accessible to persons with disabilities. To the extent technically feasible and consistent with recognized technical standards, these solutions must be effective, interoperable, robust, and reliable, taking into account the different types of synthetic digital content, implementation costs, and the state of the art. Exemptions apply to text-only content published on matters of public interest or unlikely to mislead a reasonable consumer; systems used to assist standard editing without substantially altering input data; and systems used to detect, prevent, investigate, or prosecute crime.
Additional Provisions
Regulatory Sandbox: By July 1, 2027, the Commissioner of Economic and Community Development must develop a plan for an AI regulatory sandbox program permitting temporary, limited-basis testing of AI products and services under reduced licensing and regulatory requirements.
Working Group: The Act establishes a working group within Connecticut's legislature to recommend AI best practices for public services and state employees; recommend methods and resources to assist small businesses in adopting AI; develop proposals to create a "technology court" to adjudicate AI, data privacy, and other technology-related issues; propose legislation regulating the use of AI and requiring social media platforms to provide a signal when displaying synthetic digital content; and review and make other recommendations concerning the use and deployment of AI.
Safe Harbor: AI users engaged in trade or commerce may apply to the Department of Consumer Protection to have a self-designed compliance program—incorporating independent assessment and accountability mechanisms for the Connecticut Data Privacy Act and CUTPA compliance—approved as a safe harbor. Approval confers a presumption of compliance and a minimum 10-day cure period prior to an enforcement action, conditioned on the program's certification that the user is in compliance with approved guidelines.
Takeaways
- If you provide AI technology to consumers in exchange for any fee or other compensation, you must provide written precontract disclosures covering key terms and conditions—including any quantitative or qualitative limitations that can be imposed on the consumer's use of the technology. Review and update your subscription agreements and onboarding flows and watch for implementing regulations from the Commissioner.
- If you train models using more than 10²⁶ FLOPs, you are a frontier developer subject to specific whistleblower obligations; if you also have annual revenues exceeding $500 million, you are a large frontier developer with additional duties. Prohibitions on retaliation take effect this year, so immediately audit any policies, employment contracts, or nondisclosure agreements that could be read to permit penalizing employees who raise concerns about "catastrophic risks." Large frontier developers must also establish an anonymous internal reporting channel by January 1, 2027, and begin quarterly board-level reporting on reported risks by May 1, 2027. Engage HR, legal, and compliance teams now to design these processes.
- Audit technology tools used in recruiting, screening, promotion, discipline, and termination processes to determine whether any qualifies as an AERDP, and review contracts with vendors who supply these tools. Developers of these tools are required to give deployers the information they need to comply with the Act. If the vendors are not required or able to do so, take steps now to renegotiate agreements or find alternatives. In addition, assess whether these tools create discrimination risks under the Connecticut anti-discrimination law. Develop anti-bias testing and other proactive measures to prevent discriminatory outcomes.
- Begin to develop the disclosures and processes you will need to deliver predecision notices and explanations of adverse decisions.
- If you operate a product that qualifies as an AI companion, develop and implement a self-harm and violence detection protocol and begin delivering mandatory disclosures to users. Determine whether any such tools you offer qualify for an exemption. The restrictions on providing AI companions to minors are not limited to when the operator has "actual knowledge" or "willfully disregards" the age of the user, and operators who reasonably verify that a user is 18 years of age or older enjoy a safe harbor. Consider using an age verification or age assurance mechanism for users of AI companions.
- Evaluate existing watermarking and content provenance solutions for any synthetic digital content generated or manipulated by your AI system or general-purpose AI model. Determine whether your products may fall within one of the exceptions.
- Monitor the rulemaking process and the development of the AI regulatory sandbox program, which may offer testing flexibility for new products.
+++
Nancy Libin is a partner in the Washington, D.C. office of DWT. She is co-chair of the technology, communications, privacy & security practice and chair of the privacy & security practice. For any questions or further insights, please reach out to Nancy or another member of our privacy & security team, and sign up for our alerts.