Update: The FTC’s deadline for public comment is April 30, 2024.

On February 15, the FTC released a notice seeking comment on a proposed rule that could create potential liability for generative AI developers. Specifically, the agency is requesting comments on a new rule that would prohibit the fraudulent impersonation of individuals as an unfair or deceptive practice under Section 5 of the FTC Act, and would extend potential liability to companies providing the "means and instrumentalities" of such fraudulent activity. The proposed rule would extend similar protections to individuals, building on the agency's recently adopted rules prohibiting the impersonation of government and business entities and their officials or agents.

More significantly, the proposed rule would extend potential liability for impersonation fraud to companies providing goods or services (the "means and instrumentalities" for others to engage in impersonation fraud) when the provider knows or has "reason to know" that the AI will be used by bad actors to "materially and falsely pose as" an individual in connection with commerce. For example, a developer of generative AI could face potential FTC enforcement action under this rule if the developer knew or had reason to know that a third party would use the genAI system (or its output) in a fraudulent scheme to pose as an individual, or to materially misrepresent an affiliation with, or an endorsement or sponsorship by, the impersonated individual. In its release, the FTC explained it is seeking comment on whether the rule should make it unlawful for a firm, "such as an AI platform that creates images, video, or text, to provide goods or services that they know or have reason to know is being used to harm consumers through impersonation."

Examples of potentially fraudulent practices under the proposed rule

  • Calling, messaging, or otherwise contacting a person or entity while posing as an individual (or affiliate), including by identifying an individual by name or by implication;
  • Sending physical mail through any carrier using the addresses, identifying information, insignia, or likeness of an individual;
  • Creating a website or other electronic service or social media account impersonating the name, identifying information, insignia, or likeness of an individual;
  • Creating or spoofing an email address using the name of an individual;
  • Placing advertisements, including dating profiles or personal advertisements, that pose as an individual or affiliate of an individual; and
  • Using an individual's identifying information, including likeness or insignia, on a letterhead, website, email, or other physical or digital place.

"Strengthening the FTC's toolkit"

This proposal is driven, in part, by the FTC's concern that deepfakes, voice cloning, and other AI-generated content are facilitating an increase in fraudulent activity. As the FTC noted in its release announcing the proposed rule, fraudsters are increasingly using AI tools "to impersonate individuals with eerie precision and at a much wider scale. With voice cloning and other AI-driven scams on the rise, protecting Americans from impersonator fraud is more critical than ever. Our proposed expansions to the final impersonation rule would do just that, strengthening the FTC's toolkit to address AI-enabled scams impersonating individuals."

Looking Ahead

As noted in the update above, the comment deadline is now set at April 30, 2024, following publication of the proposed rulemaking in the Federal Register. It will take some time for the agency to receive comments, develop a record, and issue a final rule.

DWT's AI team regularly advises clients on the rapidly evolving AI regulatory landscape. We will continue to closely monitor key developments with potential impacts for clients at the federal, state, and local levels.