The Generative Slate: As Digital Replicas Improve, Legal Issues Grow

Can the law and the entertainment industry keep up?
By James Rosenfeld and Bianca G. Chamusco
03.19.26

This article is part of DWT's The Generative Slate series. It explores the use of generative AI in the production and distribution of content.

Life may imitate art, but AI does a pretty good job of imitating both. We have reached an inflection point in the capacity of generative AI to create convincing audiovisual, audio, and visual content. Many of these AI-created works don't (yet) look or sound completely real, but they look and sound real enough to threaten the premise that movies or concerts require actual human performance. Two examples from the entertainment world:

  • In a development that both tantalized and unnerved the entertainment industry, Irish filmmaker Ruairi Robinson typed a two-line prompt into ByteDance's Seedance 2.0 AI video generator and produced a 15-second clip of Tom Cruise and Brad Pitt in a hyper-realistic rooftop fistfight.
  • In the music industry, an AI company called Codible Ventures "voice-cloned" Arijit Singh—the most followed artist on Spotify (yes, more than Taylor Swift)—without his permission. It also used Singh's likeness in its advertising, falsely suggesting that he had endorsed, or would perform at, a virtual event. Singh had to petition the Bombay High Court to obtain an injunction protecting his personality rights.

The Pitt/Cruise video triggered awe and unease among actors, studios, and others in the filmmaking world. Deepfakes seem to have crossed the line from a theoretical risk to a lived, professional anxiety for entertainment creators. While some denounced the clip as "slop," screenwriter Rhett Reese reposted the video on X, said he was "shook" by its quality, and warned that careers across the industry are at risk. SAG-AFTRA issued a statement condemning the technology as a threat to performers' rights. Motion Picture Association chairman Charles Rivkin was equally critical.

AI-generated music mimicking real artists has sent similar ripples through the music industry. In June 2024, the Recording Industry Association of America sued AI music generators Suno and Udio for alleged mass infringement of copyrighted sound recordings. And just last month, a coalition of artist groups launched a "Say No to Suno" campaign, accusing the platform of flooding streaming services with AI-generated tracks that dilute royalty pools for the human artists whose work trained the models.[1]

Deepfakes are causing anxiety on other fronts too, outside of the entertainment world—particularly in sexually and politically exploitative contexts:

  • In January 2024, sexually explicit AI-generated images of Taylor Swift spread virally across social media, reaching tens of millions of viewers before being removed. The deepfakes sparked outrage and calls for new laws criminalizing deepfake porn.
  • That same month, an AI-generated robocall mimicking President Biden's voice urged New Hampshire Democrats not to vote in the state's primary, telling them to "save your vote for the November election." "Republicans have been trying to push nonpartisan and Democratic voters to participate in their primary. What a bunch of malarkey," the fake message said. The FCC ultimately assessed a $6 million fine against the political consultant responsible.

While the Pitt/Cruise and Singh examples raise concerns about labor substitution, economic harm, and intellectual property violations, these other use cases raise graver threats. Sexual deepfakes implicate personal dignity and bodily autonomy. Political deepfakes threaten the integrity of elections themselves.

Can the law accommodate creative uses while keeping pace with these risks? While courts are already wrestling with how to apply legacy legal frameworks to AI (particularly in the copyright sphere), it is becoming apparent that the old frameworks are not always a perfect fit. No matter which side one is on, it is indisputable that copyright law was not designed to adjudicate the legality of statistical learning from massive corpora, or to address outputs that are "new" in a formal sense but derivative in an economic one. Likewise, the right of publicity traditionally assumes a relatively discrete act of appropriation—using a person's name or likeness in a commercial context—not the synthesis of identity traits by a generative AI model. These legal doctrines can and will be stretched to cover some of these harms, but they certainly weren't developed with AI in mind, and courts are understandably cautious about extending them too far.

That imperfect fit has led to a wave of new statutes targeting deepfakes.[2] Legislatures are filling what they perceive as remedial gaps in current legal regimes. Rather than waiting for copyright or publicity law to evolve incrementally through litigation, they're creating bespoke causes of action and, in some cases, criminal penalties. These statutes have, so far, primarily addressed use cases involving sexual content, election manipulation, and exploitation of celebrity performances. These developments reflect legislative judgments that generative AI presents qualitatively different risks justifying tailored regulation in these areas, although they may sometimes be in tension with First Amendment protections for creative uses of this type of content, as well as intermediary liability protections for platforms that host user-generated content.

To complicate things further, deepfakes create problems within the legal system itself. Our legal system relies on the assumption that audiovisual evidence has some baseline level of reliability. As synthetic media become indistinguishable from real recordings, that assumption erodes. The law will adapt—it always does—but it will likely require new authentication norms, watermarking regimes or other technical solutions, and perhaps rebuttable presumptions about synthetic content—though any mandatory watermarking or disclosure regime will have to stand up to First Amendment scrutiny.

Advances in generative AI video and audio challenge the foundational evidentiary and substantive assumptions that underlie large swaths of the law. As courts apply old laws to new technologies, legislatures draft new laws for new risks. It will be a challenge, as always, for the law and courts to keep up with the rapidly changing technology, but more so here given the lightning speed of the advances and the ways in which new tech may affect the judicial process itself. People and companies involved in creating entertainment should:

  • Track and sample these technological developments. Be aware of the changes in technology and explore ways you might use generative AI to supplement and improve human-made creative works while respecting others' rights.
  • Protect IP. Register copyrights and trademarks. Be vigilant about identifying and preventing infringers. Whether using old or new laws, it may be necessary to pursue bad actors. Realize that enforcement may now constitute a larger portion of the legal budget.
  • Address AI rights in contracts. Studios, labels, and creators should review contracts to determine who controls the use of voices, likenesses, and performances in generative AI systems. New agreements may need explicit provisions governing voice cloning, digital replicas, and AI-assisted performances to avoid disputes over consent, compensation, and ownership.
  • Follow changes in the law. Be aware of how existing disputes, in copyright and other realms, are playing out. Keep up to date on the growing body of deepfake-specific legislation. Large players may want to engage with legislatures over their ongoing efforts to permit creative uses while deterring infringing ones.

None of us can predict how the technology or law will develop in this area with any certainty, but we can try to stay on top of both—new technologies and new laws—to protect our interests while the landscape changes.

+++

Jim Rosenfeld is a partner and Bianca Chamusco is an associate in the media and entertainment group in the New York and Seattle offices of DWT. For more insights, reach out to Jim, Bianca, or another member of our media & entertainment team and sign up for our alerts.

[1] Udio has since settled disputes and signed with UMG, Warner, and Merlin, and Suno has reached a licensing deal with Warner Music.

[2] See, e.g., TAKE IT DOWN Act, Pub. L. No. 119-12, 139 Stat. 55 (2025) (criminalizing distribution of nonconsensual intimate images, including AI-generated deepfakes, and requiring platforms to remove such content); see also NO FAKES Act, S. 1367, 119th Cong. (2025) (proposed federal right of publicity protecting individuals from unauthorized AI-generated replicas of their voice or likeness). At the state level, 47 states have enacted deepfake-related legislation since 2019, with 82% of those laws adopted in 2024 and 2025 alone. See Ballotpedia, Deepfake Legislation Tracker (2025). Representative examples include Tennessee's ELVIS Act, Tenn. Code Ann. § 47-25-1125 (2024) (protecting voice and likeness against AI replication); California's Defending Democracy from Deepfake Deception Act of 2024, Cal. Elec. Code § 20010 et seq. (regulating deepfakes in election communications); and Texas's True Source Statute, Tex. Elec. Code § 255.004 (criminalizing political deepfakes within 30 days of an election). DWT attorneys have written about some of these legislative developments here.

©1996-2026 Davis Wright Tremaine LLP. ALL RIGHTS RESERVED. Attorney Advertising. Not intended as legal advice. Prior results do not guarantee a similar outcome.