Some say Pablo Picasso coined the adage, "good artists copy and great artists steal." Whether or not Picasso was truly the originator of the phrase, it captures a tension underlying copyright law. When is a work merely inspired by another, and when is it theft? Reasonable people and courts can and do differ. Recent monumental leaps in the use and availability of artificial intelligence ("AI") generative products, which produce new content such as art, music and text, have added a contemporary gloss on this tension. Such products require the ingestion of large amounts of content in order to train machine learning systems used to create new works.

In a 2018 blog post, we discussed the factors that courts might consider in applying fair use analysis when weighing claims of copyright infringement by owners of creative content, when their content is ingested into large databases and used in the AI training process to produce new works. In that post, we addressed the major legal risks that are presented at two steps in the process: (i) when existing works of authorship are ingested into a database for training purposes, and (ii) when new works are created. We concluded that while, on the whole, the fair use doctrine provides a significant defense to claims of copyright infringement arising from the creation of new works by generative AI, future litigation and court decisions would surely turn on the facts of each case and likely test the bounds of fair use.

Since then, much has been written about the promise and perils of generative AI and the legal and policy considerations in balancing the interests of content owners against the potential for generative AI innovations that promote the progress of useful arts and science. And a recently filed class action suit by three artists (Andersen v. Stability) alleges that three such AI products that generate images go beyond Picasso's saying in that they both copy and steal copyrighted material. While the lawsuit alleges liability on a number of legal grounds, including publicity rights violations and unfair competition, this blog post will focus on the allegations of direct and vicarious copyright infringement.[1]

Andersen v. Stability AI Ltd.

In a complaint filed on January 13, 2023, artists Sarah Andersen, Kelly McKernan, and Karla Ortiz – along with a proposed class of "at least thousands" of other creatives – allege that Stability AI (developer of an open-source program, Stable Diffusion), Midjourney (which incorporates Stable Diffusion into its Discord web-based applications) and DeviantArt (creators of an app based on Stable Diffusion's platform) use copyrighted images to train models for their AI image generation products without consent from or compensation to the underlying image rightsholders.

The plaintiffs claim that the defendants engaged in: (1) direct copyright infringement, by downloading and storing copyrighted works and using copies of the copyrighted works to train their AI image generation models without (a) obtaining consent from the copyright holders, (b) negotiating licenses for use of the copyrighted works, or (c) sharing revenue with the artists or underlying copyright holders; (2) vicarious copyright infringement, by enabling third parties to use the defendants' AI image generation products to create high-quality "fakes" (i.e., images that can pass as original works by a copyright holder); (3) DMCA violations, by removing copyright management information ("CMI") from the copyrighted works collected by the defendants and causing the AI image generation products to omit CMI from output images; (4) right of publicity violations, by appropriating the names of copyright holders to advertise, sell, and solicit purchases through the AI image generation products, which has the effect of diluting the copyright holders' art, name recognition, and distinctive artistic styles in the marketplace; (5) unfair competition, by infringing the lawful copyrights of rightsholders; and (6) creating and distributing works that infringe the copyright holders' rights.

However, at this early stage the ultimate outcome of the lawsuit is uncertain and may turn on how the Court views AI in general and Stable Diffusion in particular. The plaintiffs characterize AI image generators as "21st-century collage tools" that remix and reassemble the copyrighted works of millions of artists whose work was used as training data, a characterization that may not capture how such generators actually work. Industry watchers have countered that the complaint rests on common AI misconceptions, which may be further revealed during expert testimony at trial, if the action proceeds past an expected motion to dismiss. Technical challenges aside, the plaintiffs' arguments also face a litany of legal hurdles, which are further examined below.

Legal Hurdles  

Plaintiffs would need to overcome a number of hurdles to prevail: 

An action for copyright infringement requires a plaintiff to prove: (1) ownership of a valid copyright, and (2) actionable copying by the defendant of elements of the work that are original. In the Andersen case, Plaintiffs have not identified any particular works that were copied or any infringing works that were created. Rather, they allege that Stability acquired copies of over five billion copyrighted images, "including Plaintiffs'," that were used as "Training Images" and generated new works that are derivative works of these Training Images. Defendants may argue that these allegations lack sufficient specificity to state a plausible claim.

Indeed, Plaintiffs acknowledge in the complaint that "none of the Stable Diffusion output images provided in responses to a particular Text Prompt is likely to be a close match for any specific images in the training data." Defendants may argue that Plaintiffs have failed sufficiently to allege substantial similarity or even that the "output" images used enough of Plaintiffs' works to constitute a derivative work.

Finally, if Plaintiffs can establish a prima facie case of substantial similarity, Defendants can be expected to argue that to the extent they copied Plaintiffs' underlying works, it was fair use. As noted in our prior post, this will turn on the details. How transformative will the Court deem their use to be? How much, quantitatively or qualitatively, was taken from any underlying works? How does Defendants' distribution of new works impact the market for Plaintiffs' works? Discovery may be necessary to flesh out these issues.

Looking Ahead

This case (and those mentioned in footnote 1), together with Andy Warhol Foundation for the Visual Arts v. Goldsmith, argued October 12, 2022, before the U.S. Supreme Court (addressing what it means for a work of art to be transformative "fair use" under U.S. copyright law, docket 21-869), will require courts to balance the competing interests of content owners and AI innovators. Given the potential for significant changes in the legal landscape, the Davis Wright Tremaine AI team will continue to monitor developments for our clients in the AI industry.

 

[1] Although there are several current cases, such as Doe v. GitHub, No. 4:22-cv-06823 (N.D. Cal.), and Getty Images (US), Inc. v. Stability AI, Inc., No. 1:23-cv-00135 (D. Del.), addressing the issue of AI generative products, this post focuses solely on Andersen v. Stability AI Ltd., No. 3:23-cv-00201 (N.D. Cal.).