The Supreme Court ruled in two long-awaited cases on May 18, handing twin victories to online services. These cases—Twitter v. Taamneh and Gonzalez v. Google—called into question the applicability and scope of Section 230 of the Communications Decency Act, 47 U.S.C. § 230, the federal statute that shields online publishers and platforms from liability for hosting user-generated content. But the Court's decisions left the status quo intact. For now.
These consequential rulings arose from terrorist attacks conducted by the Islamic State of Iraq and Syria (ISIS). In 2017, an ISIS terrorist killed Nawras Alassaf and 38 others in a nightclub in Istanbul. In Taamneh, Alassaf's family sued Facebook, Twitter, and Google for aiding and abetting terrorism under 18 U.S.C. § 2333. The Ninth Circuit held that plaintiffs had stated a claim for relief under that statute, but did not consider whether Section 230 barred such a claim.
Gonzalez concerned a November 2015 terrorist attack in Paris that killed U.S. citizen Nohemi Gonzalez. Ms. Gonzalez's father and estate sued Google, alleging that YouTube (a Google subsidiary) materially contributed to her death by publishing ISIS recruitment videos and failing to remove them promptly. Unlike in Taamneh, the Ninth Circuit considered a Section 230 defense and held that it barred the plaintiffs' claim.
In Taamneh, the Supreme Court ruled that a plaintiff cannot state a claim for aiding and abetting terrorism under 18 U.S.C. § 2333(d)(2) on the basis of social media companies' typical daily activities: "creating their platforms and setting up their algorithms to display content relevant to user inputs and user history." To reach this conclusion, the Court adopted and applied the three-part test from Halberstam v. Welch, 705 F.2d 472 (D.C. Cir. 1983), under which a defendant is civilly liable for aiding and abetting where: (1) there is a wrongful act causing injury, (2) the defendant is "generally aware of his role as part of an overall illegal or tortious activity at the time he provides the assistance" to the principal wrongdoer, and (3) the defendant "knowingly and substantially assist[s] the principal violation." Id. at 477.
Stressing that this standard is meant "to impose liability on those who consciously and culpably participated in the tort at issue," the Court found the plaintiffs' allegations insufficient. It was not enough that ISIS was able to use these platforms "just like everyone else," that the platforms recommended the content "just like any other content," or that the platforms knew the content was available but did not do more to remove it. To permit aiding and abetting liability on this basis, the Court reasoned, "would effectively hold any sort of communication provider liable for any sort of wrongdoing merely for knowing that the wrongdoers were using its services and failing to stop them." That, it found, would sweep too broadly. "[I]t might be that bad actors like ISIS are able to use [social media] platforms . . . for illegal . . . ends. But the same could be said of cell phones, email, or the internet generally." This sort of "passive nonfeasance" was not enough given the attenuated relationship between the defendants and the terrorist attack.
In Gonzalez—widely anticipated because the questions presented squarely concerned the application of Section 230—the Supreme Court remanded the case to the Ninth Circuit for consideration in light of the Court's ruling in Taamneh. Although the Court passed no judgment on the Ninth Circuit's application of Section 230, it strongly suggested that the plaintiffs' claims were unlikely to survive under Taamneh in any event.
Effects on Publication of Online Content
By declining to rule on the scope of Section 230, the Supreme Court preserved the legal status quo. Described by many as "the twenty-six words that created the internet," Section 230 shields online publishers and platforms from liability for hosting user-generated content. Section 230 advocates feared an adverse ruling in Gonzalez or Taamneh could have had devastating consequences. Specifically, a holding that imposed liability based on defendants' algorithms—including those used to decide what content to recommend to users—might have created a clear path around Section 230, forcing platforms to engage in far more aggressive content moderation.
The Court's decisions not only preserved the Section 230 defense, but also provided lower courts another avenue to dismiss, at the pleading stage, aiding and abetting claims based on the publication of third-party content. This is underscored by the Court's instruction that the Halberstam standard "should be understood in light of the common law," which suggests its decision should also reach claims under other state and federal statutes that threaten internet platforms with similar forms of secondary abettor liability.
Questions remain, however, about how courts will treat the publication of content through algorithms. In Taamneh, the Court suggested that such algorithms are "passive" and "agnostic as to the nature of the content." Though that characterization supported the defense in that case, it could provide a basis to narrow Section 230 immunity—contrary to the statute's intended purpose—to protect platforms only when they serve as a passive conduit for third-party speech and refrain from curation. It could also be used to argue that platforms relying on algorithms exercise no editorial control over user content, stripping them of First Amendment protection for decisions about what content to publish, promote, block, or restrict. The Court may revisit these issues next term, if it agrees to hear challenges to Texas and Florida statutes that also regulate the publication of user content.
Ambika Kumar co-chairs the media law practice at Davis Wright Tremaine LLP, where Jim Rosenfeld is a partner, Adam Sieff is a counsel, and Shontee Pant is an associate. Ambika and Adam submitted an amicus brief in Gonzalez in support of Google.