On February 5, 2021, Senators Mark Warner (D-Va.), Mazie Hirono (D-Hawaii), and Amy Klobuchar (D-Minn.) introduced the Safeguarding Against Fraud, Exploitation, Threats, Extremism and Consumer Harms (SAFE TECH) Act, which aims to require online service providers to address fraud, harassment, and the use of social media to organize extremist violence. If enacted, the Act would dramatically change the landscape of online liability.

Background

It is no exaggeration to say the internet would not be what it is today without Section 230 of the Communications Decency Act. The statute effectively eliminates most ordinary legal responsibilities assumed by traditional publishers with respect to content provided by users and other third parties.

Section 230 was in part a response to a 1995 trial court decision, Stratton Oakmont, Inc. v. Prodigy Services Co., which found that an internet service provider could be held liable, as a publisher, for the content of its subscribers' posts.1 The court relied heavily on the fact that the provider advertised its practice of controlling content on its service and actively screened and edited material posted on its message boards.

Congress enacted Section 230 to remove the "grim choice" created by Stratton Oakmont: a provider that voluntarily filtered content would be responsible for all posts, while "providers that bur[ied] their heads in the sand and ignore[d] problematic posts would escape liability altogether."2

The immunity is widely credited with enabling the proliferation of online content and has been interpreted expansively by courts to bar the vast majority of claims based on user content. But in recent years, elected officials, courts, and others have raised concerns about how Section 230 operates in practice: some focus on the unlawful content, such as harassment and hate speech, that Section 230 permits providers to disseminate, while others object that the statute lets providers take down too much lawful speech.

The SAFE TECH Act attempts to address the first set of concerns.

Proposed Revisions to Section 230

The Act would significantly change Section 230 in three ways: it would (1) narrow the scope of protected content; (2) make Section 230 more difficult and costly to invoke in court; and (3) permit claims for injunctions requiring providers to remove allegedly harmful material.

Limiting the Scope of Protected Material

Although Section 230 has always contained exceptions (primarily for intellectual property law, federal criminal law, and federal privacy law), those exceptions have not materially altered the way providers operate. For example, other statutes and common-law regimes protect providers from liability for infringing third-party content, and criminal and privacy laws typically require providers to manage their own behavior rather than to vet third-party content.

The SAFE TECH Act would change this.

First, the Act would withdraw protection from any content for which the provider pays or is paid. In other words, websites could face liability for defamatory or misleading material in advertisements or in other content the provider pays for. This amendment would fundamentally change the current online advertising ecosystem, in which advertisers, not websites, bear responsibility for their own content. Under the Act, websites would likely require liability insurance as a condition of hosting paid content.

Second, the Act would strip immunity for claims arising under a raft of other laws, including those relating to:

  • Civil rights;
  • Antitrust;
  • Stalking, harassment, and intimidation;
  • International human rights; and
  • Wrongful death.

Thus, for wide swaths of content, Section 230 immunity would no longer be available. Websites would likely again face a choice between vetting no content at all and risking responsibility for all of it. Consequently, unlawful content could proliferate and lawful content could be suppressed, undoing much of Section 230's progress and undermining its goals.

The Ease of Applying Section 230

Today, courts frequently dismiss claims targeting third-party content at an early stage of the case, without requiring discovery. The SAFE TECH Act would expressly forbid that approach and instead require courts to treat Section 230 immunity as an affirmative defense, to be pled and proven by the provider, rather than as a ground for dismissing a lawsuit at the outset.

This would increase providers' costs of defending claims and enable plaintiffs to file questionable lawsuits in the hope of extracting settlements. The burdens of such an approach would fall disproportionately on small providers, which may not have the resources to fight prolonged court battles.

Removal of Problematic Material

Finally, the SAFE TECH Act would permit claims for injunctive relief against "material that is likely to cause irreparable harm." In other words, any time someone believes a posting causes them "irreparable harm," they could seek an injunction if the provider refuses to remove it.

Again, this would have serious consequences—to evade Section 230 immunity, an individual need only request injunctive relief, even if the underlying content is lawful.

First Amendment Limitations?

No matter what lawmakers' intent might be, the SAFE TECH Act would likely cause some providers to severely limit the amount of speech they host, including speech that is lawful; to take down third-party speech upon complaint; or to avoid publishing third-party content altogether.

In any event, the First Amendment might provide protection where Section 230 does not, because the First Amendment generally requires some level of knowledge before liability can be imposed on a distributor of third-party speech.


FOOTNOTES

1 Stratton Oakmont, Inc. v. Prodigy Servs. Co., 1995 WL 323710, at *6 (N.Y. Sup. Ct. May 24, 1995).
2 Fair Hous. Council of San Fernando Valley v. Roommates.com, LLC, 521 F.3d 1157, 1163 (9th Cir. 2008) (en banc).


This article was originally featured as a technology, privacy, and security advisory on DWT.com on February 10, 2021. Our editors have chosen to feature this article here for its coinciding subject matter.