What a difference a year makes.

In early 2018, the biggest concern for the consumer-facing online ecosystem was probably the then-impending launch of the GDPR. Today, GDPR compliance seems like the least of our worries. California’s Consumer Privacy Act takes effect in 10 months, and other states are considering similar laws. The Illinois Supreme Court just ruled that plaintiffs can recover damages without showing actual harm under the state’s biometric privacy law. And, after two decades of support for privacy self-regulation, tech giants – notably Apple, via CEO Tim Cook, but others as well – are calling for binding, nationwide federal privacy legislation.

It’s hard to make sense of this radical shift in the policy landscape from within the conventional perspective on how consumers deal with privacy issues. Under the conventional notice-and-choice model, consumers have preferences for privacy that they express by deciding which websites and online services to patronize and what information to give up. By coming online and using the services, consumers convey assent to providing information, to viewing targeted ads, and – ultimately – to buying things. And, within this framework, the steady volume of online activity means that consumers are reasonably satisfied with the level of privacy protections the online ecosystem provides – otherwise, why would they keep showing up?

So, where did the sudden shift in attitude – and, as a result, in political climate – come from?

John Maynard Keynes famously said that “economics is the science of thinking in terms of models, joined to the art of choosing models which are relevant.” The notice-and-choice model of privacy, which conforms to the idea of supply (of privacy, by online entities) and demand (for privacy, by consumers), has a lot to recommend it. But it doesn’t seem to capture today’s world of privacy policy challenges. It doesn’t help us understand the changing political climate, and it doesn’t provide much insight into what industry and policymakers should do to address consumer concerns in a way that will let the ecosystem flourish going forward.

If we are going to successfully navigate the challenges now facing the online ecosystem, we need to supplement the traditional model with something more in tune with what’s going on.

We can see a different way to understand our current situation by considering how producers manage a “common-pool resource,” colloquially called a “commons.” In the natural dynamic of a market, supply and demand interact to create a balance between the interests of buyers and sellers. But in the natural dynamic of a commons, everyone uses as much of the shared resource as they can, causing it to get over-used and to collapse – to everyone’s detriment. This is the “tragedy of the commons” we’ve all heard about. The challenge in a commons is to find ways for producers to organize to make profitable use of the shared resource, while managing overall usage in a way that keeps it from collapsing.

In the context of privacy policy, the critical shared resource is consumer trust. In human terms, we share information with those we trust, and conceal things, to the extent that we can, from those we don’t. Privacy and trust are thus joined at the hip. People trust the online ecosystem (and feel their privacy is protected) when it safeguards the data it gleans from consumers and uses it in ways consumers find acceptable (including, it should be noted, a reasonable level of ad targeting). But the ecosystem squanders trust with data breaches, data collection that consumers find intrusive or excessive, or, generally, being less than transparent about what information is being collected and what is being done with it.

The problem for the online ecosystem is that the critical resource – consumer trust – really is shared. No matter how hard any one entity tries to do things right, sooner or later somebody suffers another big data breach, or consumers or regulators learn that somebody is collecting and using data in unexpected and troubling ways. This leads to yet another mini-scandal in the world of privacy policy – and the resulting decline in trust doesn’t just hurt whichever entity, this time, may have gone beyond the pale. It hurts everybody.

To make matters worse, a shared resource can have a non-linear response to increases in usage. In a linear system, a little more usage just means that there’s a little bit less of the resource to go around. But in a non-linear system, a little more usage may cause only small declines for a while, until, at some point, just a little more usage triggers a large, sudden, and catastrophic collapse. Non-linear response is the proverbial straw that breaks the camel’s back, or the last bit of rain that causes the dam to burst.

This seems to be what’s happening in the world of privacy policy. From data breaches to concerns over undisclosed data collection, consumer trust has been subject to repeated hits, leading the dam to burst – and, suddenly, the nation’s most populous state has a draconian new privacy law, with other states competing to keep up. Consumers – who are also citizens to whom politicians respond – seem to have lost trust in the ability of the online ecosystem to protect privacy via self-regulation, resulting in demands for legislation.

So, what does this mean in practical terms?

Viewing privacy through the lens of a “trust commons” doesn’t dictate the best answer for any given issue, whether for consumers or businesses. But it does provide a framework for discussion that can help get beyond claims that consumers are served by burdensome new privacy options, or by even greater rights to access, delete, or correct stored information. No matter how popular these measures might sometimes be, they may not really scratch where consumers are itching. Instead, as we sort out the parameters of new privacy laws – both federal and state – we should consider how to structure them using a framework that corresponds to what’s actually going on. Viewing the key issue as managing a common pool of consumer trust – rather than as trying to enable a chimerical consumer-choice-driven “market” for privacy – may fit the bill.


1 The ideas discussed here are addressed in more detail in my recent law review article: Managing the Ambient Trust Commons: The Economics of Online Consumer Information Privacy, 22 Stan. Tech. L. Rev. 95 (2019), available online at https://law.stanford.edu/stanford-technology-law-review-stlr/. The views expressed here (and there) are the author’s own, and do not necessarily reflect those of any of Davis Wright Tremaine’s clients, or of any other members of the firm.