NotPetya was the headline virus of 2017, one that caused billions of dollars in damage and lost revenue. A cursory overview of the battlefield suggests that the fallout was unavoidable: “sophisticated” nation-state actors created a “sophisticated” self-propagating and misleading virus with an indiscriminate attack route, one that eventually blasted through corporate America. But what if that impression is as misleading as the virus itself?

In 2017, 93% of all breaches could easily have been avoided if organizations had instituted basic security practices. NotPetya exploited vulnerabilities that had been disclosed, and for which patches were available, months before the attack. In fact, just weeks before the NotPetya outbreak, the WannaCry ransomware exploited the same vulnerability. And even though these vulnerabilities were widely publicized, at least a handful of multi-billion-dollar companies failed to address them.

While basic security could have saved many of these companies, basic security is not sufficient under the law. Most legal regimes require “reasonable security” or a close approximation, a standard higher than basic security. Lawmakers have used both carrots (e.g., Ohio’s safe harbor legislation) and sticks (e.g., FTC consent decrees) to drive companies to implement “reasonable security” practices.

But what is “reasonable security”? Lawyers have been asking that question for almost two decades, if not longer. The FTC has been happy to point out what it believes “unreasonable security” is but less eager to specify what constitutes “reasonable security.” Likewise, California’s Attorney General has provided some guidance but only to address the bare minimum for any company that collects personal data. Even this guidance is couched in the negative, stating that a failure to adopt the applicable controls “constitutes a lack of reasonable security.”

The FTC has left companies muddling through FTC consent decrees and guidance documents (including pamphlets) to figure out what requirements are included in “reasonable security.” This issue will take on new urgency when the California Consumer Privacy Act (CCPA) takes effect next year.

CCPA’s False “Reasonable Security” Promise

The CCPA by its terms provides a defense for companies that employ “reasonable security,” but the ambiguous meaning of that term means that the CCPA will likely penalize any company with a public data breach regardless of the quality of its security.

Plaintiffs have struggled to prove damages when consumer data is lost. The CCPA attempts to solve this problem by providing statutory damages in the event of “a violation of the duty to implement and maintain reasonable security procedures and practices.” For each plaintiff, the CCPA provides for damages of between $100 and $750. At first glance, that doesn’t seem like a lot, especially compared to the cost of implementing and maintaining an information security program, but applying the CCPA’s statutory damages to some recent breaches shows that the cost of employing “unreasonable” security could skyrocket: the potential fines reach into the billions. With such high dollar amounts at issue, the CCPA provides details on what qualifies as reasonable security, right? Unfortunately, it does not. It cannot. There are too many different circumstances for one statute or regulation to address, and technology moves too quickly for rulemakers even if there were a one-size-fits-all solution.
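To see how quickly those per-plaintiff figures compound, here is a back-of-the-envelope sketch. The $100–$750 range comes from the statute; the breach size below is purely hypothetical, chosen only to illustrate the scale:

```python
# Back-of-the-envelope CCPA statutory damages estimate.
# The $100-$750 per-consumer range comes from the CCPA's statutory
# damages provision; the breach size below is hypothetical.

STATUTORY_MIN = 100  # dollars per consumer per incident
STATUTORY_MAX = 750

def damages_range(consumers_affected: int) -> tuple[int, int]:
    """Return the (minimum, maximum) statutory damages exposure."""
    return (consumers_affected * STATUTORY_MIN,
            consumers_affected * STATUTORY_MAX)

# A hypothetical breach exposing 100 million consumers' records:
low, high = damages_range(100_000_000)
print(f"${low:,} to ${high:,}")  # $10,000,000,000 to $75,000,000,000
```

Even at the statutory minimum, a breach of that hypothetical size produces ten-figure exposure before any actual damages are considered.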

If lawmakers can’t define reasonable security, then courts are even less well positioned to do so. The 11th Circuit struggled with the vague standard of reasonableness even when the FTC provided the court with a specific framework for analysis. Courts will not be able to determine whether a security program is “reasonable” based on motion papers alone, and it is doubtful any company will want its security program scrutinized during discovery. So under the CCPA’s statutory damages provision, companies will likely settle if they can’t defeat class certification or win on a motion to dismiss.

The CCPA does provide a way to avoid suit even in the absence of reasonable security: a business must certify that it has actually cured its lack of reasonable security within 30 days of a plaintiff’s notice. But even if a company can address its security deficiencies within 30 days, this defense has two problems. First, invoking it requires a business to admit that it lacked reasonable security to begin with, and that admission could find its way into a lawsuit in another venue. Second, plaintiffs will likely challenge any certification and demand proof that the violation has actually been cured. The statute does not contemplate this second scenario, so we will have to wait to see how it affects the availability of the defense. If a business can’t certify that it has cured the violation, either because the violation can’t be cured in 30 days or because it has no actual violation to cure, it will have to either settle or continue through litigation.

So what is an organization supposed to do? If a company lacks basic security, it will become a statistic—and maybe a former company. Establishing basic security comes first. Rulemakers won’t define what “reasonable security” is, so companies should build a defensible security program. They can accomplish this while establishing basic security by taking a few additional steps. With a defensible security program, companies at least have a fighting chance against both threat actors and regulators.

A defensible security program has the following components:

  • An accountable individual
  • Risk assessment
  • Adequate controls within an overarching framework
  • Maintenance

We’ll address each of these in a future series of blog posts.