The world of the 1987 movie RoboCop largely remains science fiction – except, perhaps, for the central character’s ability to access massive amounts of confidential personal data with a few keystrokes or even through his helmet (was Robo’s mask the precursor to Google Glass?). Today, electronically sharing sensitive personal information – from credit card and Social Security numbers to medical information and family history – has become so commonplace that many of us rarely give it a second thought. Yet the widespread use of confidential information has created a tension between states that wish to protect their citizens from identity theft and companies looking to use that information in inventive ways to meet evolving industry demands and consumer expectations for customized experiences.

On April 17, Kentucky Governor Steve Beshear signed H.B. 232, a data breach notification law requiring companies to alert the state’s residents to unauthorized access to their personal information. Kentucky is now the 47th state in the Union to enact a data breach notice law – Alabama, New Mexico, and South Dakota remain the only states without such a statute on their books. A copy of the statute can be found here, and a summary can be found in DWT’s online compilation of all similar statutes. H.B. 232 was signed into law the same day as H.B. 5, a separate bill similarly mandating that state agencies notify Kentucky citizens when their personal information held by the state is compromised. Both laws come at a time when privacy concerns over sensitive data are driving states to protect their citizens’ personal information from unauthorized access.

Like many state data breach laws, H.B. 232 requires companies to notify affected persons “without unreasonable delay” when unauthorized access to their personal information results in actual identity theft or fraud, or when the company reasonably believes the breach “has caused or will cause identity theft or fraud.” Companies must also alert the major credit reporting agencies if a breach affects 1,000 or more Kentucky residents. “Personally identifiable information” under the statute includes “an individual’s first name or last name in combination with any of the following: Social Security number; driver’s license number; or account number, credit card number, in combination with any required security code, access code or password that would permit access to an individual’s financial account.”

H.B. 232 goes beyond typical data breach legislation, however, and contains a separate provision restricting the ways in which “student data” stored on cloud systems can be used. The law prohibits cloud providers from processing stored student data (defined to include not only information identifying the student, but also “any documents, photos, or unique identifiers relating to the student”) without parental permission for “any purpose other than providing, improving, developing, or maintaining the integrity of its cloud computing services.” The law thus restricts the activities not only of cloud services developed specifically for academic use, but also of broadly available free services, like Google Docs, that are used by many schools. Kentucky’s strong restriction against sharing student information without direct parental permission stands in contrast to the FTC’s recent guidance in its updated COPPA FAQs, which outlines permissible uses and allows schools to consent to the disclosure of children’s personal information on behalf of parents.

Simultaneously, privacy concerns are putting economic pressure on firms looking to use such sensitive information to meet new data sharing demands. As the New York Times reported on April 21, inBloom, “a non-profit corporation offering to warehouse and manage student data for public school districts” nationwide via a cloud storage system, announced that it is shutting down. The non-profit’s collection and storage of student data – from family relationships to Social Security numbers – was meant to help teachers across the country track individual student progress; instead, it raised concerns among parents and led some school districts and states to withdraw from their relationships with inBloom and similar providers.

In announcing inBloom’s closing, chief executive Iwan Streichenberger stated that “[i]t is a shame that the progress of this important innovation has been stalled because of generalized public concerns about data misuse, even though inBloom has world-class security and privacy protections that have raised the bar for school districts and the industry as a whole.”

While some innovation in academic progress may be stunted by inBloom’s demise, the generalized privacy concerns over data misuse that led to its downfall remain very real. Just one day after inBloom’s closure was announced, Iowa State University disclosed that the Social Security numbers of close to 30,000 individuals who were enrolled between 1995 and 2012 had been compromised. While officials do not believe that the hack’s purpose was to obtain personal information, that may be little comfort to the current and former students whose SSNs were accessed.

Then, just yesterday, Google announced that it was permanently disabling ads in its Apps for Education services and removing all ad scanning from Gmail for Apps for Education. Like inBloom, Google’s Apps for Education suite is designed to enhance students’ quality of education – in Google’s case, by providing products like email, word processing, calendaring, spreadsheets, and document sharing to K-12 and higher education institutions for free. Yet Google’s decision comes amid a backlash over its practice of automatically scanning and indexing the emails of Apps for Education users to deliver targeted advertisements, as well as legal pressure from lawsuits alleging that this scanning violated anti-wiretapping laws. Given the diversity of Google’s services and its brand recognition, it will likely weather the criticism it has faced over user privacy better than inBloom did.

All of these developments show that consumers and governments are still struggling with how much personal data – and in what contexts – companies should be allowed to collect and use, even when such use is ostensibly for the benefit of consumers.