LitLand is a monthly feature that reviews developments in litigation as they relate to privacy matters and highlights any past, current, and future cases about which you should know.

Another day, another lawsuit against Google. This time, lawsuits in Illinois and California accuse the tech giant of violating consumers’ privacy expectations by secretly recording, storing, and listening to conversations through Google Assistant. We previously covered a class action against Google, the University of Chicago, and the University of Chicago Medical Center in which the plaintiffs alleged the defendants violated patients’ expectations of privacy by sharing personal health information with each other, without obtaining consent.

The Technology

Google Assistant is a voice-activated virtual assistant. The tool, powered by artificial intelligence (AI), can, among other capabilities, run searches on the internet, schedule appointments, play music on demand, and, on some devices, use the camera to identify objects. The technology is available on Google devices, such as Nest or Google Home, as well as on Android devices, smart speakers, and iOS devices.

Depending on the device, a consumer can activate Google Assistant by saying “Ok Google” (a “hot word” or “wake word”), holding a specific button, or launching an app. Once activated, users can interact with the technology by typing on a keyboard or by speaking in their “natural voice.” As with all AI, Google Assistant relies on data to “learn” how to perform requested tasks.

To improve functionality, Google Assistant records snippets of conversations, which it then analyzes to determine whether an individual intended to trigger Google Assistant or to train the device to better distinguish and understand an individual’s speech pattern. The ability to distinguish between voices is particularly important when multiple people use one Google Assistant enabled device.
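To make the mechanics concrete, and to preview the “false accept” problem discussed in the Takeaways below, the following is a minimal, purely hypothetical Python sketch of threshold-based wake-word detection. It is not Google’s code: the wake word, threshold value, and text-similarity scoring are illustrative assumptions standing in for a real acoustic model. The point is that anything scoring above the threshold, including background speech that merely resembles the wake word, will start a recording.

    # Hypothetical sketch of wake-word detection; not Google's implementation.
    # A real detector scores audio with an acoustic model; difflib's text
    # similarity merely stands in for that score here.
    from difflib import SequenceMatcher

    WAKE_WORD = "ok google"   # assumed wake word
    THRESHOLD = 0.8           # assumed activation threshold

    def wake_word_score(heard: str) -> float:
        """Score how closely heard speech resembles the wake word (0.0 to 1.0)."""
        return SequenceMatcher(None, heard.lower(), WAKE_WORD).ratio()

    def should_activate(heard: str) -> bool:
        # Recording begins whenever the score clears the threshold,
        # regardless of whether the speaker intended to trigger the device.
        return wake_word_score(heard) >= THRESHOLD

    for phrase in ["ok google", "ok gooble", "cook noodle"]:
        print(f"{phrase!r}: activates={should_activate(phrase)}")

In this toy example, the near-miss “ok gooble” clears the threshold while “cook noodle” does not; the former is the kind of unintended activation the California plaintiffs describe.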

The Illinois and California Cases

In the Illinois case, Morales v. Google.com Inc., the three plaintiffs allege Google violated the Illinois Biometric Information Privacy Act (BIPA) by recording, “transmit[ting],” and “indefinitely stor[ing]” communications, “including ambient speaking in the background not even meant for Google Assistant,” after a wake word was uttered in their household.

Plaintiffs allege this data is a type of “biometric identifier” and Google therefore must provide consumers written notice before collection, explaining the intent to collect this data, the purpose of collection, and the length of retention, and must also obtain consumers’ authorization. The lawsuit alleges Google neither informed the plaintiffs or bystanders in the vicinity of the devices that their voices were being recorded nor obtained their consent.

The Illinois suit is part of the flurry of lawsuits we have seen since the Illinois Supreme Court held that plaintiffs exercising the private right of action under BIPA do not have to show actual harm or injury.

Similarly, the California case, Galvan v. Google, Inc., alleges Google secretly recorded and retained the plaintiffs’ conversations. Unlike the Illinois case, however, the plaintiffs in Galvan allege that the recordings happened without a prompt (i.e., without a wake word or a button being pressed) and were listened to by humans when the data was transmitted to Google servers.

The California plaintiffs reference a Belgian news report describing how the reporter discovered Google’s surreptitious activities, obtained copies of the recordings, and tracked down at least one couple from the recordings because of the amount of detail contained in the leaked audio. The California plaintiffs allege violations of the California Invasion of Privacy Act, the state’s unfair competition law, and other California laws.

Both suits also allege Google unlawfully recorded children, who cannot give lawful consent. Specifically, the plaintiffs allege the unlawful recordings occurred either because the technology is incapable of distinguishing between adults’ and children’s voices, or because children were in the vicinity and their voices were recorded when the device was interacting with an adult.

Both suits seek damages and attorneys’ fees.

Takeaways

These cases are interesting primarily for the questions that they raise.

First, AI technology relies on data, in this case voice recordings, to improve performance (indeed, to perform at all). This invites the question of whether a consumer implicitly consents by purchasing a device that necessarily relies on recording their voice, or whether Google must now implement processes to obtain explicit consent for recording, separate from the intent the consumer exhibits by purchasing the device.

  • If explicit consent is required, what will that consent look like for household smart devices? Can one adult resident provide consent for an entire household, including visitors to that home? Are we on the verge of seeing privacy consent doctrine expanded beyond privity between an individual and the data collector, or will there be a way to fit emerging technology into old privacy principles?

Second, if consent exists only when an individual uses a hot word to trigger the device, what happens when an individual accidentally utters the wake word (or a word that sounds like it), triggering a smart device? If consent requires intent to trigger the smart device, who should bear the responsibility of ensuring the user intends to trigger it? Google, for having the device do exactly what it was purchased to do: listen, record, and respond when activated? Or the people in the household, for failing to be precise in the words they use?

  • In the Illinois case, the plaintiffs admit the device was intentionally triggered. In the California case, by contrast, the plaintiffs point to a statement Google released following the Belgian investigation acknowledging that Google Assistant can sometimes be triggered by a “false accept” (i.e., when noise or a background word sounds like a wake word).

Finally, the cases raise questions about the scope of permissible activity with AI. Specifically, what does consent look like when collected data is reviewed to improve AI functions?

  • Can a company obtain authorization simply by stating that it will collect data and use various mechanisms for analysis (including human review), or must it obtain consent for each possible use of the data and return to consumers each time it finds a new way to analyze what it has collected? Or are some types of analysis more intrusive, and thus in need of enhanced notice and consent, than others?

Google has yet to file answers in either of these cases.