Google’s recent release of ethical principles and responsible practices reflects one more company’s recognition of how important it is to get out in front of regulators, potential litigants, and adversaries by adopting ethical AI design principles. Google intends these principles to help shape the “ideal Google AI algorithm” – one which is socially beneficial, unbiased, tested for safety, accountable, private, and scientifically rigorous. Although Google’s version was released as it was retreating (under employee pressure) from using its image recognition capabilities to enhance military drone strike targeting, these principles of self-regulation nonetheless serve as a useful starting point for exploring which principles should find their way into any set of AI ethical standards.

Google’s Decision Not to Wait for the Development of Consensus-Based Standards Reflects Immediacy of Issues Under Consideration

Google’s decision to publish its own set of ethical principles guiding the company’s use of AI illustrates the importance of such principles in framing permissible uses and deflecting overreaching calls for some form of regulatory oversight.

These principles are emerging at a time when other leading technology companies are also defining their own principles and standards. Indeed, as previously discussed in this blog, Microsoft, another leading voice at the forefront of the AI policy and ethics discussion, recently published The Future Computed: Artificial Intelligence and its role in society, outlining its own vision of a self-regulatory framework to govern AI. The initiative shown by Microsoft, Google, and others is prescient, especially as public interest groups and others increasingly raise concerns about AI and call for its regulation.

While certain organizations are leading the way, others developing or using AI systems may ask whether these principles are necessary or useful. In other words, do ethical norms or principles actually work? That question was addressed by Jack Clark of OpenAI, who recently testified before Congress on this very point. Mr. Clark explained:

Why do we think developing ethical norms will make a big difference? Because we know that it works. In the areas of bias, we have seen similar research in recent years in which people have highlighted how today’s existing AI systems can exhibit biases, and the surfacing of these biases has usually led to substantive tweaks by the operators of the technology, as well as stimulating a valuable research discipline that provides a set of ‘checks and balances’ on AI development without the need for hard laws.

Testimony of Jack Clark, OpenAI, House Committee on Oversight (Apr. 18, 2018). As Clark’s testimony makes clear, the use and adoption of ethical norms and principles effectively establishes a framework for companies to develop AI systems and applies a system of checks and balances for engineers, developers, and executives to evaluate those systems. Further, these kinds of norms and principles also signal to policymakers and adjudicators that the company developing, deploying, or using AI has already defined and established permissible uses consistent with societal norms and values. That, in turn, may act to limit (or block) potential new regulation or oversight in the years ahead.

Google Defines Standards of Conduct in Key Areas Including Fairness, Interpretability, Privacy and Security

Fairness: Google’s principle of “fairness” is grounded in existing legal regimes and recommended practices that guard against bias in discrete areas like housing, employment, and financial products. Google defines its objective as guarding against adverse outcomes for certain protected classes and avoiding the creation or reinforcement of unfair bias. The company explains that it will judge fairness within the context of the circumstances at hand (rather than simply applying these principles broadly), and that identifying appropriate fairness criteria for a system requires accounting for user experience and cultural, social, historical, political, legal, and ethical considerations, and presents certain trade-offs. Nonetheless, the company’s researchers will:

  • design models using concrete goals for fairness and inclusion
  • use representative data sets to train and test the model
  • check the system for unfair biases (one such check is sketched below), and
  • analyze performance
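
For readers on the engineering side, the sketch below shows one minimal way to “check the system for unfair biases”: comparing a model’s behavior across groups defined by a sensitive attribute. It assumes a hypothetical classifier trained on synthetic data with a made-up group label; it illustrates the kind of group-level comparison the principle contemplates and is not Google’s own tooling.

```python
# A minimal bias check on a hypothetical classifier and synthetic data.
# The "group" attribute, features, and thresholds are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, size=n)                  # hypothetical sensitive attribute
features = rng.normal(size=(n, 3))
labels = (features[:, 0] + 0.5 * group + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_train, X_test, y_train, y_test, g_train, g_test = train_test_split(
    features, labels, group, test_size=0.3, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
preds = model.predict(X_test)

# Compare positive-prediction (selection) rates and accuracy across groups;
# large gaps are a signal to revisit the data, features, and model.
for g in (0, 1):
    mask = g_test == g
    selection_rate = preds[mask].mean()
    accuracy = (preds[mask] == y_test[mask]).mean()
    print(f"group {g}: selection rate {selection_rate:.2f}, accuracy {accuracy:.2f}")
```

Which metric gaps matter in practice (selection rate, error rate, false-positive rate, and so on) depends on the context and trade-offs Google’s guidance describes.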

Interpretability: As used in Google’s principles, interpretability means that people must be able to question, understand, and trust AI systems. Interpretability is not only an outgrowth of GDPR requirements for explainable AI, but also a sound design principle that can engender trust among users and potential regulators. In Google’s view, interpretability also reflects the importance of domain knowledge and societal values, and is a key tool for providing feedback to engineers and developers. To advance this principle, Google recommends that developers:

  • plan, in advance, options to pursue interpretability
  • treat interpretability as a core part of the user experience
  • design the model to be interpretable
  • choose metrics to reflect the end-goal and the end-task
  • understand the trained model (one technique is sketched below)
  • communicate explanations to model users
  • test, test, test
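
As one illustration of “understanding the trained model” and communicating explanations, the sketch below applies permutation feature importance to a hypothetical tabular model. The dataset and estimator are stand-ins; Google’s guidance does not prescribe this or any particular technique.

```python
# A minimal interpretability sketch using permutation importance on a
# hypothetical model; the dataset and estimator are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Estimate how much each feature contributes to held-out performance; the
# ranked list is one artifact that can be communicated to model users.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
top = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])[:5]
for name, score in top:
    print(f"{name}: {score:.3f}")
```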

These recommendations reinforce Google’s overall commitment to “be accountable to people.” The company promises to provide appropriate opportunities for feedback, explanations, and appeal, and to subject its AI to “appropriate” human direction and control.

Privacy: Here again, a wide array of existing laws already govern privacy for health, financial, and video records, and a host of other kinds of personally identifiable information that may routinely be implicated in AI development and operations. Thus, while Google’s statement reaffirms that it is essential to consider the potential privacy implications of using sensitive data in light of social norms and expectations, in some ways the principle reflects a subset of duties that largely apply today under existing law.

But it is significant that Google is committing to privacy by design – addressing privacy at the design stage, rather than after AI systems are fully operational. It is also significant that Google offers a high-level principle that can be adjusted for context as technology and societal expectations evolve, and can therefore protect both privacy and rapid innovation. To achieve those goals, the company recommends that developers:

  • collect and handle data reasonably (a minimal sketch follows this list)
  • leverage on-device processing where appropriate
  • appropriately safeguard the privacy of machine learning models
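
The sketch below illustrates how the first two recommendations might look in code: keeping only the fields a model actually needs and pseudonymizing identifiers before data leaves a device. The field names, salt, and record format are hypothetical, and this is not a description of Google’s own pipelines.

```python
# A minimal data-minimization and pseudonymization sketch; all field names,
# the salt, and the sample record are hypothetical.
import hashlib

SALT = b"replace-with-a-per-deployment-secret"      # hypothetical secret
FIELDS_NEEDED_FOR_TRAINING = {"country", "app_version", "event_type"}

def pseudonymize(user_id: str) -> str:
    """Replace a raw identifier with a salted one-way hash."""
    return hashlib.sha256(SALT + user_id.encode("utf-8")).hexdigest()

def minimize(record: dict) -> dict:
    """Keep only the fields the model needs, plus a pseudonymous ID."""
    reduced = {k: v for k, v in record.items() if k in FIELDS_NEEDED_FOR_TRAINING}
    reduced["user"] = pseudonymize(record["user_id"])
    return reduced

raw = {"user_id": "alice@example.com", "country": "US",
       "app_version": "2.1", "event_type": "click", "gps": "47.6,-122.3"}
print(minimize(raw))   # the raw identifier and location never leave the device
```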

Security: Google’s approach to security reflects a well-accepted truth: that it is essential to consider and address the security of an AI system before it is widely relied upon in safety-critical (and other) applications. To that end, the company recommends that developers:

  • identify potential threats to the system (one simple probe is sketched below)
  • develop an approach to combat threats
  • keep learning to stay ahead of the curve
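
By way of illustration, the sketch below shows one very simple probe for a potential threat: measuring how much a hypothetical model’s accuracy degrades when its inputs are slightly perturbed. The model, dataset, and noise scale are assumptions for demonstration, and real threat modeling covers far more than input robustness.

```python
# A minimal robustness probe on a hypothetical model; the dataset, estimator,
# and noise scale are illustrative assumptions.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

clean_acc = model.score(X_test, y_test)

# Add small random noise to test inputs and measure the drop in accuracy.
rng = np.random.default_rng(0)
noisy_acc = model.score(X_test + rng.normal(scale=0.3, size=X_test.shape), y_test)

print(f"clean accuracy: {clean_acc:.2f}, noisy accuracy: {noisy_acc:.2f}")
# A large gap suggests the system may be fragile against malformed or
# adversarial inputs and needs hardening before safety-critical use.
```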

Safety: Google pledges that it will design its AI systems to be “appropriately cautious” and to develop those systems in accord with “best practices in AI safety research.” (Several who study the ethics of AI and robotics recommend that such systems be designed with safety protocols and safe modes.)

Socially Beneficial: Implementation of these practices will also be guided by several high-level scientific and social objectives that Google will use in assessing AI applications. On the science side, it commits to high standards of scientific excellence. On the social side, Google will balance a “broad range” of social and economic factors and develop or use AI where the overall likely social benefits “substantially exceed” foreseeable risks and downsides. It plans to use AI, or make AI tools available, for uses that accord with its principles, considering the technology’s primary use, whether it can also be used for harmful purposes, the potential impact at scale, and the nature of Google’s involvement (whether it is using AI with its own customers, offering AI as a platform, or building custom solutions).

Google’s AI Practices and Objectives Exclude Some Noteworthy Issues

As has been widely reported, in conjunction with the release of these principles Google announced that the company will not develop AI applications and systems specifically for weaponry, or for surveillance tools that would violate “internationally accepted norms.” It also announced that the company won’t “design or deploy AI” for “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.” Left unaddressed is the harder question of how the company will distinguish applications that may indirectly “cause” or “facilitate” harm from those that do not.

But Google has also omitted a number of principles that some are advancing as appropriate for AI or for AI platforms. For example, IEEE is grappling with respect for human rights and benefits to the natural environment as two large organizing principles, and with specific issues like how to embed values, when it is ethical to nudge humans towards certain behavior, and how to deal with hate, fear, love, and other emotions. In today’s post-Cambridge Analytica climate, one might also ask whether an AI platform should be designed to provide accountability for political messaging.

Watch this blog as Davis Wright Tremaine’s AI lawyers explore these questions in the days ahead.