As our clients have recognized, the future will bring both tremendous opportunities and complex challenges in the development and application of artificial intelligence (“AI”) technologies. How can we—as in-house and outside counsel for companies embracing these opportunities and addressing the challenges—be ready to help our clients manage the risks and rewards? What questions should in-house and outside counsel be asking? What answers should we be questioning?

Recent blog posts on this site have begun to explore some of the ethical and legal issues arising from the use of AI in financial services, healthcare services, and autonomous systems. At the same time, fundamental questions of ethical standards for the design, use, and implementation of AI-powered systems remain unanswered.

Staying abreast of the commercialization of existing and developing AI technologies, and of how they affect the legal landscape, is an obvious starting place. Those technologies implicate a broad range of legal doctrines in areas such as IP, tort, privacy, and employment, among many others. But, beyond the analysis of specific legal issues, it is critical for the legal profession to develop an in-depth understanding of the evolving and complicated social and legal policies surrounding AI in order to provide insightful and well-informed guidance to our clients.

The Partnership on AI to Benefit People and Society (the “Partnership”) was established in 2016 by leading technology companies Amazon, Apple, DeepMind/Google, Facebook, IBM, and Microsoft to advance public understanding of AI and formulate best practices on the challenges and opportunities within the field. Since then, over three dozen other companies, non-profits, NGOs, and academics have joined the Partnership’s multi-stakeholder dialogue on AI. Any legal professional currently working on AI issues, or hoping to do so in the future, will benefit from following the thought leadership of this group at the following links:

Website: https://www.partnershiponai.org/

Twitter: @PartnershipAI

Facebook: Partnership on AI

Microsoft, another leading voice at the forefront of the AI policy and ethics discussion, recently published a book titled The Future Computed: Artificial Intelligence and its role in society (Microsoft Corporation 2018). In its Foreword, company leaders Brad Smith1 and Harry Shum2 compare the monumental changes the internet brought to society as it came of age to those that will occur as AI evolves over the next two decades. They pose the question “will the future give birth to a new legal field called ‘AI law’?” and answer it with a resounding yes. “By 2038 . . . [n]ot only will there be AI lawyers practicing AI law, but these lawyers, and virtually all others, will rely on AI itself to assist them with their practices.”

As an example of the new thinking AI requires, the University of Washington created the Tech Policy Lab, an interdisciplinary research unit that spans the School of Law, Information School, and Paul G. Allen School of Computer Science and Engineering. It sits at the epicenter of law, technology, and policy. Rather than watch new technologies emerge and then ascertain their legal and social implications, the idea is for policymakers and lawyers to engage earlier in the technology design process to surface important issues.

More than ever, we will be faced with the question: “Just because we can, should we?” At the same time, a range of organizations, from federal government agencies to industry consortia, are exploring many of these issues.

NIST, the federal agency responsible for setting certain technical standards, is exploring the development of AI data standards and best practices to cultivate trust in the technology by developing and deploying standards, tests, and metrics that make technology more secure, usable, interoperable, and reliable. In addition, the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems recently released a draft report, Ethically Aligned Design, which seeks to promote diverse stakeholder input, human rights, the prioritization of human well-being, accountability, and transparency. The International Organization for Standardization is pursuing a similar standards process.

Professor Ryan Calo, of the University of Washington School of Law and a faculty co-director at the Tech Policy Lab, speaks of three paradoxes that presently exist in the realm of AI, which lawyers should be mindful of when exploring the ethical issues presented by widespread adoption of AI-powered systems:

  • First, we are being advised not to worry about the “AI apocalypse” because it is still early days. Yet we are already entrusting some of our most sensitive life-and-death decisions, such as healthcare diagnoses, criminal sentencing, and end-of-life care, to AI-guided tools. That trust should be verified to ensure that bias does not infect the outputs and that decision-making has a moral overlay.
  • Second, to get smarter, we need to know less. Machines will help us make better decisions. But how those decisions are reached may be either intentionally opaque (because of intellectual property protections) or unknowable (because algorithms evolve without human intervention). AI can help surface patterns, validate hunches, and flag scenarios for further review, but these new systems should not replace human judgment and healthy skepticism. In other words, continued utilization of concepts like “shared autonomy” in robotics and autonomous systems, as explained here by Professor Sethu Vijayakumar of the University of Edinburgh, will be necessary.
  • Third, we hear that AI will change everything from the practice of law to medicine. At the same time, some believe that we should be able to apply existing legal principles developed in the past to AI and its many applications in the future. Yet AI will stretch fundamental legal concepts like intent, authorship, and foreseeability beyond reason. Lawyers should start shaping the field of “AI law” now.

AI tools can provide a valuable supplement to human activities. AI can generate efficiencies that extend resources such as legal services, healthcare, and transportation to more people. We should embrace it. At the same time, lawyers need to actively engage with our clients in answering the difficult questions raised by the impact of AI on existing legal norms and principles, and on society as a whole. From privacy to cybersecurity, to responsibility and ethics, to liability, the legal profession has a crucial role to play in developing social policies and legal standards for AI that will serve both our clients’ interests and the betterment of the global community in which we live and work.

FOOTNOTES

1 Microsoft President and Chief Legal Officer
2 Executive Vice President, Artificial Intelligence and Research