Continuing its push to develop a formal governance framework for artificial intelligence, the European Commission (EC) published draft ethical guidelines for AI in late December 2018. Developed by the EC’s High-Level Expert Group on Artificial Intelligence, the draft sets out ethical guidelines for the development and use of AI systems and technology. The release of these guidelines stems directly from the EC’s 2018 AI Declaration, in which the EC committed to developing a coordinated AI governance framework by the end of 2018.

The EC is inviting comments and feedback on these ethical guidelines, which will be accepted until January 18, 2019. The EC expects to issue a final version of the guidelines by the end of the first quarter of 2019, during the first annual meeting of the EU AI Alliance.

The guidelines introduce a new standard for so-called “trustworthy AI” – defined as AI that is developed with an “ethical purpose” (i.e., AI that respects fundamental rights, applicable regulations, and core principles and values) and that is technically robust and reliable. To achieve trustworthy AI, the guidelines recommend that AI developers incorporate into their systems both key principles (abstract, high-level norms that developers, businesses, users and regulators should follow) and values (more concrete guidance on how to uphold those principles). The principles and values articulated in the guidelines are based upon fundamental rights set forth in the EU Treaties and the Charter of Fundamental Rights.

Foundational Principles of Trustworthy AI

The EC guidelines articulate five key principles that define trustworthy AI systems:

  1. Beneficence (“do good”) – AI systems should be designed and developed to improve individual and collective wellbeing.
  2. Non-maleficence (“do no harm”) – AI systems “should not harm” human beings, though the guidelines define harm only broadly and vaguely. More specifically, AI must avoid “discrimination, manipulation or negative profiling.”
  3. Preserve human agency – Humans interacting with AI systems must retain “full and effective self-determination over themselves.”
  4. Principle of justice (“be fair”) – Developers must ensure that individuals are free from bias, stigmatization and discrimination.
  5. Principle of explicability (“operate transparently”) – This principle calls for both technical transparency (systems that are “auditable, comprehensible and intelligible by human beings at varying levels of comprehension and expertise”) and “business model” transparency (“human beings are knowingly informed of the intention of developers and technology implementers of AI systems”). Explicability is a precondition for achieving informed consent from individuals interacting with AI systems.

Required "Values" of Trustworthy AI

The EC asserts that these principles, in turn, define ethical values that must be incorporated into AI systems and that reflect the “requirements” of trustworthy AI. The guidelines offer the following non-exhaustive list of values:

  1. Accountability – AI systems should include accountability “mechanisms,” which could range from monetary compensation (e.g., no-fault insurance) to fault-finding to reconciliation without monetary compensation.
  2. Data governance – The draft guidance endorses data governance concepts without recommending specific practices, including: ensuring the integrity of data sets; eliminating bias in data; maintaining proper records of source data; and ensuring that biased data is not used against the individuals supplying the data.
  3. Design for all – This principle requires that AI be designed in a manner that ensures all persons (regardless of background or demographic) may use or access the system, and it incorporates established principles of disability access set forth in the UN Convention on the Rights of Persons with Disabilities.
  4. Governance of AI autonomy (human oversight) – Here the guidelines posit that the greater the degree of autonomy given to an AI system, the more extensive the testing and governance that is necessary. The guidelines also suggest that human intervention, including the opportunity to override AI system decisions or analytics, is critical.
  5. Non-discrimination – To eliminate or reduce unintentional discrimination or bias, AI developers should ensure data sets are “complete” and subject to proper data governance models. In addition, because some AI systems can detect bias within their own predictions, such systems should be employed to reduce the very bias or discrimination they may otherwise enable (see the sketch following this list).
  6. Respect for human autonomy – Systems intended to assist individuals “must provide explicit support to the user to promote her/his own preferences” and limit system intervention, in order to ensure that the wellbeing of the user (as defined by the user) is central to system functionality.
  7. Respect for privacy – The guidelines specifically require full compliance with the EU General Data Protection Regulation (GDPR), as well as “other applicable regulations” addressing privacy and data security.
  8. Robustness – The EC working group posits that “trustworthiness requires that the accuracy of results can be confirmed and reproduced by independent evaluation,” thereby asserting that all AI systems must be “reproducible.” This principle also requires that such systems “adequately cope with erroneous outcomes” while adhering to an (unstated) level of accuracy and resilience against attacks.
  9. Safety – To ensure the safety of AI systems, the guidelines assert that such systems must include processes to clarify and assess the potential risks associated with the use of AI products and services.
  10. Transparency – Here the guidelines assert that it must be possible to describe, inspect and reproduce the mechanisms through which AI systems make decisions, as well as the provenance of the data used to create the systems.
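
The bias-detection idea in item 5 above can be made concrete with a short, hypothetical sketch. The Python below computes a simple demographic-parity gap, one common way a system’s own predictions can be audited for disparate outcomes across groups; the group labels, predictions, and notion of what gap warrants review are all illustrative assumptions, not anything prescribed by the guidelines.

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Measure how far positive-prediction rates diverge across groups.

    predictions: 0/1 model outputs (e.g., hypothetical loan approvals)
    groups: a (hypothetical) protected-attribute label per prediction
    Returns (gap, per-group rates); a gap of 0.0 means every group
    receives positive outcomes at the same rate.
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical audit: model approvals broken down by a protected attribute.
preds = [1, 1, 1, 0, 1, 0, 0, 0]
grps  = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap, rates = demographic_parity_gap(preds, grps)
print(rates)         # {'A': 0.75, 'B': 0.25}
print("gap =", gap)  # 0.5 -- a large gap flags the predictions for review
```

In practice, a developer would presumably pair a check like this with the “complete” data sets and data governance models the guidelines describe, rather than treat it as a standalone safeguard.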

"Critical Concerns" Raised by AI

Although the EC working group members could not reach consensus on this topic, the guidelines also identify certain “critical concerns” raised by AI:

  1. Identification without consent – Asserting that AI enables more efficient identification of individuals (through facial recognition technology and other applications using biometric data), the guidelines argue that AI developers have an ethical obligation to develop entirely new means for individuals to give “verified” consent to being automatically identified by AI.
  2. “Covert” AI systems – Here the guidelines assert that individuals have a right to know whether they are interacting with a human or a machine, and therefore argue that AI developers must ensure that humans are made aware of, or can request and validate, the fact that they are interacting with an AI system.
  3. Mass citizen scoring – This aspect of the guidelines focuses on so-called “normative” citizen scoring (e.g., assessments of “moral personality” or “ethical integrity”) and recommends that AI developers engaged in citizen scoring in a limited social domain provide individuals with “a fully transparent procedure,” including information on the process, purpose and methodology of the scoring and, ideally, the possibility to opt out of any scoring mechanism.
  4. Lethal autonomous weapons systems – Citing fundamental ethical concerns raised by autonomous weapons systems, the guidelines express support for the EU Parliament’s resolution of September 2018 related to, among other things, “ensur[ing] meaningful human control over the critical functions of weapon systems, including during deployment” of such systems.

These values are the essential governance concepts articulated in the guidelines and could form the basis for the adoption of new standards, practices, rules or legislation. Not surprisingly, the guidelines are offered at a high level, without significant detail as to specific facts and circumstances. In addition, the drafters make clear that the document is to be considered the start of a process toward more formal guidance, rather than a formal decision or conclusion.

Although these guidelines propose the adoption of an ethical framework in some form, they do not appear to provide a basis for the Commission to adopt new rules or legislation in the near term. Nonetheless, we expect this framework will garner support from parties advocating for more formal regulation of AI systems and technology.