The European Commission's (Commission) proposal for regulations establishing harmonized rules on artificial intelligence systems and applications (AI Regulations) is an aggressive and sweeping effort to regulate artificial intelligence (AI) systems, applications, and tools, one designed to "turn Europe into the global hub for trustworthy AI."

While not explicitly stated in the AI Regulations, the proposed obligations will likely impact many of the specific methods of developing AI systems, such as machine learning, deep learning, neural networks, and adversarial systems. This proposal builds upon the Commission's previous AI White Paper outlining a regulatory framework for AI, which introduced the risk-based approach and related concepts now reflected in the proposed regulations.

There is no doubt that this proposal will frame future policy debates around the globe. Indeed, that seemed to be the Commission's intent, as reflected in this comment from Margrethe Vestager, Executive Vice-President for A Europe Fit for the Digital Age:

On Artificial Intelligence, trust is a must, not a nice to have. With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted.

While framed as EU-centric regulations, the potential reach of these proposed rules is quite broad. To that end, the proposal itself puts forward a significant extra-territoriality component, which the Commission justified in the following manner: "This Regulation should also apply to providers and users of AI systems that are established in a third country, to the extent the output produced by those systems is used in the Union" in order "to prevent the circumvention of this Regulation and to ensure an effective protection of natural persons located in the Union."

With the introduction of these proposed new rules of the road, the European Union will begin a complex and possibly lengthy legislative process to consider adoption of these regulations. Both the European Parliament and the Council of the European Union are expected to review and provide input, which could lead to significant modifications and final rules that differ from the proposal.

Key Takeaways

Although the AI Regulations utilize a risk-based approach in an attempt to moderate the potential impact on low-risk systems, the proposed rules sweep across a broad range of AI systems and applications, prohibiting certain systems and applications outright and imposing detailed new requirements on others deemed to be "high-risk." Even many low-risk systems and applications that do not meet the high-risk threshold are subject to transparency obligations under the proposed AI Regulations.

Proposed Regulations Broadly Define AI and Related Concepts

The proposed AI Regulations frame the new rules around several key concepts, including both "AI systems" and "AI practices." AI systems are defined as "software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with."

Annex I of the AI Regulations specifically identifies machine learning, logic and knowledge-based approaches, and statistical methodologies as within the scope of this definition. Notably, the AI Regulations routinely use the term AI "practices" without formally defining the term. Numerous key concepts, standards, and expectations are similarly left undefined.

Generally, the AI Regulations follow a risk-based approach, differentiating between uses of AI that create "(i) an unacceptable risk, (ii) a high risk, and (iii) low or minimal risk." Broadly speaking, AI systems that potentially pose significant risks to the health and safety or fundamental rights of persons are considered "high-risk." Specific classification points focus on human-machine interaction and the level of vulnerability of persons impacted by AI systems.
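To make the tiered structure concrete, here is a minimal Python sketch modeling the proposal's risk categories as an enumeration. The tier names and the triage logic are illustrative assumptions on our part, not terms or tests defined by the proposal, which classifies systems by reference to its annexes rather than simple flags.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative model of the proposal's risk tiers (names are ours, not the regulation's)."""
    UNACCEPTABLE = "unacceptable"      # prohibited practices, banned outright
    HIGH = "high"                      # conformity assessment plus ongoing obligations
    LOW_OR_MINIMAL = "low_or_minimal"  # largely limited to transparency obligations

def classify(is_prohibited_practice: bool, poses_significant_risk: bool) -> RiskTier:
    # Hypothetical triage: the proposal itself classifies by annex listings,
    # not by boolean flags like these.
    if is_prohibited_practice:
        return RiskTier.UNACCEPTABLE
    if poses_significant_risk:  # e.g., risks to health, safety, or fundamental rights
        return RiskTier.HIGH
    return RiskTier.LOW_OR_MINIMAL
```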

High-risk AI systems face the greatest scrutiny and the most extensive new requirements, including obligations to develop risk management methods, transparency tools, and event logs intended to mitigate potential harms to individuals or users. In addition, providers of high-risk AI systems must implement processes to ensure human oversight of these systems and are subject to heightened requirements governing data use and governance, accuracy, robustness, and cybersecurity.

Perhaps most significantly, developers of high-risk AI systems will be subject to conformity assessment requirements, which impose an ex ante (pre-market entry) regulatory review of the specific high-risk AI application or system.

Prohibition on Certain AI Systems and Applications

The AI Regulations propose to prohibit the deployment of certain very specific AI applications and systems: those deemed to be "particularly harmful AI practices" that are "contravening Union values" and present unacceptable risks.

The prohibitions focus on practices that from the Commission's perspective have a significant potential to "manipulate persons through subliminal techniques beyond their consciousness or exploit vulnerabilities of specific vulnerable groups such as children or persons with disabilities in order to materially distort their behaviour in a manner that is likely to cause them or another person psychological or physical harm."

There are four categories of prohibited AI practices:

  1. AI systems that deploy "subliminal techniques beyond a person's consciousness in order to materially distort" behavior in a manner that causes or is likely to cause a person physical or psychological harm.
  2. AI systems that "exploit" the vulnerabilities of a specific group due to age or physical or mental disability in order to materially distort behavior in a manner that may cause harm.
  3. AI systems used by public authorities to evaluate or classify the trustworthiness of persons based on their social behavior, where the resulting social score leads to detrimental or unfavorable treatment that is either or both: (a) applied in a social context unrelated to the context in which the data was generated, or (b) unjustified or disproportionate to the behavior.
  4. "Real-time remote biometric identification systems" used in publicly accessible spaces for law enforcement purposes. Note that this prohibition includes significant exceptions permitting use of such systems to: (a) search for victims of crime, including missing children; (b) prevent a specific, substantial, and imminent threat to the life or physical safety of persons (or a terrorist attack); and (c) detect, identify, or prosecute perpetrators or suspects of a criminal offense.

High-Risk AI Systems Subject to Greatest Scrutiny

As noted above, AI systems that potentially pose significant risks to the health and safety or fundamental rights of persons are considered high-risk. In practice, there are two main categories of high-risk AI systems:

  1. AI systems intended to be used as safety components of products already subject to pre-market (ex ante) conformity assessment; and
  2. Stand-alone AI systems used in applications or sectors that present heightened risks, including: biometric identification (distinct from the prohibition on law enforcement use in publicly accessible spaces), critical infrastructure management, education and training, human resources and access to employment, law enforcement, administration of justice and democratic processes, migration and border control management, and systems for determining access to public benefits.

Expansive New Requirements for High-Risk AI Systems

Systems falling into these categories are subject to a range of new obligations, including pre-market entry conformity assessments, extensive risk management requirements, data use standards, and detailed recordkeeping and reporting obligations.

  • Risk management system: Establish, implement, document, and maintain a "risk management system" for high-risk AI systems, informed by mandated testing to identify the most appropriate risk management measures. Risk management systems must include the specific system components designed to identify and mitigate perceived risks.
  • Data governance: For techniques involving the training of models with data, develop the system on the basis of training, validation, and testing data sets that meet specified quality criteria.
  • Technical documentation: Develop technical documentation before an AI system is placed on the market or put into service, which must "demonstrate that the high-risk AI system complies with the requirements" for high-risk AI systems and provide authorities "with all the necessary information to assess the compliance of the AI system with such requirements."
  • Recordkeeping: Design and develop the AI system with capabilities enabling automatic recording of events ("event logs") while the AI system is in use, sufficient to log key events, enable a certain level of "traceability," and monitor risks (a minimal logging sketch follows this list).
  • Transparency: Design and develop AI systems in such a way that their operation is "sufficiently transparent" for users to interpret the system's output and use it appropriately. High-risk AI systems must also be accompanied by instructions for use in an appropriate digital (or similar) format.
  • Human oversight: Design and develop the AI system, including with "appropriate human-machine interface tools," to enable human oversight while in use. Before the AI system is placed on the market or put into service, such measures must be either (a) built into the high-risk AI system, or (b) identified as appropriate for implementation by the user.
  • Accuracy, robustness, and cybersecurity: Design and develop AI systems in such a way that they achieve an "appropriate level" of accuracy, robustness, and cybersecurity, and perform consistently in those respects throughout the system's lifecycle, meeting specific parameters around accuracy, resilience, feedback, and cybersecurity.
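In engineering terms, the recordkeeping obligation amounts to structured, timestamped event logging. The Python sketch below illustrates one way a provider might implement it; the event fields, log format, and system name are our assumptions, as the proposal does not prescribe a schema.

```python
import json
import logging
from datetime import datetime, timezone

# Structured JSON log lines support the "traceability" the proposal calls for;
# the field names below are illustrative, not mandated by the text.
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai_system.events")

def log_event(system_id: str, event_type: str, detail: dict) -> None:
    """Automatically record one event while the AI system is in use."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "event_type": event_type,  # e.g., "inference", "human_override", "anomaly"
        "detail": detail,
    }
    logger.info(json.dumps(record))

# Example: logging a single decision for later risk monitoring.
log_event("credit-scoring-v2", "inference",
          {"input_hash": "ab12cd34", "decision": "approve"})
```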

Registration and Database for High-Risk AI Systems

The Commission will monitor post-market compliance with the above-referenced requirements for high-risk AI systems in part through a registration and public database for high-risk AI systems that the Commission will maintain. Providers of high-risk AI systems will thus be required to register their systems and provide meaningful information about them after completing the conformity assessment process, but prior to market entry.

AI Systems Also Face Heightened Transparency Obligations

The proposed AI Regulations also extend transparency obligations to most other AI systems, even those that are not classified as high-risk. Specifically, transparency obligations apply regardless of risk level to systems that: (i) interact with humans, (ii) detect emotion or use biometric data for social purposes, and (iii) generate or manipulate content ("deep fakes"). Transparency obligations vary depending upon the type of system deployed.

  • Interacting with humans: Providers must ensure that AI systems intended to interact with natural persons are designed and developed to inform those persons that they are interacting with an AI system, unless that is obvious from the circumstances and context (a brief disclosure sketch follows this list).
  • Emotion recognition or biometric categorization: Users of an "emotion recognition system" or a "biometric categorization system" must inform natural persons exposed to either system that it is operating.
  • Deep fakes: Users of an AI system that "generates or manipulates image, audio or video content that appreciably resembles existing persons, objects, places or other entities or events and would falsely appear to a person to be authentic or truthful ('deep fake'), shall disclose that the content has been artificially generated or manipulated."
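As a concrete illustration of the first obligation, the short sketch below shows one way a conversational system might surface the required disclosure. The wording and the function itself are hypothetical, since the proposal specifies the outcome (the person must be informed) rather than any particular mechanism.

```python
def respond(model_reply: str, disclosure_shown: bool) -> tuple[str, bool]:
    """Prepend an AI-interaction disclosure on first contact (illustrative only)."""
    if not disclosure_shown:
        notice = "Notice: you are interacting with an AI system."
        return f"{notice}\n\n{model_reply}", True
    return model_reply, True

# First turn of a session: the disclosure has not yet been shown.
reply, shown = respond("Here are three loan options...", disclosure_shown=False)
print(reply)
```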

AI Systems Will Be Subject to Oversight From Multiple Authorities and Potential Fines

The AI Regulations propose a tiered framework for potential fines and penalties, depending upon the activity at issue. Introduction or use of prohibited AI systems (and related development, testing, and data use) could result in fines of up to 6 percent of the provider's worldwide annual revenue or €30 million (whichever is higher).

Violations of other rules under this framework could result in fines of up to 4 percent of the provider's worldwide annual revenue or €20 million (whichever is higher). Finally, providing incorrect, incomplete, or misleading information to certifying bodies or national authorities could result in a fine of up to 2 percent of the provider's worldwide annual revenue or €10 million (whichever is higher).
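Because each tier caps the fine at the higher of a fixed amount and a share of worldwide annual revenue, the effective maximum turns on the provider's size. A short worked sketch of that arithmetic follows; the tier labels are our own shorthand, while the percentages and fixed amounts come from the proposal as summarized above.

```python
# Fine ceilings under the proposal: (share of worldwide annual revenue, fixed amount in EUR).
FINE_TIERS = {
    "prohibited_ai_practices": (0.06, 30_000_000),
    "other_violations": (0.04, 20_000_000),
    "misleading_information": (0.02, 10_000_000),
}

def max_fine(tier: str, worldwide_annual_revenue_eur: float) -> float:
    """Return the applicable ceiling: the higher of the fixed amount and the revenue share."""
    share, fixed = FINE_TIERS[tier]
    return max(share * worldwide_annual_revenue_eur, fixed)

# A provider with EUR 2 billion in revenue: 6% (EUR 120M) exceeds the EUR 30M floor.
print(max_fine("prohibited_ai_practices", 2_000_000_000))  # 120000000.0
```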

AI Systems Will Face Significant Ongoing Regulatory Oversight

In addition to the many other requirements outlined above, providers of AI systems will be required to notify regulators in EU member states about "serious incidents" and "malfunctions" arising from such systems and/or any recalls or withdrawals of such systems. The EU member state regulator will have authority to investigate the incident, collect necessary information, and report information to the European Commission, which will be used to analyze overall market compliance.

The Commission's proposal would also authorize and empower a new AI enforcement agency in Europe: the European Artificial Intelligence Board. Representatives of member countries and the Commission would sit on the AI Board, which would oversee implementation of the proposal throughout EU member countries and likely consider new standards or requirements as technology continues to develop.

Regulatory "Sandboxes" Intended to Incentivize Innovation but Raise Questions

The AI Regulations include provisions concerning the creation of a "controlled environment" for developing, testing, and validating AI systems for a time before they are put on the market or into service. Competent authorities will supervise these "sandbox" activities to ensure compliance with the regulation and other applicable laws. If AI systems in the sandbox involve the processing of personal data or otherwise involve a supervised or regulated activity related to data, the applicable data protection or other authorities will be "associated to" the sandbox operation.

While the Commission's intent is to incentivize innovation, operations within the scope of the "sandbox" are not without risk. The AI Regulations provide that any significant risks to health, safety, and fundamental rights identified during an AI system's development and testing must be immediately mitigated; failing that, development and testing will be suspended until mitigation occurs. Sandbox participants also remain liable for any harm inflicted on third parties as a result of experiments in the sandbox.

Commission Accepting Comment on Proposed AI Regulations

This proposal begins a complex process that involves consultation, review, and further collaboration between the Commission, the European Parliament, and member states. The process of reviewing and considering these proposed regulations is expected to be a long one and could play out over several years (as was the case with the GDPR). The Commission is accepting initial public comments on the proposal through June 22, 2021.