On June 14, 2023, the European Parliament approved amendments to the European Union (EU) Artificial Intelligence Act (AI Act or AIA) ("Parliament Proposal"). The AIA is the EU's primary proposed framework to regulate AI through a risk-based system that imposes duties on developers and deployers of AI systems, part of broader EU efforts to regulate AI.[1] This measure is the third proposed version of the AIA – alongside the EU Council general approach and the original European Commission proposal – and a final version of the law will now be negotiated by representatives of the three EU institutions: the European Parliament, the European Commission, and the Council of the European Union. Reports indicate that a final agreement is expected before the end of the year.

The Parliament Proposal modifies the original proposal in several significant ways, including by revising the definition of an "AI system," extending the reach of the AIA to cover foundation models, and expanding the scope of high-risk AI systems and prohibited uses.

Definition of AI System Modified in Favor of More Precise Language

The Parliament Proposal refines the definition of an AI system to track the definition used by the OECD, in an effort to align the AIA with emerging international standards. The new proposal defines an "AI system" as "a machine-based system … designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs … that influence physical or virtual environments" (Art. 3(1)). This definition focuses on machine-learning capabilities and moves away from automated decision-making as the key test. Unlike the European Commission proposal, which defines an AI system as "software … that can generate outputs …,"[2] the Parliament Proposal's language distinguishes AI systems "from simpler software systems or programming approaches" (Recital 6 to Art. 3). Similarly, the EU Council version clarifies that AI systems do not include all software.[3]

Parliament Proposal Expands Scope and Reach of the AIA

The Parliament Proposal modifies the scope of the AIA in several significant ways: by explicitly imposing new obligations on foundation models (the large language models powering generative AI tools), by expanding and refining the test for high-risk AI systems, and by expanding the list of prohibited uses.

  1. Extending AIA Duties to Foundation Models

    The Parliament Proposal reflects a noteworthy decision to target a specific form of machine learning for new regulatory oversight: foundation models that enable generative AI applications like ChatGPT. The proposed amendments would impose new obligations on the development and use of foundation models, including when such models are used to deploy generative AI tools. (Arts. 28, 63(1), Recitals 60e to 60h)

    A "foundation model" is defined as "an AI model that is trained on broad data at scale, is designed for generality of output, and can be adapted to a wide range of distinctive tasks" (Art. 3(1c)). Notably, these models are trained to accomplish a wide range of downstream tasks, including some tasks for which the models were not specifically developed or trained "which means that each foundation model can be reused in countless downstream AI or general purpose AI systems."

    The Parliament Proposal imposes new obligations on providers of foundation models, many of which overlap with requirements imposed on other AI systems. For example, developers of foundation models will be required to register in the EU database for high-risk AI systems, establish a quality management system, and maintain technical documentation for 10 years (Art. 28b(2f), (2g), (3)). In particular, the significant new duties imposed on developers of foundation models include demonstrating that risk mitigation steps have been taken, as well as drawing up extensive technical documentation and "intelligible instructions" (Art. 28b(2a), (2e)). The new duties also extend to processing "only datasets that are subject to appropriate data governance measures" and to designing and developing the model to achieve "appropriate levels of performance, predictability, interpretability, corrigibility, safety and cybersecurity," as well as energy efficiency, throughout its lifecycle (Art. 28b(2b) to (2d)).

    Additional requirements are imposed on providers of foundation models that are capable of generating complex text, images, audio, or video (i.e., generative AI tools like ChatGPT). Specifically, such providers must also comply with transparency obligations and must train and, if applicable, design and develop the foundation model in such a way as to ensure adequate safeguards against the generation of content that would violate EU law, including copyright law. Such providers must also make publicly available a sufficiently detailed summary of the use of training data that is protected under copyright law (Art. 28b(4)).


  2. Refining and Expanding the Scope of High-Risk AI Systems

    Changes to the designation of high-risk systems reflect extensive debate in Parliament regarding which uses should be considered "high risk." While the Parliament Proposal clarifies the scope of high-risk systems by adding a "significant risk" layer to the categorization, it also expands that scope by broadening the enumerated use cases in Annex III.

    Refining Criteria for High-Risk AI Systems

    Under the original European Commission proposal, an AI system is categorized as "high risk" if it falls within an enumerated critical area or use case listed in Annex III of the AIA. The Parliament Proposal modifies this categorization process: AI systems listed in Annex III are high-risk only if they also pose a "significant risk" of harm under the high-risk classification rules in Article 6(2). "Significant risk" is defined as "a risk that is significant as a result of the combination of its severity, intensity, probability of occurrence, and duration of its effects, and its the [sic] ability to affect an individual, a plurality of persons or to affect a particular group of persons" (Art. 3). AI systems determined not to pose a significant risk of harm to the safety, health, or fundamental rights of persons – or, in some cases, to the environment – would not be categorized as high-risk solely because they are listed in Annex III (Recital 32).

    Annex III List of High-Risk AI Systems Expanded

    There are several proposed additions to Annex III, which lists the critical areas and use cases that are considered high-risk AI systems under Article 6(2). These changes expand the scope of high-risk AI systems to include a broad spectrum of commercial use cases, including:

  • AI systems intended to be used to influence election outcomes or voting behavior, with the exception of AI systems whose output natural persons are not exposed to (Annex III, point 8(aa)), "such as tools used to organise, optimise and structure political campaigns from an administrative and logistic point of view" (Recital 40a);
  • AI systems used in recommender systems (i.e., machine-learning systems that use consumer data to predict, narrow, and suggest relevant content, e.g., movies to watch, text to read, products to buy) by social media platforms that have been designated as very large online platforms[4] (Annex III 8(ab));
  • AI systems used in providing internet services, among the list of essential private services and public services and benefits (Annex III 5(a)); and
  • AI systems used in relation to emergency calls and triage systems, among the list of dispatch-related services (Annex III 5(c)).

    Further, to account for the "rapid pace of technological development, as well as the potential changes in the use of AI systems," the Parliament Proposal provides that the Commission will have the power to add, modify, or remove Annex III use cases for high-risk AI systems.


  3. Expanding Scope of Prohibited Uses

The Parliament Proposal substantially expands the list of prohibited AI systems that present an unacceptable level of risk, including the following:

AI systems used for biometric categorization of individuals (e.g., facial recognition)

With certain exceptions, the Parliament Proposal prohibits "the placing on the market, putting into service or use of" biometric systems that categorize natural persons based on "sensitive or protected attributes or characteristics or based on inference of those attributes or characteristics" (Art. 5(1ba)). This includes assigning people to specific categories,[5] or inferring their characteristics, on the basis of biometric data or of biometric-based data derived from the technical processing of a person's physical, physiological, or behavioral signals, or of data that can be inferred from such data (Art. 3(35)). Excepted from this prohibition are systems enabling one-to-one verification of a natural person for the sole purpose of accessing that person's device or premises, or for cybersecurity and personal data protection,[6] as well as AI systems intended for therapeutic purposes on the basis of informed consent.

AI systems intended for indiscriminate and untargeted scraping of biometric data from the internet to create or expand facial recognition databases

The Parliament Proposal prohibits AI systems that indiscriminately scrape biometric data from the internet, social media, or CCTV footage to create facial recognition databases (Art. 5(1dc), Recital 26b to Annex III). Although a related ban on predictive policing applies to law enforcement, the scraping prohibition is not expressly limited to law enforcement or the public sector.

AI systems used to detect the emotional state of individuals

The Parliament Proposal includes prohibitions on certain commercial uses of real-time facial emotion recognition[7] in "workplace and education institutions," and it does not expressly limit such prohibitions to the public sector (Art. 5(1dc), Recital 26c to Annex III, Recital 24 to Art. 5). High-risk AI systems that are not prohibited, however, include those used in employment, management, and access to self-employment, "notably for the recruitment and selection … for making [or materially influencing] decisions … on initiation, promotion and termination," as well as for personalized task allocation based on individual behavior, personal traits, or biometric data, and for monitoring or evaluation in work-related contractual relationships (Recital 36 to Annexes II and III).

AI systems that deploy subliminal techniques impacting individual or group decisions

The Parliament Proposal prohibits "the placing on the market, putting into service or use of an AI system that deploys subliminal techniques beyond a person's consciousness or purposefully manipulative or deceptive techniques, with the objective to or the effect of materially distorting" individual or group behavior by "appreciably impairing" informed decision-making, "causing the person to take a decision they would not have taken otherwise" that causes or is likely to cause significant harm to the individual, group, or another person (Art. 5(1a)).

Exceptions[8] that are significant for commercial use include research "for legitimate purposes" that is not harmful to natural persons and "lawful commercial practices, for example in the field of advertising," that are otherwise compliant with EU law. (Recital 16 to Art. 5(1a))

AI systems that exploit vulnerabilities of an individual or specific group

The Parliament Proposal prohibits "the placing on the market, putting into service or use of an AI system that exploits any of the vulnerabilities of a person or a specific group of persons, including characteristics of such person's or group's known or predicted personality traits or social or economic situation, age, physical or mental ability with the objective or to the effect of materially distorting the behavior of that person or a person pertaining to that group in a manner that causes or is likely to cause that person or another person significant harm." (Art. 5(1b)).

By comparison, the original EU Commission proposal prohibits AI systems considered to pose an unacceptable level of risk to safety, including AI systems that deploy subliminal or purposefully manipulative techniques, exploit vulnerabilities, or are used to classify individuals based on their behavior, personal characteristics, or socio-economic status.

Other Significant Modifications to AIA

The Parliament Proposal also increases the potential penalties for violating the AIA. The highest potential penalty (for breach of the prohibited-practices rules) moves from 6% of global annual revenue (or 30 million Euros) to 7% of global annual revenue (or 40 million Euros). Penalties for violations of data security and other obligations would also increase incrementally.

This proposal would also empower individuals to report breaches of the AIA to a supervisory authority and to seek a judicial remedy.

Finally, individuals subject to a decision made by a deployer based on the output of a high-risk AI system have the right to request a "clear and meaningful" explanation of the role of the AI system in the decision-making process, the main parameters of the decision, and the data relied upon.

Next Steps

The EU Parliament's approval of its position on the AIA fires the starting gun for trilateral negotiations among the Parliament, Commission, and Council, which must agree on a final version of the AIA.

The AI team at Davis Wright Tremaine will continue to monitor the AIA negotiations and related developments, including its legal implications, and is prepared to assist our clients with AI compliance efforts.

[1] EU efforts to regulate AI include AI-specific instruments (AIA, AI Liability Directive), software regulation (Product Liability Directive), and platform regulation that also covers AI (Digital Services Act).

[2] European Commission proposal, Art. 3(1) (defining "AI system" as "software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with").

[3] EU Council general approach, Recital 1.1, Art. 3(1) ("the … text narrows down the definition [of AI system] in Article 3(1) to systems developed through machine learning approaches and logic- and knowledge-based approaches").

[4] Very large online platforms are those with more than 45 million users under the Digital Services Act, Art. 33.

[5] Prohibited biometric categorization includes assigning attributes like gender, sex, age, hair color, eye color, tattoos, ethnic or social origin, health, mental or physical ability, behavior or personality traits, language, religion, or membership of a national minority or sexual or political orientation. (Recital 7b to Art. 3)

[6] See Recital 33 to Annex III, Art. 3(33c).

[7] An emotion recognition system means an AI system used for the purpose of identifying or inferring emotions, thoughts, states of mind, or intentions of individuals or groups on the basis of their biometric or biometric-based data (Art. 3(34)). "Real-time" use of an AI system means that the system uses live or near-live footage generated by a camera or other device with similar functionality, and that the capturing of biometric data, comparison, and identification occur without significant delay. (Recital 8 to Art. 3)

[8] One exception is included in the provision itself: "subliminal techniques" deployed by "AI systems intended to be used for approved therapeutical purposes on the basis of specific informed consent." (Art. 5(1a))