On February 2, 2024, representatives from the European Union (EU) member states formally approved the final text of the EU Artificial Intelligence Act (the Act), which will be subject to final legislative approval in the coming months. Although the EU has not yet formally adopted this latest version, it reflects the political agreement reached in early December 2023 between the European Commission, EU Council, and European Parliament, as well as additional technical review and revisions by member countries in recent weeks. It is widely expected that this text will be adopted with few, if any, further revisions.

The EU is thus poised to adopt comprehensive legislation regulating artificial intelligence systems developed and deployed in the EU, as well as AI systems developed outside of the EU that are deployed in the EU or affect people located in the EU.

Key Takeaways

  • The Act will impose new rules on AI "foundation models," subjecting providers of such models to duties distinct from the obligations applicable to AI systems generally.
  • AI systems generally will be subject to risk-based regulation, with significant obligations imposed on AI systems deemed to present "high-risk" applications or use cases.
  • Certain AI systems' applications or use cases present "unacceptable risk" and will be prohibited with limited exceptions.
  • The Act will not apply to the use of AI systems for exclusively military, defense, or national security purposes by any public or private entity.
  • Enforcement of the Act will likely be complex and disaggregated across various authorities in Europe.
  • The Act will have broad extraterritorial reach and will apply to providers, deployers, importers, and distributors, with the potential for significant fines for non-compliance.

Although the revised text maintains the risk-based approach to regulating AI systems from earlier versions, there are some notable new elements in the final draft, all discussed below.

New Duties for Providers of General Purpose AI Systems

As widely anticipated, the Act adopts new provisions regulating the development and deployment of foundation models, which are large AI/ML models used to develop generative AI applications and other robust systems. The Act refers to a foundation model as a general purpose AI (GPAI) model, which means:

[A]n AI model, including when trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable to competently perform a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications.

Notably, the Act specifically excludes from this definition AI models used for research, development, or prototyping activities before release on the market.

GPAIs Regulated by Risk Categories

The Act distinguishes between GPAI models that present higher risks and those that do not, imposing additional duties on higher-risk models, and it bans the use of AI for certain listed practices that present unacceptable risks (see Prohibited AI Practices below).

Duties of Providers of GPAIs Deemed to Present "Systemic Risks"

The Act classifies GPAI systems that present "systemic risks" as "high risk" and subjects these systems to special duties. The systemic risk category reflects the concern that very capable and widely used GPAI models could cause serious accidents, be misused for cyberattacks, or harm individuals through bias, discrimination, and disinformation. A GPAI is deemed to present systemic risks if: (1) "it has high impact capabilities evaluated on the basis of appropriate technical tools and methodologies, including indicators and benchmarks," or (2) it is so designated by the European Commission (Commission) or another authority. GPAI models trained using cumulative computing power exceeding 10^25 floating point operations (FLOPs) are presumed to have high-impact capabilities and to present systemic risks.

Providers meeting this computing power threshold must notify the Commission within two weeks of meeting this threshold, or sooner if it becomes known such a threshold will be met. These providers can, however, choose to present arguments to the Commission explaining why their GPAI does not present a systemic risk. If the Commission does designate a GPAI model as presenting systemic risk, it may nevertheless, upon request from the provider, choose to reassess that designation. The Act also requires the Commission to ensure that a list of GPAIs with systemic risks is published and updated, with the proviso that the published material not infringe IP rights or reveal confidential business information or trade secrets.
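To put the computing-power presumption in rough perspective, the sketch below estimates training compute using the common "roughly 6 floating point operations per parameter per training token" rule of thumb and compares the result with the 10^25 FLOPs threshold. The heuristic and the model figures are illustrative assumptions, not a methodology or benchmark drawn from the Act.

```python
# Illustrative sketch only: estimating training compute and comparing it to the
# Act's 10^25 FLOPs presumption for systemic-risk GPAI models. The 6 * N * D
# approximation is a common rule of thumb for dense transformer training, not a
# method specified by the Act; all figures below are hypothetical.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # the Act's presumption threshold

def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Rough training-compute estimate: ~6 FLOPs per parameter per training token."""
    return 6.0 * n_parameters * n_training_tokens

def presumed_systemic_risk(n_parameters: float, n_training_tokens: float) -> bool:
    """True if estimated training compute meets or exceeds the 10^25 FLOPs presumption."""
    return estimated_training_flops(n_parameters, n_training_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS

if __name__ == "__main__":
    # Hypothetical model: 70 billion parameters trained on 15 trillion tokens.
    flops = estimated_training_flops(70e9, 15e12)
    print(f"Estimated training compute: {flops:.2e} FLOPs")                  # ~6.3e24
    print("Presumed systemic risk:", presumed_systemic_risk(70e9, 15e12))    # False (below 1e25)
```

A provider's actual calculation would, of course, follow whatever technical tools and benchmarks the Commission and the AI Office ultimately specify.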

Providers of GPAI models that present systemic risks must:

  • Perform standardized model evaluations;
  • Assess and mitigate potential systemic risks;
  • Track and report serious incidents; and
  • Ensure adequate cybersecurity protections.

Providers can demonstrate compliance with these obligations by reference to "codes of practice" (as defined in the Act). These codes are to be developed in an open process with industry and the AI Board, after which the AI Office will evaluate and approve them.

Duties of All Providers of GPAIs

The Act requires all providers of GPAI systems to take steps to enhance transparency, accountability, and compliance with the EU's copyright laws. These providers must:

  • Maintain "sufficiently detailed" public summaries of the content of data used to train the models (as defined by a template that the AI Office will publish);
  • Adopt a policy to adhere to EU copyright laws;
  • Prepare and maintain technical documentation of the model, including:
    • training and testing processes, and
    • the result of model evaluations, as prescribed by the Act; and
  • Provide certain model information to AI system providers who use the models.

Providers of GPAI models accessible to the public under a free and open source license, as defined specifically in the Act, are generally exempt from these requirements if their models do not present a systemic risk.

Risk-based Framework for Regulating All AI Systems

The Act maintains the original risk-based approach to regulating AI systems but slightly revises the framework. The Act now regulates a specific class of AI systems: large "foundation" models producing generative AI (see New Duties for Providers of General Purpose AI Systems above). Further, it defines "AI systems" in a manner that reflects the definition used by the OECD, requires increased transparency regarding the use of high-risk AI systems, and includes provisions directly addressing AI systems that produce "deep fakes."

However, much of the Act remains unchanged, in substance, from prior versions. For example, (1) the Act continues to classify AI systems based on an assessment of the scale and scope of the risks they pose; (2) some AI applications and use cases are deemed so dangerous that they are banned; (3) AI systems that are less dangerous but that still may pose significant, systemic risks are deemed "high-risk" and are subject to a panoply of regulatory obligations; and (4) AI systems that do not pose systemic risks are still regulated, but more lightly. Even AI systems that do not present "systemic risks" are considered "high-risk" if they potentially pose significant risks to the health, safety, or fundamental rights of persons.

In practice, there are two main categories of high-risk AI systems: (1) AI systems intended to be used as safety components of products already subject to pre-market (ex ante) conformity assessment; and (2) stand-alone AI systems used in applications or sectors that present heightened risks. These uses include:

  • Use of biometric identification other than for verifying someone's identity (such as to gain access to a restricted area or a bank account, which is distinct from the prohibition on use by law enforcement in publicly accessible spaces);
  • Biometric categorization or emotion recognition;
  • Critical infrastructure management;
  • Education and training;
  • Human resources and access to employment;
  • Law enforcement, administration of justice, and democratic processes;
  • Migration and border control management;
  • Systems for determining access to public benefits; and
  • Other AI systems specifically enumerated in Annex III of the Act.

An AI system will always be considered high-risk if it performs profiling of natural persons. Profiling is defined as any form of automated processing of personal data, such as using personal data to evaluate aspects of a person's performance at work (not including measuring an employee's pain or fatigue), economic situation, health, personal preferences, interests, reliability, behavior, location, or movements. Profiling, however, does not include the mere detection of readily apparent expressions, gestures, or movements, unless they are used for identifying or inferring emotions.

One area of dispute resolved in this final draft relates to the use of post-remote biometric identification by law enforcement authorities. An AI system can be used for this purpose, but such use is subject to judicial or administrative authorization (prior or ex post within 48 hours), with usage linked to a criminal offense or proceeding, a present or foreseeable criminal threat, or a search for a missing person.

Systems falling into any of the above-referenced categories are subject to a range of new obligations, including pre-market entry conformity assessments, risk management requirements, data use standards, and detailed recordkeeping and reporting obligations, as summarized here:

High-Risk AI System Duties and Required Actions

  • Risk management system: Establish, implement, document, and maintain a "risk management system," pursuant to mandated testing to identify the most appropriate risk management measures. Risk management systems must include the specific system components designed to identify and mitigate perceived risks.
  • Data governance: For models trained, validated, or tested using data, the data sets used for these purposes must meet quality criteria specified in the Act.
  • Technical documentation: Before an AI system is placed on the market or put into service, develop technical documentation that "demonstrate[s] that the high-risk AI system complies with the requirements" for such systems, and provide authorities "with all the necessary information to assess the compliance of the AI system with such requirements."
  • Recordkeeping: Design and develop the AI system to automatically record events (that is, create "event logs") while the system is in use that enable a certain level of "traceability" and allow risks to be monitored. (A rough logging sketch follows this list.)
  • Transparency: Design and develop AI systems to ensure operation is "sufficiently transparent" so deployers can interpret a system's output and use it appropriately. High-risk AI systems must also be accompanied by instructions for use in an appropriate digital (or similar) format.
  • Human oversight: The AI system must be designed and developed, including with "appropriate human-machine interface tools," to enable human oversight while in use. Before the AI system is released on the market or put into service, such measures must be either: (a) built into the high-risk AI system; or (b) identified as appropriate for implementation by the user.
  • Accuracy, robustness, and cybersecurity: Design and develop AI systems so that they achieve an "appropriate level" of accuracy, robustness, and cybersecurity, and perform consistently in those respects throughout the AI system's lifecycle.
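The Act does not prescribe a particular log format for the recordkeeping duty above. As a rough sketch only, an automatically maintained event log supporting traceability might resemble the following; the field names and JSON-lines layout are illustrative assumptions rather than requirements of the Act.

```python
# Rough sketch of automatic event logging for traceability of a high-risk AI
# system. The field names and JSON-lines layout are illustrative assumptions;
# the Act requires automatic event logs but does not mandate this format.
import json
import uuid
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

@dataclass
class InferenceEvent:
    model_version: str
    input_reference: str          # pointer to the input data, not the data itself
    output_summary: str
    operator_id: str              # human or system that invoked the model
    event_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def record_event(event: InferenceEvent, log_path: str = "ai_event_log.jsonl") -> None:
    """Append one event per line so the log can be replayed for audits."""
    with open(log_path, "a", encoding="utf-8") as log_file:
        log_file.write(json.dumps(asdict(event)) + "\n")

# Example usage with hypothetical values:
record_event(InferenceEvent(
    model_version="credit-scoring-v1.4",
    input_reference="application/2024-02-02/12345",
    output_summary="score=640; decision=manual review",
    operator_id="loan-officer-17",
))
```

An append-only, replayable log of this kind is one straightforward way to support the "traceability" the Act describes, but providers will need to align any actual design with the harmonized standards and guidance still to come.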

Notably, the revised version of the Act includes a significant potential exception in the definition of high-risk AI systems: When such systems do not pose a significant risk of harm to the health, safety, or fundamental rights of natural persons, including by not materially influencing the outcome of decision-making, they will not be deemed to be high-risk. This exception will apply if the AI system is intended to:

  • perform only a narrow procedural task;
  • improve the result of a previously completed human activity;
  • detect decision-making patterns or deviations from prior decision-making patterns, and is not meant to replace or influence the previously completed human assessment without proper human review; or
  • perform a preparatory task to an assessment relevant for the purposes of the use cases listed in Annex III of the Act (and properly documented).

Transparency Obligations for High-Risk AI Systems

The final version of the Act appears to endorse a transparency framework first articulated by the 2019 High-Level Expert Group ("HLEG"), which characterized transparency as "closely linked with the principle of explicability and [as] encompass[ing] transparency of elements relevant to an AI system: the data, the system and the business models," and as including disclosures: "the AI system's capabilities and limitations should be communicated to…end-users in a manner appropriate to the use case at hand." The Act also references the HLEG framework in the Recitals, stating that "transparency means that AI systems are developed and used in a way that allows appropriate traceability and explainability, while making humans aware that they communicate or interact with an AI system, as well as duly informing deployers of the capabilities and limitations of that AI system and affected persons about their rights."

Disclosures by Providers to Deployers

The Act specifically addresses providers' transparency obligations for entities that deploy high-risk AI systems. The systems "shall be designed and developed in such a way to ensure that their operation is sufficiently transparent to enable deployers to interpret the system's output and use it appropriately." This is to be accomplished by the provider supplying "instructions for use" of the system and other information:

  • Required content of instructions. The instructions for use must include a variety of information ranging from the provider's contact details to information that will permit deployers "to interpret the system's output and use it appropriately."
  • Human oversight measures. High-risk AI systems must be designed and built so that human beings can oversee their operation and output. The provider of a high-risk AI system must provide information about these human oversight measures to those who deploy the systems.
  • Computational and hardware resources required. The provider of the high-risk AI system must disclose the system's expected lifetime and associated maintenance requirements, as well as the computational and hardware resources needed to "ensure the proper functioning of that AI system."

Deepfakes and Related Disclosure Obligations

Title IV of the Act consists of a single article requiring certain disclosures in connection with AI-generated output. These requirements apply to "AI systems intended to directly interact with natural persons"; such systems must be "designed and developed in such a way that … natural persons are informed that they are interacting with an AI system." There are two exceptions: (1) when it would be obvious to any reasonable user that they are interacting with an AI system; and (2) when the system is being used for law enforcement purposes (subject to appropriate safeguards). The key requirements are:

  • Synthetic content must be identified as such. Systems that generate "synthetic audio, image, video or text content, shall ensure the outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated." (A rough illustration of machine-readable marking follows this list.)
  • Emotion-recognition and biometric categorization systems. Deployers of such systems "shall inform … the natural persons exposed" to the systems of the fact that the system is in operation and shall use the system in accordance with the requirements of the GDPR and other applicable data privacy obligations.
  • Specific rule for deepfakes. If the system generates deepfakes (image, audio, or video), the deployer of the system "shall disclose that the content has been artificially generated or manipulated."
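The Act likewise does not mandate any particular marking technology for synthetic content. As a minimal sketch of what "marked in a machine-readable format" could look like in practice, the example below embeds an illustrative provenance tag in PNG metadata using the Pillow library; the tag names are hypothetical, and real deployments would likely rely on emerging provenance standards and more robust watermarking.

```python
# Minimal sketch, not a compliance mechanism: attaching an illustrative
# machine-readable "AI-generated" tag to a PNG image's text chunks with Pillow.
# The tag keys ("ai_generated", "generator") are hypothetical, not mandated.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def mark_as_ai_generated(src_path: str, dst_path: str, generator: str) -> None:
    """Copy a PNG image and attach an illustrative machine-readable provenance tag."""
    image = Image.open(src_path)
    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")   # hypothetical key
    metadata.add_text("generator", generator)   # e.g., name of the generating system
    image.save(dst_path, pnginfo=metadata)      # dst_path should end in .png

def is_marked_ai_generated(path: str) -> bool:
    """Return True if the illustrative tag is present in the PNG text chunks."""
    with Image.open(path) as image:
        return image.text.get("ai_generated") == "true"

# Hypothetical usage:
# mark_as_ai_generated("output.png", "output_marked.png", generator="example-image-model")
# print(is_marked_ai_generated("output_marked.png"))
```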

These obligations are sufficiently important that the Act specifically identifies them as an appropriate subject for the development by the European Commission of "guidelines on the practical implementation of this Regulation."

Explainability

A recurring concern about the use of advanced AI systems is that it may not be easy for the system or its users to explain, after the fact, why the system took or recommended any particular action or, more generally, how and why it produced the output it produced. The Act addresses this concern in at least two ways. First, the instructions to deployers of high-risk AI systems noted above should, "where applicable[,] … provide information that is relevant to explain [the system's] output." Second, high-risk AI systems must have robust logging capabilities, in part to ensure an appropriate level of "traceability of the AI system's functioning."

Limited Risk

The Act also includes transparency obligations for AI systems presenting only limited risk, but they are lighter than the obligations on high-risk AI systems or GPAIs (e.g., an obligation to disclose that content was AI-generated so users can make informed decisions about further use).

Disclosures to the Government

Finally, while not necessarily considered to be a "transparency" obligation, it should be noted that the Act imposes many obligations on providers and deployers of high-risk AI systems to maintain a range of records and to disclose those records to the government upon request. In addition, providers of high-risk AI systems must provide information to be included in a database of high-risk systems.

Prohibited AI Practices

The Act bans outright certain AI practices deemed to pose an "unacceptable risk," listed in Article 5 of the Act and applicable to both law enforcement and private entities using AI systems in areas such as biometrics and critical infrastructure. Law enforcement use of AI will be subject to a range of safeguards, including monitoring and oversight measures and limited reporting obligations at the EU level, for example where law enforcement uses real-time biometric identification in publicly accessible spaces.

The Act prohibits the "placing on the market or putting into service or use of an AI system" that:

  • Deploys subliminal techniques beyond a person's consciousness or uses purposefully manipulative or deceptive techniques;
  • Exploits a person's vulnerabilities due to age, disability, or a specific social or economic situation;
  • Categorizes persons based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation;
  • Uses "social scoring" metrics to evaluate social behavior or classify persons or groups leading to detrimental or unfavorable treatment in unrelated social contexts or in unjustified or disproportionate ways;
  • Makes risk assessments of natural persons to assess or predict their risk of committing a crime ("predictive policing") based solely on profiling or assessing personality traits and characteristics;
  • Creates or expands facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage;
  • Infers emotions in workplace and educational environments (except for medical or safety reasons, including therapeutic uses and physical states such as pain or fatigue); or
  • Performs real-time remote biometric identification in publicly accessible spaces, except for certain law enforcement purposes, such as to prevent trafficking and sexual exploitation, prevent terrorist attacks, locate abducted victims, or pursue activities related to a list of sixteen serious crimes such as terrorism, murder, kidnapping, rape, armed robbery, or sabotage (all listed in Annex IIa to the Act).

Note: The final text does not prohibit law enforcement use of AI systems that perform post-remote biometric identification, but deems those systems "high-risk," and requires documentation and annual reports to the relevant market surveillance and national data protection authorities.

Enforcement Structure

The Act presents a tiered approach to enforcement that centralizes AI oversight in new EU-level regulatory bodies but vests front-line responsibilities in national regulators that each member state will designate. The Act's centralized regulatory structure includes the creation of a coordinating AI Office to oversee the most advanced GPAI models, issue standards and testing practices, and enforce the Act's rules across the EU. Within the new enforcement structure, the Act also provides for enforcement mechanisms to achieve ongoing market surveillance, and to assess large financial penalties for non-compliance. Notably, the Act does not provide for individual civil redress or for individual damages, although deployers "should determine measures to be taken in case of the materialization of [identified high risks], including … complaint handling and redress procedures, as they could be instrumental in mitigating risks to fundamental rights in concrete use-cases."

EU-Level Enforcement

The Act creates several new EU regulatory bodies with specific roles in enforcing the Act.

  • AI Office: The AI Office will oversee all provisions regarding GPAI models, including the most advanced models. Staffed with officials who are members of the European Commission, the AI Office will also have strong links to the scientific community and to the Scientific Panel (see below), whose work will support the AI Office.
  • AI Board: The AI Board will advise on implementation of the Act, coordinate between national regulators, and issue recommendations and opinions in a manner similar to the function of the European Data Protection Board (EDPB) in privacy matters under the GDPR. Like the EDPB, the AI Board will be composed of representatives of the national regulators and the European Commission.
  • Scientific Panel: The Scientific Panel (introduced into the Act at the end of negotiations) will be composed of experts in the field of AI. Its role will be to advise and support the AI Office, particularly with respect to assessments of the systemic risk of GPAI models.
  • Advisory Forum: The Advisory Forum will be composed of "a balanced selection of stakeholders, including industry, start-ups, SMEs, civil society and academia." Like the Scientific Panel's role in advising the AI Office, the Advisory Forum will advise and provide technical expertise to the AI Board and the Commission.

National-Level Enforcement

At the national level, the Act requires each member state to establish or designate two types of regulators, "notifying authorities" and "market surveillance authorities," collectively referred to as "national competent authorities."

  • Notifying authorities (conformity assessments): The Act defines "notifying authority" as "the national authority responsible for setting up and carrying out the necessary procedures for the assessment, designation and notification of conformity assessment bodies and for their monitoring."
  • Market surveillance authorities: The Act defines "market surveillance authority" as the "national authority carrying out the activities and taking the measures pursuant to Regulation (EU) 2019/1020." The referenced regulation lays out the EU's requirements for product safety and consumer protection.

It remains to be seen which bodies in which member states will be designated to implement the Act within their borders. To date, only Spain has established a national agency with specific authority over AI systems.

Enforcement Mechanisms

The revised version of the Act provides for enforcement through market surveillance and financial penalties but does not specifically provide for individual civil redress. It does, however, set out potentially significant fines for violating its terms (a simple worked illustration follows the list below):

  • Placing a prohibited system on the market. The potential fine for violating the Act's Article 5 prohibitions is the greater of 35 million euros or 7% of worldwide annual turnover (this percentage was increased from 6.5% at the very end of negotiations);
  • Violation of GPAI obligations or non-compliance with enforcement measures (such as requests for information). The potential fine for most breaches of the obligations imposed on providers, importers, distributors, and deployers of high-risk AI systems, GPAI, and foundation models is the greater of 15 million euros or 3% of worldwide annual turnover;
  • Providing incorrect, incomplete, or misleading information to regulators. The maximum fine for this violation is the greater of 7.5 million euros or 1% of worldwide annual turnover.
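As a simple worked illustration of how these ceilings scale, the sketch below computes the maximum exposure for a company with a hypothetical worldwide annual turnover of EUR 2 billion, assuming the higher of the fixed amount and the turnover percentage applies in each tier.

```python
# Simple illustration of the Act's fine ceilings: the greater of a fixed amount
# or a percentage of worldwide annual turnover. The turnover figure is hypothetical.

def max_fine(turnover_eur: float, fixed_cap_eur: float, turnover_share: float) -> float:
    """Return the applicable ceiling: whichever of the two caps is higher."""
    return max(fixed_cap_eur, turnover_share * turnover_eur)

TIERS = {
    "prohibited_practices": (35_000_000, 0.07),              # Article 5 violations
    "gpai_and_high_risk_obligations": (15_000_000, 0.03),
    "incorrect_information_to_regulators": (7_500_000, 0.01),
}

annual_turnover = 2_000_000_000  # hypothetical: EUR 2 billion worldwide turnover
for tier, (fixed_cap, share) in TIERS.items():
    print(f"{tier}: up to EUR {max_fine(annual_turnover, fixed_cap, share):,.0f}")
# prohibited_practices: up to EUR 140,000,000
# gpai_and_high_risk_obligations: up to EUR 60,000,000
# incorrect_information_to_regulators: up to EUR 20,000,000
```

For large companies, the turnover-based cap will typically dominate; for smaller ones, the fixed amounts set the ceiling.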

Finally, application of the Act's enforcement mechanisms depends on how the Act characterizes the actor subject to enforcement. As discussed in more detail below, the Act distinguishes between actors in the AI value chain and, thus, the obligations with which each actor must comply.

Other Issues

The Act is quite lengthy and contains a range of obligations and regulatory activities beyond those already noted. Some of the more noteworthy additional provisions are set out below:

Application to Actors in the AI Value Chain: Providers and Deployers

The Act recognizes that there are various actors involved throughout different stages of the AI value chain, and clarifies their respective duties and obligations, including where one actor takes on various roles—for example, as both a distributor and importer of an AI system. As a result, the Act requires more specific measures for GPAI providers, such as risk assessment and mitigation in design, testing, data governance, implementation, and compliance with cybersecurity, safety, and other performance standards. The degree of control over the AI model also affects the Act's distinction between deployers and providers. If deployers make substantial modifications to an AI model, they might assume the responsibilities of providers. It remains to be seen what the regulators will consider to be a "substantial" modification, but the Act suggests that "changes occurring to the algorithm and the performance of AI systems which continue to 'learn' after being placed on the market or put into service … should not constitute a substantial modification."

Specific requirements and obligations for various models, including high-risk and generative AI models, are intended to provide downstream providers with the information needed to comply with the Act. Parties that make their AI products commercially available through interfaces such as Application Programming Interfaces (APIs) or AI workforce platforms, that distribute their AI products under free and open source licenses (generally exempt from the Act), or that otherwise take part in the development, sale, or commercial supply of software, pre-trained models, or network services (among other methods) should make their data available and cooperate with providers in sharing training and expertise. Parties should also explicitly disclose the level of control exercised by those that supply a provider with components, tools, or data that the provider later incorporates into an AI system. Providers of free and open source AI components are encouraged to document implementation practices, including model and data cards, but they are exempt from the Act's AI value chain requirements.

Encouraging Development of Codes of Conduct

The Commission, the new EU AI Office, and member states will encourage the development of voluntary codes of conduct that are based on clear objectives and performance indicators and that involve appropriate stakeholders. These codes would address matters such as the technical robustness and safety of AI systems, privacy and data governance, transparency, and human oversight, and should be developed with the interests and needs of small-scale AI providers and start-ups in mind. Providers of AI systems that are not considered high-risk should develop codes of conduct that assess, mitigate, and prevent negative consequences of their systems while encouraging the benefits of AI, in alignment with broader EU principles such as societal and environmental sustainability and individual rights. The Commission will evaluate the effectiveness and impact of codes of conduct within one year after the regulation takes effect and every two years thereafter.

Extraterritorial Reach of the Act

The Act will apply to AI system deployers that are located or have their principal place of business within the European Union. The Act also extends to AI systems deployed outside of the EU if the provider, importer, distributor, or authorized representative of that system is located or operates within the EU. In addition, all existing EU laws on personal data protection, privacy, confidentiality of communications, consumer protection, and safety continue to apply alongside the rights and obligations set out in the Act. Because the Act could apply across many industries that may use some level of AI in their products or services, including autonomous vehicles and healthcare, its provisions must be considered together with all other applicable EU harmonization laws, including sector-specific regulations. Lastly, nothing in the Act precludes member states from enacting additional laws to protect workers' rights with respect to employer use of AI systems.

Regulatory Sandboxes and Real-World Testing

The new text delineates how AI "regulatory sandboxes" can be established to test AI models prior to market deployment. "Regulatory sandbox" is defined in the Act as "a controlled environment established by a public authority that facilitates the safe development, testing and validation of innovative AI systems for a limited time before their placement on the market or putting into service pursuant to a specific plan under regulatory supervision." The Act requires member states to establish at least one regulatory sandbox at the national level, with regional, local, or EU-level sandboxes as optional. Member states can collaborate to develop joint regulatory sandboxes, allowing authorities to provide guidance to AI system developers on compliance requirements with EU regulations and additional member-state laws on AI systems. Regulatory sandboxes will also permit authorities to supervise and identify risks and mitigation measures and their effectiveness.

The text also contemplates testing high-risk AI systems in real-world conditions, not limited to regulatory sandboxes. High-risk system providers would first have to fulfill certain requirements before exiting a sandbox in order to be considered compliant with the conformity assessment procedures established under the Act or with market surveillance checks. Upon exit, real-world testing will also be subject to various regulatory safeguards, including approval by the market surveillance authority before testing in real-world conditions begins; authority for market surveillance authorities to inspect testing conditions, limit test duration, and request information about test conditions; and specific safeguards tailored to testing for law enforcement, migration, asylum, and border control management purposes.

Timing and Effective Dates

The law's new obligations will be phased in over time following a European Parliament plenary vote, provisionally scheduled for April, and subsequent publication. Once the Act takes full effect, front-line enforcement will largely lie with EU member states. That said, several EU-level bodies will also be deeply involved in AI regulation: the AI Office, the AI Board, the Scientific Panel, and the Advisory Forum.

Most provisions will apply 24 months after the regulation enters into force, with exceptions. For example, the provisions governing foundation and GPAI models will take effect after 12 months, except that models already on the market will have 24 months after enactment to come into compliance. Another notable exception is that the provisions on prohibited practices take effect six months after the Act is formally enacted.

+++

DWT will publish additional advisories as the near-final draft moves towards enactment and will highlight any significant changes that may arise.