EU's Proposed AI Liability Directive Sets Procedural Rules for Litigating Potential AI Harms
On September 28, 2022, the European Commission published its Proposal for an Artificial Intelligence Liability Directive. See European Commission, Proposal for a Directive on adapting non-contractual civil liability rules to artificial intelligence, COM(2022) 496 (the "Directive" or "AILD"). The Directive addresses only non-contractual, fault-based civil claims for damages arising out of incidents involving AI systems, that is, tort-based claims. The Commission found that legal uncertainty surrounding such potential liability was a barrier to the adoption of AI by European companies. The objective of the AILD is both to "ensur[e] victims of damage caused by AI obtain equivalent protection to victims of damage caused by products in general … [and] reduce[] legal uncertainty of businesses developing or using AI …" (AILD p. 2 "Explanatory Memorandum").
Purpose, Scope, and Key Provisions
The Directive would address only the challenges that civil liability claimants may face when proving damages related to incidents involving "AI systems," as defined in Article 3(1) of the pending EU AI Act. The Directive would not apply to criminal liability, liability under European Union (EU) law regulating transport, liability for injuries resulting from defective products, or liability under provisions set out in the Digital Services Act.
Claimants under the AILD are defined as persons bringing a claim for damages who are either (i) the injured persons themselves or (ii) others that "have succeeded in or have been subrogated into the injured persons' rights" (e.g., an insurance company or estate) or are acting on behalf of the injured persons (AILD p. 12 "Explanatory Memorandum").
Key provisions include rules for the preservation and disclosure of evidence in cases involving high-risk AI systems (those the EU AI Act defines as threatening the health, safety, or fundamental rights of natural persons), as well as rules governing the burden of proof and rebuttable presumptions for claims arising out of incidents involving AI, including AI systems not classified as high-risk.
Disclosure and Preservation of Information Related to Potential Claims
Potential claimants for damages caused by high-risk AI systems may request information from providers of those systems to identify potentially liable persons and relevant evidence for a claim. The AILD would authorize courts to direct "providers" (as defined in Article 3(2) of the EU AI Act) to disclose or preserve "necessary and proportionate" information about those systems. If a provider refuses a claimant's initial request for information, the claimant may ask the court to assess whether disclosure is justified and necessary to sustain a claim for damages arising from an incident involving AI. Courts must weigh the interests of the parties, including the protection of trade secrets. If a provider then refuses to comply with a disclosure order, the AILD lowers the claimant's burden of proof: the provider is rebuttably presumed not to have complied with the relevant duty of care, which in turn can support the presumption of causality discussed below.
Courts may also order a provider to preserve relevant evidence for as long as deemed necessary. Notably, the EU AI Act also mandates retention of documentation for 10 years after a high-risk AI system has been placed on the market, including information that claimants may request (e.g., data used to develop the AI system, technical documentation, logs, data related to the quality management system, and records of any corrective actions).
Shifting the Burden of Proof through Rebuttable Presumptions
The AILD would establish a rebuttable presumption of a causal link between the defendant's fault and the output produced by the AI system (or its failure to produce an output) if each of the following three conditions is met:
- The claimant has demonstrated, or the court has presumed, the fault of the defendant, with the caveat that claimants would need to show that providers of high-risk AI systems failed to comply with specific obligations under the EU AI Act (e.g., requirements relating to the training and testing of data sets, system accuracy, and robustness);
- Based on the circumstances of the case, it is reasonably likely that the fault influenced the output produced by the AI system or the AI system's failure to produce an output; and
- The claimant has demonstrated that the output produced by the AI system (or its failure to produce an output) gave rise to the damage alleged.
For AI systems that are not classified as high-risk under the EU AI Act, the presumption of causality applies if the claimant demonstrates non-compliance with rules intended to prevent the damage and the defendant is responsible for that non-compliance.
For high-risk AI systems, the EU AI Act imposes risk management measures, training data set criteria, and other accuracy, robustness, and cybersecurity requirements. Where the provider of a high-risk AI system fails to meet these requirements, the AILD provides that the presumption of a causal connection applies against the provider. For users of high-risk AI systems, the presumption applies if the user did not "interfer[e] materially with the conditions of operations" (AILD para. 29), for example, by failing to comply with the instructions provided or by exposing the system to data unrelated to the system's intended purpose. The presumption of causality could also apply to AI systems that are not high-risk "because there could be excessive difficulties of proof for the claimant" (AILD para. 28).
Next Steps
For the AILD to become law, it must be considered by the European Parliament and the Council of the European Union. If adopted, the AILD would apply to damages occurring two or more years after it enters into force, allowing time for "adaptations to national civil liability and procedural rules to foster the rolling-out of AI-enabled products and services under beneficial internal market conditions, societal acceptance and consumer trust in AI technology and the justice system" (AILD para. 32). Within five years of the AILD becoming law, the European Commission must establish a monitoring program for incidents involving AI systems to determine whether additional measures are needed.
Member States would have two years after the AILD enters into force to adopt the laws, regulations, and administrative provisions necessary to comply with it. Member States may adopt national rules that are more favorable to claimants, so long as those rules are compatible with the AILD.
The AILD is meant to complement the separate but related proposed Directive revising the current Product Liability Directive, which imposes no-fault (strict) liability for defective products, as well as the platform liability provisions of the proposed Digital Services Act. Collectively, these proposals reflect important changes to the EU civil liability system. Because the AILD is so closely aligned with the EU AI Act, particularly as to the classification of high-risk AI systems, any amendments to the EU AI Act would likely need to be reflected in the AILD (as well as the proposed Product Liability Directive).
Our AI Team at Davis Wright Tremaine will continue to monitor these proposals and related developments, including legal implications for our clients developing and using AI and machine learning systems.