The proposal for an Artificial Intelligence Liability Directive was put forward by the European Commission on 28 September 2022. The file is now subject to review and approval by the European Parliament and the Council of the EU before it can move to the interinstitutional negotiations (trilogue). This policy brief recaps the background and main elements of the Commission’s proposal, outlines the current status and next steps of the file in the Council and the Parliament, and provides an overall impact analysis.
Background to the issue
In February 2020, the Commission published its White Paper on Artificial Intelligence, which aims to promote the uptake of Artificial Intelligence (AI) and to address the risks associated with certain AI uses. In the same month, in its Report on Artificial Intelligence Liability, the Commission identified the specific challenges posed by AI to existing liability rules. The European Parliament adopted a legislative own-initiative resolution on civil liability for AI in October 2020, requesting that the Commission propose legislation.
As part of this AI strategy, the Commission unveiled a proposal for a new Artificial Intelligence Act (AI Act) in April 2021 and one for an Artificial Intelligence Liability Directive (AI Liability Directive) in September 2022.
The first proposal is the first-ever horizontal legal framework on AI, addressing the risks that AI poses. It aims to provide AI developers, deployers and users with clear requirements and obligations regarding specific uses of AI, depending on their risk level.
The second proposal is intended to complement the AI Act. On 28 September 2022, the Commission delivered on the objectives of the White Paper and the European Parliament’s request with the Proposal for an AI Liability Directive, which seeks to introduce a standardised redress process for harm caused by AI systems by complementing the fault-based liability regimes that already exist in the Member States. The Commission also aims to establish broader protection for victims of AI-related damage, to foster the AI sector by increasing guarantees and reducing legal uncertainty, and to address risks through rules focused on fundamental rights and safety. Primarily, the Directive addresses damage caused by AI systems to individuals’ health or property, and seeks to tackle discrimination resulting from AI-powered decisions, such as in employment or education.
Key elements of European Commission’s proposal
The Commission’s proposal for an Artificial Intelligence Liability Directive rests on two main mechanisms: rules on the disclosure of evidence concerning high-risk AI systems suspected of having caused damage, and a rebuttable presumption of a causal link between the defendant’s fault and the harm produced by the AI system. The defendant will be empowered to rebut this presumption by proving that its non-compliance did not cause the damage, for instance by demonstrating other, more plausible explanations for the harm.
State of Play
As the Commission only published the proposal in September 2022, the procedure is at a very early stage. The AI Liability Directive is now subject to review and approval by the Council and the Parliament, but neither of the two institutions has started discussing it yet.
In the European Parliament, the rapporteur assigned to lead the work on the Directive is Axel Voss, a member of the European People’s Party (EPP, DE) who is also shadow rapporteur on the AI Act.
The European Parliament could suggest amendments to the Commission’s proposal, especially when it comes to the liability regime. When the Parliament called on the Commission in 2020 to adopt rules on AI, it advocated a strict liability regime: developers, providers and users of high-risk autonomous AI would be held legally responsible even for unintentional harm. The Commission’s approach has been more pragmatic and weaker than this strict liability regime; it has stated that it will review whether a stricter regime is needed five years after the Directive enters into force.
In the Council of the EU, on the other hand, discussions have not started yet, and little is known about the positions of the Member States.
The legislative procedure in the Council of the EU and the European Parliament will most likely produce amendments to the original draft, and it rarely takes less than 18 months, often more. After this, the three institutions, including the Commission, will enter trilogue negotiations, presumably in January 2024, and could be close to a final agreement by mid-2024. The earliest the new rules could become applicable to private companies is therefore mid-2025.
Once enacted at the EU level, the Directive will need to be implemented nationally, with a transposition period of two years from its entry into force. In transposing it, Member States may adopt national rules more advantageous to claimants only if they are compatible with EU law. As currently drafted, the new rules will apply only to harm that occurs after the transposition period, without retroactive effect.
Grayling’s Impact Analysis
The Commission’s proposal for the AI Liability Directive will ease the burden of proof on claimants, as civil liability claims for AI-related damage will be legally assessed on the basis of evidence disclosure, causal links and rebuttable presumptions.
On the other hand, under Article 3 on the disclosure of evidence, a court might order a company to disclose elements of its high-risk AI systems even where they constitute trade secrets – for example, to establish whether an algorithmic decision did or did not cause harm to a person. Claimants will also be able to request from the defendant access to evidence concerning the presumed AI-related damage. If businesses do not voluntarily provide access, the claimant may, upon a reasoned request, obtain a national court order for the disclosure of that information. This is a significant development for European tech companies, which until now have had limited disclosure obligations, if any, in such proceedings.
If the business in question still refuses to disclose the relevant information, the national court may presume its non-compliance with duty-of-care rules. The court will thus assume causality between the output produced by an AI system (or the failure of an AI system to produce the intended output) and the wrongdoing of the defendant. This presumption of a causal link could also make it easier to bring claims for damage caused by AI systems.
Overall, industry does not seem very concerned about this initiative, since the Directive would not radically change liability conditions and companies may rebut the causal-link presumption. Such a Directive could even benefit businesses by giving them greater legal certainty about their potential liability compared to the unclear existing product liability framework. However, if the European Parliament pushes for strict liability as it did in its report, or if the Commission decides after five years that a stricter regime is necessary, companies could feel the impact of these changes more acutely.
For now, businesses should closely analyse the text and the new obligations, and await the outcome of the European legislative process, which is likely to take many months, to assess the extent of the risks the Directive might bring, especially when it comes to the disclosure of trade secrets.