02/11/20

European Parliament Proposal for a future-oriented civil liability framework

A new step on the path of AI-related regulation has recently been initiated. On 20 October 2020, the European Parliament ("EP") adopted three reports outlining how the EU can best regulate AI while, at the same time, boosting innovation, ethical standards and trust in technology.

a) Ethics framework for AI. The EP asks the European Commission to present a new legal framework setting out the ethical principles and legal obligations to be complied with when developing, deploying and using AI, robotics and related technologies in the EU, including software, algorithms and data.

b) Intellectual property rights. The second recommendation deals with intellectual property rights for the development of artificial intelligence technologies. Interestingly, the proposal specifies that AI should not have legal personality.

c) Civil liability regime for AI. The EP's report calls for a clear and coherent EU civil liability regime for AI in order to provide legal certainty for both consumers and businesses and, ultimately, to boost the uptake of AI.

In this article we will dive into the civil liability regime for AI as envisaged by the EP.

INTRODUCTION

In order to address the potential liability issues associated with the use of AI, the EP suggests a twofold approach: on the one hand, a revision of the current EU Directive 85/374/EEC on product liability ("PLD") and, on the other hand, the adoption of a new regulation setting out the liability rules to apply to AI operators.

REVISION OF THE PRODUCT LIABILITY DIRECTIVE

In line with the White Paper's recommendation, the EP states that there is no need for a complete overhaul of the PLD, which has proven to be an effective means of obtaining compensation for damage caused by a defective product. Instead, the EP believes that a targeted revision of the PLD should be sufficient to address civil liability claims of a party who suffers harm or damage.

More particularly, the EP proposes to amend the PLD by clarifying that the definition of "products" includes digital content and digital services, and by adapting existing concepts such as "damage", "defect" and "producer". For instance, the concept of "producer" should incorporate manufacturers, developers, programmers, service providers and back-end operators.

The EP also asks the Commission to assess whether the PLD should be transformed into a Regulation (unlike a Directive, a Regulation is a binding legislative act that must be applied in its entirety across the EU).

ADOPTION OF A FUTURE REGULATORY FRAMEWORK

a) Scope of application and definitions

The EP proposes that the new regulation shall set out rules for the civil liability claims of natural and legal persons against operators of AI-systems. The new regulation should apply within the territory of the European Union where a physical or virtual activity, device or process driven by an AI-system has caused harm or damage to life, health, physical integrity or property, or has caused significant immaterial harm resulting in a verifiable economic loss.

It is suggested that under this new regulation, the concept of "operator" includes both the front-end and the back-end operator. The "front-end" operator should be defined as the natural or legal person who exercises a degree of control over a risk connected with the operation and functioning of the AI-system and benefits from its operation. The "back-end" operator should be defined as the person who, on a continuous basis, defines the features of the technology, provides data and essential back-end support services, and therefore also exercises a degree of control over the risk connected with the operation and functioning of the AI-system.

b) Different liability rules for different risks

The EP proposes to create a twofold liability regime distinguishing between "high-risk" and "low-risk" AI-systems. An AI-system presents a high risk when its autonomous operation involves a significant potential to cause harm or damage to one or more persons in a manner that is random and goes beyond what can reasonably be expected.

The idea is to list all high-risk AI-systems and all critical sectors where they are used in an annex to the new regulation. Given the rapid technological developments, the EP proposes that the Commission should review such annex at least every six months.

The common principle for operators of both high- and low-risk AI-systems is that they cannot escape liability on the ground that the harm was caused by an autonomous activity, device or process driven by the AI system.

The main difference between operators of high-risk AI-systems and operators of low-risk AI-systems would be that the latter can escape liability if they can prove that the harm or damage was caused without their fault. More specifically, they should prove that: (a) the AI-system was activated without their knowledge and all reasonable and necessary measures to avoid such activation were taken, or (b) they acted diligently when selecting the suitable AI-system for the right tasks and skills, putting the AI-system into operation, monitoring its activities and maintaining its operational reliability by regularly installing all available updates.

In summary, operators of high-risk AI-systems shall be subject to a strict liability regime and shall not be able to exonerate themselves save for force majeure.

c) Joint and several liability

If multiple operators are involved, they should be jointly and severally liable, it being understood that they will have the right to recourse proportionately against each other provided the affected person was compensated in full.

The proportions of liability should be determined by the respective degrees of control the operators had over the risk connected with the operation and functioning of the AI-system.

d) Insurance and AI aspects

Operators of a high-risk AI-system must hold appropriate liability insurance with adequate cover, taking into account the amounts mentioned in the proposed regulation. The liability of the operator of a high-risk AI-system would be capped at:

  • EUR 2 million in the event of death or harm to a person's health or physical integrity;
  • EUR 1 million for damage to property or significant immaterial harm that results in a verifiable economic loss.

Limitation periods will depend on the type of damage, without prejudice to national law governing the suspension or interruption of limitation periods.

NEXT STEPS

With these three legislative recommendations, the EP has opened the discussion on AI regulation and invited the Commission to submit a legislative proposal in line with its recommendations. Following the closure of the public consultation on the White Paper on AI, the European Commission's legislative proposal is expected to be issued during the first quarter of 2021.
