24/08/20

Artificial Intelligence and product liability – on a path to a new regulation?

AI brings a myriad of opportunities to solve complex problems and improve productivity across all sectors of the economy. However, the successful implementation of AI comes with risks and challenges. One issue which needs to be carefully considered by any organisation seeking to adopt AI is product liability.

Although sector-specific regulatory requirements may apply whenever AI is deployed, there is no specific legislation allocating liability where the use of AI has caused damage.

In principle, the rules laid down in the Belgian Civil Code in relation to contractual and extra-contractual damages apply to damage caused by AI-systems. However, in some cases these general legal principles may not be sufficient or appropriate to address the (extra-)contractual liability at stake.

Liability for AI could also be established on the basis of the product liability rules. In Belgium, the Product Liability Act of 1991 is of general application and imposes strict liability on producers (i.e. without the need to prove a fault of the producer) for damage caused by "defective products", i.e. products that do not provide the safety that a person is entitled to reasonably expect, taking into account all the circumstances. The Product Liability Act is the Belgian implementation of EU Directive 85/374/EEC on product liability (the PLD), which sets out the EU-wide no-fault liability regime for defective products. The PLD came into force in 1985 and was drafted for the product liability landscape of 35 years ago.

In addition, a claim for damage caused by products can also be based on Articles 1641 et seq. of the Belgian Civil Code (warranty for hidden defects).

Given the uptake of new digital technologies over the past decade, legal experts have started questioning whether the existing EU and national legal frameworks on product liability are still fit for purpose in the new digital age. Most of them believe it necessary to introduce new legislation or make specific targeted amendments to existing legal frameworks in order to cope with the legal challenges that might arise from the increased use of AI.

At the European level, several studies and consultations have been conducted on this topic. The contribution below provides an overview of the key initiatives taken so far and a brief outlook on what the future might bring.

Initiatives undertaken to date

On 16 February 2017, the European Parliament adopted the Resolution on Civil Law Rules on Robotics. This resolution proposed a whole range of (non-)legislative initiatives in the field of robotics and AI and, in particular, asked the Commission to submit a legislative proposal providing civil law rules on the liability of robots and AI.

In March 2018, the European Commission set up an Expert Group on Liability and New Technologies operating in two formations: the "Product Liability Directive Formation" and the "New Technologies Formation". On 21 November 2019, the main findings of the "New Technologies Formation" were published in the report "Liability for Artificial Intelligence and other emerging digital technologies". The report adopts a macro-level analysis of the liability challenges raised by AI and other digital technologies.

On 19 February 2020, the European Commission published the long-awaited White Paper on AI. With this White Paper, the European Commission aims to be at the forefront of the next (industrial) data wave by creating a so-called "ecosystem of excellence" and "ecosystem of trust" to boost the uptake of AI and address the risks associated with certain uses of this new technology (for a summary of the key findings of this White Paper, see here and here). Although the White Paper does not extensively address the issue of liability and AI, it acknowledges, among other things, that the existing (EU and national) legal framework could be improved to address the uncertainty regarding the potential legal challenges posed by AI. It proposes a mix of, on the one hand, targeted amendments to existing EU legislation to cater for the specificities of AI and, on the other hand, a new regulatory framework that would apply exclusively to so-called "high-risk" AI applications (see below). As part of the public consultation on the White Paper, 60.7 per cent of respondents supported a revision of the PLD to cover particular risks engendered by certain AI applications.

Together with the White Paper, the European Commission published the Report on the safety and liability implications of Artificial Intelligence, the Internet of Things and Robotics, specifically addressing the impact of these technologies on the existing safety and liability framework. The main regulatory changes proposed in this report are:

  • clarifying the scope of the PLD by, among other things, extending its scope to stand-alone AI¹ and revising the notion of putting a product into circulation to take into account that products may change over the course of their lifetime (e.g. as a result of self-learning algorithms);
     
  • considering reversing or alleviating the burden of proof required by national rules for damage caused by the operation of AI-systems, through an appropriate EU initiative; and
     
  • establishing a strict liability regime for AI-systems with a high-risk profile and coupling it with a mandatory insurance requirement.

The most concrete action from the EU took place on 4 May 2020, when the Committee on Legal Affairs of the European Parliament issued a draft report containing a proposed legislative text for a regulation on liability for the operation of AI-systems. The main items suggested in the proposed regulation can be summarised as follows:

  • the proposed regulation only focuses on claims against the "deployer" of an AI-system, i.e. the person who decides on the use of the AI-system, exercises control over the associated risk and benefits from its operation. Producers, manufacturers, developers and backend operators would remain subject to the (if need be, amended) PLD;
     
  • a twofold liability regime would be created, distinguishing between "high-risk" and "low-risk" AI-systems. An AI-system presents a high risk when its "autonomous operation involves a significant potential to cause harm to one or more persons in a manner that is random and impossible to predict". The annex to the proposed regulation provides a list of AI-systems that are considered high-risk, as well as the critical sectors in which they are deployed (e.g. transportation);
     
  • high-risk AI-systems are subject to a strict liability regime, while other (low-risk) AI-systems remain subject to fault-based liability;
     
  • only damage to life, health, physical integrity or property is covered by the proposed regulation;
     
  • a deployer of a high-risk AI-system cannot exonerate him/herself, except in the event of force majeure;
     
  • the deployer of a high-risk AI-system must take out liability insurance with adequate cover, taking into account the amounts mentioned below;
     
  • the liability of the deployer for high-risk AI-systems would be capped at a maximum of EUR 10 million in the event of death or harm to a person's health or physical integrity and EUR 2 million for damage to property. Specific limitation periods are provided for high-risk systems depending on the type of damage;
     
  • a deployer of a high-risk or low-risk AI-system cannot escape liability on the ground that the harm was caused by an autonomous activity, device or process driven by the AI-system. However, the deployer of a low-risk AI-system can refute liability by proving that the harm or damage was caused without his/her fault, relying on either of the following grounds: (a) the AI-system was activated without his/her knowledge and all reasonable and necessary measures to avoid such activation were taken, or (b) due diligence was observed by selecting a suitable AI-system for the right task and skills, putting the AI-system duly into operation, monitoring its activities and maintaining operational reliability by regularly installing all available updates; and
     
  • deployers of AI-systems shall not be entitled to pursue a recourse action unless the affected person who is entitled to receive compensation under the regulation has been paid in full.

Finally, in July 2020, the European Parliament released a study on Artificial Intelligence and Civil Liability that it had commissioned from Professor Andrea Bertolini. In his report, Professor Bertolini summarises his comments on the approach followed so far by the EU and proposes some new ideas on how to solve the liability question in relation to AI-systems (for a summary of this report, see here).

Conclusion and some thoughts on the future

The many initiatives clearly show that things are moving ahead at the European level. Nevertheless, some shortcomings remain and several key issues still need to be examined and discussed. This is clearly evidenced by the recent study of Professor Bertolini, in which he raises some important criticisms of the approach followed by the EU so far.

In a time of unprecedented digital developments, it is a difficult exercise for the EU to, on the one hand, give businesses and researchers the necessary freedom to innovate and, on the other hand, ensure that there is an appropriate and efficient legal framework protecting the interests of all stakeholders (including consumers). Civil liability rules should always strike a balance between protecting citizens from harm and enabling businesses to innovate.

While the EU should be wary of over-regulating AI-systems, this should not be an argument for postponing appropriate new regulations and/or adjustments to existing regulations for years to come.

It is important that victims of accidents caused by products and services involving emerging digital technologies, such as AI, do not enjoy a lower level of protection than victims of similar other products and services, for which they would receive compensation under national tort law. A lower level of protection could reduce societal acceptance of these technologies and lead to hesitance in using them. In addition, the challenges that the new technologies pose to the existing frameworks could cause legal uncertainty as to how existing laws would apply, which in turn could discourage investment and increase information and insurance costs for developers and producers.

It remains to be seen, however, how much time will still be needed to put in place an EU-wide legal framework tackling potential AI product liability issues and when such a framework will be implemented into national legislation by the EU Member States. Taking into account the many EU initiatives conducted so far, we can conclude – at least – that there is a concrete desire on the part of the Commission to be a pioneer in creating an appropriate and efficient legal framework to cope with the legal challenges created by the new digital world (as it successfully did with the GDPR).

It is also encouraging to see that recent initiatives launched in Belgium – such as AI 4 Belgium and Kenniscentrum Data & Maatschappij – show that AI and the challenges that come with it are finally on the Belgian political agenda. While there is currently no Belgian legislation dealing with AI, these initiatives are key to bringing together the relevant stakeholders, sharing knowledge and educating the public on the benefits and challenges of AI. Like the EU Commission, the Belgian government should aim to be at the forefront of the new emerging technologies so that it can be a pioneer and not just a follower in the digital world of tomorrow.

¹ The PLD currently does not apply to stand-alone software. However, where AI software is part of a product and is incorporated in such a way that it is essential to keep the product functioning, entirely or partially, so that it can no longer be considered a separate element, the PLD will apply.
