27/06/23

The current proposal for the AI Act summarised

Artificial intelligence (AI) is proving to be a disruptive new technology with an ever-increasing influence on its users. In our two previous blog posts, we delved deeper into the ethical considerations and ethical principles to be taken into account. Aside from these ethical concerns, an important additional concern remains: the lack of legislation that addresses AI systems as such. The current proposal from the European Commission for a regulation on artificial intelligence, referred to as the “AI Act”, would create a climate in which both EU citizens and AI developers can thrive with greater legal certainty. On 14 June 2023, the European Parliament adopted its first official position on the Commission’s proposal. We therefore also give you an overview of some of the most important amendments.

Definition of AI

What will fall under the scope of “artificial intelligence” is still a topic of debate. Both the Council and the Parliament of the EU want to alter the definition in the Commission’s proposal to draw a clearer distinction between artificial intelligence and more classic software systems. The Commission, the Council and the European Parliament all aim to work with a definition that is flexible and future-proof, meaning that the regulation should be able to deal with newly invented AI techniques. In pursuit of legal clarity and wide acceptance, the European Parliament emphasises the importance of harmonising the definition with those of other international organisations that are also working on AI.

Risk-based approach

Once a system qualifies as AI, it falls into one of four risk categories as set out by the Commission. Depending on the category, the AI system is either prohibited outright, subject to safety and transparency obligations, or free to use.

The first category of AI systems under the proposed AI Act covers those posing ‘unacceptable risks’. The proposal sets forth a list of AI practices that will not be allowed under any circumstances. For example, using AI systems that apply subliminal techniques or that target vulnerable people is not allowed where there is a chance of causing psychological or physical harm. This exhaustive list currently contains four different prohibited AI practices but may still be subject to change.

The European Parliament has proposed some significant changes to this list. The most debated one is the outright prohibition of using biometric systems for ‘real-time’ remote identification purposes in publicly accessible spaces, going beyond the Commission’s proposal, which only prohibited such use for law enforcement purposes. As a result, the risk classification shifts from ‘high’ to ‘unacceptable’.

The second category comprises two main classifications of ‘high-risk’ AI systems. The first covers AI systems that are intended to be used as a safety component of products, or which are themselves products, subject to a third-party conformity assessment. The second covers AI systems that are used in pre-defined areas and pose a significant risk to the health, safety or fundamental rights of EU citizens. These pre-defined areas are listed in an annex to the proposed AI Act and are considered to be vulnerable areas. Education, law enforcement and asylum are some of the areas in which certain AI systems shall be considered high-risk.

The European Parliament has also made some alterations to this category. It added new areas, now known as ‘critical use cases’, to the annex. For instance, AI systems used to monitor prohibited behaviour of students during their exams can now also be considered ‘high-risk’. Another addition to the annex concerns AI systems used to determine the eligibility of natural persons for health and life insurance.

The third category focuses on AI systems with ‘limited risk’. The European Commission created this category for AI that carries some degree of risk inherent to the way the system operates and that therefore requires an additional transparency obligation. It is directed at systems that are intended to interact directly with natural persons, systems that use emotion recognition or biometric categorisation, and systems that generate or manipulate image, audio or video content. For these systems, the risk arises that users can be misled or manipulated.

The fourth category covers AI systems that pose ‘minimal to no risk’ and can be used freely. It is, however, the goal of the European Commission to stimulate and facilitate the voluntary adoption of codes of conduct for AI systems that do not fall under the high-risk category. Those codes of conduct can be based on the seven safety requirements but can also include additional safeguards with regard to, for example, the environment and people with disabilities.

Safety requirements 

AI providers developing high-risk AI systems need to comply with seven safety requirements. These requirements take into account the seven ethical principles that we discussed in our previous blog post (link: https://www.eylaw.be/2023/06/06/ai-and-ethics-is-the-eu-fulfilling-its-own-ambitions/).

The following seven requirements are currently on the table:

  • Risk management system: suitable risk management measures shall be put in place to identify, analyse, evaluate and subsequently eliminate or reduce risks throughout the lifespan of the AI system.
  • Data and data governance: the data that a provider uses for the training, validation and testing of the AI system must be relevant, representative, free of errors and complete.
  • Technical documentation: providers must draw up the necessary technical documentation before the AI system is placed on the market or put into service.
  • Record-keeping: AI systems will have to enable the automatic recording of certain events, such as the period during which a user is using the system or the specific data sets that have led to a result.
  • Transparency and information to users: AI systems must be designed and developed in such a way that users are able to interpret the system’s output and use it appropriately.
  • Human oversight: providers shall ensure that AI systems can be effectively overseen by natural persons in order to prevent or minimise risks to health, safety or fundamental rights.
  • Accuracy, robustness and cybersecurity: AI systems must be resilient, ensuring availability and protecting against incidents, errors and unauthorised access.

The European Parliament has retained the principle of the seven requirements but has added some novelties. For instance, in the case of record-keeping, high-risk AI systems must be designed and developed with logging capabilities that enable the recording of the system’s energy consumption and environmental impact during its lifecycle.

AI providers developing AI that falls under the scope of limited risk will be subject to a specific transparency obligation requiring them to inform users that they are interacting with an AI system, or that the images, audio or video content was created by AI. Note that in the case of a high-risk system that is intended to interact with natural persons, this specific transparency obligation will also apply.

For this category, the European Parliament stresses that the necessary information on the AI system must be provided at the latest at the time of the person’s first interaction with or exposure to the system. The information must in particular be accessible to vulnerable persons, such as people with disabilities and children.

Pressing issue of generative AI

Given that the Commission published its proposal for the AI Act in 2021, the text did not include any provisions on generative AI. Now that the AI sector is seeing a boom in generative AI products following the launch of the chatbot ChatGPT, both the Council and the Parliament of the EU considered it important to include this new type of AI in the legislative framework.

The European Parliament has therefore proposed specific obligations for the providers of so-called foundation models, such as OpenAI, which provides the GPT series of language models on which ChatGPT is built. For instance, providers shall have to demonstrate the identification, reduction and mitigation of reasonably foreseeable risks to health, safety, fundamental rights, the environment, democracy and the rule of law. Furthermore, the foundation model must achieve appropriate levels of performance, predictability, interpretability, corrigibility, safety and cybersecurity throughout its lifecycle. After placing a foundation model on the market, providers shall also keep the technical documentation at the disposal of the national competent authorities for at least ten years.

Additional obligations shall apply specifically to foundation models used in AI systems intended to generate content such as video, audio, complex text and images. Firstly, the generative AI system must comply with the specific transparency obligations for ‘limited risk’ systems, as explained above. Secondly, appropriate safeguards must be put in place against the generation of content in breach of European Union law. Lastly, the provider must document and make publicly available a sufficiently detailed summary of the use of training data that is protected under copyright law.

Governance

The seven safety requirements for high-risk AI systems are part of the mandatory conformity assessment that AI providers have to perform. Once this conformity assessment is completed, AI providers have to register the AI system in an EU database to increase public transparency. Combined with strong enforcement by the national supervisory authorities, which will be able to impose corrective measures and administrative fines, the European Commission believes that the AI Act would be an effective yet proportionate solution for the AI sector.

Other EU-legislation

The proposal is drafted in such a way that the AI Act is compatible with existing EU legislation, such as legislation on consumer protection, fundamental rights, employment and product safety. In particular, the AI Act complements the legislation on the protection of personal data (i.e. the General Data Protection Regulation (GDPR)). However, both the European Data Protection Board and the European Data Protection Supervisor have raised concerns, stating that the current proposal has a blind spot with regard to the GDPR rules. Both the Council and the Parliament of the EU have recently put forward amendments attempting to resolve this issue.

Current procedure

The European Parliament’s text containing the amendments is not yet final. Even though it was approved during the plenary session of 14 June 2023, the text has been sent back to the competent committee of the European Parliament to start interinstitutional negotiations with both the Council of the EU and the Commission.


Authors: Tom Maes and Kelly Matthyssens
