15/04/24

Navigating the AI Act: legal insights and analysis

The rapid development and deployment of artificial intelligence (AI), its increasing use in almost every aspect of our lives and the new regulatory framework present businesses around the world with numerous challenges. On 13 March 2024, the European Parliament approved by a vast majority the AI Act, one of the most ambitious law-making projects to date, which aims to harmonise the protection of citizens against this rapidly evolving technology while encouraging research and industrial capacity in this field. This ground-breaking regulation will significantly reshape how businesses and organisations in Europe use AI.

A long legislative process

The popular buzzword had already caught the attention of lawmakers a few years ago. For several years, the European Commission (EC) had been researching the topic, leading to the first proposal in April 2021, followed by the positions of the Council (December 2022) and of the Parliament (June 2023). After a political agreement on the text was reached on 9 December 2023, the European Parliament finally approved the proposed AI Act on 13 March 2024.

The prolonged legislative process for the AI Act is unsurprising, given its ambitious aims. Regulating AI requires careful consideration of ethical, technological, and societal factors. The extended timeline reflects the complexity of crafting effective legislation to address evolving challenges.

In this article, we delve into the crucial components of the AI Act and its implications for businesses operating within the EU.

AI system

The ambition of the AI Act is to establish a framework that is future-proof. In a world where technology is constantly evolving, defining what constitutes an ‘AI System’, and thereby determining the scope of application of the AI Act, proved one of the most challenging tasks. The European legislator finally opted for a broad and easily understandable definition that emphasises the autonomy of the system and its ability to adapt.

An ‘AI System’ is defined by the AI Act as a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

Broad scope of application

The obligations foreseen in the AI Act are not only imposed on developers. Instead, the AI Act captures the entire value chain, making providers, distributors and deployers equally subject to its scope of application. It therefore applies to a wide range of actors, including:

  • providers placing on the market or putting into service AI systems or placing on the market general-purpose AI models in the EU, irrespective of whether they are established or located in the EU or in a third country (i.e., outside of the EU);
  • deployers of AI systems that have their place of establishment or are located within the EU, under whose authority and responsibility an AI system is used; 
  • providers and deployers of AI systems that have their place of establishment or are located in a third country, where the output produced by the AI system is used within the EU;
  • importers and distributors of AI systems;
  • product manufacturers placing on the market or putting into service an AI system together with their product and under their own name or trademark;
  • authorised representatives of providers that are not established in the EU; and
  • affected persons that are located in the EU.

The scope of application is particularly broad given the extraterritorial effect of the AI Act: actors established in a third country are also subject to the AI Act to the extent their AI systems affect persons located within the EU. Hence, the AI Act is likely to have a similar effect on worldwide AI regulation as the GDPR has had on the worldwide development of data protection regulation.

The final text of the AI Act lists several exclusions from its scope of application. In particular, the AI Act shall not apply to:

  • areas outside the scope of EU law; in any event, the AI Act shall not affect the competences of the Member States concerning national security, regardless of the type of entity entrusted by the Member States with carrying out tasks in relation to those competences;
  • AI systems where and in so far as they are placed on the market, put into service, or used with or without modification exclusively for military, defence or national security purposes, regardless of the type of entity carrying out those activities;
  • AI systems that are not placed on the market or put into service in the EU, where the output is used in the EU exclusively for military, defence or national security purposes, regardless of the type of entity carrying out those activities;
  • public authorities in a third country or international organisations falling within the scope of the AI Act, where such authorities or organisations use AI systems in the framework of international cooperation or agreements for law enforcement and judicial cooperation with the EU or with one or more Member States, provided that such a third country or international organisation provides adequate safeguards with respect to the protection of fundamental rights and freedoms of individuals;
  • AI systems or AI models, including their output, specifically developed and put into service for the sole purpose of scientific research and development;
  • any research, testing or development activity regarding AI systems or models prior to their being placed on the market or put into service; and
  • obligations of deployers who are natural persons using AI systems in the course of a purely personal non-professional activity.

Prohibition of certain AI practices

The AI Act follows a risk-based approach, identifies different risk categories and establishes obligations for AI systems based on their potential risks and level of impact.

The AI Act prohibits certain AI practices that are considered a clear threat to the safety, livelihoods, and rights of people, including:

  • behavioural manipulation and circumvention of free will by appreciably impairing a person's ability to make an informed decision; 
  • exploitation of people's vulnerabilities, such as their age, disability or a specific social or economic situation, with the objective or the effect of materially distorting the behaviour of those people in a harmful way; 
  • social credit scoring systems;
  • specific predictive policing applications; 
  • untargeted scraping of facial images from the internet or CCTV footage to create or expand facial recognition databases;
  • emotion recognition in the workplace and schools;
  • biometric categorisation systems based on sensitive characteristics; and
  • law enforcement use of real-time biometric identification in public, except for a limited number of authorised objectives. 

High-risk AI systems

High-risk AI systems will be subject to a comprehensive mandatory compliance regime.  

An AI system could be considered high risk when deployed in any of the following areas as described in Annex III:

  • biometrics;
  • critical infrastructure;
  • education or vocational training;
  • employment, workers management and access to self-employment;
  • access to and enjoyment of essential private services and essential public services and benefits;
  • law enforcement;
  • migration, asylum and border control management; and
  • administration of justice and democratic processes.

Moreover, the AI Act recognises that a product may itself be, or incorporate, an AI system. Where such products are already subject to certain EU regulation, as is the case for medical devices, vehicles, aircraft, toys etc., the AI Act provides that AI systems constituting such products are to be considered ‘high-risk’ AI systems. Similarly, where an AI system is used as a safety component of a product that is subject to such EU regulation and the product is required to undergo a third-party conformity assessment before it is placed on the market or put into service in the EU under that legislation, the AI system serving as the safety component of that product will automatically be considered a ‘high-risk’ AI system.

High-risk AI systems will be subject to specific obligations, including:

  • establishing, implementing, documenting and maintaining a risk management system;
  • relying upon training, validation and data sets that meet certain quality criteria and are subject to data governance and management practices appropriate for the intended purpose of the high-risk AI system;
  • drafting technical documentation to demonstrate compliance before the system is placed on the market or put into service and keeping such documentation up to date;
  • establishing automatic recording of events (‘logs’) over the lifetime of the high-risk AI system;
  • being designed and developed in such a way as to ensure that their operation is sufficiently transparent to enable deployers to interpret a system’s output and use it appropriately and being accompanied by instructions for use;
  • ensuring human oversight;
  • being designed and developed in such a way that they achieve an appropriate level of accuracy, robustness, and cybersecurity, and that they perform consistently in those respects throughout their lifecycle; and
  • additional requirements for the providers of high-risk AI systems, such as the establishment of a quality management system and documentation keeping.

Transparency obligations

Besides the specific obligations for high-risk AI systems, the AI Act foresees additional transparency obligations for providers and deployers of certain AI systems towards natural persons:

  • Direct interaction with AI system: If an AI system is intended to interact directly with natural persons, such as a chatbot, the natural person shall be informed thereof, unless this is obvious from the point of view of a natural person who is reasonably well-informed, observant and circumspect, taking into account the circumstances and the context of use;
  • AI generated content: Providers of AI systems generating synthetic audio, image, video or text content, shall ensure that the outputs of the AI system are labelled as artificially generated or manipulated;
  • Emotion recognition system / biometric categorisation: Deployers of an emotion recognition system or a biometric categorisation system shall inform the natural persons exposed thereto of the operation of the system, and shall process the personal data in accordance with the applicable data protection laws; and
  • Deep fakes: Deployers of an AI system that generates or manipulates image, audio or video content constituting a deep fake, shall disclose that the content has been artificially generated or manipulated. 

General-purpose AI

So-called general-purpose AI models, i.e. AI models that display significant generality, are capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market, and can be integrated into a variety of downstream systems or applications, must meet certain requirements, including:

  • keeping technical documentation of the model, its training and testing process and results of its evaluation;
  • publishing detailed summaries of the content used for training, according to a template provided by the AI Office; and
  • putting in place a policy to comply with EU copyright law.

For general-purpose AI models posing systemic risk, additional requirements are imposed, including:

  • conducting model evaluations;
  • assessing and mitigating systemic risks;
  • reporting to the AI Office on serious incidents; and
  • ensuring cybersecurity.

Measures in support of innovation

Interestingly, and in line with the EC’s ambition to make the EU a worldwide leader in AI and to support SMEs, the AI Act not only lays down a regulatory framework for AI but also contains measures in support of innovation. Such measures include (i) AI regulatory sandboxing schemes, (ii) measures to reduce the regulatory burden for SMEs and start-ups and (iii) real-world testing.

Sanctions

Non-compliance with the rules laid down in the AI Act can give rise to strict enforcement, including administrative fines, warnings and non-monetary measures. The penalties provided for must be effective, proportionate and dissuasive and must take into account the interests of SMEs, including start-ups, and their economic viability.

The fines that could be imposed are even higher than under the GDPR: administrative fines can amount to up to EUR 35 000 000 or, in the case of an undertaking, up to 7% of the total worldwide annual turnover of the preceding financial year, whichever is higher.
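
As a minimal illustration of this ‘whichever is higher’ mechanism, the short Python sketch below computes the upper bound of the fine for a hypothetical undertaking (the turnover figure is assumed for the example only; this is not legal advice):

    def max_ai_act_fine(worldwide_annual_turnover_eur: float) -> float:
        # Upper bound for the most serious infringements: EUR 35 million or
        # 7% of the total worldwide annual turnover of the preceding
        # financial year, whichever is higher.
        return max(35_000_000.0, 0.07 * worldwide_annual_turnover_eur)

    # Hypothetical undertaking with EUR 2 billion annual turnover:
    print(max_ai_act_fine(2_000_000_000))  # 140000000.0 -> the 7% branch applies

For undertakings with a worldwide annual turnover below EUR 500 million, the flat EUR 35 000 000 amount is the higher of the two and therefore forms the ceiling.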

Next steps

As there are still some errors in the different language versions, the AI Act will be subject to a corrigendum procedure and will receive a linguistic review. It is expected to be finally adopted before the end of the legislature but still needs to be formally endorsed by the Council.

The AI Act will enter into force twenty (20) days after its publication in the Official Journal. It will become applicable in stages, depending on the obligations concerned, and will be fully applicable 24 months after its entry into force. For some topics, the application date will be different (mostly shorter), such as (see the sketch after this list):

  • bans on prohibited practices: 6 months after entry-into-force; 
  • codes of practice: 9 months after entry-into-force; 
  • general-purpose AI rules including governance: 12 months after entry-into-force; and 
  • obligations for high-risk systems: 36 months after entry-into-force.
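
For illustration only, the following sketch computes these staggered application dates from a hypothetical entry-into-force date (the actual date depends on publication in the Official Journal; the date below is assumed):

    from datetime import date
    from dateutil.relativedelta import relativedelta  # third-party: python-dateutil

    # Hypothetical entry-into-force date, assumed purely for illustration.
    entry_into_force = date(2024, 8, 1)

    milestones_in_months = {
        "Bans on prohibited practices": 6,
        "Codes of practice": 9,
        "General-purpose AI rules incl. governance": 12,
        "Full applicability (default)": 24,
        "Obligations for high-risk systems": 36,
    }

    for topic, months in milestones_in_months.items():
        print(f"{topic}: {entry_into_force + relativedelta(months=months)}")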

Lydian webinar

If you seek further insight into the specifics of the AI Act and the obligations it imposes, Lydian is hosting a webinar on 28 March 2024 at 11:30 dedicated to this new regulatory framework. Join us for a deep dive into the AI Act by subscribing here.

Bastiaan Bruyndonckx
Liese Kuyken
