The European Regulation on Artificial Intelligence: AI Act

The AI Act is the European regulation on artificial intelligence, recently published in the Official Journal of the EU. The Regulation lays down rules on the development, the placing on the market, the putting into service and the use of artificial intelligence systems (AI systems) in the Union. The objective of the AI Act is to promote the development of AI technologies while ensuring the protection of the rights recognised by Union Law, particularly those enshrined in the Charter of Fundamental Rights of the European Union.



What is the AI Act?

The AI Act is the new European legal framework on artificial intelligence: its main objective is to promote the development of AI technologies and products while ensuring the protection of health, safety and the rights recognised by Union Law. To this aim, the AI Act follows a risk-based approach, which means stricter requirements for certain AI systems depending on the risks they involve.

Who does the AI Act apply to?

The AI Act applies to the following operators involved with AI systems:

  • providers
  • deployers
  • importers and distributors
  • authorised representatives of providers not established in the EU.

The AI Act also applies to:

  • product manufacturers, if they place an AI system on the market together with their product and under their own name or trademark;
  • the persons affected by AI systems who are located in the EU.

Who is the deployer?

‘deployer’ means a natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity.

Article 3 (4)

AI Act: a risk-based approach

The AI Act provides requirements for products, and obligations for subjects involved in the AI value chain (such as providers and deployers), based on the risk stemming from the use made of AI. In other words, the AI Act regulates AI systems depending on their specific purpose, and the level of risk associated with it.

  • Some AI practices are prohibited because they create an unacceptable risk: this is the case for uses that are incompatible with the law of the European Union or violate fundamental rights.
  • Some AI systems are considered high risk: they are subject to stricter requirements and can be released on the market only after a conformity assessment.
  • In other cases the risk is deemed low: these systems are mostly subject to transparency obligations.

Lastly, a further level of risk concerns the systemic risk posed by general purpose AI models.


Diagram of the risk-based approach followed by the AI Act
From the website of the European Commission

Unacceptable risk: when AI is prohibited (Article 5)

The AI Act prohibits the use of artificial intelligence for the following purposes:

  • manipulating persons or exploiting vulnerable subjects, such as children or people with disabilities, exposing them or others to psychological or physical harm;
  • social scoring: the use of artificial intelligence by public authorities for evaluating the social behaviour of persons and assigning them a score on the basis of which unfavourable treatment is inflicted. That is particularly the case when the unfavourable treatment is disproportionate to the social behaviour or unjustified, for example when the conduct is legitimate;
  • assessing or predicting the risk of a natural person committing a criminal offence, based solely on profiling or on the assessment of their personality traits and characteristics;
  • creating or expanding facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage;
  • categorising individual natural persons based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation;
  • inferring emotions of a natural person in the areas of the workplace and education institutions, except where the use of the AI system is intended to be put in place or placed on the market for medical or safety reasons;
  • real-time remote biometric identification in publicly accessible spaces for the purposes of law enforcement, unless that is strictly necessary for one of the following objectives:
    1. the search for specific victims of crime, including missing persons;
    2. the prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons;
    3. the localisation or identification of a person...for the purpose of conducting a criminal investigation or prosecution or executing a criminal penalty for offences which allow for a European arrest warrant under Council Framework Decision 2002/584/JHA of 13 June 2002.

What is remote biometric identification?

‘biometric identification’ means the automated recognition of physical, physiological, behavioural, or psychological human features for the purpose of establishing the identity of a natural person by comparing biometric data of that individual to biometric data of individuals stored in a database

Article 3 (35)

The AI Act distinguishes between:

  • real-time remote biometric identification
  • post remote biometric identification.

Real time remote biometric identification

in the case of ‘real-time’ systems, the capturing of the biometric data, the comparison and the identification occur all instantaneously, near-instantaneously or in any event without a significant delay. In this regard, there should be no scope for circumventing the rules of this Regulation on the ‘real-time’ use of the AI systems in question by providing for minor delays.

recital 17

Post remote biometric identification

In the case of ‘post’ systems, in contrast, the biometric data have already been captured and the comparison and identification occur only after a significant delay.

recital 17

The AI Act requires that the use of post-remote biometric identification systems be subject to safeguards and be proportionate, legitimate and strictly necessary; in the context of law enforcement, these systems should not be deployed to enable indiscriminate surveillance.

Therefore, article 26 requires deployers of AI systems for post-remote biometric identification, in the context of law enforcement, to request an authorisation from a judicial authority, before the system is used or, at the latest, within 48 hours of its use.

High risk AI systems

As we said earlier, the Regulation targets certain AI systems with specific provisions depending on their purpose. Some systems are considered high risk, particularly because of their potential impact on the health, safety and fundamental rights of natural persons: they need to comply with stricter requirements and undergo a conformity assessment before they can be placed on the market.

An AI system is classified as "high risk" when:

  1. it is used as a safety component of a product, or the AI system itself is a product, that
    • falls within the scope of the Union harmonisation legislation listed in Annex I AND
    • requires a third-party conformity assessment before it can be placed on the market;
  2. it is one of the high risk AI systems listed in Annex III.
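
Read as a decision rule, this two-branch test can be sketched in a few lines of code. The Python snippet below is a minimal illustration only: the class and field names (AISystemProfile, covered_by_annex_i, and so on) are hypothetical, and the actual legal assessment requires examining Annex I and Annex III of the Regulation, not setting boolean flags.

```python
from dataclasses import dataclass


@dataclass
class AISystemProfile:
    """Illustrative profile of an AI system; the real assessment requires
    checking Annex I and Annex III of the AI Act, not boolean flags."""
    covered_by_annex_i: bool               # safety component of / product covered by Annex I legislation
    requires_third_party_assessment: bool  # third-party conformity assessment required for that product
    listed_in_annex_iii: bool              # intended purpose listed in Annex III


def is_high_risk(system: AISystemProfile) -> bool:
    """Sketch of the two-branch classification rule (article 6 of the AI Act)."""
    annex_i_branch = system.covered_by_annex_i and system.requires_third_party_assessment
    annex_iii_branch = system.listed_in_annex_iii
    return annex_i_branch or annex_iii_branch


# Example: a hypothetical CV-screening tool, an Annex III use case
print(is_high_risk(AISystemProfile(False, False, True)))  # True
```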

Requirements for high risk AI systems

High risk AI systems are subject to stricter and specific requirements: the first one is to be equipped with a risk management system (article 9) which:

  1. analyses the risks that may arise from the use, and from the reasonably foreseeable misuse, of the system;
  2. identifies risk management measures appropriate for the risks posed by the AI system;
  3. needs to be updated throughout the lifecycle of the high risk AI system.

Other requirements are set out by Articles 10 to 15:

  1. Data and data governance
  2. Technical documentation
  3. Record keeping
  4. Transparency and provision of information to deployers
  5. Human oversight
  6. Accuracy, robustness and cybersecurity.

Obligations of providers of high risk AI systems

The obligations of providers of high risk AI systems are listed in article 16, and detailed in the articles that follow.

The AI Act requires the provider of a high risk AI system to:

  • register the system in the EU database for high risk AI systems referred to in article 71;
  • provide a quality management system ensuring compliance with the Regulation, detailing policies, procedures and instructions. Article 17 provides a list of elements that the quality management system needs to include;
  • keep the documentation listed in article 18 at the disposal of the national authorities for 10 years after the AI system is placed on the market;
  • keep the logs automatically generated by their high-risk AI systems, to the extent such logs are under their control by virtue of a contractual arrangement with the user or otherwise by law;
  • take appropriate corrective measures, or recall the system, if the provider considers that a system already placed on the market no longer complies with the Regulation;
  • cooperate with the competent authorities and provide upon request all the information proving that the AI system complies with the Regulation;
  • appoint an authorised representative established in the EU, where the provider is based in a non-EU country;
  • ensure that the high-risk AI system undergoes the relevant conformity assessment procedure as referred to in article 43, draw up an EU declaration of conformity (article 47) and affix the CE marking of conformity (article 48) to the AI system;
  • establish a post-market monitoring system (article 72) which collects and analyses data on the performance of the AI system and is based on a dedicated post-market monitoring plan;
  • report serious incidents, as required by article 73: providers of high-risk AI systems placed on the Union market shall report any serious incident, or any malfunctioning which constitutes a breach of obligations under Union law intended to protect fundamental rights, to the market surveillance authorities of the Member States where that incident or breach occurred.

Placing a high risk AI system on the market (infographic)

Infographic on the steps to place a high risk AI system on the market
From the website of the EU Commission

Transparency obligations in the AI Act

Article 50 provides transparency obligations for certain AI systems.

  • AI systems intended to interact with natural persons: they need to inform the users that they are interacting with an AI system, unless the use of AI systems is authorised by law to investigate and prosecute criminal offences.
  • content created by AI systems needs to be marked in a machine-readable format and detectable as artificially generated or manipulated (a minimal sketch of such a marking follows this list). It is worth noting that article 50 also adds that the technical solutions enabling the content to be marked shall take into account:
    1. the specificities and limitations of various types of content;
    2. the costs of implementation;
    3. the generally acknowledged state of the art.
    There are two exceptions to this requirement: content produced by AI does not need to be marked when
    1. the AI system does not alter the input data provided by the deployer, for instance when performing an assistive function for standard editing;
    2. its use is authorised by law to detect, prevent, investigate or prosecute criminal offences.
  • emotion recognition systems or biometric categorisation systems: just like the previous point, the persons exposed to these tools need to be informed of the operation of the system, again unless the system is used in the context of the investigation or prosecution of criminal offences.
  • deep fakes: systems that generate or manipulate image, audio or video content, making it look real and authentic while it is not. Users need to be informed that the content has been artificially generated or manipulated. This case too has exceptions:
    • the use of AI systems for generating deep fakes is authorised by law to detect, prevent, investigate or prosecute criminal offences;
    • the content is generated as part of an artistic or creative work: the fact that the content has been generated by AI can be disclosed in a way that does not hamper the display or enjoyment of the work.
  • Lastly, article 50 requires that text published with the purpose of informing the public on matters of public interest disclose that it has been artificially generated or manipulated. This does not apply where:
    1. the AI system is authorised by law to detect, prevent, investigate or prosecute criminal offences; or
    2. the text generated by AI has undergone a process of human review or editorial control and a natural or legal person holds editorial responsibility for the publication of the content.
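
To give a concrete, if simplified, idea of what a machine-readable marking could look like, here is a minimal Python sketch that attaches a provenance record to generated content as a JSON sidecar. The schema and names used (mark_ai_generated, ai_generated, example-image-model) are assumptions made for illustration only: article 50 does not prescribe a particular format, and real systems would typically embed such information in the content itself, via metadata or watermarks, using recognised provenance standards.

```python
import hashlib
import json
from datetime import datetime, timezone


def mark_ai_generated(content: bytes, generator_name: str) -> str:
    """Produce a machine-readable provenance record for AI-generated content.

    Hypothetical schema for illustration only: article 50 does not prescribe
    a specific format.
    """
    record = {
        "ai_generated": True,                                    # flags artificial generation/manipulation
        "generator": generator_name,                             # the system that produced the content
        "content_sha256": hashlib.sha256(content).hexdigest(),   # binds the record to the exact content
        "created_at": datetime.now(timezone.utc).isoformat(),    # timestamp of generation
    }
    return json.dumps(record, indent=2)


# Example usage: a sidecar record for a generated image
print(mark_ai_generated(b"...generated image bytes...", "example-image-model"))
```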

General purpose AI models

What is a general purpose AI model?

A general purpose AI model is a model that displays significant generality and is capable of competently performing a wide range of distinct tasks...and that can be integrated into a variety of downstream systems (Article 3 (63)).

Recital 97 clarifies that:

  • Although AI models are essential components of AI systems, they do not constitute AI systems on their own.
  • AI models require the addition of further components, such as for example a user interface, to become AI systems.
  • AI models are typically integrated into and form part of AI systems.

Obligations of providers of general purpose AI models

Article 53 sets out the obligations for providers of general purpose AI models:

  • draw up the technical documentation of the model, which is made available upon request to authorities and also to providers of AI systems who want to integrate the model in their systems.
  • put in place a policy to comply with Union law on copyright and related rights;
  • draw up and make publicly available a detailed summary on the content used for training the model.

Systemic risk

The AI Act provides specific rules for general purpose AI models posing systemic risk. An AI model is classified as posing systemic risk when it has high-impact capabilities.

‘systemic risk’ means a risk that is specific to the high-impact capabilities of general-purpose AI models, having a significant impact on the Union market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole.

Article 3 (65)

The high-impact capabilities of the AI model, and therefore the systemic risk, are presumed based on the amount of computational resources used for training the model.

A general-purpose AI model shall be presumed to have high impact capabilities...when the cumulative amount of computation used for its training measured in floating point operations is greater than 10^25

Article 51

The Commission may classify an AI model as posing systemic risk when, regardless of the threshold, the model is considered to have capabilities or an impact equivalent to those above, based on the criteria set out in Annex XIII.
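
Since the presumption is a purely quantitative test, it reduces to a single comparison against the 10^25 FLOP threshold, as in the minimal Python sketch below. The parameter and token counts are made-up numbers, and the "6 × parameters × tokens" rule of thumb is only a rough way to estimate training compute; arriving at a reliable figure for a real model is considerably harder than the comparison itself.

```python
# Presumption threshold set by article 51: 10^25 floating point operations
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25


def presumed_high_impact(cumulative_training_flops: float) -> bool:
    """Presumption of high-impact capabilities based on cumulative training compute."""
    return cumulative_training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD


# Example with made-up numbers, using the common rough approximation
# training FLOPs ~ 6 * parameters * training tokens.
params = 1.8e12                         # hypothetical parameter count
tokens = 15e12                          # hypothetical number of training tokens
estimated_flops = 6 * params * tokens   # 1.62e26
print(presumed_high_impact(estimated_flops))  # True: 1.62e26 > 1e25
```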

The AI Act requires providers of general purpose AI models with systemic risk to:

  • assess and test the model in order to identify and mitigate systemic risk, including at the Union level, stemming from the development and placing on the market of the AI model;
  • report to the competent authorities serious incidents and possible corrective measures to address them.

Governance

The AI Act establishes authorities and bodies for the implementation of the Regulation. The governance is shared between the Union level and the Member States.

Governance at Union level

  • AI Office: established by Commission Decision of 24 January 2024 C/2024/390. The Commission shall develop Union expertise and capabilities in the field of AI through the AI Office (article 64).
  • European Artificial Intelligence Board (to be established): it will be composed of one representative per Member State (Article 66). Tasks of the Board:
    • ensure the consistent application of the AI Act across the EU;
    • assist the Commission and the national authorities on any issue related to the AI Act;
    • foster the cooperation between national authorities and the Commission.
  • Advisory forum: composed of a selection of stakeholders, including industry, start-ups, SMEs, civil society and academia, representing both commercial and non-commercial interests. The advisory forum will provide advice on technical matters to the Board and the Commission (article 67).
  • Scientific panel of independent experts (article 68): it will advise the AI Office on:
    • the implementation of the Regulation with regard to general purpose AI models and systems;
    • the support to be provided to market surveillance authorities, at their request.

National competent authorities

Each Member State shall establish or designate as national competent authorities at least one notifying authority and at least one market surveillance authority for the purposes of this Regulation.

Article 70

Therefore, as national authorities, Member States are required to designate at least:

  • one notifying authority: the national authority which designates and monitors the conformity assessment bodies, that is, the bodies performing third-party conformity assessment activities, including testing, certification and inspection. The conformity assessment concerns the compliance of the high-risk AI system with the Regulation;
  • one market surveillance authority: the national authority carrying out the activities and taking the measures pursuant to Regulation (EU) 2019/1020 (Article 3 (26)).

Among the tasks of national competent authorities, detailed by article 70:

National competent authorities may provide guidance and advice on the implementation of this Regulation, in particular to SMEs including start-ups, taking into account the guidance and advice of the Board and the Commission, as appropriate.

Codes of conduct

The AI Act promotes the voluntary extension to non-high-risk AI systems of the requirements provided for high-risk AI systems. To this aim, the AI Office and the Member States encourage the drawing up and adoption of codes of conduct setting higher standards of compliance, on the basis of clear objectives and key performance indicators to measure the achievement of those objectives (Article 95).

Who will draw up codes of conduct?

Codes of conduct may be drawn up by individual providers or deployers of AI systems or by organisations representing them or by both, including with the involvement of any interested stakeholders and their representative organisations, including civil society organisations and academia.

Article 95

AI regulatory sandboxes

Article 57 requires Member States to establish at least one AI regulatory sandbox at national level by 2 August 2026.

What is a regulatory sandbox?

‘AI regulatory sandbox’ means a controlled framework set up by a competent authority which offers providers or prospective providers of AI systems the possibility to develop, train, validate and test, where appropriate in real-world conditions, an innovative AI system, pursuant to a sandbox plan for a limited time under regulatory supervision;

Article 3 (55)

Therefore, the goal of an AI regulatory sandbox is to allow and support the development of AI systems which comply with the applicable rules.

Competent authorities follow the work of regulatory sandboxes and provide assistance, particularly on the following topics:

  • identifying risks to fundamental rights, health and safety, and possible countermeasures;
  • how to fulfil the requirements and obligations set out in the Regulation.

Competent authorities may also provide to the participants of the sandbox a report on the activities successfully carried out in the sandbox. Providers may use this report to demonstrate their compliance with the Regulation when dealing with the conformity assessment process or with market surveillance authorities. Therefore, participating in a sandbox may result in a "smoother", and so faster, conformity assessment process.

Article 58 provides for the detailed arrangements for, and the functioning of, AI regulatory sandboxes.

Penalties

Member States will lay down rules on sanctions and ensure that they are implemented. Penalties may also include warnings and non-monetary measures, applicable to infringements of this Regulation by operators.

Article 99 sets the maximum amounts of monetary sanctions:

  • engaging in prohibited AI practices (article 5): monetary sanctions can go up to €35 million or 7% of the total worldwide annual turnover for the preceding financial year, whichever is higher;
  • violations of the obligations for:
    • providers (article 16);
    • authorised representatives (article 22);
    • importers (article 23);
    • distributors (article 24);
    • deployers (article 26);
    • transparency obligations for providers and deployers (article 50):
    monetary sanctions can go up to €15 million or 3% of the total worldwide annual turnover for the preceding financial year, whichever is higher.

Fines for providers of general purpose AI models are imposed by the Commission and can go up to €15 million or 3% of the total worldwide annual turnover for the preceding financial year, whichever is higher.

Lastly, providing incorrect, incomplete or misleading information to notified bodies or national competent authorities in reply to a request can lead to sanctions of up to €7.5 million or 1% of the total worldwide annual turnover for the preceding financial year, whichever is higher.
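
To make the structure of these caps concrete, the small Python sketch below computes the upper bound of a fine, applying the "whichever is higher" rule that article 99 provides for undertakings. The function name and the turnover figure are invented for the example.

```python
def max_fine(fixed_cap_eur: float, turnover_fraction: float, worldwide_turnover_eur: float) -> float:
    """Upper bound of a fine under article 99 for an undertaking:
    the higher of a fixed amount and a share of total worldwide annual turnover."""
    return max(fixed_cap_eur, turnover_fraction * worldwide_turnover_eur)


# Example: prohibited AI practices (article 5) committed by an undertaking
# with a hypothetical worldwide annual turnover of 2 billion euros.
print(max_fine(35_000_000, 0.07, 2_000_000_000))  # 140000000.0 -> the 7% cap applies
```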

Conclusions

The AI Act attempts to balance the development of a new and disruptive technology with the fundamental rights and principles protected by existing EU legislation. That is not an easy task, as it involves some of the difficulties often faced when regulating a new technology, particularly how to regulate without stifling innovation. We shall see whether the AI Act will succeed in its mission or whether it will rather result in yet another case of overregulation: ineffective in relation to its goals, and costly in terms of competitiveness.


About the author

Vincenzo Lalli

Founder of Avvocloud.net

Avvocloud is an Italian network of lawyers passionate about law, innovation and technology.
Feel free to reach out for any info: send a message.

Thanks for reading!

Creative Commons License