The European Regulation on Artificial Intelligence: AI Act

The AI Act is the proposed European legal framework on artificial intelligence: the objective of the regulation is to balance the development of AI technologies with the fundamental rights of the EU and to ensure the proper functioning of the European internal market. The draft was approved by the European Parliament on June 14, 2023; the legislative process now continues with the negotiations among the Commission, the Parliament and the Council of the European Union, known as "trilogues", which will eventually lead to the approval of the AI Act and to a European regulation on artificial intelligence.



AI Act: a risk-based approach

The AI Act provides requirements for products, and obligations for providers, based on the risk associated with the use made of artificial intelligence. In other words, the AI Act regulates AI systems depending on their specific purpose, and on how "risky" that purpose is.

  • Some uses of AI are prohibited because they create an unacceptable risk: this is the case for uses that are incompatible with the law of the European Union or that violate fundamental rights.
  • Some other applications are considered high risk: they need to meet specific requirements and can be placed on the market only after a conformity assessment.
  • Lastly, in other cases the risk is deemed low, and the regulation mostly imposes transparency obligations on AI systems (a rough sketch of this risk triage follows the list).
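
Purely as an illustration of this risk-based logic, here is a minimal sketch in Python: the three tiers come from the Act, but the example purposes and the mapping below are invented for the sake of the example, not an official taxonomy.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited: cannot be placed on the EU market"
    HIGH = "conformity assessment required before placing on the market"
    LOW = "mostly transparency obligations"

# Hypothetical mapping of intended purposes to risk tiers,
# loosely inspired by the Act's examples; not an official list.
INTENDED_PURPOSE_TIERS = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "exam scoring in education": RiskTier.HIGH,
    "spam filtering": RiskTier.LOW,
}

def triage(intended_purpose: str) -> RiskTier:
    """Return the risk tier associated with a system's intended purpose."""
    return INTENDED_PURPOSE_TIERS.get(intended_purpose, RiskTier.LOW)

print(triage("social scoring by public authorities").value)
# -> prohibited: cannot be placed on the EU market
```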

When artificial intelligence is prohibited

The AI Act prohibits the use of artificial intelligence for:

  • manipulating persons or exploiting vulnerable subjects, like children or people with disabilities, exposing them or others to psychological or physical harm;
  • social scoring: the use of artificial intelligence by public authorities to evaluate the social behaviour of persons and assign them a score, on the basis of which unfavourable treatment is inflicted. That is particularly the case when the unfavourable treatment of a social behaviour is disproportionate or unjustified, for example when the conduct is legitimate;
  • remote biometric identification: its "real-time" use in publicly accessible spaces for law enforcement purposes is prohibited, except in the following cases:
    1. the targeted search for specific potential victims of crime, including missing children;
    2. the prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons or of a terrorist attack;
    3. the detection, localisation, identification or prosecution of a perpetrator or suspect of a criminal offence for which a European Arrest Warrant may be issued.

AI and remote biometric identification

In the limited cases in which AI systems intended for remote biometric identification are allowed, they are included among high risk systems because of the discriminatory effects that may stem from technical inaccuracies of these systems, as explained by recital 33.

What is remote biometric identification in the AI Act?

Recital 8 defines a remote biometric identification AI system as an:

"AI system intended for the identification of natural persons at a distance through the comparison of a person’s biometric data with the biometric data contained in a reference database, and without prior knowledge whether the targeted person will be present and can be identified, irrespectively of the particular technology, processes or types of biometric data used."

Recital 8 - AI Act

The AI Act distinguishes between:

  • "real-time" remote biometric identification;
  • "post" remote biometric identification.

Real time remote biometric identification

According to recital 8:

in the case of ‘real-time’ systems, the capturing of the biometric data, the comparison and the identification occur all instantaneously, near-instantaneously or in any event without a significant delay. In this regard, there should be no scope for circumventing the rules of this Regulation on the ‘real-time’ use of the AI systems in question by providing for minor delays.

Post remote biometric identification

In the case of ‘post’ systems, in contrast, the biometric data have already been captured and the comparison and identification occur only after a significant delay.

Post remote biometric identification was not among the prohibited practices in the first draft of the regulation; the current version of the text, approved by the European Parliament on June 14, prohibits it unless there is a pre-judicial authorisation, in the context of law enforcement, and the use is strictly necessary for a targeted search connected to a specific serious criminal offence that has already taken place, according to recital 18 as modified by amendment 41.
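
To make the recitals' distinction concrete, here is a minimal, hypothetical sketch of the common core of both modes: comparing a captured biometric template against a reference database. All names are invented, and cosine similarity stands in for whatever matching technique a real system would use; what changes between "real-time" and "post" identification is only when the comparison runs relative to the capture, not the matching logic itself.

```python
import math

# Hypothetical reference database: person id -> biometric template (embedding).
REFERENCE_DB = {
    "person-001": [0.12, 0.88, 0.45],
    "person-002": [0.91, 0.05, 0.33],
}

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def identify(captured_template: list[float], threshold: float = 0.95) -> str | None:
    """Compare a captured template against the reference database.

    In a 'real-time' system this runs immediately (or near-immediately)
    after capture; in a 'post' system the same comparison runs on stored
    footage only after a significant delay. The matching step is identical.
    """
    best_id, best_score = None, 0.0
    for person_id, reference in REFERENCE_DB.items():
        score = cosine_similarity(captured_template, reference)
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id if best_score >= threshold else None

print(identify([0.11, 0.87, 0.46]))  # -> person-001 (match above threshold)
```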

What is high risk in the AI Act?

As we said, the AI Act targets certain systems with specific provisions depending on their purpose. Some systems are considered high risk: they need to comply with stricter requirements and to undergo a conformity assessment before they can be placed on the market. The "high risk" of the AI Act is the danger posed by such systems to the health, safety and fundamental rights of natural persons.

What are the high risk AI systems?

The AI Act identifies two categories of high risk AI systems:

  • AI systems intended to be used as safety components of products, or which are themselves products, covered by the EU harmonisation legislation listed in Annex II and subject to a third-party conformity assessment;
  • AI systems falling within the areas listed in Annex III, such as biometrics, critical infrastructure, education, employment, access to essential services, law enforcement, migration and the administration of justice.

The Commission may update the list provided by Annex III, adding new high risk systems, should new risks for health, safety or fundamental rights arise from new uses of AI in the future.

Requirements for high risk AI systems

The first requirement for high risk AI systems is to establish a risk management system which:

  1. analyses the risks that may arise from the use, and even from the reasonably foreseeable misuse, of the high risk system;
  2. identifies risk management measures appropriate for the particular risks that the AI system involves;
  3. needs to be updated throughout the lifecycle of the high risk AI system (a minimal sketch of such a register follows).
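
As a purely illustrative sketch of what such a lifecycle-long risk register might look like in a provider's own tooling (the structure and field names are invented, not prescribed by the Act):

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Risk:
    description: str   # e.g. a risk from reasonably foreseeable misuse
    mitigation: str    # the management measure chosen for this risk
    identified_on: date

@dataclass
class RiskManagementSystem:
    system_name: str
    register: list[Risk] = field(default_factory=list)

    def add_risk(self, description: str, mitigation: str) -> None:
        """Record a newly identified risk and its mitigation measure."""
        self.register.append(Risk(description, mitigation, date.today()))

# The register is meant to be revisited throughout the system's lifecycle,
# e.g. when post-market monitoring reveals a new foreseeable misuse.
rms = RiskManagementSystem("cv-screening-tool")
rms.add_risk(
    "biased ranking of candidates from under-represented groups",
    "rebalance training data and add human review of borderline scores",
)
```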

Other requirements are set out by articles 10 to 15:

  1. Data and data governance
  2. Technical documentation
  3. Record keeping
  4. Transparency and provision of information to users
  5. Human oversight
  6. Accuracy, robustness and cybersecurity.

Obligations of providers of high risk AI systems

The obligations of providers of high risk AI systems are listed in article 16 and detailed in the following articles, from 17 to 30.

The provider is required to:

  • put in place a quality management system ensuring compliance with the regulation, documented through written policies, procedures and instructions. Article 17 then lists the aspects that the quality management system needs to cover;
  • draw up the technical documentation and provide information so as to enable the competent authorities to assess the compliance of the high risk system with the regulation. The technical documentation has to be drawn up before the system is placed on the market;
  • undergo the relevant conformity assessment procedure, draw up an EU declaration of conformity and affix the CE marking of conformity to the AI system;
  • keep the logs automatically generated by their high risk AI systems, to the extent such logs are under their control by virtue of a contractual arrangement with the user or otherwise by law;
  • if the provider considers that a system already on the market does not comply, or no longer complies, with the regulation, take appropriate corrective measures to ensure compliance or recall the system;
  • in case the system poses a risk to the health, safety or fundamental rights of persons, inform the national competent authorities of the Member States where the system has been made available of that risk and of the corrective measures taken;
  • register the high risk AI system in the EU database provided by article 60 for such systems;
  • upon request by national competent authorities, provide all the information necessary to demonstrate compliance with the regulation;
  • report serious incidents and malfunctioning: under article 62, providers of high risk AI systems placed on the Union market shall report any serious incident, or any malfunctioning which constitutes a breach of obligations under Union law intended to protect fundamental rights, to the market surveillance authorities of the Member States where that incident or breach occurred (a sketch of the log keeping and incident reporting duties follows this list).
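
As an illustration, the log keeping and incident reporting duties might translate into something like the following hypothetical sketch; the schema and function names are invented, since the Act prescribes the obligations, not their implementation.

```python
import json
import logging
from datetime import datetime, timezone

# Automatically generated logs (article 12) that the provider must keep
# when they are under its control.
logging.basicConfig(filename="high_risk_system.log", level=logging.INFO)

def log_inference(input_summary: str, output_summary: str) -> None:
    """Append one automatically generated record per inference."""
    logging.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "input": input_summary,
        "output": output_summary,
    }))

def report_serious_incident(description: str, member_state: str) -> dict:
    """Build the report owed, under article 62, to the market surveillance
    authority of the Member State where the incident or breach occurred."""
    return {
        "reported_at": datetime.now(timezone.utc).isoformat(),
        "member_state": member_state,
        "description": description,
    }
```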

Transparency obligations

Article 52 provides transparency obligations for certain AI systems (a minimal sketch follows the list):

  • AI systems intended to interact with natural persons: they need to inform users that they are interacting with an AI system, unless the use of the AI system is authorised by law to investigate and prosecute criminal offences;
  • emotion recognition and biometric categorisation systems: as in the previous point, the persons exposed to these tools need to be informed of their AI nature, again unless the system is used to investigate or prosecute criminal offences;
  • deep fakes: systems that generate or manipulate image, audio or video content so that it looks real and authentic while it is not. Users need to be informed that the content they are watching has been generated or manipulated by artificial intelligence. This case too has exceptions, when the use of such systems:
    • is authorised by law to detect, prevent, investigate and prosecute criminal offences, or
    • is necessary for the exercise of the right to freedom of expression and the right to freedom of the arts and sciences guaranteed in the Charter of Fundamental Rights of the EU.
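
As an illustration only, a provider might implement the first and third obligations as simply as the hypothetical sketch below; the wording and function names are invented, since article 52 requires the disclosure, not any particular mechanism.

```python
AI_DISCLOSURE = "Please note: you are interacting with an AI system, not a human."

def start_conversation(law_enforcement_exception: bool = False) -> str:
    """Return the first message of a chat session.

    Article 52's duty to disclose does not apply when the use of the
    system is authorised by law to investigate or prosecute crime.
    """
    return "" if law_enforcement_exception else AI_DISCLOSURE

def label_generated_media(caption: str) -> str:
    """Attach the 'deep fake' disclosure to generated or manipulated content."""
    return f"{caption} [This content was generated or manipulated by AI.]"
```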

National authorities and the European Artificial Intelligence Board

The AI Act divides governance between the EU and the Member States:

  • Member States will establish national authorities in charge of the application and implementation of the regulation within the state.
  • The EU establishes a European Artificial Intelligence Board with the following tasks:
    • ensure the consistent application of the regulation across the EU;
    • assist the Commission and the national authorities on any issue related to the AI Act;
    • foster cooperation between national authorities and the Commission.

Foundation models and generative AI

As we have already seen, the AI Act regulates AI applications depending on their specific purpose and the risk it entails. This approach did not account for the versatile, general purpose AI systems that have grown and become popular over the last few months. Therefore, on June 14 the European Parliament approved the draft with some amendments introducing specific provisions for general purpose AI systems and foundation models.

General purpose AI

Amendment 169 adds an Article 3 - paragraph 1 - point 1 d, which defines a general purpose AI system as a

"system that can be used in and adapted to a wide range of applications for which it was not intentionally and specifically designed;"

Foundation models

Generative AI applications are the product of a foundation or base model; for instance, ChatGPT is built on the foundation model GPT-3.5.

‘foundation model’ means an AI system model that is trained on broad data at scale, is designed for generality of output, and can be adapted to a wide range of distinctive tasks;

Amendment 168: Article 3 – paragraph 1 – point 1 c

AI systems with specific intended purpose or general purpose AI systems can be an implementation of a foundation model, which means that each foundation model can be reused in countless downstream AI or general purpose AI systems. These models hold growing importance to many downstream applications and systems.

Amendment 99 - Recital 60 e
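
In code terms, the relationship described by the recital (one base model, many downstream systems) is simply adaptation: a pre-trained model is specialised for a narrower purpose. The sketch below is hypothetical (the class and method names are invented) and shows the pattern, not any specific vendor's API.

```python
class FoundationModel:
    """A model 'trained on broad data at scale, designed for generality
    of output' (amendment 168), reusable across downstream systems."""

    def __init__(self, name: str):
        self.name = name

    def generate(self, prompt: str) -> str:
        return f"[{self.name} output for: {prompt}]"  # stand-in for real inference

class DownstreamSystem:
    """An AI system with a specific intended purpose, built on a base model."""

    def __init__(self, base: FoundationModel, purpose: str):
        self.base, self.purpose = base, purpose

    def run(self, user_input: str) -> str:
        # Adaptation here is just prompt conditioning; in practice it could
        # be fine-tuning or any other specialisation technique.
        return self.base.generate(f"({self.purpose}) {user_input}")

base = FoundationModel("generic-base-model")       # hypothetical name
chat = DownstreamSystem(base, "customer support")  # one of many possible reuses
print(chat.run("Where is my order?"))
```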

Obligations of the provider of a foundation model

A provider of a foundation model shall, prior to making it available on the market or putting it into service, ensure that it is compliant with the requirements set out in this Article...

Amendment 399 - Article 28 b

The new article 28 b introduces obligations for providers of foundation models, similar to those of providers of high risk AI systems, such as drawing up the technical documentation, setting up a quality management system, and registering the foundation model in the EU database for high risk AI systems.

Furthermore, the provider of a foundation model is required to:

  • identify, reduce and mitigate reasonably foreseeable risks to health, safety, fundamental rights, the environment, democracy and the rule of law, prior to and throughout development, with appropriate methods such as the involvement of independent experts;
  • incorporate only datasets that are subject to appropriate data governance measures for foundation models, in particular measures to examine the suitability of the data sources and possible biases, and appropriate mitigation measures;
  • design and develop the foundation model so as to achieve appropriate levels of
    • performance
    • predictability
    • interpretability
    • corrigibility
    • safety
    • cybersecurity
    through appropriate methods such as model evaluation with the involvement of independent experts, documented analysis, and extensive testing during conceptualisation, design, and development;
  • design and develop the foundation model making use of applicable standards to reduce energy use, resource use and waste, as well as to increase energy efficiency.

Additional obligations for providers if the foundation model is used for generative AI:

  • transparency obligations provided by article 52;
  • train, design and develop the foundation model so as to ensure adequate safeguards against the generation of content in breach of Union law;
  • publish a sufficiently detailed summary of the copyrighted material used to train the model, where applicable.

Codes of conduct

The AI Act encourages the adoption of codes of conduct through which providers voluntarily commit to higher compliance standards than those mandated. This may be the case when the provider adopts a code of conduct which applies to a non high-risk system the stricter requirements provided for high-risk ones.

The AI Act also promotes the drawing up of codes of conduct committing to requirements related to environmental sustainability, accessibility for persons with a disability, stakeholder participation in the design and development of AI systems, and diversity of development teams. Codes of conduct shall also set clear objectives and key performance indicators by which to evaluate the effectiveness of the code in relation to its objectives.

As to the authors of the codes of conduct, the AI Act provides that:

Codes of conduct may be drawn up by individual providers of AI systems or by organisations representing them or by both, including with the involvement of users and any interested stakeholders and their representative organisations. Codes of conduct may cover one or more AI systems taking into account the similarity of the intended purpose of the relevant systems.

Article 69 - AI Act

Conclusions

The AI Act attempts to balance the development of a new and disruptive technology with the fundamental rights and principles protected by existing EU legislation. That is not an easy task, as it involves some of the difficulties often faced when regulating a new technology, particularly how to regulate without stifling innovation. We shall see whether the AI Act will succeed in its mission, or whether it will rather result in yet another case of overregulation: ineffective in relation to its goals, and costly in terms of competitiveness.


About the author

Vincenzo Lalli


Founder of Avvocloud.net

Avvocloud is an Italian network of lawyers passionate about law, innovation and technology.
Feel free to reach out for any info: send a message.

Thanks for reading!

Creative Commons License