Garrigues Digital_

Legal innovation in Industry 4.0


Publication of the Artificial Intelligence Act: the countdown commences for its complete entry into force

This European regulation, a world first, includes not only control mechanisms but also measures to promote the sustainable development of these technologies.

The Official Journal of the European Union (OJ) has published Regulation (EU) 2024/1689 of the European Parliament and of the Council, of 13 June 2024, laying down harmonized rules on artificial intelligence (the Act). The Act is the first primary legislation on artificial intelligence anywhere in the world, and it will condition economic and social development for years to come.

The Act places the EU at the forefront of AI regulation (the other major initiative, the US Executive Order issued by President Joe Biden on October 30, 2023, is much less demanding as far as obligations are concerned and does not stem from a parliamentary source). However, the Act does not only provide control and regulatory mechanisms; it also, and this should be underscored, establishes promotional measures (such as regulatory sandboxes and measures to support the development of AI systems by SMEs) aimed at encouraging the socially sustainable development of these technologies.

A complex entry into force

The publication of the definitive text in the OJ not only settles the final version of the legal text, but also sets in motion the mechanism for its complete entry into force. The Act will not be generally applicable until 2 August 2026, 24 months after its entry into force (which takes place twenty days after publication), but some of its provisions will apply at different times, as sketched in code after the list below. It should be noted in this respect that:

  • The prohibition on certain AI-related practices shall apply from 2 February 2025.
  • The provisions regarding notified bodies, general-purpose AI models that pose systemic risks, the European AI governance system and a large part of the penalty regime shall apply from 2 August 2025. Consequently, the organizational base will be ready by the time the more substantial obligations become enforceable.
  • Finally, the rules on certain high-risk AI systems (those that are safety components of products, or are themselves products, and require a third-party conformity assessment in order to be placed on the market or put into service, e.g. machinery, toys, lifts or medical devices) shall apply from 2 August 2027.
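
By way of illustration only, this staggered timeline can be expressed as a simple lookup. The sketch below is ours, in Python; the group labels are our own shorthand for the provisions cited above, not terms used in the Act:

  from datetime import date

  # Simplified application timeline of Regulation (EU) 2024/1689.
  # The group labels are our own shorthand, not terms from the Act.
  APPLICATION_DATES = {
      "prohibited_practices": date(2025, 2, 2),
      "governance_gpai_and_penalties": date(2025, 8, 2),
      "general_application": date(2026, 8, 2),
      "high_risk_product_safety_systems": date(2027, 8, 2),
  }

  def provisions_applicable_on(day: date) -> list[str]:
      """Return the groups of provisions already applicable on a given date."""
      return [name for name, start in APPLICATION_DATES.items() if day >= start]

  print(provisions_applicable_on(date(2025, 6, 1)))  # ['prohibited_practices']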

Material scope

Regarding the material scope, the Act starts from a broad definition of artificial intelligence and leaves outside its scope of application only certain specific manifestations of those systems. The following systems mainly fall outside the scope of the Act: (i) systems that are placed on the market, put into service, or used, with or without modification, for military, defense or national security purposes (this exclusion extends to the use of the output of AI systems that are not placed on the market or put into service in the EU); and (ii) AI systems or models, including their output, specifically developed and put into service for the sole purpose of scientific research and development. The Act clarifies that it does not apply to any scientific research and development activity on AI systems or models prior to their being placed on the market or put into service, although this exclusion does not cover testing in real-world conditions.

Basic content of the Act: control and regulatory mechanisms

It is worth analyzing, albeit briefly, the definitive content of the Act, which varied, at times considerably, during its passage through the legislative process.

In the area of control and regulation, the AI Act classifies artificial intelligence systems according to the risk they may generate and their uses.

It establishes three levels of risk for systems and one specific level for general-purpose models.

1) Unacceptable risk: this category covers a very limited set of particularly harmful AI practices which are contrary to EU values because they breach fundamental rights, and which are therefore prohibited. These prohibited uses or practices (set out in article 5 of the Act) include, for example, social scoring for public and private purposes, the use of subliminal techniques, taking advantage of the vulnerabilities of individuals, biometric categorization of individuals, or the recognition of emotions of natural persons in the workplace and education institutions (except for medical or safety reasons).

2) High risk: a limited number of artificial intelligence systems defined in the Act are classified as high risk, and fall into two categories:

  • Products, or safety components of products, which according to Annex I must undergo a third-party conformity assessment procedure before being placed on the market or put into service (including machinery, toys, lifts, medical devices and motor vehicles, among others).
  • Several systems listed in Annex III, which include systems used by public authorities (for example, in the area of social policies or immigration) as well as in the private sector (such as life and health insurance, assessing the creditworthiness of natural persons, or the selection, promotion or dismissal of personnel), and which have, in general, a potentially adverse impact on the safety of persons or on their fundamental rights, as protected by the EU Charter of Fundamental Rights.

These systems are subject to intense regulation which affects their providers, but also, to varying degrees, the other actors in their value chain (authorized representatives, importers, distributors, deployers), so that none of them can avoid responsibility.

Firstly, they must undergo a conformity assessment against the compulsory requirements of trustworthy AI (data quality, documentation and traceability, transparency, human oversight, accuracy, cybersecurity and robustness, i.e. resilience against errors, faults and inconsistencies). This assessment can be carried out by the provider itself, if it applies the harmonized standards or specifications established by the EU, or by a notified body in accordance with the procedure set out in Annex VII. If the result of the assessment is positive, the conformity of the system with the Act is declared (an EU declaration of conformity, which the provider must submit to the supervisory authorities that request it) and the CE marking is affixed. High-risk systems are also entered in a public register (except when they are used by public authorities for law enforcement or migration purposes, in which case access to the register is restricted for obvious reasons).
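
For illustration, the assessment route just described can be sketched as a simple decision. This is a simplification of ours, not an implementation of the legal procedure; the names and boolean inputs are hypothetical:

  from dataclasses import dataclass

  @dataclass
  class HighRiskSystem:
      # True if the provider applies the EU harmonized standards or specifications
      follows_harmonized_standards: bool
      # Outcome of the conformity assessment
      assessment_passed: bool

  def conformity_route(system: HighRiskSystem) -> str:
      """Illustrative sketch of the conformity route for a high-risk AI system."""
      assessor = ("provider self-assessment"
                  if system.follows_harmonized_standards
                  else "notified body (Annex VII procedure)")
      if not system.assessment_passed:
          return assessor + ": requirements not met, no EU declaration of conformity"
      return assessor + ": EU declaration of conformity issued, CE marking affixed, system registered"

  print(conformity_route(HighRiskSystem(True, True)))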

Quality management and risk management systems must also be applied, even after the products have been placed on the market.

3) Low risk: all other AI systems may, in principle, be developed and used in accordance with the legislation, subject to a relatively light information regime and to respect for intellectual property rights, copyright and related rights (imposed by article 53). The vast majority of AI systems currently in use, or that will be used in the future, fall into this category. In these cases, providers may choose to adhere to voluntary codes of conduct, or demonstrate compliance with their transparency obligations and respect for intellectual property rights by other means, under the supervision of the European Commission.

As an exception, however, specific transparency obligations (article 50) are imposed on certain low-risk systems where there is a clear risk of confusing or manipulating users (for example, through the use of chatbots or deepfake techniques, where the resulting content would falsely appear to a person to be authentic or truthful). In these cases, the Act requires, for example, that users be made aware that they are interacting with a machine, or that the content to which they are exposed has been generated or manipulated artificially.

Finally, in relation to general-purpose AI models, the Act bears in mind the systemic risk that may arise from their use, including from large generative AI models. These models can be used for a wide variety of tasks and may pose systemic risks if they have high-impact capabilities or an equivalent impact (general-purpose AI models trained with a cumulative compute of more than 10^25 FLOPs, such as GPT-4 or Gemini, are presumed to involve systemic risks). Given that these powerful models could cause serious accidents or be misused for large-scale cyberattacks, the Act imposes additional obligations regarding risk assessment and mitigation, incident reporting and cybersecurity protection (article 55), and providers of these models may also rely on codes of practice to demonstrate compliance with these obligations.
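
The compute criterion lends itself to a direct check. A minimal sketch, assuming only the 10^25 FLOPs threshold stated above (the names are ours):

  # Presumption of systemic risk for general-purpose AI models:
  # cumulative training compute greater than 10^25 floating-point operations.
  SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

  def presumed_systemic_risk(training_compute_flops: float) -> bool:
      """True if the model is presumed to pose systemic risk under the compute criterion."""
      return training_compute_flops > SYSTEMIC_RISK_THRESHOLD_FLOPS

  print(presumed_systemic_risk(5e25))  # True: above the threshold
  print(presumed_systemic_risk(1e24))  # False: below the threshold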

Administrative control structure and penalty system

In order to ensure the efficacy of all these rules, the Act requires Member States to designate one or more competent authorities to supervise compliance with the obligations it imposes.

In Spain, the Spanish Agency for the Supervision of Artificial Intelligence (AESIA) has already been created, through Royal Decree 729/2023 of August 22, 2023, and will carry out the tasks of inspection, verification and imposition of penalties provided for in the Act. In short, it is the main national supervisory authority.

At the European level, the European AI Office, established by the Commission Decision of January 24, 2024 (OJ-Z-2024-70007), will be the supervisory body; it will have important functions, particularly in relation to the supervision of general-purpose AI models.

The powers the authorities will have to ensure the efficacy of the AI Act include the power to impose penalties. Fines for infringement of the AI Act are set as a percentage of the infringing company's total worldwide annual turnover in the preceding financial year or as a predetermined amount, whichever is higher; for the most serious infringements they may reach EUR 35 million or 7% of that turnover.
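
In arithmetic terms, the ceiling is simply the higher of the two amounts. A minimal sketch for the most serious infringements:

  def maximum_fine_eur(worldwide_annual_turnover_eur: float) -> float:
      """Ceiling of the fine for the most serious infringements:
      the higher of EUR 35 million and 7% of total worldwide annual turnover."""
      return max(35_000_000.0, 0.07 * worldwide_annual_turnover_eur)

  print(maximum_fine_eur(200_000_000))    # 35000000.0: the fixed amount is higher
  print(maximum_fine_eur(2_000_000_000))  # 140000000.0: 7% of turnover is higher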