The European Commission continues unpacking the Artificial Intelligence Act: definition of artificial intelligence system and the general-purpose AI code of practice

On February 6, the European Commission published guidelines to help the various operators in the artificial intelligence ecosystem determine whether they are dealing with an artificial intelligence system within the meaning of Regulation (EU) 2024/1689 on artificial intelligence. Additionally, on March 11, it published the third draft of the general-purpose AI code of practice. In this article, we unpack the key points of both documents.
On February 6, as provided for in article 96(1)(f) of the AI Act, the European Commission published guidelines on the definition of an artificial intelligence system, joining those published two days earlier on prohibited artificial intelligence practices, which we looked at in an earlier post.
These new guidelines aim to help the various operators identify whether they are dealing with an artificial intelligence system to which the AI Act applies.
The guidelines further explain the definition of “artificial intelligence system” contained in article 3(1) of the AI Act. As that article explains, and as the guidelines clarify further, to qualify as an artificial intelligence system, a system must meet all of the following requirements:
- Machine-based (hardware and software).
- Designed to operate with varying levels of autonomy, meaning that it has some degree of independence from human involvement and some capacity to operate without human intervention.
- It may exhibit adaptiveness after deployment, an element that is not an essential requirement to fall within the scope of the AI Act. In other words, a system may be classed as an artificial intelligence system for the purposes of the AI Act even if it does not have the capacity to adapt after deployment.
- It has explicit or implicit objectives, i.e., both clearly stated objectives that are directly encoded by the system developer (e.g., cost optimization in a function) and objectives that are not explicitly stated, but can be deduced from the underlying behavior or assumptions of the system.
- It must be able to infer from the input information it receives how to generate output results, meaning it does not merely execute operations automatically according to rules predefined by humans. The guidelines provide a few examples that would not meet this requirement and, therefore, would not be classed as an artificial intelligence system for the purposes of the AI Act: database management systems used to filter or sort data based on certain criteria, or systems intended for purely descriptive analysis such as software that uses statistical methods on survey data.
- The system outputs may include predictions, content, recommendations, or decisions, among others. The AI Act uses the expression "such as", indicating that this list is not exhaustive and other outputs may be obtained.
- The output may be able to influence physical or virtual environments. It is stated in the act itself that this influence is not essential for a system to qualify as an AI system.
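The inference criterion above is perhaps the most technical part of the test. As a purely illustrative sketch (our own hypothetical example, not taken from the guidelines), the contrast between a system that only executes human-predefined rules and one that infers its behavior from data can be shown with a toy message filter:

```python
from collections import Counter

# Rule-based filter: every operation follows a rule a human wrote down.
# Per the guidelines' logic, this kind of system would fall outside the
# AI Act's definition because nothing is inferred from the input.
def rule_based_spam_filter(message: str) -> bool:
    banned = {"lottery", "prize"}
    return any(word in message.lower() for word in banned)

# Inference-based filter: a minimal, Naive-Bayes-flavored classifier whose
# decision logic is derived from labeled examples rather than hand-coded.
# This is the kind of behavior the inference requirement points at.
def train_word_counts(examples):
    spam, ham = Counter(), Counter()
    for text, is_spam in examples:
        (spam if is_spam else ham).update(text.lower().split())
    return spam, ham

def learned_spam_filter(message, spam, ham):
    # Score each word by how much more often it appeared in spam than ham.
    score = sum(spam[w] - ham[w] for w in message.lower().split())
    return score > 0

examples = [("win a lottery prize now", True),
            ("meeting agenda for monday", False)]
spam, ham = train_word_counts(examples)
print(learned_spam_filter("lottery prize inside", spam, ham))  # True
```

Both filters produce the same kind of output, which illustrates why the guidelines focus on *how* the output is generated rather than on what the output looks like.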
While these guidelines have not yet been formally approved and are not binding, they can assist in interpreting a few of the many undefined terms contained in article 3 of the AI Act. This is particularly useful given that a large part of the business community is working against the clock to review and classify the artificial intelligence systems they use, develop or place on the market.
General-purpose AI
The other document published by the European Commission that we are looking at is the third draft of the General-Purpose AI Code of Practice, prepared by independent experts. This document details the obligations under the AI Act, giving providers clear guidelines to ensure compliance and encouraging the development of safe and reliable AI models.
This third draft features a simpler, more precise structure than previous versions. It focuses on a number of high-level commitments, accompanied by detailed measures for their effective implementation. Its key elements include:
- Transparency and copyright: All providers of general-purpose AI models must comply with specific obligations regarding transparency and copyright. To facilitate this process, a model documentation form has been included that allows the required information to be collected and presented consistently and accessibly.
- Systemic risk assessment and mitigation: For providers of AI models that may pose systemic risks (as defined in the AI Act), the code sets out specific measures. These include conducting thorough model evaluations, implementing risk mitigation strategies, mandatory reporting of serious incidents, and complying with strict cybersecurity standards.
The creation of this code was a collaborative effort, coordinated by the European AI Office, with the active participation of close to 1,000 stakeholders, including AI model providers, intermediaries, industry representatives, civil society organizations, academics and independent experts. This diversity ensures that the code reflects a wide range of perspectives and expertise.
For legal professionals specializing in the digital world, this code is an essential tool. It provides a detailed framework on responsibilities and best practices for providers of general-purpose AI models, facilitating interpretation and application of the AI Act. Additionally, it promotes the adoption of practices that balance technological innovation with the protection of users' fundamental rights and safety.
The final code is expected to be ready in May 2025, giving providers clear guidance for evidencing compliance with the AI Act before the obligations for general-purpose AI models become applicable in August 2025. The final version will include contributions received during this last consultation phase, to ensure that the guidelines are practical and adapted to the needs of the sector.
Next steps
The legal and technical complexity of this new regulation requires an analysis and adaptation effort from everyone involved in technology development and regulatory compliance. This is shown by the fact that the European Commission itself is publishing materials to assist with its interpretation and application.