Artificial intelligence in the EU: the difficult balance between legal certainty and technological progress
Alejandro Sánchez del Campo, of counsel in the area of Startups & Open Innovation at Garrigues.
The EU has set itself the target of being the global leader in innovation and the data economy, building ecosystems based on excellence and trustworthiness. In this context, the most recent significant step has been the publication of the white paper on artificial intelligence (AI), announced by the President of the European Commission shortly after she took office.
The second part of the document addresses the regulatory measures needed to build that trust. According to the paper, artificial intelligence offers numerous advantages, but it also poses two types of risk that need to be borne in mind:
- On the one hand, risks to fundamental rights: AI systems may embed biases that lead to discrimination which is difficult to detect, or may be used for mass surveillance. The European Commission explains in the white paper that specific characteristics of these technologies, such as opacity (the "black box" effect), complexity, unpredictability and partially autonomous behavior, may make it hard to verify compliance with existing legislation. Authorities and affected persons might lack the means to verify how a decision was taken and may therefore face difficulties in securing effective access to justice.
- On the other hand, risks to safety and to the effective functioning of the liability regime. Market surveillance and enforcement authorities may be unclear as to whether they can intervene, because they may not be empowered to act where the risk does not relate to the product itself, and/or they may lack the technical capabilities needed to inspect the systems.
The view taken in the white paper is that the existing body of product safety and liability legislation (principally Directive 85/374/EEC on liability for defective products) is, in principle, sufficient to cover a large part of the legal questions that arise, although the Commission considers that it could be adjusted to address certain specific situations, such as the following:
- European safety legislation generally applies to products, not to services based on AI.
- Software integrated into a product during its useful life may change how the product works. The risks this poses are not adequately covered, because the legislation essentially addresses the risks existing when a product is placed on the market. That legislation makes the manufacturer liable for the product, but it is not clear who is responsible if AI software is added later by a different entity.
- Other risks, such as cyberattacks or loss of connectivity, are not expressly covered by the legislation.
- There is a risk of fragmentation of the digital single market if Member States adopt their own national legislation to minimize these risks.
The liability and safety of disruptive technologies is a matter of great concern to the Commission. Indeed, alongside the white paper, Brussels published a Report on the safety and liability implications of artificial intelligence, the Internet of Things and robotics, which supplements a similar report from December 2019 on liability for artificial intelligence and other emerging digital technologies. These reports also propose certain adjustments to existing regulations.
In order to establish the scope of future legislation, it is essential to clarify what is meant by artificial intelligence. The white paper uses the definition that the expert group included in its recent document Ethics guidelines for trustworthy AI:
“Artificial intelligence (AI) systems are software (and possibly also hardware) systems designed by humans that, given a complex goal, act in the physical or digital dimension by perceiving their environment through data acquisition, interpreting the collected structured or unstructured data, reasoning on the knowledge, or processing the information, derived from this data and deciding the best action(s) to take to achieve the given goal. AI systems can either use symbolic rules or learn a numeric model, and they can also adapt their behavior by analyzing how the environment is affected by their previous actions. As a scientific discipline, AI includes several approaches and techniques, such as machine learning (of which deep learning and reinforcement learning are specific examples), machine reasoning (which includes planning, scheduling, knowledge representation and reasoning, search, and optimization), and robotics (which includes control, perception, sensors and actuators, as well as the integration of all other techniques into cyber-physical systems).”
An important point is the principle that the new regulatory framework must be effective without creating a disproportionate burden, especially for SMEs. To strike this balance and ensure that regulatory intervention is proportionate, the Commission takes the view that it should follow a risk-based approach. According to the white paper, whether a given AI application is considered high-risk should generally depend on the sector involved - the paper specifically mentions health, transport, energy and the public sector - and on whether the intended use involves significant risks, in particular from the viewpoint of the protection of safety, consumer rights and fundamental rights.
In principle, both criteria must be met (high-risk sector and significant risk in the intended use) for the requirements mentioned below to apply, but the Commission also gives examples of exceptional situations that may be considered high-risk in themselves, such as the use of AI in recruitment processes or for remote biometric identification. Indeed, in relation to facial recognition in public places, the Commission refers to the General Data Protection Regulation and the Charter of Fundamental Rights when stating that AI can only be used for identification purposes where such use is justified, proportionate and subject to adequate safeguards.
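Purely by way of illustration, the cumulative logic of this test, together with the exceptional uses, could be sketched as follows (a minimal sketch in Python; the function and variable names are hypothetical, and the sector and use lists simply reproduce the examples cited in the white paper):

```python
# Illustrative sketch only: a hypothetical encoding of the white paper's
# cumulative two-criteria test for "high-risk" AI applications, plus the
# exceptional uses treated as high-risk regardless of sector. None of these
# names or lists come from the Commission; the examples merely mirror those
# cited in the white paper.

HIGH_RISK_SECTORS = {"health", "transport", "energy", "public sector"}
EXCEPTIONAL_USES = {"recruitment", "remote biometric identification"}

def is_high_risk(sector: str, significant_risk_in_use: bool, use: str) -> bool:
    """Cumulative test: high-risk sector AND significant risk in the
    intended use; exceptional uses qualify on their own."""
    cumulative = sector in HIGH_RISK_SECTORS and significant_risk_in_use
    return cumulative or use in EXCEPTIONAL_USES

print(is_high_risk("health", True, "diagnostic triage"))  # True: both criteria met
print(is_high_risk("retail", False, "recruitment"))       # True: exceptional use
print(is_high_risk("retail", False, "product chatbot"))   # False
```

The point the sketch makes is simply that, under the Commission's approach, neither a sensitive sector alone nor a risky use alone would generally suffice, except for the uses singled out as high-risk in themselves.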
AI applications considered high-risk would have to meet requirements relating to the data used to feed the algorithms, the keeping of data and records, the information to be provided to users of these systems (including the fact that they are interacting with an AI system and not with a human), technical robustness, and human oversight, meaning the possibility, at all times, of a human controlling and even deactivating the system.
As regards the parties to whom the legal requirements would apply, the Commission considers that each obligation should fall on the actor best placed to meet it, depending on their degree of control over the technology, and that the requirements must be met by all economic operators providing AI-enabled products or services in the EU, regardless of whether or not they are established in the EU.
The white paper ends with another relevant proposal: the Commission appears to favor a certification system, since it states that a prior, objective conformity assessment is necessary to ensure that high-risk applications comply with the above requirements.
It remains to be seen how these proposals will be implemented. For the time being, the document is open for comments until May 19, 2020.