This blog post was republished by the Global Regulatory Developments Journal.
In this blog post, we will focus on the identification of “high-risk AI systems” under the Artificial Intelligence Act (“AI Act”) and the requirements applying to such systems.
As explained in our previous blog posts, the AI Act’s overall risk-based approach means that, depending on the level of risk, different requirements apply. In total, there are four levels of risk:
(1) unacceptable, in which case AI systems are prohibited (see our blog post);
(2) high risk, in which case AI systems are subject to extensive requirements, including transparency obligations;
(3) limited risk, which triggers only transparency requirements (see our blog post); and
(4) minimal risk, which does not trigger any obligations.
Identifying High-Risk AI Systems
Article 6 of the AI Act sets out the two routes by which an AI system qualifies as “high risk”: either the system meets the criteria in Article 6(1) of the AI Act, or it falls into a category referred to in Article 6(2) of the AI Act.
Article 6(1) of the AI Act. An AI system will be considered high risk if two cumulative conditions are fulfilled:
1. The AI system is intended to be used as a safety component of a product (or is a product) covered by specific EU harmonization legislation listed in Annex I of the AI Act. This list contains more than 30 directives and regulations, including legislation regarding the safety of toys, vehicles, civil aviation, lifts, radio equipment and medical devices; and
2. The same harmonization legislation mandates that the product that incorporates the AI system as a safety component, or the AI system itself as a stand-alone product, undergo a third-party conformity assessment before being placed on the EU market or put into service within the EU.
Article 6(2) of the AI Act—Specific List. In addition, the AI Act contains, in its Annex III, a list of AI systems that must be considered high risk. This list currently covers eight categories of AI systems. Examples include, subject to specific conditions and exemptions, biometrics; critical infrastructure; education and vocational training; and employment, workers’ management and access to self-employment. The European Commission (Commission) has the power to amend this list.
The AI systems identified in Annex III will not be considered high risk if they do not pose a significant risk of harm to individuals’ health, safety or fundamental rights, including by not materially influencing the outcome of decision-making. This exemption applies where one of the following conditions is met:
- the AI system is intended to perform a narrow procedural task;
- the AI system is intended to improve the result of a previously completed human activity;
- the AI system is intended to detect decision-making patterns or deviations from prior decision-making patterns and is not meant to replace or influence the previously completed human assessment, without proper human review; or
- the AI system is intended to perform a preparatory task to an assessment relevant for the purposes of the use cases that are listed as high risk.
However, the exemption never applies if the AI system performs profiling of natural persons. Profiling is defined by reference to Article 4(4) of the General Data Protection Regulation as any form of automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to a natural person, in particular to analyze or predict aspects concerning that natural person’s performance at work, economic situation, health, personal preferences, interests, reliability, behavior, location or movements.
If a provider considers that an AI system benefits from the exemption, it must document its assessment before placing that system on the EU market or putting it into service in the EU. The provider must also register the system in an EU database for high-risk AI systems set up and maintained by the Commission.
The Commission will provide guidelines no later than 18 months from the date of entry into force of the AI Act to specify the practical implementation of classification rules for high-risk AI systems, including the conditions for exceptions.
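To illustrate how these classification rules fit together, the decision logic described above can be sketched in simplified code. The sketch below is purely illustrative: the boolean flags and names are our own shorthand for the facts a provider would need to assess, not concepts defined in the AI Act, and the single `exemption_condition_met` flag stands in for the full assessment of whether the system poses no significant risk and one of the four exemption conditions applies.

```python
from dataclasses import dataclass

@dataclass
class ClassificationFacts:
    """Illustrative shorthand for the facts relevant to the Article 6 analysis."""
    safety_component_of_annex_i_product: bool  # condition 1 under Article 6(1)
    third_party_conformity_assessment: bool    # condition 2 under Article 6(1)
    listed_in_annex_iii: bool                  # falls within an Annex III category
    performs_profiling: bool                   # profiling of natural persons (GDPR Art. 4(4))
    exemption_condition_met: bool              # no significant risk and one exemption condition applies

def is_high_risk(facts: ClassificationFacts) -> bool:
    # Route 1 - Article 6(1): both conditions must be fulfilled cumulatively.
    if facts.safety_component_of_annex_i_product and facts.third_party_conformity_assessment:
        return True
    # Route 2 - Article 6(2) / Annex III listing.
    if facts.listed_in_annex_iii:
        # Profiling of natural persons means the exemption never applies.
        if facts.performs_profiling:
            return True
        # Otherwise the system escapes classification only if the exemption applies
        # (and the provider documents its assessment and registers the system).
        return not facts.exemption_condition_met
    return False
```

In practice, of course, each of these flags stands in for a substantive legal assessment rather than a simple yes/no check.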
Requirements for High-Risk AI Systems
High-risk AI systems must comply with a significant number of requirements, taking into account their intended purpose, the generally acknowledged state of the art and the risk management system put in place. The applicable requirements are as follows:
- Risk management system. High-risk AI systems require a risk management system running throughout the entire life cycle of the system. The objective is to identify foreseeable risks to health, safety or fundamental rights when the system is used in accordance with its intended purpose and to adopt appropriate and targeted measures to address those risks; to estimate and evaluate the risks that may emerge when the system is used in accordance with its intended purpose, and under conditions of reasonably foreseeable misuse; and to evaluate other risks possibly arising based on a post-market monitoring analysis. Importantly, the risk management system concerns only risks that may be reasonably mitigated or eliminated through the development or design of the high-risk AI system or the provision of adequate technical information.
- Data and data governance. The training, validation and testing data used to develop high-risk AI systems must be subject to data governance and management practices appropriate for the intended purpose of the system.
- Examples include relevant design choices; appropriate data collection processes; relevant data preparation processing operations, such as annotation, labeling, cleaning, updating, enrichment and aggregation; the formulation of relevant assumptions; prior assessment of the availability, quantity and suitability of the datasets needed; examination in view of possible biases likely to affect individuals’ health and safety, negatively impact fundamental rights, or lead to discrimination prohibited under EU law; appropriate measures to detect, prevent and mitigate those biases; and identification of relevant data gaps or shortcomings that prevent compliance, and how they can be addressed.
- Training, validation and testing datasets must be relevant, sufficiently representative, and to the best extent possible free of errors and complete in view of the intended purpose. They must have the appropriate statistical properties, including, where applicable, as regards the persons or groups of persons in relation to whom the high-risk AI system is intended to be used. Those characteristics of the datasets may be met at the level of individual datasets or at the level of combinations of datasets. In addition, datasets must consider, to the extent required by the intended purpose, the characteristics or elements that are particular to the specific geographical, contextual, behavioral or functional setting within which the AI system is intended to be used.
- For AI systems that are not developed based on AI model training, those requirements apply only to the testing datasets.
- Technical documentation. Technical documentation for high-risk AI systems must be drawn up before the system is placed on the EU market or put into service in the EU. Such documentation must demonstrate that the system complies with the requirements set out in the AI Act.
- The AI Act provides a list of the minimum information that the technical documentation must include, such as a description of the system, its elements and the process for its development; information about the monitoring, functioning and control of the system; a description of the appropriateness of the performance metrics for the system; a description of the risk management system; the relevant changes made by the provider throughout the life cycle of the system; the technical standards applied; the declaration of conformity; and a description of the system in place to evaluate the AI system’s performance.
- SMEs, including startups, may provide the elements of the technical documentation in a simplified manner. The Commission will publish a simplified form to that end.
- Recordkeeping. High-risk AI systems must allow for the automatic recording of events (logs) over their lifetime, so as to ensure a level of traceability of the system’s functioning that is appropriate to its intended purpose (see the illustrative logging sketch at the end of this post). To that end, logging capabilities must enable the recording of events relevant for: identifying situations that may result in a substantial modification of the system, or that may adversely affect individuals’ health, safety or fundamental rights to a degree that goes beyond what is considered reasonable and acceptable in relation to the system’s intended purpose or under normal or reasonably foreseeable conditions of use; facilitating post-market monitoring; and monitoring the operation of high-risk AI systems deployed by financial institutions.
- Transparency and provision of information to deployers. Deployers must be provided with sufficiently transparent information to interpret the system’s output and use it appropriately. The system must be accompanied by instructions for use in an appropriate format that include concise, correct and clear information that is relevant, accessible and comprehensible. The instructions for use must contain at least the following information: the provider’s identity and contact details; the system’s characteristics, capabilities and limitations of performance; changes to the system and its performance; human oversight measures; the computational and hardware resources needed; and, where relevant, the mechanisms included within the system that allow deployers to properly collect, store and interpret the logs.
- Human oversight. High-risk AI systems must be designed and developed in such a way that they can be effectively overseen by humans. Human oversight must aim at preventing or minimizing the risks to health, safety or fundamental rights that may emerge when a high-risk AI system is used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse. The oversight measures must be commensurate to the risks, level of autonomy and context of use.
- Human oversight must be achieved through at least one of the following types of measures:
- measures identified and built, when technically feasible, into the system by the provider before it is placed on the EU market or put into service in the EU; or
- measures identified by the provider before placing the system on the market or putting it into service in the European Union and that are appropriate to be implemented by the deployer.
- Individuals to whom oversight is assigned must be able, as appropriate and proportionate to the circumstances, to:
- properly understand the relevant capacities and limitations of the system and monitor its operations, including in view of detecting and addressing anomalies, dysfunctions and unexpected performance;
- remain aware of the possible tendency to automatically rely or over-rely on the output produced by the system (automation bias);
- correctly interpret the system’s output;
- decide not to use the system or otherwise disregard, override or reverse the system’s output; and
- intervene in the operation of the system or interrupt it through a “stop” button or a similar procedure that allows the system to come to a halt in a safe state.
- Accuracy, robustness and cybersecurity. High-risk AI systems must be designed and developed in such a way that they achieve an appropriate level of accuracy, robustness and cybersecurity, and perform consistently in those respects throughout their life cycle. The Commission will encourage the development of benchmarks and measurement methodologies to that effect.
- The levels of accuracy and the relevant accuracy metrics must be declared in the instructions for use.
- High-risk AI systems must be as resilient as possible against errors, faults or inconsistencies that may occur within the system or the environment in which they operate.
- High-risk AI systems that continue to learn after being placed on the market or put into service must be developed in such a way as to eliminate or reduce as far as possible the risk of possibly biased outputs influencing input for future operations, and as to ensure that any such feedback loops are duly addressed with appropriate mitigation measures.
- High-risk AI systems must be resilient against attempts by unauthorized third parties to alter their use, outputs or performance by exploiting system vulnerabilities, and the technical solutions aiming to ensure the cybersecurity of high-risk AI systems must be appropriate to the risks and circumstances.
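Finally, to make the recordkeeping requirement discussed above more concrete, the following sketch shows one way a provider might implement automatic, structured event logging. It is an assumption-laden illustration only: the event fields, names and example threshold are our own and are not prescribed by the AI Act.

```python
import json
import logging
from datetime import datetime, timezone

# Hypothetical structured event logger; all field names are illustrative only.
logger = logging.getLogger("ai_system.events")
logging.basicConfig(level=logging.INFO)

def record_event(event_type: str, details: dict) -> None:
    """Append a timestamped, structured record to support traceability of the system's functioning."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event_type": event_type,  # e.g. "inference", "model_update", "anomaly"
        "details": details,
    }
    logger.info(json.dumps(record))

# Example: recording an output whose confidence falls below an internal threshold,
# the kind of event that post-market monitoring may later need to reconstruct.
record_event("low_confidence_output", {"input_id": "12345", "confidence": 0.41})
```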