On March 13, 2024, the European Parliament adopted the Artificial Intelligence Act (AI Act), widely considered the world’s first comprehensive horizontal legal framework for AI. It establishes EU-wide rules on data quality, transparency, human oversight and accountability. With demanding requirements, significant extraterritorial reach, and fines of up to 35 million euros or 7% of global annual revenue (whichever is higher), the AI Act will have a profound impact on many companies doing business in the European Union.
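To make the ceiling concrete, here is a minimal sketch of the “whichever is higher” mechanism; the turnover figure used is purely hypothetical.

```python
# Minimal sketch of the AI Act's maximum fine for the most serious violations:
# the HIGHER of EUR 35 million or 7% of worldwide annual turnover.
# The turnover figure below is hypothetical.

def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Upper bound of the administrative fine for the most serious violations."""
    return max(35_000_000.0, 0.07 * worldwide_annual_turnover_eur)

# A company with EUR 2 billion in global annual revenue faces a cap of EUR 140 million.
print(f"EUR {max_fine_eur(2_000_000_000):,.0f}")  # EUR 140,000,000
```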
The text adopted by the European Parliament is available here.
Background
The European Commission had issued its proposal for an AI Act in April 2021. Subsequent negotiations with the European Parliament and the Council of the European Union eventually resulted in a political agreement in December 2023 (see our analysis here). With the vote of the European Parliament, the legislative process is almost complete. The AI Act will enter into force 20 days after publication in the Official Journal (expected in May or June 2024). Most of its provisions will become applicable two years after the AI Act’s entry into force. However, provisions related to prohibited AI systems will apply after six months, while the provisions regarding generative AI will apply after 12 months.
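As a rough illustration of how these periods stack, the sketch below computes the key dates from a hypothetical publication date (the actual date in the Official Journal was still pending at the time of writing).

```python
# Sketch of the staggered application timeline. The publication date below is
# a hypothetical placeholder; entry into force is 20 days after publication in
# the Official Journal, and the application periods run from that date.
from datetime import date, timedelta

def add_months(d: date, months: int) -> date:
    # Simple month arithmetic; assumes the day number exists in the target month.
    month_index = d.month - 1 + months
    return d.replace(year=d.year + month_index // 12, month=month_index % 12 + 1)

publication = date(2024, 6, 1)                       # hypothetical
entry_into_force = publication + timedelta(days=20)  # 2024-06-21

print("Prohibited-AI provisions apply:", add_months(entry_into_force, 6))        # 2024-12-21
print("General-purpose AI provisions apply:", add_months(entry_into_force, 12))  # 2025-06-21
print("Most provisions apply:", add_months(entry_into_force, 24))                # 2026-06-21
```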
While some of these timelines may appear generous, several categories of affected actors may face significant redesigns of their products and services, which should be initiated as soon as possible. The same applies to companies outside the AI sector, which will need to understand the technology and set their own risk thresholds in order to navigate compliance effectively.
What Is AI?
The European Commission’s proposed definition of AI was criticized for being too broad. The final definition is inspired by the OECD definition, which is widely accepted. It focuses on two key characteristics of AI systems: (1) they operate with varying levels of autonomy and (2) they infer from the input they receive how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments.
Article 3(1): “AI system” means a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
Recital 12 of the AI Act provides additional background regarding the intentions of the legislators:
[...] it should be based on key characteristics of AI systems that distinguish it from simpler traditional software systems or programming approaches and should not cover systems that are based on the rules defined solely by natural persons to automatically execute operations. A key characteristic of AI systems is their capability to infer. This capability to infer refers to the process of obtaining the outputs, such as predictions, content, recommendations, or decisions, which can influence physical and virtual environments, and to a capability of AI systems to derive models or algorithms from inputs or data. The techniques that enable inference while building an AI system include machine learning approaches that learn from data how to achieve certain objectives, and logic- and knowledge-based approaches that infer from encoded knowledge or symbolic representation of the task to be solved. The capacity of an AI system to infer transcends basic data processing by enabling learning, reasoning or modelling. The term “machine-based” refers to the fact that AI systems run on machines.
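To illustrate the boundary Recital 12 draws (as a programming analogy, not a legal test), compare a system whose rules are defined solely by a person with one that derives its decision rule from data:

```python
# Illustrative sketch of Recital 12's distinction; not a legal test.

# Outside the definition: every rule is fixed in advance by a natural person.
def rule_based_filter(message: str) -> bool:
    banned_phrases = {"free money", "click here"}
    return any(phrase in message.lower() for phrase in banned_phrases)

# Closer to the definition: the system infers its decision rule (a "model")
# from input data rather than executing human-authored rules.
def fit_threshold(samples: list[tuple[float, bool]]) -> float:
    """Derive a score threshold separating labelled spam/ham examples."""
    spam = [score for score, is_spam in samples if is_spam]
    ham = [score for score, is_spam in samples if not is_spam]
    return (min(spam) + max(ham)) / 2  # simple midpoint heuristic

training_data = [(0.9, True), (0.8, True), (0.3, False), (0.2, False)]
print(f"Threshold inferred from data: {fit_threshold(training_data):.2f}")  # 0.55
```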
Who Is Concerned?
The AI Act applies to providers of AI systems, i.e., companies that develop AI systems with a view to placing them on the market or putting them into service under their own name or trademark, whether for payment or free of charge. The AI Act also applies to importers and distributors of AI systems in the European Union.
Importantly, the AI Act also applies to “deployers”, which are defined as natural or legal persons using AI under their authority in the course of their professional activities.
Where Does the AI Act Apply?
The AI Act has significant extraterritorial effect: it applies to providers who place AI systems on the EU market or put them into service in the European Union, irrespective of where those providers are established or located. It also applies to providers and deployers established or located outside the European Union where the output of the system is used in the European Union. For deployers, importers and affected individuals, the AI Act applies only where they are located in the European Union; the Act is less clear and precise with respect to distributors.
The AI Act does not apply to AI specifically developed and put into service for the sole purpose of scientific research and development. The AI Act does not apply to any research, testing and development activity regarding AI before being placed on the market or put into service — but this exemption does not apply to real-world testing. In addition, the AI Act does not apply to systems released under free and open-source licenses, unless such systems qualify as high-risk, prohibited or generative AI.
What Is the EU Approach to AI Regulation?
The AI Act relies on a risk-based approach: different requirements apply depending on the level of risk. The four tiers are summarized below, followed by an illustrative sketch.
- Unacceptable risk. Certain AI practices are considered a clear threat to fundamental rights and are prohibited. The list in the AI Act includes AI systems that manipulate human behavior or exploit individuals’ vulnerabilities (e.g., age or disability) with the objective or effect of distorting their behavior. Other examples of prohibited AI include certain biometric systems, such as emotion recognition systems in the workplace or the biometric categorization of individuals based on sensitive characteristics.
- High risk. AI systems identified as high-risk must comply with strict requirements, including risk-mitigation systems, high-quality data sets, activity logging, detailed documentation, clear user information, human oversight, and a high level of robustness, accuracy and cybersecurity. Examples of high-risk AI systems include AI used in critical infrastructures, such as energy and transport; in medical devices; and in systems that determine access to educational institutions or jobs.
- Limited risk. Providers must ensure that AI systems intended to interact directly with natural persons, such as chatbots, are designed and developed so that individuals are informed that they are interacting with an AI system. Similarly, deployers of AI systems that generate or manipulate deepfake content must disclose that the content has been artificially generated or manipulated.
- Minimal risk. There are no restrictions on minimal-risk AI systems, such as AI-enabled video games or spam filters. Companies may, however, commit to voluntary codes of conduct.
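As a simplified illustration of this tiered structure, the sketch below maps example systems drawn from the descriptions above onto the four tiers; it is a triage aid, not a legal classification.

```python
# Simplified sketch of the AI Act's four risk tiers as a compliance triage aid.
# The example mappings below are illustrative, not a legal classification.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict requirements (risk management, data quality, oversight, ...)"
    LIMITED = "transparency obligations (disclose AI interaction, deepfakes)"
    MINIMAL = "no restrictions; voluntary codes of conduct"

# Examples echoing those given in the text above.
EXAMPLES = {
    "emotion recognition system in the workplace": RiskTier.UNACCEPTABLE,
    "system determining access to jobs": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for system, tier in EXAMPLES.items():
    print(f"{system}: {tier.name} ({tier.value})")
```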
General-Purpose AI Models/Generative AI
In the course of the negotiations, a chapter on general-purpose AI models was added to the AI Act. The legislation differentiates between “general-purpose AI models” and a subcategory of “general-purpose AI models with systemic risk”, the latter covering models with high-impact capabilities. A model is presumed to have high-impact capabilities when the cumulative amount of computation used for its training exceeds 10^25 floating-point operations.
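This compute trigger can be sketched numerically. The 10^25 FLOP threshold is in the Act itself; the 6 x parameters x tokens estimate below is a common engineering heuristic, not a statutory method.

```python
# Sketch of the systemic-risk presumption: cumulative training compute
# greater than 1e25 floating-point operations. The 6 * parameters * tokens
# estimate is a common engineering heuristic, NOT prescribed by the AI Act.

SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Rough training-compute estimate (heuristic assumption)."""
    return 6.0 * parameters * training_tokens

def presumed_high_impact(parameters: float, training_tokens: float) -> bool:
    return estimated_training_flops(parameters, training_tokens) > SYSTEMIC_RISK_FLOP_THRESHOLD

# Hypothetical model: 500 billion parameters trained on 10 trillion tokens.
print(presumed_high_impact(500e9, 10e12))  # 3e25 FLOPs -> True
```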
Relationship With the GDPR
EU law on the protection of personal data, privacy and the confidentiality of communications will apply to the processing of personal data in connection with the AI Act. The AI Act does not affect the GDPR (Regulation 2016/679) or the ePrivacy Directive 2002/58/EC (see Article 2(7)).
For more information on this or other AI matters, please contact one of the authors. The authors would like to thank Antonio Marzano for his assistance in preparing this alert.