Navigating Generative AI Under the European Union’s Artificial Intelligence Act

WilmerHale Privacy and Cybersecurity Law Blog

This blog post focuses on how the EU’s Artificial Intelligence Act (“AI Act”) regulates generative AI, which the AI Act refers to as General-Purpose AI (“GPAI”) Models.

As explained in our previous blog post, the AI Act generally relies on a risk-based approach. This means that different requirements apply depending on the level of risk. GPAI models, however, are a separate category and are subject to specific requirements. These requirements were not part of the European Commission (“Commission”) proposal in April 2021 but were inserted in the course of the legislative process due to the popularity of generative AI tools since 2022.

The obligations for providers of GPAI models will apply from August 2, 2025. Transparency obligations for AI-generated content, deepfakes, and AI-generated or manipulated text will apply from August 2, 2026.

What Is a GPAI Model?

Most of the provisions of the AI Act deal with “AI systems.” AI models are components of AI systems: they are the engines that drive an AI system’s functionality. AI models require the addition of further components, such as a user interface, to become AI systems.

While the AI Act generally does not subject AI models to legal obligations, it defines “GPAI model” as an AI model that (1) displays significant generality; (2) is capable of competently performing a wide range of tasks; and (3) can be integrated into a variety of downstream systems or applications.

AI models used for research, development, or prototyping activities before market release are not covered under the AI Act.

Obligations of “Providers” of GPAI Models

The AI Act imposes several obligations on providers of GPAI models, i.e., companies that develop such models with a view to placing them on the EU market, or putting them into service in the EU, under their own name or trademark, whether for payment or free of charge.

Providers of GPAI models are required to comply with the following obligations.

  • Technical Documentation for Authorities. Providers must draft and keep up to date the technical documentation of the model, including its training and testing process and the results of its evaluation. Providers must share this information with the Commission’s AI Office and the national competent authorities upon request.
    • General Description. The technical documentation must include a general description of the GPAI model, including the tasks that the model is intended to perform and the type and nature of AI systems in which it can be integrated; the acceptable use policies; the date of release and methods of distribution; the architecture and number of parameters; the modality (e.g., text, image) and format of inputs and outputs; and the license.
    • Specific Description. The technical documentation must also include a detailed description of the elements of the GPAI model and relevant information on the process for its development, including the technical means required for the GPAI model to be integrated in AI systems; the design specifications of the model and training process; information on the data used for training, testing, and validation; the computation resources used to train the model; and known or estimated energy consumption by the model.
    • Changes and Specifications. The Commission may amend and specify the information that needs to be provided in the technical documentation.
  • Documentation for Downstream Providers of AI Systems. Providers must draft, keep up to date, and make available to downstream providers information and documentation on the capabilities and limitations of the AI model. Such information must be broadly similar to the information mentioned above. Deployers, i.e., companies that use AI under their authority in the course of their professional activities, must take appropriate technical and organizational measures to ensure they use high-risk AI systems that integrate GPAI models in accordance with the downstream provider’s instructions for use (see our previous blog posts here and here).
  • Copyright. Providers must establish a policy to comply with EU law on copyright and related rights, including the EU’s Copyright Directive.
  • Information About Content Used for Training Purposes. Providers must draft and publish a sufficiently detailed summary about the content used for training their AI model, according to a template provided by the AI Office.
  • Cooperation. Providers must cooperate as necessary with the Commission and the national competent authorities.
  • EU Representative. A provider must appoint a representative within the EU if it does not have an establishment there.
    • The representative must be appointed by written mandate before placing the GPAI model on the EU market.
    • The representative will manage the technical documentation relevant to its AI model and provide the AI Office and national competent authorities, upon a reasoned request, with all the information and documentation necessary to demonstrate the provider’s compliance with its obligations. The representative can also be addressed, in addition to or instead of the provider, by the AI Office or the national competent authorities, on all issues related to ensuring compliance with the AI Act.

Obligations of Providers of “Free and Open-License GPAI Models”

Providers of free and open-license GPAI models only have to comply with the copyright and training-data transparency requirements mentioned above. This exception does not apply if the GPAI model presents a systemic risk (see below).

Obligations for Providers of “GPAI Models with Systemic Risk”

A GPAI model presents systemic risk if the provider or the Commission determines that it has high-impact capabilities, i.e., capabilities that have a significant impact on the EU market due to the model’s reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or society as a whole, and that can be propagated at scale across the value chain.

  • A GPAI model is presumed to have high-impact capabilities where the cumulative amount of computation used for its training is greater than 10^25 floating-point operations (FLOPs); a back-of-the-envelope illustration of this threshold is sketched after this list. The Commission may amend this threshold, and may supplement it with benchmarks and indicators, to reflect the state of the art. Providers must notify the Commission without delay, and in any event within two weeks, once this threshold is met or it becomes known that it will be met. Providers may present arguments that, despite reaching the AI Act’s threshold, their models do not present systemic risks due to their specific characteristics.
  • The Commission may also determine that a GPAI model has high-impact capabilities, taking into account various criteria, including the number of parameters of the model; the quality or size of the data set; the amount of computation used for training the model; the input and output modalities of the model; the benchmarks and evaluations of the model’s capabilities; whether the model has a high impact on the EU internal market due to its reach; and the number of registered end users. The Commission may amend these criteria.
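
The AI Act itself does not prescribe how training compute is to be counted. As a purely illustrative sketch, the snippet below applies the widely cited rule of thumb of roughly 6 FLOPs per parameter per training token for dense transformer models; the heuristic, the function names, and the example figures are assumptions for illustration, not the Act’s methodology.

```python
# Purely illustrative: the AI Act sets a 10^25 FLOPs presumption threshold but
# does not prescribe a method for counting training compute. A widely cited
# rule of thumb for dense transformers estimates training compute as roughly
# 6 * parameters * training tokens.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # AI Act presumption threshold

def estimated_training_flops(num_parameters: float, num_tokens: float) -> float:
    """Back-of-the-envelope training-compute estimate (6ND heuristic)."""
    return 6.0 * num_parameters * num_tokens

def presumed_high_impact(num_parameters: float, num_tokens: float) -> bool:
    """True if the rough estimate exceeds the AI Act's presumption threshold."""
    return estimated_training_flops(num_parameters, num_tokens) > SYSTEMIC_RISK_THRESHOLD_FLOPS

# Hypothetical model: 70 billion parameters trained on 15 trillion tokens.
flops = estimated_training_flops(70e9, 15e12)  # ~6.3e24 FLOPs
print(f"{flops:.2e} FLOPs -> presumed systemic risk: {presumed_high_impact(70e9, 15e12)}")
```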

In addition to the obligations mentioned above, providers of GPAI models with systemic risk are also subject to the following requirements.

  • Model Evaluations. Providers must perform model evaluations in accordance with standardized protocols and tools reflecting the state of the art, including conducting and documenting adversarial testing of the model with a view to identifying and mitigating systemic risks.
  • Risk Mitigation. Providers must assess and mitigate possible systemic risks at the EU level.
  • Incident Reporting. Providers must track, document, and report serious incidents and possible corrective measures to the AI Office and relevant national authorities.
  • Cybersecurity. Providers must ensure an adequate level of cybersecurity protections for the model and its physical infrastructure.

Codes of Practice

The AI Office will encourage and facilitate the drawing up of codes of practice at the EU level by May 2025. If a code of practice cannot be finalized by August 2025, or if the AI Office deems a code inadequate, the Commission may provide common rules for the implementation of providers’ obligations.

  • Drafting. The AI Office may invite providers of GPAI models to participate in the drawing-up of codes of practice. Other relevant stakeholders (e.g., civil society, industry, academia) may support the process.
  • Monitoring. The AI Office will ensure that participants in the codes of practice report regularly to the AI Office on the implementation of the commitments and the measures taken and their outcomes. The AI Office and the EU AI Board – the umbrella body that brings together, among others, the national competent authorities and the AI Office – will also regularly monitor and evaluate the achievement of the objectives of the codes of practice. The AI Office may invite all providers of GPAI models to adhere to the codes of practice. The Commission may approve a code of practice and give it general validity within the EU.
  • Tools for Compliance. Providers of GPAI models (with systemic risk) may rely on codes of practice to demonstrate compliance with their obligations until a harmonized standard is published. Compliance with European harmonized standards grants providers the presumption of conformity. Providers that do not adhere to an approved code of practice or do not comply with a European harmonized standard will need to demonstrate alternative adequate means of compliance for assessment by the Commission.

General Obligations Regarding AI-Generated Content

In addition to imposing the above requirements for providers of GPAI models, the AI Act also imposes transparency requirements for AI-generated content (see our blog post for more details).

  • Obligations of Providers
    • AI-Generated Content. Providers of AI systems generating synthetic audio, image, video, or text content must ensure that their systems’ outputs are marked in a machine-readable format and detectable as artificially generated or manipulated (a minimal marking sketch appears after this list). This does not apply to AI systems that perform an assistive function for standard editing or that do not substantially alter the input data provided by deployers or the semantics thereof.
    • Chatbots. Providers must ensure that AI systems intended to directly interact with natural persons, such as chatbots, are designed and developed in such a way that individuals are informed that they are interacting with an AI system. This requirement does not apply where this is obvious for reasonably well-informed, observant, and circumspect individuals, taking into account the circumstances and the context of use.
  • Obligations of Deployers. There are specific obligations for deployers.
    • Deepfakes. Businesses using deepfakes in the course of a professional activity must disclose that the content has been artificially generated or manipulated.
    • Text. Deployers of AI systems that generate or manipulate text published to inform the public on matters of public interest must disclose that the text has been artificially generated or manipulated. This does not apply where the text has undergone a process of human review or editorial control, and where a natural or legal person holds editorial responsibility for the publication of the content.
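
To make the marking requirement concrete: the AI Act requires machine-readable marking but does not mandate a particular mechanism (in practice, providers look to provenance standards such as C2PA content credentials). The sketch below is a minimal illustration that embeds and reads a provenance flag in PNG metadata using the Pillow library; the marker key names are hypothetical.

```python
# Minimal illustration only: the AI Act mandates machine-readable marking of
# synthetic content but not a specific mechanism; production systems typically
# rely on provenance standards such as C2PA. Key names here are hypothetical.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def mark_as_ai_generated(in_path: str, out_path: str, generator: str) -> None:
    """Embed a machine-readable 'AI-generated' flag in a PNG's text chunks."""
    image = Image.open(in_path)
    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")  # hypothetical marker key
    metadata.add_text("generator", generator)
    image.save(out_path, pnginfo=metadata)

def is_marked_ai_generated(path: str) -> bool:
    """Detect the flag when ingesting content."""
    text_chunks = getattr(Image.open(path), "text", {})
    return text_chunks.get("ai_generated") == "true"
```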

Enforcement and Fines

The Commission’s AI Office will be responsible for enforcing the AI Act’s provisions for providers of GPAI models. However, national competent authorities remain competent with respect to downstream providers and deployers (see below).

  • Providers of GPAI Models. The AI Office may impose administrative fines on providers of GPAI models of up to €15 million or up to 3% of their total worldwide annual revenue for the preceding financial year, whichever is higher (the mechanics are illustrated after this list), if they intentionally or negligently infringe the obligations listed above; fail to comply with a request from the AI Office for a document or for information, or supply incorrect, incomplete, or misleading information; fail to comply with the measures requested by the AI Office; or fail to make available to the AI Office access to the GPAI model (with systemic risk) for the purposes of its evaluation.
  • Downstream Providers and Deployers of GPAI Models. National competent authorities remain responsible for ensuring compliance of downstream providers and deployers of GPAI models with the transparency requirements mentioned above. Noncompliance with these requirements is subject to administrative fines of up to €15 million or up to 3% of the operator’s total worldwide annual revenue for the preceding financial year, whichever is higher.
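
For illustration only, the arithmetic of the “whichever is higher” rule can be made explicit; the revenue figures below are hypothetical, and actual fines are set case by case.

```python
# Illustrative arithmetic for the AI Act's GPAI fine ceiling: the higher of
# EUR 15 million or 3% of total worldwide annual revenue for the preceding
# financial year. Figures are hypothetical.

def max_gpai_fine_eur(worldwide_annual_revenue_eur: float) -> float:
    """Upper bound of the fine under the 'whichever is higher' rule."""
    return max(15_000_000.0, 0.03 * worldwide_annual_revenue_eur)

print(max_gpai_fine_eur(100_000_000))    # 15000000.0 -> flat EUR 15M cap applies
print(max_gpai_fine_eur(2_000_000_000))  # 60000000.0 -> 3% of revenue applies
```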

For more information on this or other AI matters, please contact one of the authors.
