This blog post introduces the universe of European and national authorities and other relevant actors in the area of supervision and enforcement of the European Union’s Artificial Intelligence Act (AI Act) and provides a brief overview of possible penalties under the AI Act.
Authorities and Other Relevant Actors at EU Level
- AI Office. At EU level, the AI Act creates the so-called AI Office within the European Commission (Commission). The AI Office will have the following tasks:
- Development of Compliance Tools. The AI Office is generally responsible for developing compliance tools such as model terms for contracts between providers of high-risk AI systems and third parties providing elements used for or integrated into those systems, templates for deployers’ fundamental right impact assessments of high-risk AI systems, and providers’ summaries of the content used for training of general-purpose AI models. The AI Office should also encourage and facilitate the drawing up of codes of practice at EU level to contribute to the proper application of the AI Act’s requirements regarding general-purpose AI models with systemic risk and to facilitate the effective implementation of the obligations regarding the detection and labeling of artificially generated or manipulated content.
- Information About General-Purpose AI Models. The AI Office may require providers of general-purpose AI models to provide the AI Office with all the information and documentation necessary to demonstrate compliance with the AI Act. For example, the AI Office may ask providers of general-purpose AI models to provide the technical documentation of the model or their authorized representative’s mandate.
- Reporting Obligations of Providers of General-Purpose AI Models with Systemic Risk. Providers of such models must report without undue delay to the AI Office relevant information about serious incidents and possible corrective measures to address them.
- Regulatory Sandboxes. National competent authorities must inform the AI Office of the establishment of a regulatory sandbox and its progression and may ask for the AI Office’s support and guidance. Specifically, national authorities must inform the AI Office if they suspend the testing process or participation in a regulatory sandbox. The AI Office must make publicly available a list of planned and existing sandboxes and keep it up to date to encourage more interaction in the AI regulatory sandboxes and cross-border cooperation (see our blog post for more detail on regulatory sandboxes in the AI Act).
- Support for Small and Medium-Sized Enterprises (SMEs) and Startups. The AI Office must provide standardized templates for areas covered by the AI Act to help SMEs and startups comply with the regulation.
- European AI Board. The AI Act also creates a European AI Board (Board) to advise the Commission and EU Member States on the consistent application of the AI Act.
- Composition. The Board is composed of one representative per EU Member State. The AI Office and the European Data Protection Supervisor, which enforces the AI Act vis-à-vis the EU institutions, attend the Board’s meetings without taking part in the votes. Other authorities, bodies or experts may be invited to the Board’s meetings on a case-by-case basis.
- Tasks. The Board’s primary mission is to advise and assist the Commission and EU Member States to facilitate the consistent and effective application of the AI Act. To that end, the Board has various tasks, such as contributing to the coordination between national authorities and harmonization of national administrative practices; facilitating a common understanding of the AI Act’s concepts; assisting national competent authorities and the Commission in developing the organizational and technical expertise required for the implementation of the AI Act; and issuing recommendations and opinions on any relevant matters related to the implementation of the AI Act and to its consistent and effective application.
- Advisory Forum. An advisory forum will provide technical expertise and advise the Board and the Commission.
- Composition. The membership of the advisory forum must represent a balanced selection of stakeholders, including industry, startups, SMEs, civil society and academia. Membership must also be balanced regarding commercial interests (and, in that category, SMEs and larger companies) and noncommercial interests. Several European agencies will be permanent members of the forum. Experts and other stakeholders may be invited.
- Tasks. The advisory forum will meet at least twice a year and prepare opinions, recommendations and contributions upon request of the Board or the Commission. The forum will publish annual reports of its activities.
- Scientific Panel of Independent Experts. A scientific panel of independent experts will support the AI Office.
- Composition. The panel will consist of experts selected by the Commission based on up-to-date scientific or technical expertise in AI necessary for the panel’s tasks. Experts must be independent from providers of AI systems or general-purpose AI models and systems. The Commission, in coordination with the Board, will determine the number of experts and ensure fair gender and geographical representation.
- Tasks. The scientific panel will advise and support the AI Office in its tasks. To that end, the panel will have various tasks, such as alerting the AI Office of possible systemic risks at EU level of general-purpose AI models or contributing to the development of tools and methodologies for evaluating capabilities of general-purpose AI models and systems. In addition, EU Member States may ask the panel to support their enforcement activities.
Authorities and Other Relevant Actors at National Level
- Market Surveillance Authorities. Each EU Member State must establish or designate at least one market surveillance authority responsible for ensuring compliance with the AI Act. These authorities will be primarily responsible for enforcing the AI Act.
- Notifying Authorities. Each EU Member State must establish or designate at least one notifying authority responsible for designating and monitoring conformity assessment bodies. Such bodies may become notified bodies responsible for the performance of third-party conformity assessment activities, including testing, certification and inspection (see our blog post on standardization in the AI Act for more detail).
- Guidance and Advice. Market surveillance authorities may provide guidance and advice on the implementation of the AI Act, in particular to SMEs, including startups. In so doing, national competent authorities must take into account the guidance and advice of the Board, the Commission or any other relevant authority.
- Sufficient Resources. EU Member States must ensure that market surveillance authorities are provided with adequate technical, financial and human resources, and with the infrastructure needed to fulfill their tasks. EU Member States will need to report to the Commission on the status of their authorities’ resources by mid-2025 and once every two years thereafter. The Commission will share this report with the Board for discussion and possible recommendations.
Penalties
Under the AI Act, EU Member States must lay down the rules on penalties applicable to infringements of the AI Act and take all measures necessary to ensure that they are implemented.
Penalties must be effective, proportionate and dissuasive, and must take into account:
- the nature, gravity and duration of the infringement and its consequences;
- whether other market surveillance authorities have already fined the operator for the same infringement; and
- the size and market share of the operator committing the infringement.
For undertakings, administrative fines are calculated as a percentage of global annual revenue where that amount exceeds the fixed cap; in other words, the higher of the two amounts applies:
- noncompliance with the AI Act’s requirements regarding prohibited AI systems is subject to administrative fines of up to €35 million or up to 7% of revenue;
- noncompliance with the AI Act’s requirements regarding limited-risk and high-risk AI systems and general-purpose AI models is subject to administrative fines of up to €15 million or up to 3% of revenue; and
- supplying incorrect, incomplete or misleading information in response to authorities’ requests is subject to fines of up to €7.5 million or up to 1% of revenue.
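The “whichever is higher” mechanics above can be illustrated with a minimal sketch; the function name and inputs are hypothetical, while the cap and percentage values are those stated for prohibited-AI infringements.

```python
def max_fine(cap_eur: float, revenue_pct: float, global_revenue_eur: float) -> float:
    """Maximum administrative fine under the AI Act's penalty tiers:
    the fixed cap or the stated percentage of global annual revenue,
    whichever amount is higher."""
    return max(cap_eur, revenue_pct * global_revenue_eur)

# Prohibited-AI tier (cap of €35 million or 7% of revenue) applied to a
# hypothetical undertaking with €1 billion in global annual revenue:
# 7% of €1 billion (€70 million) exceeds the €35 million cap, so the
# maximum exposure is €70 million.
print(max_fine(35_000_000, 0.07, 1_000_000_000))  # 70000000.0
```

For a smaller undertaking whose percentage-based amount falls below the cap, the fixed cap remains the maximum instead.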
In previous blog posts, we discussed the AI Act’s risk-based approach and provided details about prohibited and limited-risk AI systems. We also discussed the requirements and stakeholders’ obligations associated with high-risk AI systems, what companies need to know if they just use AI, how the AI Act regulates generative AI, how it aims to support innovation, and the key role that standardization plays in compliance.
For more information on this or other AI matters, please contact one of the authors.