Standardization for Compliance in the European Union’s AI Act

This blog post discusses harmonized standards, common specifications, and certificates under the EU’s Artificial Intelligence Act (“AI Act”). We also discuss the role of the authorities responsible for designating conformity assessment bodies and the conditions under which those bodies must perform their activities.

Standardization is expected to play a key role in providing technical solutions to ensure compliance with the AI Act, given the complexity of both the Act’s requirements and the technology. Many stakeholders are therefore closely following, and in some cases involved in, the development of standards in the field. This is no easy task, however, and it will require significant efforts to have standards ready for use by August 2026, when most of the AI Act’s provisions will come into effect (see our blog post on critical milestones on the road to full applicability of the AI Act for more detail). The European Committee for Standardization (CEN) and the European Committee for Electrotechnical Standardization (CENELEC) are working to make the standards available by the end of 2025.

Standards and Specifications

  • Harmonized Standards. High-risk AI systems that comply with harmonized standards published in the Official Journal of the EU will be presumed to meet the AI Act requirements covered by those standards (Article 40). Such standards will also cover the general transparency requirements under Article 50 of the AI Act (see our blog post for more details on these requirements). To that end, the European Commission (the “Commission”) has issued standardization requests to European standardization organizations.
  • Common Specifications. The Commission is empowered to adopt common specifications for the high-risk requirements and the limited-risk transparency requirements (Article 41). Harmonized standards, however, take priority: the Commission may adopt common specifications only if no such standards have yet been adopted, whether because its standardization request was rejected or because the resulting standards were not delivered in time, are insufficient, or do not comply with the request. Conversely, once harmonized standards covering the same requirements are published in the Official Journal, the Commission must repeal the common specifications (or the parts of them) covering those requirements.
    • High-risk AI systems that are in conformity with common specifications are presumed to be in conformity with the AI Act’s requirements covered by those specifications.
    • If they are not in conformity, the providers of such systems – i.e., companies that develop an AI system, or have one developed, and place it on the market or put it into service under their own name or trademark – must show that they have adopted technical solutions that meet the relevant requirements to a level at least equivalent to that of the common specifications.

Certificates

  • Conformity Assessment Procedure. Providers must ensure that high-risk AI systems undergo a conformity assessment procedure before placing them on the European market or putting them into service in the EU. Exceptions to this requirement apply only under very strict conditions: they are subject to review by national market surveillance authorities and the Commission, are limited in time, and are available only for exceptional reasons of public security or the protection of life, health, the environment, or key industrial and infrastructural assets (Article 46).

The conformity assessment procedure to be used varies depending on the type of high-risk system and can consist of either an internal control by the provider or an external assessment by a Notified Body (“NB”) under Article 43 (see below and our blog post for more detail).

  • Certificate Issuance and CE Marking. If an NB carries out the conformity assessment procedure and determines that the high-risk AI system in question complies with the AI Act requirements (see our blog post for more detail), the NB issues an EU technical documentation assessment certificate. The AI Act specifies the required content of such certificates (Annex VII) and their period of validity (Article 44). High-risk AI systems must bear the CE marking to indicate their conformity with the AI Act.
  • Sanctions in Relation to Certificates. NBs can refuse to issue a certificate if they consider that the system is not compliant. They can also suspend or withdraw a certificate, or impose restrictions on it, if the system no longer meets the AI Act requirements. Providers can avoid such decisions by taking appropriate corrective action within the deadline set by the NB. Providers must also have access to an appeal procedure against NBs’ decisions.

Notifying Authorities

The AI Act requires EU Member States to designate or establish at least one Notifying Authority (“NA”) responsible for setting up and carrying out the necessary procedures for the assessment, designation, and notification of conformity assessment bodies (“CABs”) and for their monitoring. EU Member States may decide that this assessment and monitoring is to be carried out by their national accreditation bodies.

Notified Bodies

  • Role. NBs must verify the conformity of high-risk AI systems in accordance with the conformity assessment procedure described above (Article 43).
  • From CABs to NBs. NBs are notified CABs, i.e., bodies that perform third-party conformity assessment activities, including testing, certification, and inspection. To qualify as NBs, CABs must submit a detailed application to the NA of the EU Member State in which they are established. The NA must in turn notify the Commission and the other Member States. A CAB may perform the activities of an NB only where neither the Commission nor another EU Member State raises objections within two weeks of the NA’s notification where the CAB relies on an accreditation certificate, or within two months where it relies on other documentary evidence. If objections are raised, the Commission must enter into consultations with the relevant EU Member States and the CAB and decide whether the CAB can qualify as an NB.
  • Conditions. NAs may only notify CABs that satisfy the requirements listed in the AI Act (Article 31). The primary goal of these requirements is to ensure that CABs are equipped to conduct independent, objective, impartial, and confidential assessments of high-risk AI systems.
  • Non-EU CABs. CABs established under the law of a non-EU country may carry out NBs’ activities only if (i) they meet the requirements listed in Article 31 or ensure an equivalent level of compliance and (ii) the EU has concluded an agreement with the country in question.

More About the AI Act

In previous blog posts, we discussed the AI Act’s risk-based approach and provided details about prohibited and limited-risk AI systems. We also discussed the requirements and stakeholder obligations associated with high-risk AI systems, what companies need to know if they just want to use AI, how the AI Act regulates generative AI, and how it aims to support innovation.

For more information on this or other AI matters, please contact one of the authors.
