
EU AI Act: Articles 16-23, Explained in Memes

I’ve spent the past month diving into the EU AI Act to understand what compliance might look like by 2026 for industries deploying AI solutions at full throttle. This post covers what I’ve found and offers an accessible breakdown—not a legal interpretation (sorry, lawyer memes aside). I’ll walk through each major role defined in the Act and what compliance means for each, with examples to illustrate.


Overview of Roles in the EU AI Act


The EU AI Act defines roles along the AI supply chain, from those who build the systems to those who deploy them. Understanding these roles is crucial because each role comes with unique compliance responsibilities.


Key Roles and Their Responsibilities


Providers: Simply put, they are fully on the hook. Providers carry the broadest compliance responsibilities, including technical documentation, risk management, and regulatory reporting throughout the AI system’s lifecycle.


Example: OpenAI, as the original developer of a high-risk AI model, must ensure the model meets all EU requirements.


Authorized Representatives: They act as intermediaries for providers based outside the EU that want to operate within it. Their responsibilities include coordinating compliance and serving as the EU point of contact for authorities.


Example: OpenAI establishes OpenAI Ireland as its authorized EU representative or contracts this role out to an external entity.


Deployers: Deployers primarily follow provider guidelines but remain responsible for human oversight, risk management, and record-keeping. They must adapt monitoring practices to the specific environments where the AI is used, which is challenging because deployers often lack technical expertise in AI safety. This operational side of risk management matters: deployers can’t rely solely on provider instructions and must apply appropriate oversight in their own context.


This is a minefield, in my opinion. How can a business meaningfully provide safety oversight when its core business has little to do with tech safety? Hire external consultants or build out an internal function? Either way, in-house or outsourced, this will be a significant cost for AI deployers.


Example: A hospital using SAP’s enterprise system, which includes AI-based updates, must follow SAP’s guidelines, monitor system performance, and maintain logs.
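
To make the deployer’s record-keeping duty concrete, here is a minimal sketch of what audit logging around an AI-assisted decision might look like. Everything here is hypothetical: the log_ai_decision function, the field names, and the triage example are my own illustration, not the hospital’s or SAP’s actual setup, and not a format prescribed by the Act.

```python
import json
import logging
from datetime import datetime, timezone

# Hypothetical deployer-side audit log: each AI-assisted decision gets
# one record with enough context to support traceability later.
logging.basicConfig(filename="ai_audit.log", level=logging.INFO, format="%(message)s")

def log_ai_decision(system_id: str, input_summary: str, output, overridden_by_human: bool) -> None:
    """Append one audit record per AI-assisted decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,          # which AI system produced the output
        "input_summary": input_summary,  # redacted summary, not raw personal data
        "output": output,
        "overridden_by_human": overridden_by_human,  # evidence of human oversight
    }
    logging.info(json.dumps(record))

# Example: a hypothetical triage score that staff reviewed and overrode.
log_ai_decision("triage-model-v2", "patient case #A-123 (redacted)", {"score": 0.82}, overridden_by_human=True)
```

The design choice worth noting: logging the human override alongside the model output is what turns a plain application log into evidence of oversight, which is the part deployers (not providers) must demonstrate.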


Importers: Importers must verify that AI systems entering the EU meet EU standards.

They confirm compliance documentation, halt imports if issues arise, and notify authorities if serious risks are identified. This aspect will be interesting to follow, especially given the other contractual obligations an importer may have with an AI provider.


Example: A French company importing a Japanese AI-powered car navigation system for the EU market is responsible for verifying compliance.


What Happens if They Don’t Comply?


While there is limited clarity on exact non-compliance costs by role, it’s reasonable to infer that lower-effort compliance tasks, such as maintaining logs or obtaining CE markings, are easier to achieve. Non-compliance costs scale with the risk and severity of the violation, but it’s hard to imagine businesses staying non-compliant where the effort is low and the ask is clear. Assessing non-compliance is typically a lengthy affair with EU regulatory bodies. Case in point: the determination of X’s status under the DSA.


EU AI Act Compliance Deadlines and Fines


The EU AI Act introduces fines for non-compliance, with full compliance required by August 2, 2026.


  • General Non-Compliance (applies to all roles): Up to €15 million or 3% of total worldwide annual turnover (whichever is higher).

  • Severe Violations (e.g., prohibited AI practices): Up to €35 million or 7% of total worldwide annual turnover (whichever is higher).
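
To make the “whichever is higher” mechanics concrete, here is a quick sketch in Python. Only the caps and percentages come from the Act; the function name and turnover figures are made up for illustration.

```python
def max_fine(turnover_eur: float, severe: bool = False) -> float:
    """Return the statutory cap: a fixed amount or a share of worldwide
    annual turnover, whichever is higher."""
    fixed_cap, turnover_share = (35e6, 0.07) if severe else (15e6, 0.03)
    return max(fixed_cap, turnover_share * turnover_eur)

# A company with EUR 2 billion turnover: 3% (EUR 60m) exceeds the EUR 15m floor.
print(max_fine(2e9))               # 60000000.0
# The same company committing a prohibited-practice violation: 7% = EUR 140m.
print(max_fine(2e9, severe=True))  # 140000000.0
```

The takeaway: for any large company, the turnover-based percentage, not the fixed amount, is the binding number.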


Its provisions will phase in as follows:


  • February 2, 2025: Prohibitions on certain AI systems begin.

  • August 2, 2025: Compliance obligations begin for general-purpose AI models newly placed on the market.

  • August 2, 2026: All remaining provisions, including requirements for high-risk AI systems, become fully applicable.


I’ve outlined the responsibilities for each role in the compliance summary in the Annex at the end of this post.

The Complexity of Enforcement: When Deployers and Others Become “Providers”


Any distributor, importer, deployer, or third party will be treated as a provider of a high-risk AI system (and must follow provider obligations under Article 16) if they:


  1. Put their name or trademark on an existing high-risk AI system, regardless of contracts assigning obligations elsewhere.

  2. Make a substantial modification to an existing high-risk AI system that keeps it classified as high-risk.

  3. Change the purpose of an AI system (including general-purpose ones) already on the market in a way that makes it high-risk.


This is especially relevant for “incidental deployers”: companies that built a general-purpose AI model into their products in 2023/2024 without intending to become AI providers. These companies may now need to comply with provider obligations due to modifications or branding decisions that bring them under the high-risk category.


Example: European banks are likely Incidental Deployers™. Banks using tools like HireVue for high-risk areas, such as hiring, face added risk due to stringent regulatory frameworks. Banks may invest more in compliance or choose providers with stronger documentation to mitigate this risk, pushing providers like HireVue to invest in EU compliance measures.
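
The three triggers in the numbered list above reduce to a simple checklist: if any one of them applies, provider obligations follow. The sketch below encodes that logic; it’s my own illustrative reading of the reclassification rules, and every name in it is made up.

```python
from dataclasses import dataclass

@dataclass
class ActorActions:
    """What a distributor, importer, deployer, or third party did with the system."""
    rebranded_under_own_name: bool       # trigger 1: own name or trademark on the system
    made_substantial_modification: bool  # trigger 2: major change, still high-risk
    repurposed_into_high_risk: bool      # trigger 3: new intended purpose that is high-risk

def treated_as_provider(actions: ActorActions) -> bool:
    """Any single trigger is enough to inherit provider obligations (Article 16)."""
    return (
        actions.rebranded_under_own_name
        or actions.made_substantial_modification
        or actions.repurposed_into_high_risk
    )

# A bank that white-labels a vendor's screening model under its own brand:
bank = ActorActions(rebranded_under_own_name=True,
                    made_substantial_modification=False,
                    repurposed_into_high_risk=False)
print(treated_as_provider(bank))  # True
```

Note the OR, not AND: branding alone is enough, regardless of what the contract with the original provider says.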


Upcoming Challenges


2025 and 2026 will undoubtedly bring compliance challenges, especially for incidental deployers and other companies new to these responsibilities. Businesses in heavily regulated industries (e.g., financial services) will need to prioritize compliance measures, either by hiring consultants or building internal compliance teams.

As the EU begins implementing the Act, we’re likely to see some interesting developments in compliance practices—and perhaps a few lawsuits along the way.


What’s Next in This Series?


I’m turning this into a series to explore more aspects of the EU AI Act. Next time, I’ll cover risk classifications, breaking down what’s considered “prohibited,” “high-risk,” and “low-risk” under the Act. Have a specific question? Drop me a message in the contact section, and sign up for my newsletter if you’d like more insights (or if you’re just lurking).




Annex: Compliance Summary by Role

Compliance Verification

  • Providers: Conduct and document conformity assessments, ensuring the AI system meets EU requirements.
  • Deployers: Use AI systems as intended and monitor performance.
  • Importers: Verify AI system compliance and confirm the provider’s conformity assessment.
  • Authorized Representatives: Ensure the provider has conducted the conformity assessment and is compliant before market entry.

Technical Documentation

  • Providers: Prepare, maintain, and update comprehensive technical documentation, including system design and operation.
  • Deployers: Ensure input data quality and monitor performance.
  • Importers: Verify availability of technical documentation.
  • Authorized Representatives: Maintain technical documentation and make it available to authorities if requested.

Human Oversight

  • Providers: Design and provide guidelines for effective human oversight and intervention if needed.
  • Deployers: Implement human oversight mechanisms as specified by the provider.

Record-Keeping

  • Providers: Maintain all documentation, including the declaration of conformity, for ten years after market entry.
  • Deployers: Maintain logs to support traceability and accountability.
  • Importers: Retain a copy of the declaration of conformity and related documents for ten years.
  • Authorized Representatives: Retain the EU declaration of conformity and other documents for ten years.

Risk Management

  • Providers: Implement a risk management system to identify, evaluate, and mitigate risks throughout the system lifecycle.
  • Deployers: Identify, monitor, and mitigate any operational risks.
  • Importers: Monitor and take corrective actions for non-compliance.
  • Authorized Representatives: Assist the provider with corrective actions if non-compliance is identified.

Transparency

  • Providers: Ensure system transparency, including clear user instructions and any impact on individuals.
  • Deployers: Inform affected individuals, particularly when their rights or safety could be impacted.

Corrective Actions

  • Providers: Implement corrective actions for any detected non-compliance or risks, including product recalls if needed.
  • Deployers: Implement corrective actions as necessary for non-compliance or unexpected risks.
  • Importers: Take corrective actions or halt distribution if the AI system poses risks or is non-compliant.
  • Authorized Representatives: Coordinate with the provider on corrective actions, including potential recalls or withdrawals if needed.

Authority Cooperation

  • Providers: Provide technical documentation and other requested information to authorities, facilitating audits.
  • Importers: Provide documentation or information to authorities as required.
  • Authorized Representatives: Act as the main EU contact for authorities, providing information and documents as requested.

Non-Compliance Reporting

  • Providers: Notify authorities about serious incidents or malfunctions that present risks to health, safety, or rights.
  • Importers: Report any significant non-compliance to national authorities if it poses serious risk.
  • Authorized Representatives: Inform authorities of any risks or non-compliance issues identified.

