High-risk obligations land 2 August 2026. Every non-EU provider of a high-risk AI system — or a GPAI model above the compute threshold — needs an authorised representative in the EU, an Annex IV technical file, and a post-market monitoring plan. We handle all three. Plus the GDPR piece for training data, and the DSA piece if your users post. One engagement, every AI-specific obligation.
Your obligations scale with how the system is used, not your revenue. A two-person seed startup with a credit-scoring model has the same Annex IV obligations as an enterprise. Classify first, stack second.
Social scoring, manipulative AI, untargeted facial scraping, real-time biometric ID in public spaces. Not a compliance problem — these are prohibited outright.
Annex III: creditworthiness, employment decisions, education admissions, law enforcement, migration, critical infrastructure. (Safety components in regulated products reach high-risk via Art. 6(1).) Obligations: Art. 22 representative, Annex IV technical file, risk management, post-market monitoring.
Chatbots, emotion recognition, deepfakes, biometric categorisation. Obligations: users must know they're interacting with AI / that content is AI-generated. Art. 50 transparency labels.
Spam filters, recommendation engines, game AI, most search and retrieval. No mandatory obligations — voluntary codes of conduct and optional labelling.
Prohibitions (Tier 01) and AI literacy (Art. 4) already apply; GPAI rules followed on 2 August 2025. The big milestone for most AI startups is 2 August 2026, the date high-risk system obligations take full effect.
Tier 01 bans live. Art. 4 AI literacy obligation applies to all providers and deployers.
General-purpose model providers: transparency documentation, copyright policy, training summary.
Appoint Art. 22 rep. Draft Annex IV technical file. Build risk management + post-market plan. This is the runway.
Tier 02 obligations enforceable. Missing Annex IV, unnamed representative, or absent conformity assessment = market-surveillance exposure.
Sector-specific high-risk systems (safety components under product-safety regulations) catch up to Annex III timeline.
We are here: 101 days to 2 Aug 2026, the deadline that matters for most customers. Comfortable if you start now, uncomfortable if you wait. Annex IV technical files take 4–8 weeks for a well-documented system, longer if documentation has been informal.
The representative is the quick win. Filing the Art. 22 appointment takes about a week. It gives you a real EU contact address for the AI Office and a named accountable party in the dossier, something investors and enterprise buyers now specifically ask about in diligence.
Annex IV is the long pole. Nine sections of technical documentation — general description, design spec, data governance, risk management, human oversight, performance metrics, conformity, post-market plan, declaration. We draft the skeleton from your existing engineering and model-card docs; you fill in the model-specific parts.
The AI Act is the headline. It doesn't replace GDPR (for training data) or DSA (if users post). An AI company in scope of all three picks up the bundle — typically three products, sometimes four, rarely more.
Article 22 appointment · Annex IV technical file · conformity assessment · post-market monitoring plan.
Article 27 coverage · DPA correspondence · especially relevant for training data and user-facing inference.
Hosted trust hub · DSR inbox · sub-processor list (including your LLM vendors) · versioned policies.
Article 13 rep · notice endpoint. Relevant for AI products where users share prompts or content publicly.
Named DPO for AI companies processing personal data at scale · Art. 37–39 · filed with the lead DPA.
30-second risk-tier classification for your specific system. Free, no sign-up — starts the conversation.
It depends on use, not model. The same LLM is minimal-risk in a grammar checker, limited-risk in a chatbot, and high-risk in a resume screener. Annex III spells out eight high-risk areas: biometrics, critical infrastructure, education, employment, access to essential services (including creditworthiness), law enforcement, migration, and administration of justice. Our free risk classifier walks you through it.
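The use-based tiering above is a decision procedure, and it can be sketched in a few lines. This is an illustrative simplification, not legal advice: the category names and set membership below are our assumptions about how a use case might be labelled, and a real classification needs the full Annex III wording.

```python
# Hypothetical sketch of the tier logic: classify the *use*, not the model.
# Category names and set contents are illustrative assumptions.

PROHIBITED = {"social_scoring", "manipulative_ai",
              "untargeted_facial_scraping", "realtime_public_biometric_id"}
ANNEX_III_HIGH_RISK = {"creditworthiness", "employment", "education_admissions",
                       "law_enforcement", "migration", "critical_infrastructure",
                       "essential_services", "administration_of_justice"}
LIMITED_RISK = {"chatbot", "emotion_recognition", "deepfake",
                "biometric_categorisation"}

def classify(use_case: str) -> str:
    """Return the AI Act risk tier for a given deployment context."""
    if use_case in PROHIBITED:
        return "prohibited"
    if use_case in ANNEX_III_HIGH_RISK:
        return "high-risk"
    if use_case in LIMITED_RISK:
        return "limited-risk (Art. 50 transparency)"
    return "minimal-risk"

# The same LLM lands in different tiers depending on deployment:
print(classify("grammar_checker"))   # minimal-risk
print(classify("chatbot"))           # limited-risk (Art. 50 transparency)
print(classify("employment"))        # high-risk
```

Note that the checks run in order of severity: a prohibited use is prohibited even if it also matches a lower tier.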
Articles 52–55 already apply: publish a training-data summary, respect copyright opt-outs, provide downstream-provider documentation, and, if you cross the 10²⁵ FLOP systemic-risk threshold, notify the AI Office within two weeks. We provide the Art. 54 representative for non-EU GPAI providers.
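Whether you cross the 10²⁵ FLOP threshold is arithmetic you can do on a napkin. A common rule of thumb for dense-transformer training compute is roughly 6 × parameters × training tokens; the figures below are hypothetical and the result is an order-of-magnitude estimate, not a regulatory determination.

```python
# Back-of-envelope check against the 10^25 FLOP systemic-risk threshold,
# using the common ~6 * params * tokens approximation for training compute.

SYSTEMIC_RISK_THRESHOLD_FLOP = 1e25

def training_flops(params: float, tokens: float) -> float:
    """Rough dense-transformer training compute estimate."""
    return 6.0 * params * tokens

# e.g. a hypothetical 70B-parameter model trained on 15T tokens:
flops = training_flops(70e9, 15e12)
print(f"{flops:.2e}")                          # 6.30e+24
print(flops >= SYSTEMIC_RISK_THRESHOLD_FLOP)   # False -> below threshold
```

Fine-tuning compute, mixture-of-experts routing, and multiple training runs all complicate the estimate, so treat a result anywhere near 10²⁵ as a reason to do the calculation properly.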
Yes — they're different regulations with different supervisory authorities. The Art. 22 representative under the AI Act deals with the AI Office and market-surveillance authorities; the Art. 27 representative under GDPR deals with national data-protection authorities. We can provide both through the same engagement, but they're distinct legal appointments.
A documented plan for tracking your system's performance after deployment — drift monitoring, incident logs, bias reviews. Plus a duty to report serious incidents to market-surveillance authorities within 15 days (or 2 days for mass harm). We draft the plan template; you run it.
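The drift-monitoring element of that plan is concrete enough to sketch. One widely used check is the population stability index (PSI) between a training-time baseline and live traffic for a feature; the bucket values and the 0.2 alert threshold below are conventional choices on our part, not AI Act requirements.

```python
# Illustrative drift check for a post-market monitoring plan: population
# stability index (PSI) over one feature's bucketed distribution.
import math

def psi(expected: list[float], actual: list[float]) -> float:
    """PSI over matching bucket proportions (each list sums to 1)."""
    total = 0.0
    for e, a in zip(expected, actual):
        e = max(e, 1e-6)   # floor to avoid log(0) on empty buckets
        a = max(a, 1e-6)
        total += (a - e) * math.log(a / e)
    return total

baseline = [0.25, 0.25, 0.25, 0.25]   # feature distribution at release
live     = [0.10, 0.20, 0.30, 0.40]   # same buckets on recent traffic

score = psi(baseline, live)
print(round(score, 3))                          # 0.228
print("investigate" if score > 0.2 else "ok")   # investigate
```

A run of this per feature per week, with results filed in the incident log, is the kind of evidence market-surveillance authorities expect the plan to produce.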
For a team that already has a model card, eval reports, and a reasonable engineering doc: roughly 40% of Annex IV is new writing — mostly Risk Management (sec. 3), Human Oversight (sec. 5), and Post-market Monitoring (sec. 8). The rest is restating existing internal documentation in the Annex IV structure.
Fines up to €35M or 7% of global turnover, whichever is higher — for the worst breaches (prohibited practices). High-risk non-compliance caps at €15M / 3%. Practically: the first enforcement wave will target visible, named, large providers with obvious gaps (missing rep, no Annex IV). Early-stage startups are less likely to be first in line, but they're also not immune.
30-minute discovery call. We'll classify your systems, scope the Annex IV work, and quote the stack — including the Art. 22 rep — in writing.