By segment · AI Companies

Ship AI into Europe, with the Act on your side.

High-risk obligations land 2 August 2026. Every non-EU provider of a high-risk AI system (and, under Art. 54, most non-EU GPAI providers) needs an authorised representative in the EU; high-risk providers also need an Annex IV technical file and a post-market monitoring plan. We handle all three. Plus the GDPR piece for training data, and the DSA piece if your users post. One engagement, every AI-specific obligation.

Classify your system

The AI Act classifies by use, not by size.

Your obligations scale with how the system is used, not your revenue. A two-person seed startup with a credit-scoring model has the same Annex IV obligations as an enterprise. Classify first, stack second.

Tier 01 · Banned

Unacceptable risk

Social scoring, manipulative AI, untargeted facial scraping, real-time biometric ID in public spaces. Not a compliance problem — these are prohibited outright.

Tier 02 · Regulated · Most AI YC cos

High-risk systems

Annex III: creditworthiness, employment decisions, education admissions, safety components, law enforcement, migration. Obligations: Art. 22 representative, Annex IV technical file, risk management, post-market monitoring.

Tier 03 · Transparency

Limited risk

Chatbots, emotion recognition, deepfakes, biometric categorisation. Obligations: users must know they're interacting with AI, or that content is AI-generated. Art. 50 transparency labels.

Tier 04 · Voluntary

Minimal risk

Spam filters, recommendation engines, game AI, most search and retrieval. No mandatory obligations — voluntary codes of conduct and optional labelling.

The 2 Aug 2026 timeline

High-risk goes live in about a quarter.

Prohibitions (Tier 01) and AI literacy (Art. 4) have applied since February 2025; GPAI rules followed in August 2025. The big milestone for most AI startups is 2 August 2026, the date high-risk system obligations take full effect.

Feb 2025
Prohibited practices in force

Tier 01 bans live. Art. 4 AI literacy obligation applies to all providers and deployers.

Already live
Aug 2025
GPAI rules in force

General-purpose model providers: transparency documentation, copyright policy, training summary.

Already live
Now · 2026
High-risk prep window

Appoint Art. 22 rep. Draft Annex IV technical file. Build risk management + post-market plan. This is the runway.

AI Act Rep
2 Aug 2026
High-risk fully in force

Tier 02 obligations enforceable. Missing Annex IV, unnamed representative, or absent conformity assessment = market-surveillance exposure.

Enforcement
Aug 2027
Annex I high-risk

Sector-specific high-risk systems (safety components of products under Annex I product-safety legislation) catch up to the Annex III timeline.

Phase-in

We are here. 101 days to 2 Aug 2026: comfortable if you start now, uncomfortable if you wait. Annex IV technical files take 4–8 weeks for a well-documented system, longer if documentation has been informal.

The representative is the quick win. Filing the Art. 22 appointment takes about a week. It gives you a real EU contact address for the AI Office and a named accountable party in the dossier, something investors and enterprise buyers now specifically ask about in diligence.

Annex IV is the long pole. Nine sections of technical documentation — general description, design spec, data governance, risk management, human oversight, performance metrics, conformity, post-market plan, declaration. We draft the skeleton from your existing engineering and model-card docs; you fill in the model-specific parts.
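For teams that track compliance work in code, the skeleton above reduces to an ordered checklist. A minimal sketch, with section names paraphrased from our summary rather than taken from the official Annex text:

```python
# Checklist sketch of the nine Annex IV sections.
# Section names are paraphrased summaries, not the Annex's own headings.
ANNEX_IV_SECTIONS = (
    "general_description",
    "design_spec",
    "data_governance",
    "risk_management",
    "human_oversight",
    "performance_metrics",
    "conformity",
    "post_market_plan",
    "declaration",
)

def gaps(completed: set) -> list:
    """Return the sections still to draft, in Annex order."""
    return [s for s in ANNEX_IV_SECTIONS if s not in completed]

# A team with only a model card and eval reports typically starts here:
print(gaps({"general_description", "performance_metrics"}))
```

The ordering matters in practice: later sections (conformity, post-market plan) depend on earlier ones (risk management), so drafting in Annex order avoids rework.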

Common questions

What AI founders ask us first.

Is our AI system actually high-risk?

It depends on use, not model. The same LLM is minimal-risk in a grammar checker, limited-risk in a chatbot, and high-risk in a resume screener. Annex III covers eight high-risk areas: biometrics, critical infrastructure, education, employment, access to essential services (including creditworthiness), law enforcement, migration, and the administration of justice. Our free risk classifier walks you through it.
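As a toy illustration of the use-based logic (not legal advice; the use-case labels and their tier mapping are simplified assumptions, not the full Art. 5 / Annex III text):

```python
# Toy sketch of use-based AI Act tiering. Tier depends on the
# deployment use case, never on model size or provider revenue.
PROHIBITED = {"social_scoring", "untargeted_face_scraping"}
HIGH_RISK = {"credit_scoring", "resume_screening", "exam_admissions"}
TRANSPARENCY = {"chatbot", "deepfake_generator", "emotion_recognition"}

def classify(use_case: str) -> str:
    if use_case in PROHIBITED:
        return "Tier 01: prohibited outright"
    if use_case in HIGH_RISK:
        return "Tier 02: high-risk (Annex IV file, Art. 22 rep, monitoring)"
    if use_case in TRANSPARENCY:
        return "Tier 03: limited risk (Art. 50 labels)"
    return "Tier 04: minimal risk (voluntary codes)"

# The same underlying model lands in different tiers by use:
print(classify("grammar_checker"))   # Tier 04
print(classify("chatbot"))           # Tier 03
print(classify("resume_screening"))  # Tier 02
```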

We're a GPAI provider (foundation model). What applies?

Articles 53–55 already apply: publish a training-data summary, respect copyright opt-outs, provide downstream-provider documentation, and, if you cross the 10²⁵ FLOP systemic-risk threshold, notify the AI Office within two weeks. We provide the Art. 54 representative for non-EU GPAI providers.

Do we still need an Art. 27 GDPR rep if we have an AI Act rep?

Yes: they're appointments under different regulations with different supervisory authorities. The AI Act rep (Art. 22) answers to the AI Office and market-surveillance authorities; the GDPR rep (Art. 27) answers to the national data-protection authorities. We can provide both through the same engagement, but they're distinct legal appointments.

What does "post-market monitoring" actually mean?

A documented plan for tracking your system's performance after deployment — drift monitoring, incident logs, bias reviews. Plus a duty to report serious incidents to market-surveillance authorities within 15 days (or 2 days for mass harm). We draft the plan template; you run it.
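Much of the plan reduces to triggers and deadlines. A minimal sketch of the reporting windows named above (the incident labels are illustrative assumptions; the Act defines the actual triggers):

```python
from datetime import date, timedelta

# Simplified sketch of the incident-reporting windows described above.
# Labels are illustrative; the Act's trigger definitions govern.
REPORTING_WINDOW_DAYS = {
    "serious_incident": 15,  # default window
    "mass_harm": 2,          # accelerated window
}

def report_by(incident_type: str, occurred_on: date) -> date:
    """Latest date the report must reach market-surveillance authorities."""
    return occurred_on + timedelta(days=REPORTING_WINDOW_DAYS[incident_type])

print(report_by("serious_incident", date(2026, 8, 2)))  # 2026-08-17
```

In practice this sits inside the incident log: every logged incident gets a computed deadline, and the plan documents who files and through which channel.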

Annex IV looks like a lot. How much of it is new for us?

For a team that already has a model card, eval reports, and a reasonable engineering doc: roughly 40% of Annex IV is new writing, mostly the risk-management, human-oversight, and post-market-monitoring sections. The rest is restating existing internal documentation in the Annex IV structure.

What happens if we miss 2 August 2026?

Fines up to €35M or 7% of global turnover, whichever is higher — for the worst breaches (prohibited practices). High-risk non-compliance caps at €15M / 3%. Practically: the first enforcement wave will target visible, named, large providers with obvious gaps (missing rep, no Annex IV). Early-stage startups are less likely to be first in line, but they're also not immune.

AI Act, handled.

30-minute discovery call. We'll classify your systems, scope the Annex IV work, and quote the stack — including the Art. 22 rep — in writing.