"Our model cards, training-data summary, and systemic-risk doc were ready before the August deadline. No scramble. We shipped to the EU on schedule."
If you're a non-EU provider of a high-risk AI system — or a GPAI model above the compute threshold — you must appoint an authorised representative in the EU, maintain an Annex IV technical file, and respond to market surveillance authorities. We handle all three.
The AI Act regulates by risk tier — and the obligations between tiers are a step function, not a slope. Most teams we talk to are in either high-risk or limited-risk, and often unclear which. We classify precisely.
Social scoring, real-time remote biometric ID in public, subliminal manipulation, untargeted scraping for facial databases. Prohibited outright since 2 Feb 2025.
Hiring & HR screening · creditworthiness · critical infrastructure · education admission · law enforcement · border control · access to essential services · biometric categorisation. Full Annex III list. Requires: authorised rep, tech file, QMS, conformity assessment, CE mark, post-market monitoring, serious-incident reporting.
Chatbots, emotion recognition, deepfakes, AI-generated content. Transparency obligations: users must be told they're interacting with AI, content must be marked. Applies from 2 Aug 2026.
Spam filters, game NPCs, AI-enhanced productivity tools. Voluntary codes of conduct only.
Any GPAI provider — above or below the 10²⁵ FLOP threshold — has Chapter V obligations. Non-EU GPAI providers must appoint an authorised representative (Art. 54) regardless of risk tier. In force since 2 Aug 2025.
Or 7% of global annual turnover — for prohibited practices. The highest fine ceiling of any EU tech regulation, above both GDPR and DSA.
Or 3% of global annual turnover — for breaches of most Chapter III obligations: documentation, post-market monitoring, conformity.
Unacceptable-risk practices already enforced. National authorities are issuing guidance and first fines through 2025.
Full Chapter III applies. Representative, tech file, conformity assessment, CE mark, and post-market plan all required.
Named authorised representative registered with the EU's AI database. Listed on your documentation, in your CE declaration, and on your product UI where required.
Precise Annex III analysis for every system and use case. Documented decision, defensible under audit. Re-assessed when you ship material changes.
Structured, versioned, Annex IV–compliant. Stored for 10 years. Available on request to any market surveillance authority in any member state.
Self-assessment coordinated end-to-end — or notified body liaison where required. EU Declaration of Conformity drafted, signed, filed. CE marking support.
Monitoring plan, incident logging, serious-incident reporting to national authorities within 15 days. Annex VIII registration upkeep.
When authorities write in, we respond — in the national language, within deadline, with the right slice of the tech file attached.
Annex IV lists exactly what the technical file must contain for a high-risk system. Your authorised representative keeps it, updates it, and makes it available on request. We structure it, co-author it with your team, and store it for 10 years.
Your technical file lives in our secure archive — versioned, timestamped, readable by market surveillance authorities in any member state. When a request comes in, we respond with the relevant package within the statutory deadline. Your team doesn't scramble.
Unacceptable-risk uses banned outright. €35M fine ceiling already in force.
GPAI providers publish model cards, document training data, appoint EU rep if non-EU.
High-risk providers should have the tech file structured and rep appointment in progress.
Full Chapter III obligations apply. Authorised rep required. Conformity assessments due.
AI systems embedded in regulated products (medical devices, toys, lifts, etc.) fall under the full regime.
From frontier GPAI labs to hiring-tech startups to health-AI providers — Article 22/54 appointments filed, tech files structured, classification defensible.
"We thought we were limited-risk. Their classification call put us in high-risk — that answer alone saved us a €15M exposure."
"The doctree on day one looked terrifying. By month three it was just another shared workspace. Regulators get a clean package."
"Post-market monitoring was a checkbox we'd have winged. Their plan is sector-specific, real, and we've already filed two serious-incident reports."
Chatbots, deepfakes, generative tools, and GPAI below the 10²⁵ FLOP threshold. Covers Chapter IV transparency and GPAI Chapter V.
Full Chapter III programme for high-risk AI: Annex III systems in HR, credit, medical, critical infra, education, law enforcement.
GPAI models with systemic risk (> 10²⁵ FLOPs), notified-body AI products, or sector-regulated AI (medical devices, lifts, toys).
AI systems process personal data. That means GDPR and AI Act. Take both under one engagement, plus the public-facing trust hub enterprise buyers look for.
Art. 22 / 54 rep, Annex IV tech file, conformity & post-market monitoring.
Your AI processes EU personal data — Art. 27 rep is mandatory, and high-scrutiny.
Enterprise buyers ask for a public trust hub. Ship one in an afternoon.
Maybe. If you're a GPAI provider (you develop a general-purpose model and place it on the EU market) and non-EU, Article 54 requires an authorised representative even for low-risk deployment. If you're limited-risk (chatbots, deepfakes, emotion-detection UI), you have transparency obligations but don't need a rep. We'll tell you in 30 minutes.
Annex III is the definitive list. The frequent ones for startups: employment screening, creditworthiness, education admission, law enforcement, and access to essential services. If your system is used in one of those contexts — even as a tool inside a larger product — you're in.
Most high-risk systems qualify for self-assessment. Exceptions: biometric identification systems, and Annex I products where a notified body is already in the loop for safety certification. We coordinate either path.
Yes — same entity, different mandates. One engagement, two appointments, unified archive. Most of our high-risk AI clients take both.
The UK is pursuing a sectoral, principles-based approach — no UK AI Act equivalent yet. We track it and will advise when a UK representative regime emerges.
Yes. Authorised representative appointments are terminable with reasonable notice. We help with the handover, file the change with the relevant market surveillance authority, and export your technical file. 30-day termination, no punitive clauses.
30-minute classification call. Tech-file index by next week. A named representative before 2 August 2026 — with everything Annex IV asks for.