Enforcement approaching

Is your AI pipeline ready for the EU AI Act?

August 2, 2026 is the enforcement deadline for high-risk AI systems under Annex III. RAG pipelines used in employment, finance, healthcare, or education are directly in scope.

€15M

or 3% of global turnover

Maximum penalty for high-risk AI violations (Annex III)

€35M

or 7% of global turnover

Maximum penalty for prohibited AI violations (Article 5)

What falls under Annex III?

These are the 8 high-risk AI use-case categories where strict obligations apply from August 2, 2026.

High-risk: Biometrics

Remote biometric identification, categorisation, or emotion recognition systems

High-risk: Critical infrastructure

AI in safety components of water, gas, heating, electricity, road traffic

High-risk: Education

Systems that determine access to educational institutions or assess students

High-risk: Employment

Recruitment, employee management, promotion, or performance evaluation

High-risk: Essential services

Credit scoring, insurance risk, emergency dispatch prioritisation

High-risk: Law enforcement

Individual risk assessment, polygraphs, crime prediction, evidence analysis

High-risk: Migration & border

Risk assessment of persons crossing borders, asylum evaluation

High-risk: Justice & democracy

Factual research in court proceedings, electoral process influence

EU AI Act obligation → Gateco capability

How Gateco maps to the technical requirements for high-risk AI systems.

Article / Obligation / Gateco capability

Article 9 — Risk management system
Obligation: Establish and maintain a risk management system throughout the AI system lifecycle.
Gateco: Policy-as-code with version history. Access Simulator dry-runs policy changes before going live.

Article 10 — Data governance
Obligation: Implement data governance practices covering data sources, collection, processing, and classification.
Gateco: Classification labels (public/internal/confidential/restricted) on every resource. Deny-by-default enforces classification-based access at retrieval time.

Article 12 — Record-keeping
Obligation: Automatically log events to allow post-hoc monitoring and traceability.
Gateco: 25 audit event types. Every retrieval logged with principal, resource, policy, decision, and timestamp. 90-day retention; SIEM streaming on Enterprise.

Article 13 — Transparency to deployers
Obligation: Provide instructions for use, including limitations and conditions of safe operation.
Gateco: Semantic Readiness L0–L4 shows deployers exactly which security guarantees are active. Fail-closed default prevents unsafe operation.

Article 14 — Human oversight
Obligation: Enable natural persons to oversee, intervene in, and halt the AI system.
Gateco: Policy approval workflow keeps humans in the loop. Any policy can be instantly deactivated. Denial reasons are always surfaced.

Article 15 — Accuracy, robustness, cybersecurity
Obligation: Achieve appropriate levels of accuracy, robustness, and cybersecurity.
Gateco: Fail-closed by default. Circuit breaker (5 errors/30 s, half-open after 2 min). <25 ms p95 policy overhead. TLS 1.3, AES-256 at rest.

Article 17 — Quality management system
Obligation: Implement a quality management system with documented policies and procedures.
Gateco: Policy Studio with draft/active/archived lifecycle. Policy versioning with diff viewer. Audit export for QMS documentation.

Article 72 — Post-market monitoring
Obligation: Actively monitor deployed AI system performance and compliance over time.
Gateco: Advanced analytics with retrieval trend insights. Audit export enables periodic review. SIEM streaming enables real-time monitoring.
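The Article 10 row above describes deny-by-default, classification-based access enforced at retrieval time. A minimal sketch of that pattern in Python — the `Principal`, `Resource`, and `allowed` names are illustrative, not Gateco's actual API:

```python
from dataclasses import dataclass

# Classification levels, lowest to highest sensitivity (per the Article 10 row).
LEVELS = ["public", "internal", "confidential", "restricted"]

@dataclass
class Resource:
    id: str
    classification: str

@dataclass
class Principal:
    id: str
    clearance: str  # highest classification this principal may read

def allowed(principal: Principal, resource: Resource) -> bool:
    """Deny by default: any unrecognised label fails closed."""
    try:
        return LEVELS.index(resource.classification) <= LEVELS.index(principal.clearance)
    except ValueError:
        return False  # unknown classification on either side -> deny

def filter_retrieval(principal: Principal, hits: list[Resource]) -> list[Resource]:
    """Drop any retrieved chunk the principal is not cleared to read
    before it reaches the LLM context window."""
    return [r for r in hits if allowed(principal, r)]
```

The key property is that the filter sits between the vector store and the model: a chunk the principal cannot read never enters the prompt, so no amount of prompt injection can surface it.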
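The Article 12 row above lists the fields captured for every retrieval. A hedged sketch of what one such record could look like as an append-only JSON log line — the field names are assumptions for illustration, not Gateco's actual schema:

```python
import json
from datetime import datetime, timezone

def audit_event(principal: str, resource: str, policy: str, decision: str) -> str:
    """Serialise one retrieval decision as a JSON log line.
    Fields mirror the Article 12 row: principal, resource, policy,
    decision, timestamp. Schema is illustrative, not Gateco's."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "principal": principal,
        "resource": resource,
        "policy": policy,
        "decision": decision,  # "allow" or "deny"
    }
    return json.dumps(record)
```

Structured, one-line JSON records like this are what make the SIEM streaming and post-hoc traceability requirements practical: every decision is independently parseable and attributable.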
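The Article 15 row above mentions a circuit breaker that trips after 5 errors in 30 seconds and probes half-open after 2 minutes. A self-contained sketch of that state machine — class and method names are illustrative, not Gateco's implementation:

```python
import time

class CircuitBreaker:
    """Fail-closed circuit breaker: opens after `max_errors` failures
    inside a sliding `window`, then allows a half-open probe after
    `cooldown`. Defaults match the figures quoted in the table above."""

    def __init__(self, max_errors=5, window=30.0, cooldown=120.0, clock=time.monotonic):
        self.max_errors, self.window, self.cooldown = max_errors, window, cooldown
        self.clock = clock
        self.errors: list[float] = []  # timestamps of recent failures
        self.opened_at = None          # when the breaker tripped, or None

    def allow_request(self) -> bool:
        if self.opened_at is None:
            return True  # closed: normal operation
        if self.clock() - self.opened_at >= self.cooldown:
            return True  # half-open: let one probe through
        return False     # open: fail closed, deny retrieval

    def record_failure(self) -> None:
        now = self.clock()
        # Keep only failures inside the sliding window, then add this one.
        self.errors = [t for t in self.errors if now - t < self.window]
        self.errors.append(now)
        if len(self.errors) >= self.max_errors:
            self.opened_at = now

    def record_success(self) -> None:
        self.errors.clear()
        self.opened_at = None  # probe succeeded: close the breaker
```

Note the fail-closed choice: while the breaker is open, `allow_request` returns `False` and retrieval is denied rather than served without policy checks.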

Free 1-page Annex III mapping PDF

Print-ready one-pager mapping all 8 Annex III categories to Gateco capabilities. Share it with your legal or compliance team.

Frequently asked questions

Does the EU AI Act apply to non-EU companies?

Yes — the AI Act has extraterritorial scope. If your AI system affects people located in the EU, the Act applies regardless of where your company is headquartered. This mirrors the GDPR model.

What happens after August 2, 2026?

Prohibitions on unacceptable-risk AI (Article 5) took effect February 2, 2025. High-risk AI rules (Annex III) take effect August 2, 2026 — 24 months after the regulation entered into force. Penalties for non-compliance can reach €15M or 3% of global annual turnover for high-risk violations.

Is my RAG pipeline 'high-risk' under Annex III?

If your RAG pipeline makes or influences decisions in employment, credit/insurance, healthcare, education, law enforcement, or biometrics — yes, it likely falls under Annex III high-risk classification. When in doubt, apply the high-risk controls; the cost of compliance is far lower than the cost of a penalty.

What if the Digital Omnibus Act delays enforcement?

As of May 2026, the Digital Omnibus Act proposal is still in legislative process and does not formally delay the August 2 deadline. Treat August 2, 2026 as binding until the Omnibus Act is formally enacted with a new date.

Can Gateco alone make my AI system AI Act-compliant?

No single tool makes you compliant — the AI Act requires a system-level approach. Gateco addresses the technical access control and audit trail requirements (Articles 9, 10, 12, 14, 15). You will still need model documentation, conformity assessment, and registration in the EU database. Think of Gateco as the retrieval-layer compliance component of a broader AI governance programme.

Don't wait until August

Book a free one-hour call. We'll walk through your RAG pipeline, map it to your AI Act obligations, and identify the gaps.