EU AI Act

Status: Live since 2026-08-02 (deployed in v0.11.x). 19 rules in rule_book/v1/eu-ai-act/.

Scope

The EU AI Act applies to providers and deployers of AI systems placed on the market or put into service in the European Union, regardless of where the provider is established. Secruna's pack focuses on the high-risk categories listed in Annex III — biometrics, critical infrastructure, education and vocational training, employment and worker management, essential private and public services, law enforcement, migration and border control, and administration of justice.

Penalties under Article 99 reach EUR 35 million or 7% of worldwide annual turnover (whichever is higher) for prohibited-AI infringements, and EUR 15 million or 3% for breaches of high-risk obligations. The numbers focus the mind.

What's in the pack

19 rules in rule_book/v1/eu-ai-act/ covering:

  • Annex III high-risk classification triggers
  • Article 9 risk management system
  • Article 10 data and data governance
  • Article 11 technical documentation (Annex IV)
  • Article 12 record-keeping (logging) requirements
  • Article 13 transparency and information provision
  • Article 14 human oversight
  • Article 15 accuracy, robustness, and cybersecurity
  • Article 16 obligations of providers
  • Article 26 obligations of deployers
  • Article 50 transparency for limited-risk systems

Each rule entry has a structured matcher over artifact_metadata, a customer-facing description (Plan 90), and a counsel-routing pointer for verdicts that need human sign-off.
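
For orientation, here is a minimal sketch of what a rule entry along these lines could look like. The field names (rule_id, matcher, verdict_copy, counsel_route) and the matcher semantics are illustrative assumptions, not the actual Plan 62 schema.

```python
# Illustrative only: the real rule schema lives in rule_book/v1/eu-ai-act/
# and may differ. Field names and matcher semantics here are hypothetical.
from dataclasses import dataclass


@dataclass
class RuleEntry:
    rule_id: str                      # e.g. "eu-ai-act/annex-iii-employment"
    matcher: dict                     # structured predicate over artifact_metadata
    verdict_copy: str                 # customer-facing description (Plan 90)
    counsel_route: str | None = None  # queue for verdicts needing human sign-off


def matches(rule: RuleEntry, artifact_metadata: dict) -> bool:
    """Hypothetical matcher semantics: every key/value pair in the
    matcher must be present in the artifact's metadata."""
    return all(artifact_metadata.get(k) == v for k, v in rule.matcher.items())


example = RuleEntry(
    rule_id="eu-ai-act/annex-iii-employment",
    matcher={"use_case": "candidate_screening", "deployed_region": "EU"},
    verdict_copy="This system screens job applicants and is high-risk under Annex III(4).",
    counsel_route="hitl/queue",
)
```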

Customer-facing surface

  • /inventory — every discovered AI system tagged with its EU AI Act risk classification.
  • /hitl/queue — the review queue for verdicts that need a human call.
  • Annex IV technical documentation export — per-AI-system PDF bundling the artifacts and verdict copy in the structure Article 11 requires.

Plan references

  • Plan 60 — initial v0.11 scoping for the EU AI Act pack.
  • Plan 65 — 17 Annex III synthetic Lambda fixtures with extractor validation.
  • Plan 89 P2 — the marketing /use-cases rewrite with article anchors so prospects can deep-link into specific obligations.
  • Plan 62 — rule book matcher schema (artifact_metadata), Phase 1 of the rule engine that the multi-framework work in Plan 96 generalised.

What's not in scope yet

General-purpose AI (GPAI) provider obligations under Articles 51-55 are scaffolded, but the matchers are scoped to test fixtures pending counsel review (Plan 62 deferred status). Until Plan 97 stands up the counsel-routing infrastructure, those rules sit dormant rather than firing on real customer artifacts.

Analysis

Scope

The EU AI Act applies to providers, deployers, importers, and distributors of AI systems placed on the market or put into service inside the European Union, regardless of where the organisation is established. It catches any organisation outside the EU whose AI output is used inside the Union. The Act differentiates four risk tiers: prohibited (Article 5), high-risk (Annex III plus Annex I product-safety derivatives), limited-risk (Article 50 transparency), and minimal-risk (no obligations). Secruna's rule book targets the high-risk tier, where the operational compliance burden lives. Out of scope for our pack: military, defence, and national-security AI (Article 2(3)), which fall under domestic regimes such as the UK Defence AI Playbook; and pure research AI systems prior to placing on the market. General-purpose AI (GPAI) provider obligations under Articles 51-55 are scaffolded but not firing on customer artifacts pending counsel review.
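
As a rough sketch of how that four-tier sorting could look in code (the metadata keys and trigger values here are assumptions for illustration, not the pack's actual matchers):

```python
# Sketch of the four-tier sorting described above. Metadata keys and trigger
# values are illustrative assumptions, not the pack's real classification logic.
from enum import Enum


class RiskTier(Enum):
    PROHIBITED = "prohibited"      # Article 5 practices
    HIGH_RISK = "high_risk"        # Annex III plus Annex I product-safety derivatives
    LIMITED_RISK = "limited_risk"  # Article 50 transparency
    MINIMAL_RISK = "minimal_risk"  # no obligations


ANNEX_III_TRIGGERS = {
    "biometrics", "critical_infrastructure", "education",
    "employment", "essential_services", "law_enforcement",
    "migration_border", "justice",
}


def classify(artifact_metadata: dict) -> RiskTier:
    """Assign a tier from discovered metadata (illustrative heuristic only)."""
    if artifact_metadata.get("prohibited_practice"):
        return RiskTier.PROHIBITED
    if artifact_metadata.get("annex_iii_category") in ANNEX_III_TRIGGERS:
        return RiskTier.HIGH_RISK
    if artifact_metadata.get("interacts_with_natural_persons"):
        return RiskTier.LIMITED_RISK
    return RiskTier.MINIMAL_RISK
```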

Key obligations

  1. Risk management system (Article 9) — documented, iterative identification, evaluation, and mitigation of foreseeable risks across the AI system lifecycle.
  2. Data and data governance (Article 10) — training, validation, and test datasets must meet quality criteria including representativeness, relevance, and bias examination.
  3. Technical documentation (Article 11, Annex IV) — drawn up before placing on the market and kept current; the Annex IV structure is prescriptive.
  4. Record-keeping (Article 12) — automatic logs over the lifetime of the system, retained for at least six months.
  5. Transparency and information to deployers (Article 13) — instructions for use, intended purpose, performance limits.
  6. Human oversight (Article 14) — designed-in measures so a human can intervene, override, or stop the system.
  7. Accuracy, robustness, cybersecurity (Article 15) — declared metrics and resilience to error, fault, and adversarial input.
  8. Conformity assessment, CE marking, registration (Articles 43, 48, 49, 71) — pre-market gates plus EU database entry.

Our coverage approach

The Annex III high-risk classification triggers fire on discovery, so an inventory item lands in the right risk tier without manual tagging. Article 11 technical documentation maps onto the Annex IV PDF export, which bundles every artifact and verdict-copy paragraph in the structure the Article prescribes. Article 12 logging is satisfied by the audit log entries written on every state change, hash-chained and tenant-scoped. Article 13 deployer instructions flow through the verdict copy on each rule entry (Plan 90). Article 14 human oversight is the dashboard's HITL queue: any verdict flagged for review routes to a named org-admin or reviewer. Article 26 deployer obligations attach to the Firm AI Register export, the cross-framework tenant-wide inventory. Article 9 risk management is covered by the inventory and risks pages together: every inventory item carries a current verdict surface and a risk-state history. Article 15 accuracy, robustness, and cybersecurity is partially covered: extractor confidence scores feed the verdict, but post-deployment performance monitoring is a Phase 2 feature.
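
The Article 12 point above rests on hash-chained audit entries. A minimal sketch of how such a chain can be built and verified, using hypothetical field names rather than the real audit-log schema, looks like this:

```python
# Minimal sketch of a hash-chained, tenant-scoped audit log in the spirit of
# the Article 12 coverage described above. Field names are hypothetical.
import hashlib
import json
from datetime import datetime, timezone


def append_entry(log: list[dict], tenant_id: str, event: dict) -> dict:
    """Append an entry whose hash covers the previous entry's hash,
    so any later tampering breaks the chain."""
    prev_hash = log[-1]["entry_hash"] if log else "0" * 64
    entry = {
        "tenant_id": tenant_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event": event,
        "prev_hash": prev_hash,
    }
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["entry_hash"] = hashlib.sha256(payload).hexdigest()
    log.append(entry)
    return entry


def verify_chain(log: list[dict]) -> bool:
    """Recompute each entry's hash and check the prev_hash links."""
    prev_hash = "0" * 64
    for entry in log:
        if entry["prev_hash"] != prev_hash:
            return False
        body = {k: v for k, v in entry.items() if k != "entry_hash"}
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != entry["entry_hash"]:
            return False
        prev_hash = entry["entry_hash"]
    return True
```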

Gaps

We do not yet generate the EU database registration record (Article 71) — the customer fills that form by hand using the Annex IV export as input. Conformity assessment (Article 43) is the customer's notified-body route; we surface the artifacts they need but do not run the assessment. GPAI provider obligations (Articles 51-55) are scaffolded but firing only on test fixtures pending counsel review (Plan 62 deferred status). Post-market monitoring (Article 72) — the requirement to monitor performance once the AI system is in operation — has rule entries but the runtime telemetry hooks are Phase 2 work. Serious incident reporting (Article 73) — the fifteen-day reporting clock to authorities — is documented in the rule copy but the workflow automation (notify, draft, file) is on the Plan 76 notification routing backlog.

Customer impact

Non-compliance under Article 99 carries the highest penalties in EU digital regulation: up to EUR 35 million or 7% of worldwide annual turnover for prohibited-AI infringements, EUR 15 million or 3% for high-risk obligation breaches, and EUR 7.5 million or 1% for incorrect or misleading information to authorities. For a mid-market organisation those figures are existential. Beyond the fine schedule, the Act creates procurement gates: EU public-sector buyers and many private-sector counterparties now ask for evidence of EU AI Act compliance as a precondition of contract. Reputational damage compounds the financial penalty — enforcement actions are public, and EU regulators have signalled willingness to pursue non-EU providers whose AI output is used inside the Union. The calmer reading: organisations that get the artifacts in order before enforcement ramps avoid both the fine and the procurement freeze.
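
For a back-of-envelope sense of the Article 99 caps (each is the higher of a fixed amount or a share of worldwide annual turnover), a quick calculation sketch:

```python
# Article 99 caps: the higher of a fixed amount or a percentage of worldwide
# annual turnover. Quick illustration of where each cap bites.
CAPS_EUR = {
    "prohibited_ai": (35_000_000, 0.07),
    "high_risk_obligations": (15_000_000, 0.03),
    "misleading_information": (7_500_000, 0.01),
}


def max_fine(category: str, worldwide_turnover_eur: float) -> float:
    fixed, pct = CAPS_EUR[category]
    return max(fixed, pct * worldwide_turnover_eur)


# A EUR 200m-turnover deployer breaching high-risk obligations:
# max(15_000_000, 0.03 * 200_000_000) -> the EUR 15m fixed cap applies.
# A EUR 1bn-turnover provider of a prohibited system:
# max(35_000_000, 0.07 * 1_000_000_000) -> the EUR 70m percentage cap applies.
```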