EU AI Act Article 12 logging — the tooling question for observability vendors

Most observability vendors are neither providers nor deployers of AI systems under the EU AI Act. They are the tools that help deployers meet Article 12 logging and Article 14 human-oversight obligations. Here's the distinction that matters.

By Akshay Sarode · August 8, 2025 · 12 min read · Tags: ai-act, eu, ai-monitoring, compliance


Of the four major EU compliance frameworks an observability vendor has to think about — NIS2, DORA, GDPR, and the AI Act — the AI Act is the one most likely to be misunderstood by both vendors and customers. The misunderstanding usually goes: "we monitor AI agents; therefore we are subject to the AI Act." That framing is wrong. The AI Act regulates providers and deployers of AI systems, not the tools they use to satisfy their obligations. This post is about why that distinction matters and what an observability vendor actually signs up to.

Regulation (EU) 2024/1689 — the AI Act — entered into force on 1 August 2024 with a staggered application calendar:

  • 2 February 2025 — Chapters I and II apply (general provisions and definitions, prohibited practices)
  • 2 August 2025 — General-Purpose AI (GPAI) provider obligations
  • 2 August 2026 — Most provisions for high-risk AI systems
  • 2 August 2027 — Full applicability for high-risk AI systems falling under existing Union harmonisation legislation (Annex I)

As of this writing, we are between the GPAI obligations milestone and the high-risk-AI-system milestone. The high-risk regime — which is where Articles 12 and 14 bite — becomes binding for new systems in August 2026. Customers are starting their compliance work now in anticipation.

The role taxonomy that actually matters

The AI Act distinguishes carefully between several roles in Article 3:

  • Provider (Art. 3(3)) — the natural or legal person, public authority, agency or other body that develops an AI system or that has an AI system developed and places it on the market or puts it into service under their own name or trademark, whether for payment or free of charge.
  • Deployer (Art. 3(4)) — any natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity.
  • Importer and Distributor — narrower categories for moving AI systems across borders or onto markets.
  • Authorised representative — for non-EU providers needing an EU presence.

Notice what isn't in that list: "tool vendor" or "observability vendor". The AI Act does not regulate tools that help deployers and providers meet their obligations. It regulates the AI systems themselves and the people who put them on the market or use them.

A SaaS observability platform that captures telemetry from AI agents a customer has built — recording prompts, responses, tool invocations, latency, errors — is not a provider (we did not develop or place the AI system on the market) and is not a deployer (we do not use the AI system; the customer does). We are a tool.

Article 12 — what deployers must do for logging

Article 12 applies to providers of high-risk AI systems and, by way of Article 26, cascades a derivative obligation onto deployers of those systems. The text:

Article 12(1): High-risk AI systems shall technically allow for the automatic recording of events ("logs") over the lifetime of the system.

Article 12(2): In order to ensure a level of traceability of the AI system's functioning that is appropriate to the intended purpose of the system, logging capabilities shall enable the recording of events relevant for: (a) identifying situations that may result in the AI system presenting a risk within the meaning of Article 79(1) or in a substantial modification; (b) facilitating post-market monitoring referred to in Article 72; (c) monitoring the operation of high-risk AI systems referred to in Article 26(5).

Article 19 then requires the provider, and Article 26(6) the deployer, to retain those logs for at least six months, unless otherwise mandated by Union or national law.

The deployer-side obligation under Article 26(5) — "Deployers of high-risk AI systems shall, on the basis of the instructions for use accompanying the system, monitor the operation of the system" — is the practical hook. A deployer of, say, a high-risk hiring-screening AI system must monitor it for drift, errors, and risk indicators, and they must keep logs that survive an audit.

This is where observability vendors come in.

Where observability fits

Observability vendors are the way deployers actually do Article 12 + Article 26(5) at scale. The deployer's options for satisfying these obligations are:

  1. Build their own logging pipeline. Possible. Expensive. Most teams don't have the headcount.
  2. Use the AI system provider's bundled logging. Often inadequate — vendors of LLM APIs typically provide minimal log retention and limited query capability.
  3. Use a dedicated AI observability platform. This is what Sutrace's AI agents pillar does, alongside the broader observability product.

Option 3 is what most deployers are choosing through 2025-2026. But notice: the deployer is still the regulated party. The vendor enables the obligation; the vendor doesn't take on the obligation.

What a competent observability vendor signs up to:

  • Immutable logs — write-once, no edit, no delete-by-vendor. Customers can configure a retention policy; we honour it. Logs do not silently disappear.
  • Timestamps to ±50 ms of UTC — clock-sync is a real engineering problem and we solve it via NTP-disciplined ingestion timestamps.
  • Complete event surface — for AI agents, that means prompt, completion, system prompt, tool invocations, retrieval context, latency, model version, deployer-supplied user identifier, and the agent's chain of reasoning.
  • Exportable — CSV/JSON/Parquet. The deployer has to be able to hand the logs to a regulator without us in the loop.
  • Retention configurable — Art. 19's six-month minimum is easy. A deployer in regulated finance might want seven years; we support up to ten.
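As a concrete illustration of that event surface, here is a minimal Python sketch of what a single exportable AI-agent log record might look like. The `AgentLogRecord` class, its field names, and the sample values are all hypothetical — this is not Sutrace's published schema:

```python
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

# Hypothetical sketch of one AI-agent log record covering the event surface
# described above. Field names are illustrative, not a published schema.
@dataclass
class AgentLogRecord:
    timestamp: str                 # UTC; in production, NTP-disciplined at ingestion
    model_version: str
    system_prompt: str
    prompt: str
    completion: str
    tool_invocations: list = field(default_factory=list)
    retrieval_context: list = field(default_factory=list)
    latency_ms: float = 0.0
    user_id: str = ""              # deployer-supplied identifier
    reasoning_trace: str = ""      # the agent's chain of reasoning, if captured

    def to_json(self) -> str:
        """Serialise to JSON for regulator-facing export."""
        return json.dumps(asdict(self), ensure_ascii=False)

record = AgentLogRecord(
    timestamp=datetime.now(timezone.utc).isoformat(),
    model_version="example-model-2025-01",
    system_prompt="You are a hiring-screening assistant.",
    prompt="Summarise candidate 1423's CV.",
    completion="Candidate 1423 has 6 years of backend experience...",
    tool_invocations=[{"tool": "cv_store.fetch", "args": {"id": 1423}}],
    latency_ms=812.4,
    user_id="recruiter-77",
)
print(record.to_json())
```

The same flat record serialises equally well to CSV or Parquet, which is what makes the "exportable without us in the loop" commitment cheap to honour.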

What a competent observability vendor does not sign up to:

  • Certifying that the deployer's AI system is compliant. We are not the conformity assessment body. We are not the notified body. We don't issue conformity declarations. Article 43 sets out the conformity-assessment regime; we are not in it.
  • Determining the risk classification. Annex III lists high-risk use-cases; the provider/deployer determines whether their system falls within them. We can advise based on what we observe, but the determination is theirs.
  • Pretending to be a "trusted intermediary". No such role exists in the AI Act.

Article 14 — human oversight

Article 14 requires high-risk AI systems to be designed and developed with human oversight measures. Article 26(2) cascades the deployer-side obligation: "Deployers of high-risk AI systems shall assign human oversight to natural persons who have the necessary competence, training and authority, as well as the necessary support."

Observability tooling is the substrate of human oversight. A human cannot oversee an AI system whose behaviour they cannot see in real time. So:

  • A real-time dashboard of AI agent activity is part of the oversight tooling.
  • Alert routing — "this agent's error rate exceeded 5% in the last hour" — is part of the oversight tooling.
  • Audit trail of human interventions (overrides, escalations, manual approvals) is part of the oversight tooling.

Sutrace's AI agents observability does these. Again — we don't do the human oversight. We provide the surface on which the customer's humans can do it.
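The error-rate alert mentioned above can be sketched in a few lines. This is a deliberately simplified, hypothetical sliding-window check — not the actual alerting engine — but it shows the shape of the computation:

```python
import time
from collections import deque

# Hypothetical sketch of an oversight alert: fire when an agent's error rate
# over a sliding one-hour window exceeds 5%. Class name and threshold are
# illustrative, not a real product API.
class ErrorRateAlert:
    def __init__(self, window_s: float = 3600.0, threshold: float = 0.05):
        self.window_s = window_s
        self.threshold = threshold
        self.events = deque()  # (timestamp, is_error) pairs

    def record(self, is_error: bool, now=None) -> bool:
        """Record one agent call; return True if the alert should fire."""
        now = time.time() if now is None else now
        self.events.append((now, is_error))
        # Evict events that have aged out of the window.
        while self.events and self.events[0][0] < now - self.window_s:
            self.events.popleft()
        errors = sum(1 for _, e in self.events if e)
        return errors / len(self.events) > self.threshold

alert = ErrorRateAlert()
fired = False
for i in range(100):
    # 94 successes followed by 6 errors, one event per second.
    fired = alert.record(is_error=(i >= 94), now=float(i))
print(fired)  # True: 6/100 = 6% > 5%
```

In production the same window logic runs server-side over ingested telemetry and routes to the customer's paging channel; the point is that the oversight signal is computed from the logs, not from the AI system itself.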

The GDPR overlap

AI agent logs frequently contain personal data — the prompt may include a customer name, a purchase history, an email address. Under GDPR, those logs constitute processing of personal data: the deployer is the controller and the observability vendor is the processor.

The cascade looks like this:

  1. The deployer determines the purposes and means of processing → controller under GDPR Art. 4(7).
  2. The deployer instructs Sutrace to log AI agent activity → Sutrace processes on behalf of the deployer → Sutrace is processor under Art. 4(8).
  3. Sutrace signs a DPA under Art. 28 → see the DPA page.

For high-risk AI systems handling personal data, this is the regime. The AI Act does not displace GDPR — Recital 9 of the AI Act explicitly preserves the GDPR. The two run in parallel, and a deployer must satisfy both.

What "AI Act compliant" means for vendors

Almost nothing.

If you read a vendor claiming to be "EU AI Act compliant", you should ask: "Compliant with what obligation? You're not a provider or a deployer of an AI system. The AI Act doesn't apply to you in the same way it applies to your customer."

The vendor-side obligations that do exist are mostly indirect:

  • If the vendor's product is itself an AI system being placed on the market — yes, then they're a provider, full Art. 16 obligations apply.
  • If the vendor's product is an AI tool used internally — they may be a deployer of their own AI tool.
  • If the vendor's product is sold as part of a high-risk AI system the customer assembles — they may end up regulated as a "component supplier" under Art. 25.

For most observability vendors, none of these apply. We are a tool. We help the deployer meet Article 12. We are not the regulated party.

That's not a dodge — it's the actual legal position. And the right move for an honest vendor is to articulate it clearly and explain what we do to make the deployer's job tractable, rather than vaguely asserting "AI Act compliance" as a marketing badge.

What Sutrace specifically commits to

For customers using Sutrace to monitor AI systems they have deployed, our compliance use-case page sets out what we sign:

  • AI agent logs are immutable and exportable
  • Timestamps are NTP-disciplined to ±50 ms UTC
  • Retention is customer-configurable, default 90 days, max 10 years
  • Logs include all required Article 12(2) categories — risk-relevant events, post-market monitoring inputs, monitoring of operation
  • Sub-processor list is published on the DPA page and changes on 30-day notice
  • Data plane is in europe-west3 (no transfer to third countries — see the DPF-survival post)

What we do not sign:

  • Vendor-side AI Act compliance certification (we are not subject to it)
  • Conformity assessment as a notified body (we are not one)
  • Sole liability for the deployer's misuse of our tools

Where this is going in 2026-2027

Two things to watch.

First, the Commission, supported by the European AI Office (established under Art. 64), will adopt delegated and implementing acts through 2026 specifying detailed technical requirements for logging under Art. 12. In parallel, harmonised standards are being drafted in the CEN-CENELEC JTC 21 working groups, with technical support from the Joint Research Centre (JRC); drafts are circulating now. These will likely become the de facto compliance benchmark.

Second, deployer questionnaires are starting to ask very specific things — log granularity, retention, hash-based integrity proofs, separation of personal-data fields, regulator-export formats. Vendors who can answer these crisply (we publish our log schema; we sign the audit cooperation clause; we hand over Parquet exports on demand) will close deals; vendors who hand-wave will not.
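The "hash-based integrity proofs" those questionnaires ask about usually mean some form of hash chaining: each entry's hash covers the previous hash, so a retroactive edit breaks every subsequent link. A hedged sketch of the generic pattern (not any particular vendor's scheme) follows:

```python
import hashlib
import json

# Generic hash-chain pattern for tamper-evident logs. Any edit to an earlier
# entry changes its hash and invalidates every hash after it.
GENESIS = "0" * 64

def chain_hash(prev_hash: str, entry: dict) -> str:
    payload = prev_hash + json.dumps(entry, sort_keys=True)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

def build_chain(entries):
    h, hashes = GENESIS, []
    for e in entries:
        h = chain_hash(h, e)
        hashes.append(h)
    return hashes

def verify_chain(entries, hashes) -> bool:
    h = GENESIS
    for e, expected in zip(entries, hashes):
        h = chain_hash(h, e)
        if h != expected:
            return False
    return True

logs = [{"seq": i, "event": f"agent_call_{i}"} for i in range(3)]
hashes = build_chain(logs)
print(verify_chain(logs, hashes))   # True: chain intact
logs[1]["event"] = "tampered"
print(verify_chain(logs, hashes))   # False: the edit breaks the chain
```

A deployer can hand a regulator the entries plus the chain (or just the final hash, anchored somewhere independent) and demonstrate the logs were not rewritten after the fact.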

The observability for compliance use-case page tracks the regulatory regime as it evolves and points to the most current addendum templates.
