Claude Code monitoring

Coding agents are becoming real engineering infrastructure. They run for hours, call expensive models, touch repositories, and fail in ways provider invoices do not explain.

Sutrace monitors local coding-agent sessions through @sutrace/agent-cli and stores them in the same llm_calls stream used by production AI services.

What gets tracked

  • session id
  • turn index
  • model
  • input/output tokens
  • cache read/create tokens
  • estimated cost
  • duration
  • status and error code
  • repository path
  • git branch
  • host
  • tool version
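
As a minimal sketch, each tracked turn can be thought of as one record with roughly this shape. The field names below are illustrative assumptions, not the exact llm_calls schema:

// Illustrative event shape; the real llm_calls schema may differ.
interface AgentCallEvent {
  sessionId: string;         // session id
  turnIndex: number;         // turn index within the session
  model: string;             // model the turn ran on
  inputTokens: number;
  outputTokens: number;
  cacheReadTokens: number;   // prompt-cache reads
  cacheCreateTokens: number; // prompt-cache writes
  estimatedCostUsd: number;  // estimated cost
  durationMs: number;        // turn duration
  status: "ok" | "error";
  errorCode?: string;        // set only when status is "error"
  repoPath: string;          // repository path
  gitBranch: string;
  host: string;              // machine the session ran on
  toolVersion: string;       // agent CLI version
}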

Why it matters

Claude Code, Codex, Cursor, Aider, and similar tools create a new spend category: engineering-agent usage. A provider invoice answers only one question about it: how much did Anthropic bill us?

The more useful questions are:

  • Which repository is using the most tokens?
  • Which branch or task caused the cost spike?
  • Which model is slowest for code review?
  • Which tool sessions fail repeatedly?
  • Which developers or hosts need a budget cap?

Install path

Set a Sutrace API key scoped to one agent asset, then start the CLI:

export SUTRACE_API_KEY=sk_dev_...   # key scoped to the agent asset
npx @sutrace/agent-cli start        # start the local monitoring daemon

Events land in /agents and on the agent asset's detail page.

First pilot workflow

  1. Pick one repository.
  2. Run the daemon for one week.
  3. Compare cost by branch, model, and session (see the sketch after this list).
  4. Add a daily budget alert.
  5. Decide whether to roll it out to the whole engineering team.
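
Step 3 needs nothing fancy. Assuming you can export the pilot week's events as JSON (a hypothetical export path; substitute however you actually pull data out of Sutrace), a short TypeScript script covers the comparison:

import { readFileSync } from "node:fs";

// Only the fields this script needs; names follow the
// illustrative sketch in "What gets tracked" above.
type Ev = { gitBranch: string; estimatedCostUsd: number };

// week1-events.json is a hypothetical export of the pilot week.
const events: Ev[] = JSON.parse(readFileSync("week1-events.json", "utf8"));

// Sum estimated cost per branch; key on model or session id
// instead to slice the same data the other two ways.
const costByBranch = new Map<string, number>();
for (const e of events) {
  costByBranch.set(e.gitBranch, (costByBranch.get(e.gitBranch) ?? 0) + e.estimatedCostUsd);
}

// Most expensive branches first.
for (const [branch, cost] of [...costByBranch].sort((a, b) => b[1] - a[1])) {
  console.log(`${branch}\t$${cost.toFixed(2)}`);
}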

What Sutrace deliberately avoids

Sutrace does not need to read your source code to make the first dashboard useful. It records metadata and usage. Repository names, branches, models, tokens, cost, duration, and errors are enough for first-stage spend visibility.