This is a preview/beta feature. Further changes are expected.

Overview

chainloop trace hooks into Claude Code and Git to automatically capture what happens during AI-assisted coding sessions — models used, tokens consumed, tools invoked, code changes produced, and more. Every git push bundles this data into a cryptographically signed attestation using the CHAINLOOP_AI_CODING_SESSION material type. This gives security and engineering teams full visibility into how AI agents contribute to your codebase, without changing how developers work.
chainloop trace currently supports Claude Code sessions. Support for additional AI coding agents is planned.

Why Trace AI Coding Sessions

AI coding agents are productive — but without visibility, teams can’t answer basic questions:
  • Which commits were AI-assisted, and what models were used?
  • How much did a session cost in tokens and dollars?
  • What tools did the agent invoke, and how many times?
  • What code changes did the agent produce?
chainloop trace answers all of these automatically. Combined with Chainloop’s policy engine, you can enforce governance rules — for example, requiring that AI-assisted commits use approved models or stay within token budgets.

How It Works

chainloop trace init installs lightweight hooks into both Git and Claude Code that work together:
  1. When a Claude Code session starts, a hook records the session and begins capturing session data. As the agent edits files, chainloop trace monitors every change — snapshotting files before and after each edit to record exactly which lines the AI modified.
  2. When you commit, a hook detects which sessions contributed to the commit. A single commit can include changes from multiple sessions — or a mix of AI and human edits.
  3. When you push, a pre-push hook builds the full evidence: session metadata, token usage, tool invocations, and per-line code attribution showing which lines were written by AI vs human. The evidence is signed and pushed to Chainloop as an attestation.
After the push, the local session data is cleaned up automatically. The attestation lives in Chainloop, tied to your project and workflow.
AI Coding Session

Getting Started

Prerequisites

Initialize Tracing

Run chainloop trace init from your repository root:
chainloop trace init --project my-project
This command:
  1. Creates the .git/chainloop-trace/ directory for local state
  2. Installs Git hooks (post-commit and pre-push)
  3. Installs Claude Code hooks (SessionStart, PreToolUse, PostToolUse, and SessionEnd) into .claude/settings.json
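For reference, Claude Code stores hooks under a top-level hooks key in .claude/settings.json. The entries added by chainloop trace init will be shaped roughly like the sketch below; the actual hook commands are managed by the CLI, so the command values are placeholders, and the PreToolUse matcher shown is an assumption.

```json
{
  "hooks": {
    "SessionStart": [
      { "hooks": [{ "type": "command", "command": "<chainloop trace session-start hook>" }] }
    ],
    "PreToolUse": [
      {
        "matcher": "Edit|Write|MultiEdit",
        "hooks": [{ "type": "command", "command": "<chainloop trace pre-tool hook>" }]
      }
    ]
  }
}
```

The PostToolUse and SessionEnd entries follow the same shape.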
If you already have Git hooks in place, chainloop trace backs them up automatically and chains them — your existing hooks continue to run as before.
If your repository has a .chainloop.yml file with a projectName field, you can omit the --project flag — the CLI resolves it automatically.
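A minimal .chainloop.yml for this purpose might look like the following (illustrative; projectName is the only field relied on here):

```yaml
# .chainloop.yml at the repository root
# `chainloop trace init` reads projectName when --project is omitted
projectName: my-project
```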

Work as Usual

Once initialized, there’s nothing else to do. Write code with Claude Code, commit, and push. The hooks handle everything in the background.
# Start a Claude Code session and work on your code
# ... make changes, commit ...

git push origin my-branch
# pre-push hook automatically creates the attestation
The pre-push hook only creates attestations for commits that are linked to a Claude Code session. Regular (non-AI-assisted) commits pass through without any overhead.

Remove Tracing

To uninstall all hooks and clean up local state:
chainloop trace uninstall
This removes the Git hooks, the Claude Code hooks from .claude/settings.json, and the .git/chainloop-trace/ directory. If existing hooks were backed up during installation, they are restored.

How Attribution Works

Chainloop trace goes beyond tracking which commits are AI-assisted — it knows exactly which lines of code were written by AI and which by a human.

Per-Line Tracking

Every time an AI agent edits a file (via Edit, Write, or MultiEdit tools), chainloop trace captures a snapshot before and after the change. By diffing these snapshots, it records the exact line ranges the AI modified — giving you line-level attribution for every file.
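The snapshot-diff idea can be sketched in a few lines of Python (an illustration of the technique, not chainloop's actual implementation): diff the before/after snapshots and collect the 1-indexed, inclusive line ranges that changed in the new version.

```python
# Illustrative sketch of the snapshot-diff technique described above;
# not chainloop's actual implementation.
import difflib

def modified_line_ranges(before: str, after: str) -> list[tuple[int, int]]:
    """Return 1-indexed, inclusive line ranges of `after` that differ
    from `before` (replaced or newly inserted lines)."""
    matcher = difflib.SequenceMatcher(a=before.splitlines(), b=after.splitlines())
    ranges = []
    for tag, _a1, _a2, b1, b2 in matcher.get_opcodes():
        if tag in ("replace", "insert"):
            # b1:b2 is a 0-indexed half-open slice; convert to 1-indexed inclusive
            ranges.append((b1 + 1, b2))
    return ranges

before = 'package main\n\nfunc main() {\n}\n'
after = 'package main\n\nimport "fmt"\n\nfunc main() {\n\tfmt.Println("hi")\n}\n'
print(modified_line_ranges(before, after))
```

Deletions shrink the file without occupying lines in the new version, which is why only the replace and insert opcodes contribute ranges.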

Multiple Sessions per Commit

A single commit can include changes from multiple AI sessions, or a mix of AI and human edits. Chainloop trace detects all contributing sessions automatically — you don’t need to commit separately for each session. For example, if session A edits handler.go, session B edits config.go, and you commit both together, the evidence correctly attributes each file to its respective session.

Attribution at Push Time

When you push, chainloop trace combines the per-line tracking data with the git diff to produce the final attribution:
  • Each file gets a label: ai or human
  • AI-attributed files include the exact line ranges the agent touched
  • Aggregate stats break down total lines added/removed by AI vs human
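The merge step can be sketched as follows (hypothetical Python, not chainloop's actual code): files with recorded AI line ranges are labeled ai, everything else human, and the aggregates sum per label.

```python
# Hypothetical sketch of push-time attribution. Input shapes are assumptions:
# diff_stats comes from the git diff, ai_ranges from per-line tracking.
def attribute(diff_stats, ai_ranges):
    """diff_stats: {path: (lines_added, lines_removed)}
    ai_ranges: {path: [(start, end), ...]} for AI-edited files."""
    files, totals = [], {"ai": [0, 0], "human": [0, 0]}
    for path, (added, removed) in diff_stats.items():
        label = "ai" if path in ai_ranges else "human"
        entry = {"path": path, "attribution": label,
                 "lines_added": added, "lines_removed": removed}
        if label == "ai":
            entry["line_ranges"] = ai_ranges[path]
        totals[label][0] += added
        totals[label][1] += removed
        files.append(entry)
    return {"files": files,
            "ai_lines_added": totals["ai"][0], "ai_lines_removed": totals["ai"][1],
            "human_lines_added": totals["human"][0],
            "human_lines_removed": totals["human"][1]}

result = attribute({"src/handler.go": (45, 10), "README.md": (5, 2)},
                   {"src/handler.go": [(12, 34), (50, 72)]})
print(result["ai_lines_added"], result["human_lines_added"])
```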
All hooks are non-blocking — if they fail, they log a warning but never prevent commits or pushes from succeeding.

What Gets Captured

The CHAINLOOP_AI_CODING_SESSION evidence includes:

Session Metadata

  • Session ID, start/end time, and duration
  • Claude Code version and session slug

Model and Token Usage

  • Primary model and all models used during the session
  • Input, output, and cache tokens
  • Estimated cost in USD (based on published Anthropic pricing)

Tool Usage

  • Every tool the agent invoked (Read, Write, Edit, Bash, Grep, etc.)
  • Invocation count per tool and total invocations

Conversation Summary

  • Total messages, user messages, and assistant messages

Subagent Details

  • Each subagent spawned: type, description, and token usage

Git Context

  • Repository URL, branch, and working directory
  • Commit range (start SHA to end SHA) and full list of commits
  • Merge-base detection against main/master

Code Changes

  • Files modified, created, deleted, renamed, or copied
  • Lines added and removed (per-file and aggregate)

Code Attribution

  • Per-file attribution label: ai or human
  • Exact line ranges the AI agent touched (1-indexed, inclusive)
  • Session IDs: which sessions modified each file
  • Aggregate breakdown: AI lines added/removed vs human lines added/removed

Evidence Structure

{
  "schema_version": "v1",
  "agent": {
    "name": "claude-code",
    "version": "1.0.0"
  },
  "session": {
    "id": "abc123-...",
    "slug": "my-feature-work",
    "started_at": "2026-03-31T10:00:00Z",
    "ended_at": "2026-03-31T10:45:00Z",
    "duration_seconds": 2700
  },
  "model": {
    "primary": "claude-opus-4-6",
    "provider": "anthropic",
    "models_used": ["claude-opus-4-6", "claude-haiku-4-5-20251001"]
  },
  "usage": {
    "input_tokens": 125000,
    "output_tokens": 42000,
    "cache_read_input_tokens": 80000,
    "cache_creation_input_tokens": 15000,
    "total_tokens": 167000,
    "estimated_cost_usd": 1.675
  },
  "tools_used": {
    "summary": [
      { "tool_name": "Read", "invocation_count": 23 },
      { "tool_name": "Edit", "invocation_count": 12 },
      { "tool_name": "Bash", "invocation_count": 8 }
    ],
    "total_invocations": 43
  },
  "conversation": {
    "total_messages": 47,
    "user_messages": 12,
    "assistant_messages": 35
  },
  "subagents": [
    {
      "type": "Explore",
      "description": "Search codebase for patterns",
      "token_usage": { "input_tokens": 8000, "output_tokens": 3000 }
    }
  ],
  "git_context": {
    "repository": "[email protected]:acme/backend.git",
    "branch": "feat/new-api",
    "commit_start": "a1b2c3d",
    "commit_end": "e4f5g6h",
    "commit_count": 3,
    "commits": [
      "a1b2c3d feat(docs): document user authentication",
      "abab354 fix style",
      "e4f5g6h add more examples on supported auth providers"
    ]
  },
  "code_changes": {
    "files_modified": 5,
    "files_created": 2,
    "lines_added": 240,
    "lines_removed": 45,
    "ai_lines_added": 180,
    "ai_lines_removed": 20,
    "human_lines_added": 60,
    "human_lines_removed": 25,
    "files": [
      {
        "path": "src/handler.go",
        "status": "modified",
        "lines_added": 45,
        "lines_removed": 10,
        "attribution": "ai",
        "line_ranges": [
          { "start": 12, "end": 34 },
          { "start": 50, "end": 72 }
        ],
        "session_ids": ["abc123-..."]
      },
      {
        "path": "README.md",
        "status": "modified",
        "lines_added": 5,
        "lines_removed": 2,
        "attribution": "human"
      }
    ]
  }
}
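If you consume this JSON downstream (for example in CI checks before writing Rego), a couple of sanity checks are easy to express. The field names below come from the sample above; the consistency rules are inferred from it, not a documented schema guarantee.

```python
# Sanity checks against the sample evidence above. Field names come from
# the sample; the consistency rules are inferred, not a schema guarantee.
evidence = {
    "usage": {"input_tokens": 125000, "output_tokens": 42000,
              "total_tokens": 167000, "estimated_cost_usd": 1.675},
    "code_changes": {"lines_added": 240, "ai_lines_added": 180,
                     "human_lines_added": 60},
}

usage = evidence["usage"]
# total_tokens appears to be input + output (cache tokens tracked separately)
assert usage["total_tokens"] == usage["input_tokens"] + usage["output_tokens"]

changes = evidence["code_changes"]
# added lines split cleanly into AI-attributed and human-attributed
assert changes["lines_added"] == changes["ai_lines_added"] + changes["human_lines_added"]
print("evidence invariants hold")
```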

Applying Policies

Define CHAINLOOP_AI_CODING_SESSION in your contract to attach policies to traced sessions:
contract.yaml
apiVersion: chainloop.dev/v1
kind: Contract
metadata:
  name: ai-session-governance
spec:
  materials:
    - type: CHAINLOOP_AI_CODING_SESSION
      name: ai-coding-session
  policies:
    materials:
      - ref: file://check-approved-models.yaml

Example: Restrict to Approved Models

check-approved-models.yaml
apiVersion: chainloop.dev/v1
kind: Policy
metadata:
  name: check-approved-models
  description: Ensure AI coding sessions only use approved models
spec:
  policies:
    - kind: CHAINLOOP_AI_CODING_SESSION
      embedded: |
        package main

        import rego.v1

        valid_input if {
          input.data.model.models_used
        }

        approved_models := {"claude-opus-4-6", "claude-sonnet-4-6"}

        violations contains msg if {
          valid_input
          some model in input.data.model.models_used
          not model in approved_models
          msg := sprintf("Model '%s' is not approved for AI coding sessions.", [model])
        }

Example: Enforce Token Budget

check-token-budget.yaml
apiVersion: chainloop.dev/v1
kind: Policy
metadata:
  name: check-token-budget
  description: Flag sessions that exceed a token budget
spec:
  policies:
    - kind: CHAINLOOP_AI_CODING_SESSION
      embedded: |
        package main

        import rego.v1

        valid_input if {
          input.data.usage.total_tokens
        }

        max_tokens := 500000

        violations contains msg if {
          valid_input
          input.data.usage.total_tokens > max_tokens
          msg := sprintf("Session used %d tokens, exceeding the %d token budget.", [input.data.usage.total_tokens, max_tokens])
        }

Example: Limit AI-Authored Code Ratio

check-ai-code-ratio.yaml
apiVersion: chainloop.dev/v1
kind: Policy
metadata:
  name: check-ai-code-ratio
  description: Flag sessions where AI-authored code exceeds a threshold
spec:
  policies:
    - kind: CHAINLOOP_AI_CODING_SESSION
      embedded: |
        package main

        import rego.v1

        valid_input if {
          input.data.code_changes.lines_added > 0
        }

        max_ai_ratio := 80

        violations contains msg if {
          valid_input
          total := input.data.code_changes.lines_added
          ai := input.data.code_changes.ai_lines_added
          ratio := (ai * 100) / total
          ratio > max_ai_ratio
          msg := sprintf("AI authored %d%% of added lines (%d/%d), exceeding the %d%% threshold.", [ratio, ai, total, max_ai_ratio])
        }
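As a quick sanity check of the arithmetic: the sample evidence above has 180 AI lines out of 240 added, which stays under the 80% threshold. The same computation in Python:

```python
# Same ratio computation as the Rego rule, on the sample evidence numbers
ai_lines_added, lines_added, max_ai_ratio = 180, 240, 80
ratio = ai_lines_added * 100 // lines_added  # 18000 // 240 = 75
print(f"AI authored {ratio}% of added lines; violation: {ratio > max_ai_ratio}")
```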

Inspecting Traces in the Platform

Once a trace attestation has been pushed, you can inspect it directly in the Chainloop Web UI. Navigate to the workflow run that contains the CHAINLOOP_AI_CODING_SESSION material.

Rendered View

The platform renders a structured summary of the session — model usage, token consumption, estimated cost, tool invocations, code changes, and per-line attribution — all in an easy-to-read format.
AI Coding Session material view showing session metadata, token usage, cost, git context, code changes, and tool invocations
The attribution breakdown shows which files were modified by AI vs human, with exact line ranges and aggregate statistics.
AI coding session showing per-file code attribution with AI vs human line breakdown

Raw View

Switch to the raw view to see the full JSON evidence as captured by the hooks. This is useful for debugging policies or understanding the exact data available for Rego evaluation.
AI Coding Session raw JSON view showing the full evidence schema with session, git context, and code changes
You can inspect any CHAINLOOP_AI_CODING_SESSION material the same way you inspect other evidence types — click on the material in the workflow run details to toggle between rendered and raw views.

Relationship to AI Config Collection

chainloop trace and the AI config collector are complementary:
  • What it captures: the AI config collector captures static configuration files (CLAUDE.md, settings, MCP config, rules, skills); chainloop trace captures runtime session data (tokens, tools, code changes, costs).
  • When it runs: the collector runs during chainloop attestation init --collectors aiagent; trace runs automatically on every git push via hooks.
  • Material type: CHAINLOOP_AI_AGENT_CONFIG for the collector; CHAINLOOP_AI_CODING_SESSION for trace.
  • Use case: the collector provides governance over how agents are configured; trace provides visibility into what agents actually did.
Use both together for full coverage: the config collector ensures agents are set up correctly, while trace ensures they behave as expected at runtime.