This feature is only available on Chainloop’s platform paid plans.

Overview

Chainloop can leverage Large Language Models (LLMs) to evaluate evidence using natural-language prompts. This allows you to define flexible, human-readable compliance checks that go beyond what traditional rule-based policies can express. There are two approaches:
  • Built-in evidence-prompt policy — No code needed. Define a prompt directly in your workflow contract and Chainloop handles the rest.
  • Custom Rego policies with chainloop.evidence_prompt — For advanced use cases where you need to combine AI analysis with programmatic logic.
Both approaches work with any AI provider integration you have configured (Anthropic or OpenAI).

Prerequisites

Before using LLM-driven policies, you need to register an AI Provider integration in your Chainloop organization. Navigate to Integrations and filter by AI Provider to see the available options. Register at least one AI provider (Anthropic or OpenAI) with a valid API key. See Integrations for detailed setup instructions.

Option 1: Using the built-in evidence-prompt policy

The simplest way to run LLM-driven evaluations is to use the built-in evidence-prompt policy. It requires no Rego code — just a natural-language prompt. The policy accepts a single required input, prompt, which describes what the AI should analyze in the evidence.

Adding to a workflow contract

Reference the evidence-prompt policy in your workflow contract under policies.attestation, policies.materials, or both depending on what you want to evaluate.
contract.yaml
schemaVersion: v1
policies:
  attestation:
    - ref: evidence-prompt
      with:
        prompt: "Check that all container images referenced in this attestation come from a trusted registry (e.g. ghcr.io or docker.io/chainloop)"
  materials:
    - ref: evidence-prompt
      with:
        prompt: "Analyze this SBOM and report any components with non-OSS compatible licenses such as AGPL, SSPL, or proprietary licenses"
materials:
  - type: SBOM_CYCLONEDX_JSON
    name: my-sbom
  • Under policies.attestation: the prompt runs against the full attestation envelope, useful for cross-material checks like verifying all container images come from trusted registries.
  • Under policies.materials: the prompt runs against each matching material individually (e.g., each SBOM), useful for per-artifact analysis like license compliance or vulnerability assessment.
You can use both attestation-level and material-level prompts in the same contract to layer different checks.

Evaluation results

Like any other policy, the evaluation results are cryptographically signed and embedded in the attestation. LLM-driven evaluations are clearly marked with an AI indicator so you can distinguish them from traditional rule-based checks. Each evaluation shows the prompt that was used as input and the violations returned by the AI provider, giving full traceability into what was checked and why it failed.

Option 2: Using chainloop.evidence_prompt in custom policies

For more control, you can call the chainloop.evidence_prompt builtin function from within a custom Rego policy. This lets you combine AI analysis with programmatic checks in a single policy. See How to write custom policies for the full custom policy workflow.

The chainloop.evidence_prompt function

result := chainloop.evidence_prompt(evidence, prompt)
  • evidence (string): a CAS digest (sha256:...) or raw evidence content
  • prompt (string): the prompt describing what to analyze
  • Returns an object with skipped (boolean) and violations (array of strings)
See the builtin functions reference for full details.
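As an illustration of the return object, the following minimal material policy forwards the evidence digest and surfaces whatever the provider returns, and also shows one way to report a skipped evaluation explicitly rather than letting it pass silently. This is a sketch: the prompt text is illustrative, and the input.material.hash path follows the example in the next section.

package main

import rego.v1

# Forward the material's CAS digest together with a natural-language prompt (illustrative text)
result := chainloop.evidence_prompt(input.material.hash, "List any components with disallowed licenses")

# Surface each violation returned by the AI provider
violations contains msg if {
  not result.skipped
  some msg in result.violations
}

# Optionally report when the evaluation was skipped instead of passing silently
violations contains "AI evidence evaluation was skipped" if {
  result.skipped
}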

Example: combining AI analysis with programmatic checks

The following policy uses the AI prompt to find license issues, then adds a programmatic check to ensure the SBOM contains at least one component:
policy.rego
package main

import rego.v1

# AI-powered license analysis
violations contains msg if {
  result := chainloop.evidence_prompt(input.material.hash, "List any components with AGPL, SSPL, or proprietary licenses")
  not result.skipped
  some msg in result.violations
}

# Programmatic check: ensure SBOM has components
violations contains "SBOM contains no components" if {
  count(input.material.content.components) == 0
}
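Because chainloop.evidence_prompt also accepts raw evidence content rather than a CAS digest, you can pass the material content directly. The sketch below assumes the parsed material is available under input.material.content, as in the example above, and re-serializes it with Rego's built-in json.marshal; the resulting string may not be byte-identical to the original file, but it conveys the same content.

# AI analysis over raw evidence content instead of a CAS digest
violations contains msg if {
  raw := json.marshal(input.material.content)
  result := chainloop.evidence_prompt(raw, "List any components with AGPL, SSPL, or proprietary licenses")
  not result.skipped
  some msg in result.violations
}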

How it works

When a policy with an LLM prompt is evaluated during an attestation:
  1. Chainloop extracts the relevant evidence content (the full attestation or an individual material).
  2. The evidence content and your prompt are sent to whichever AI provider is configured in your organization.
  3. The LLM analyzes the evidence according to your prompt and returns any violations it finds.
  4. Those violations surface in the attestation results alongside violations from any other policies.