This documentation is for the proprietary Chainloop Platform. If you are looking for the Open Source Evidence Store documentation, please refer to the Chainloop Evidence Store guide

Introduction

The Chainloop platform has two top-level components: Chainloop Evidence Store and Chainloop Platform. These are deployed using two different Helm Charts.

Chainloop Evidence Store

  • The source code can be found in this repository: https://github.com/chainloop-dev/chainloop
  • It is deployed using the upstream Helm Chart or its relocated version at oci://chainloop.azurecr.io/chart/chainloop
  • It comprises two server-side components, a control plane, and a content-addressable proxy.

Chainloop Platform

  • The source code of the server components is the property of Chainloop Inc.
  • The Helm Chart can be found here oci://chainloop.azurecr.io/chart/chainloop-platform
  • It comprises three server-side components: a backend, a frontend, and a NATS (nats.io) server.

The Chainloop Platform components are meant to extend and enhance the capabilities of the Chainloop OSS Evidence Store, and hence require access to a running instance of the Chainloop Evidence Store.

Retrieve Private OCI Registry credentials

To pull the Helm Charts and container images, you’ll need to authenticate with a private OCI registry managed by Chainloop. If you don’t have credentials yet, please contact the Chainloop team.

Once you get the credentials, you can verify them by authenticating locally and pulling the latest version of both Helm Charts.

# Authenticate locally
$ docker login -u [username] -p [creds] chainloop.azurecr.io

# Chainloop Evidence Store Chart
$ helm fetch oci://chainloop.azurecr.io/chart/chainloop

# Chainloop Platform Chart
$ helm fetch oci://chainloop.azurecr.io/chart/chainloop-platform
$ ls chainloop-*

chainloop-1.77.2.tgz  chainloop-platform-0.12.0.tgz

Configure credentials for deployment

You have two options to consume the provided Helm Charts and container images in your Kubernetes cluster:

Option 1: Pull images and Chart directly from Private Azure Registry

To pull the images, you’ll need to store these credentials as a K8s secret in the cluster where you will deploy the Helm Charts. Later, they will be referenced in the Helm charts as imagePullSecrets.

$ kubectl create secret docker-registry regcred-azure \
    --docker-server=https://chainloop.azurecr.io \
    --docker-username=[your-username] \
    --docker-password=[your credentials]
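Once the secret exists, it can be referenced from the charts’ values. A minimal sketch, assuming the charts expose a standard imagePullSecrets value (the exact key path is an assumption; verify it against each chart’s README):

```yaml
# values.yaml (sketch; the key path may differ per chart, check the chart's README)
global:
  # Reference the secret created above so Kubernetes can pull images
  # from chainloop.azurecr.io
  imagePullSecrets:
    - regcred-azure
```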

Option 2: Relocate Helm Chart and images to your own registry

Both charts are compatible with the relocation process performed by the Helm dt (Distribution Tooling) plugin. This means you can easily import the Helm Charts and container images provided by the Chainloop team into your own registry, even behind a firewall.

This is a two-step process (wrap -> unwrap):

  • Pull all the container images and the Helm chart and wrap them in an intermediate tarball.
  • If needed, transfer that tarball to your air-gapped environment via an offline medium.
  • Unwrap the tarball, push the container images, update the Helm Chart with the new image references, and push it to the target registry.

For example, to relocate from Azure to your own Harbor registry:

helm dt wrap oci://chainloop.azurecr.io/chart/chainloop-platform

# 🎉  Helm chart wrapped into "chainloop-platform-0.11.0.wrap.tgz"
# Now you can take the tarball to an air-gapped environment and unwrap it like this

helm dt unwrap chainloop-platform-0.11.0.wrap.tgz oci://my-repo.harbor.io --yes

#  Unwrapping Helm chart "chainloop-platform-0.11.0.wrap.tgz"
#    ✔  All images pushed successfully
#    ✔  Helm chart successfully pushed
#
# 🎉  Helm chart unwrapped successfully: You can use it now by running "helm install oci://my-repo.harbor.io/chainloop-platform --generate-name"

The relocated Helm Charts will then be ready to use, pointing at the images relocated to the target registry.
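To sanity-check a relocation, you can inspect the chart straight from the target registry (using the hypothetical my-repo.harbor.io registry from the example above):

```shell
# Confirm the chart is available at its new location
helm show chart oci://my-repo.harbor.io/chainloop-platform

# Confirm the default image references now point at the target registry
helm show values oci://my-repo.harbor.io/chainloop-platform | grep -i my-repo.harbor.io
```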

Deploy and Configure in Kubernetes

The Chainloop platform has two top-level components: Chainloop Evidence Store and Chainloop Platform. These are deployed using two different Helm Charts.

We are working on an umbrella Helm Chart that will contain both the evidence store and platform charts. This will greatly simplify the deployment, configuration, and update process. Stay tuned!

As an example of a final, more detailed picture, both the Chainloop Evidence Store and Chainloop Platform could be deployed on AWS, leveraging AWS services such as RDS or AWS Secrets Manager.

For more details on configuring each component, refer to the respective README files in the Helm Charts.

Step 1 - Deploy Evidence Store Chart

Please refer to the Chart’s README file, included in the chart tarball, which can be pulled like this:

helm fetch oci://chainloop.azurecr.io/chart/chainloop
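As a reference, a typical install could look like the following; the release name, namespace, and values file are illustrative assumptions, so follow the README for the authoritative flags:

```shell
# Sketch: install the Evidence Store chart into its own namespace.
# Release name "chainloop", the namespace, and values.yaml are assumptions.
helm install chainloop oci://chainloop.azurecr.io/chart/chainloop \
  --namespace chainloop --create-namespace \
  --values values.yaml
```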

Step 2 - Deploy Platform Chart

Please refer to the Chart’s README file, included in the chart tarball, which can be pulled like this:

helm fetch oci://chainloop.azurecr.io/chart/chainloop-platform

Once you have the two main components up and running, the next step is to configure the Chainloop Evidence Store to leverage advanced features by talking to the proprietary components.

Step 3 - Enable Built-in Library of Policies

The Chainloop Platform backend exposes a curated library of policies through its API. To leverage it, some configuration changes need to be made in the Chainloop Vault controlplane Helm chart values.yaml file.

Chainloop Chart values.yaml
controlplane:
  # tell the chainloop controlplane to pick the policies from the platform backend
  policy_providers:
    - name: builtin # keep this as is
      default: true # keep this as is
      # You can use the internal k8s service name for example
      #  url: http://chainloop-platform-backend.platform-prod.svc.cluster.local/v1
      url: [your-backend-api.com]/v1

Once configured, you can update your contracts with policies and policy groups from the Chainloop Platform like this:

Example Contract file
# This is a Chainloop Workflow Contract file
# That attaches policies and groups from the Chainloop Platform just by referencing them by name
schemaVersion: v1
policies:
  attestation:
    - ref: source-commit
  materials:
    - ref: sbom-ntia
policyGroups:
  - ref: sbom-quality

You can learn more about how to use policies here.

Step 4 - Configure Controlplane and CAS clients in backend

To enable advanced features such as manual compliance proof capabilities, you need to configure the Chainloop Platform backend so it can talk to the Chainloop Controlplane and CAS.

This can be done by updating the platform backend Helm chart values.yaml file like this:

Platform Chart values.yaml
backend:
  controlplane_api:
    # You can use internal DNS names i.e my-controlplane.internal.svc.cluster.local
    # The gRPC endpoint for the controlplane API
    # for example chainloop-controlplane-api.chainloop-prod.svc.cluster.local:80
    addr: [your-controlplane-gRPC-api-hostname]:[port]
    insecure: false # set to true if TLS is not enabled on the gRPC API
  
  cas_api:
    # You can use internal DNS names
    # for example chainloop-cas-api.chainloop-prod.svc.cluster.local:80
    addr: [your-cas-gRPC-api-hostname]:[port]
    insecure: false

Step 5 (optional) - Enable Audit logs

To receive audit log entries from the Chainloop controlplane, you need to add the hostname of the NATS server that runs as part of the platform chart. To find the hostname, you can query the service like this:

kubectl get svc -l app.kubernetes.io/name=nats -o name
service/chainloop-platform-nats

In this example, chainloop-platform-nats is what you will use as the hostname, or the FQDN chainloop-platform-nats.[my-namespace].svc.cluster.local if you are connecting from another namespace.

Then you’ll update the Chainloop Vault controlplane Helm chart values.yaml file like this:

controlplane:
  # The Chainloop Platform backend exposes a NATS server that can be used to receive audit logs
  nats:
    # Enable sending audit logs to the NATS server
    enabled: true
    # hostname where the NATS server is running, for example, [nats-service-name].[namespace].svc.cluster.local
    host: "my-nats-uri" # for example chainloop-platform-nats.platform-prod.svc.cluster.local

This example shows an unauthenticated NATS server configuration. We do not recommend this setup; please make sure you add authentication to your NATS server and update the previous settings accordingly.

Once the setup is done, you can test it by logging in to Chainloop; new events should appear in the audit log section of the UI.

Step 6 (optional) - Configure Keyless attestation for GitLab

1 - Configure Federated Authentication in Controlplane

Update the Chainloop Vault controlplane Helm chart values.yaml file like this:

Chainloop Chart values.yaml
controlplane:
  # enable federated authentication for the attestation
  federatedAuthentication:
    enabled: true
    # Example: http://chainloop-platform-backend.platform-prod.svc.cluster.local/machine-identity/verify-token
    url: [your chainloop platform backend hostname]/machine-identity/verify-token

2 - Enable GitLab AuthN and AuthZ in backend

To support GitLab keyless attestation, the platform backend needs to be configured to be able to:

  • 2.1 - Authenticate the incoming tokens
  • 2.2 - Enable authorization, which means
    • Allow the user to onboard GitLab repositories
    • Authorize the incoming tokens against the onboarded repositories

To configure both, you need to:

2.1 - First, add a machine identity issuer. This is used to verify the incoming GitLab tokens:

Platform Chart values.yaml
backend:
  # Support verification of gitlab tokens
  machineIdentityIssuers:
    - url: [your gitlab instance URL] # e.g. https://gitlab.com or your self-hosted instance
      type: ISSUER_TYPE_GITLAB # do not change this
      type: ISSUER_TYPE_GITLAB # do not change this

2.2 - Configure a GitLab OAuth2 application. Note that you’ll need to create one with the read_api scope in your GitLab instance:

Platform Chart values.yaml
# In platform Chart
backend:
  oauth2Providers:
    - id: gitlab # do not change this
      url: [your gitlab instance URL] # e.g. https://gitlab.com or your self-hosted instance
      client_id: REDACTED
      client_secret: REDACTED
      type: ISSUER_TYPE_GITLAB # do not change this

You are done! :)