As part of an attestation process, you might want to collect different pieces of evidence such as Software Bills of Materials (SBOMs), test results, runner logs, etc., and then attach them to the final in-toto attestation.
Chainloop helps with this process by providing a Content Addressable Storage (CAS) API proxy that routes uploads and downloads of these pieces of evidence to the storage backend of your choice.
You can set up as many CAS backends as you want, but only one of them can be enabled as the default at a time. This default backend will be used during the attestation process to store the pieces of evidence.
You can manage your CAS backends in the Storage Backends Section.
New CAS backends will be added over time. If yours is not implemented yet, please let us know.
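For instance, assuming the cas-backend subcommand of the Chainloop CLI (the exact subcommand name may differ in your version, check the CLI help), you can list the configured backends and see which one is set as the default:

```bash
# List the configured CAS backends; the one marked as default
# is the one used to store evidence during attestations
chainloop cas-backend ls
```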
Chainloop comes pre-configured with what we call an inline
backend that embeds the pieces of evidence in the resulting attestations.
The inline backend is useful to get started quickly, but since the evidence is embedded in the attestation, its maximum size is limited.
We recommend switching to one of the more robust backends described below when moving to production.
Chainloop also supports storing artifacts in AWS S3.
To connect your AWS account to Chainloop you’ll need an S3 bucket and an IAM user with access to it.
Create an S3 bucket
Create the bucket and take note of its name and region; you’ll need both later to register the backend.
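For example, you can create the bucket with the AWS CLI ([bucketName] and [region] are placeholders):

```bash
# Create the bucket in the region of your choice
aws s3 mb s3://[bucketName] --region [region]
```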
Create an IAM user with access to that bucket
Next, we are going to create a policy with read/write permissions on the bucket.
You can use the snippet below by just replacing [bucketName]
with the actual name of the bucket you created in the step before.
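As a sketch, a read/write policy of this kind can look like the following; the policy name chainloop-cas-policy is just an example, and you may want to adjust the list of actions to your needs. The JSON document inside the heredoc can also be pasted directly into the AWS console policy editor:

```bash
# Write the policy document to a file; replace [bucketName] with your bucket name
cat > chainloop-cas-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::[bucketName]"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::[bucketName]/*"
    }
  ]
}
EOF

# Register the policy; take note of the ARN returned, you'll need it in the next step
aws iam create-policy --policy-name chainloop-cas-policy \
  --policy-document file://chainloop-cas-policy.json
```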
Then create a user, attach the policy to it, and click on “Create access key”.
Select “Third-party service” as the use case and copy the access key ID and secret access key.
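If you prefer the CLI, a sketch of the same steps looks like this; the user name is an example and the policy ARN is the one returned by the create-policy command above:

```bash
# Create the IAM user and attach the policy created above
aws iam create-user --user-name chainloop-cas
aws iam attach-user-policy --user-name chainloop-cas \
  --policy-arn arn:aws:iam::[accountID]:policy/chainloop-cas-policy

# Generate the access key ID / secret access key pair for Chainloop
aws iam create-access-key --user-name chainloop-cas
```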
We are now ready to connect our AWS account to Chainloop.
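A minimal sketch of the registration command; the flag names below are assumptions, so double-check them against chainloop cas-backend add aws-s3 --help:

```bash
chainloop cas-backend add aws-s3 \
  --bucket [bucketName] \
  --region [region] \
  --access-key-id [accessKeyID] \
  --secret-access-key [secretAccessKey] \
  --default
```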
Chainloop also supports storing artifacts in Azure Blob Storage.
To connect your Azure storage account you’ll need the following information: the Azure Active Directory tenant ID, a service principal (client) ID and secret, and the storage account name.
We’ll walk you through how to find this information.
Register an application to create the service principal
First, you’ll need to register an application in your Azure Active Directory tenant. You can do this using the Azure CLI or from the Azure portal.
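As a sketch using the Azure CLI (the display name chainloop-cas is just an example):

```bash
# Register the application
az ad app create --display-name chainloop-cas

# Create a service principal for it; [appId] is the appId returned by the previous command
az ad sp create --id [appId]
```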
Once done, in the application overview you should be able to find the tenant ID and the service principal (client) ID.
Next, let’s create a secret for the service principal.
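On recent versions of the Azure CLI this can be done as shown below; the command prints the secret (“password”) only once, so copy it right away:

```bash
# Create a client secret for the application registered above
az ad app credential reset --id [appId]
```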
Create a storage account and give permissions to the service principal
Next, we’ll create a storage account (or you can use an existing one) and take note of the storage account name.
Once created, we’ll give permissions to the service principal: go to Access control (IAM) and add a role assignment. Search for the application we just registered and assign it the Storage Blob Data Contributor role.
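The same steps as a sketch with the Azure CLI; the resource group and storage account names are placeholders:

```bash
# Create the storage account (skip if you are reusing an existing one)
az storage account create --name [storageAccountName] --resource-group [resourceGroup]

# Grant the service principal access to the blobs in that account
az role assignment create \
  --assignee [appId] \
  --role "Storage Blob Data Contributor" \
  --scope "$(az storage account show --name [storageAccountName] --resource-group [resourceGroup] --query id -o tsv)"
```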
At this point we have all the information we need to connect our Azure storage account to Chainloop.
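A sketch of the registration command; the flag names are assumptions, so double-check them against chainloop cas-backend add azure-blob --help:

```bash
chainloop cas-backend add azure-blob \
  --storage-account [storageAccountName] \
  --tenant [tenantID] \
  --client-id [servicePrincipalID] \
  --client-secret [secret] \
  --default
```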
Cloudflare R2 is compatible with AWS S3 and can be configured in Chainloop by providing a custom endpoint.
Pre-requisites
Follow these instructions to create a compatible AccessKeyID and SecretAccessKey. Then copy the bucket name and endpoint from the bucket settings.
Finally, register the Cloudflare R2 bucket using the aws-s3 provider, providing the custom endpoint.
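For example, using the usual R2 S3 API endpoint format; the flag names are assumptions, check the CLI help:

```bash
chainloop cas-backend add aws-s3 \
  --bucket [bucketName] \
  --endpoint https://[accountID].r2.cloudflarestorage.com \
  --access-key-id [accessKeyID] \
  --secret-access-key [secretAccessKey] \
  --default
```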
MinIO is S3-compatible blob storage that can be configured in Chainloop by providing a custom endpoint.
Pre-requisites
You can create a new access key from the MinIO console.
Then copy the bucket name and the MinIO endpoint.
Finally, register the MinIO bucket using the aws-s3 provider, providing the custom endpoint.
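For example, assuming MinIO listening on its default port 9000; the flag names are assumptions, check the CLI help:

```bash
chainloop cas-backend add aws-s3 \
  --bucket [bucketName] \
  --endpoint https://[minioHost]:9000 \
  --access-key-id [accessKeyID] \
  --secret-access-key [secretAccessKey] \
  --default
```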
If everything went well, you should be able to upload and download artifact materials. Let’s give it a try.
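For example, assuming the artifact subcommands of the Chainloop CLI:

```bash
# Upload a file; it is stored in the default CAS backend and its digest is returned
chainloop artifact upload -f my-artifact.tar.gz

# Download it back by its content digest
chainloop artifact download -d sha256:[digest]
```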