This repository has been archived by the owner on Nov 3, 2023. It is now read-only.

Sign images built #116

Open
DennisDenuto opened this issue Nov 22, 2021 · 1 comment

@DennisDenuto

Describe the problem/challenge you have

Being able to authenticate the entity that created a container image is good security practice: it helps prevent malicious or unexpected images from being used in the cluster.

Folks today can use tools like cosign to perform image signing. However, for images built in-cluster, which may have been loaded only into the container runtime, this becomes somewhat difficult to do.

Description of the solution you'd like

  • The ability for an operator to sign the built image using their private key
  • The ability for an operator to configure a builder to use a specific private key

If both private keys are specified, two signatures would be attached to the built image: one tied to the 'entity' that triggered the kubectl build, and the other tied to the in-cluster 'builder' of the image.

cosign also supports keyless signing, which aims to make signing images as "easy as possible". It is currently an experimental feature, but it would be great (assuming this is accepted) if the design changes took that method into account.

Vote on this request

This is an invitation to the community to vote on issues. Use the "smiley face" up to the right of this comment to vote.

  • 👍 "This project will be more useful if this feature were added"
  • 👎 "This feature will not enhance the project in a meaningful way"
@dhiltgen
Contributor

Thanks for the request! Here's a proposed approach.

Architecture

There are three potential places where we could implement image signing in kubectl build. We could contribute changes upstream to BuildKit to implement signing directly inside the Exporter. We could implement signing in a wrapper on top of the builder, within the builder pod running inside the cluster. Lastly, we could implement signing in the CLI itself, which runs remotely from the Kubernetes cluster and builder.

There are multiple signing approaches in discussion across the cloud native ecosystem. While we are supportive of upstreaming a signing solution into BuildKit proper, focusing on this approach at first may not yield the quickest path to get a working solution in the hands of users. We should be able to define a UX that enables us to adapt to an upstream BuildKit native signing solution when/if it becomes available.

Within this project, the fastest path to a viable signing solution is to implement it in the CLI first. We can design our UX to support builder-based signing in the future, whether via a wrapper on BuildKit or BuildKit-native support. Signing within the CLI also makes multi-arch image signing more straightforward, as manifest list assembly is currently implemented purely CLI-side. The downside to a CLI-only signing model is that it requires the system where the CLI runs to be authenticated to the applicable cloud provider for cloud-based KMS to function, whereas a builder running within the cloud could rely on automatic node-based authentication. If this becomes a sticking point for users, we can add builder-based signing in a follow-up enhancement.

Supported Signing Models

Cosign supports multiple signing models. In general, these should be largely abstracted from the implementation by the Cosign code; however, there are some nuances to consider.

  • Local file based keys (PEM)
    • This will be straightforward to access in the CLI from the local filesystem
  • Cloud KMS-based keys - supports HashiCorp Vault, AWS KMS, GCP KMS, and Azure Key Vault - https://github.com/sigstore/cosign/blob/main/KMS.md
    • These will generally require authentication to the cloud provider, which must be performed outside of the CLI on the system where the CLI is running. (e.g. ~/.aws/credentials, etc.)
  • Keys generated on hardware tokens using the PIV interface (not included in the standard release; must be built manually) - https://github.com/sigstore/cosign/blob/main/TOKENS.md
    • Hardware tokens tend to require native libraries, which can cause headaches when building platform-neutral Go binaries. We likely want to let users build their own copy of the CLI if they want hardware token support. We should update the Makefile to make this straightforward (e.g. by setting an env var or running a specific target) and document how users can do this on their own.
  • Kubernetes-secret based keys

    ```yaml
    apiVersion: v1
    kind: Secret
    metadata:
      name: testsecret
      namespace: default
    type: Opaque
    data:
      cosign.key: LS0tLS1CRUdJTiBFTkNSWVBURUQgQ09TSUdOIFBSSVZBVEUgS0VZLS0tLS[...]==
      cosign.password: YWJjMTIz
      cosign.pub: LS0tLS1CRUdJTiBQVUJMSUMgS0VZLS0tLS0KTUZrd0V3WUhLb1pJemowQo[...]==
    ```
  • (Experimental - Not production ready) Keyless signatures using the Fulcio CA
    • Two modes
      • URL displayed that a human has to navigate to, then they get a choice to log in via GitHub, Google, or Microsoft accounts.
      • OIDC token injected up-front to authenticate with the CA
    • https://github.com/sigstore/cosign/blob/main/KEYLESS.md#identity-tokens
    • For the manual mode, we will need to prompt the user, and ideally, not block the build while waiting but allow parallelization (only block the final signing step if the user hasn’t completed the SSO dance before the build finishes)

UX

We should aim for a forward looking UX that could support different signing approaches (not exclusive to Cosign) as well as builder based signing models, and potentially local (non-push) signing in the future.

```
% kubectl build --help
Start a build
. . .
Flags:
      . . .
      --sign                     Enable image signing for this build
      --cosign-key string        Local path or URI to a Cosign key
                                 Set with env var COSIGN_KEY
      --identity-token string    OIDC identity token for non-interactive keyless Cosign image signing
                                 COSIGN_EXPERIMENTAL=1 must be set to enable keyless signing
```

We will assume key generation/setup is handled outside the scope of this tool, and we will include documentation guiding users through that setup. Future kubectl buildkit ... UX could be added to expose the cosign generate-key-pair capabilities and provide key management, if there is user demand for a single solution covering key generation and build-time image signing.

The initial implementation will only support signing when push is specified. Attempting to specify --sign without --push will produce an error guiding the user to push if they want signing enabled. In the future we can explore if local signatures can be supported directly in containerd based runtimes.

Any important or time consuming steps during the signing process should be logged through the progress reporting capability in the system so it is rendered in the build progress/status, along with timing information. Similar to https://github.com/vmware-tanzu/buildkit-cli-for-kubectl/blob/main/pkg/build/build.go#L655

The cosign key string will be parsed to determine if it is a URI. If not a URI, it will be assumed to be a local file path. Special casing based on the scheme in the URI will be implemented as needed. To reduce repetitive typing, the user may set COSIGN_KEY in their environment to record the key they want to use and only specify --sign when building/pushing.
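As a sketch of that parsing rule (helper and type names here are hypothetical, not from the codebase), a small Go function could classify the key string by its URI scheme, defaulting to a local file path, using the scheme names Cosign documents for its KMS and Kubernetes providers:

```go
package main

import (
	"fmt"
	"strings"
)

// keyKind distinguishes how a --cosign-key value should be handled.
type keyKind string

const (
	keyKindFile keyKind = "file"
	keyKindKMS  keyKind = "kms"
	keyKindK8s  keyKind = "k8s-secret"
)

// classifyCosignKey treats anything without a "scheme://" prefix as a
// local file path, per the heuristic proposed above.
func classifyCosignKey(ref string) keyKind {
	scheme, _, found := strings.Cut(ref, "://")
	if !found {
		return keyKindFile
	}
	switch scheme {
	case "awskms", "gcpkms", "azurekms", "hashivault":
		return keyKindKMS
	case "k8s":
		return keyKindK8s
	default:
		// Unknown schemes fall back to file semantics for now;
		// special casing can be added as needed.
		return keyKindFile
	}
}

func main() {
	fmt.Println(classifyCosignKey("./cosign.key"))             // file
	fmt.Println(classifyCosignKey("awskms:///alias/my-key"))   // kms
	fmt.Println(classifyCosignKey("k8s://default/testsecret")) // k8s-secret
}
```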

Keyless support will follow upstream Cosign's experimental status and require COSIGN_EXPERIMENTAL=1 to be set in the CLI's environment. With this env var set, only --sign will be required to sign an image. If a cosign key is specified by flag or env var, that key will be used instead of keyless. For interactive signing, the CLI should attempt to open the SSO web portal and notify the user on the console before starting the build, but not block the build from proceeding. This lets the user perform the SSO authentication while the build is running. If the build completes inside the builder before the SSO has finished, the CLI should re-echo to the console the need to authenticate to sign, and block until complete (or until the user cancels the build). A reasonable timeout should be set so that if this mode is accidentally enabled in an automated system (e.g. a CI build), it eventually fails with a clear error message and guidance to use --identity-token for non-interactive builds in keyless mode.

Given the experimental status of keyless and the added complexity of the user interaction, we might consider splitting this portion out into a follow-up PR so we can get basic signing support merged quicker.

Design

At this time, Cosign does not have a "clean" API; rather, it requires vendoring the CLI logic and calling into CLI code to accomplish signing tasks. We should consider developing a thin facade to keep the mainline code clean, so that once upstream work to create a stable API for Cosign is completed, we can adapt easily. This abstraction will also help facilitate unit test coverage - https://github.com/sigstore/cosign/blob/main/cmd/cosign/cli/sign/sign.go

Single Architecture Manifests

At or around https://github.com/vmware-tanzu/buildkit-cli-for-kubectl/blob/278c70c3ecaf9e48226205ee1110830d441c8afc/pkg/build/build.go#L759 the Solve results come back as something like:

```go
&client.SolveResponse{ExporterResponse: map[string]string{
	"containerimage.config.digest": "sha256:65691a977b3ad87992b85322afc3260b56ad10387a47b57e138bfdfc5f091628",
	"containerimage.digest":        "sha256:a5e0fabce750be1b233ed2448982cd74a3635c56abcae9d22875092ebf13f668",
	"image.name":                   "acme.com/dummy:latest",
}}
```

Of particular interest, "containerimage.digest" and "image.name" represent the manifest digest and tag of the image that was built and optionally pushed. If pushed, and signing is enabled, this is roughly where the CLI would wire up calls to Cosign to pull the manifest in question, sign it, and push the signature back to the registry under the image.name repository.
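A sketch of pulling those two fields out of the ExporterResponse map into a signable reference (the helper name is hypothetical; pinning by digest rather than tag is one plausible design choice, since the tag could be re-pushed concurrently):

```go
package main

import "fmt"

// signingTarget extracts the manifest digest and image name from the
// exporter response and combines them into a name@digest reference for
// signing. It reports ok=false if either field is missing (e.g. when
// the build was not pushed).
func signingTarget(exporterResponse map[string]string) (ref string, ok bool) {
	digest, hasDigest := exporterResponse["containerimage.digest"]
	name, hasName := exporterResponse["image.name"]
	if !hasDigest || !hasName {
		return "", false
	}
	return fmt.Sprintf("%s@%s", name, digest), true
}

func main() {
	resp := map[string]string{
		"containerimage.digest": "sha256:a5e0fabce750be1b233ed2448982cd74a3635c56abcae9d22875092ebf13f668",
		"image.name":            "acme.com/dummy:latest",
	}
	ref, ok := signingTarget(resp)
	fmt.Println(ref, ok)
}
```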

Multi-arch Manifest Lists

Manifest lists are currently constructed purely in the CLI (not the builder). The final combined manifest list will be signed, around https://github.com/vmware-tanzu/buildkit-cli-for-kubectl/blob/278c70c3ecaf9e48226205ee1110830d441c8afc/pkg/build/build.go#L674

Testing

In addition to unit testing the new code, we should be able to develop an end-to-end test with our local registry suite (https://github.com/vmware-tanzu/buildkit-cli-for-kubectl/blob/main/integration/suites/localregistry_test.go). For example, a new test case patterned after TestBuildWithPush (https://github.com/vmware-tanzu/buildkit-cli-for-kubectl/blob/main/integration/suites/localregistry_test.go#L279) could generate a random, throw-away local key and sign with that. At present, we do not verify content after it is pushed into the registry. It may be worth investing in additional utility code in this local registry suite to exec into a pod inside the cluster, where we can poke at the registry API via curl or equivalent to perform deeper validation of the various scenarios that push content to the local registry.
