Check out our global CONTRIBUTING guidelines for Rust code conventions
The following tools are required to build and run this project:
- Docker: for building and running containerized workloads.
- Go: required by the `audit-scanner` and `controller` components.
- Rust: required by the `policy-server` and `kwctl` components. The exact version and required targets are defined in `rust-toolchain.toml`.
- cross: required to cross-compile Rust code for different architectures.
- Make: the build tool controlling various build tasks.
- Tilt: a development tool for multi-service applications.
- Kubernetes cluster: a running Kubernetes cluster used for development; kind or a similar solution works.
- Helm: required for deploying and testing charts.
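Before building, you can quickly check that these tools are available on your `PATH`. This is a minimal sketch, not part of the build system; the binary names are assumptions (e.g. `cargo` for the Rust toolchain, `kind` standing in for your local cluster tooling):

```shell
# Sketch: check that the tools listed above are installed.
# Binary names are assumptions (the Rust toolchain ships `cargo`,
# and `kind` stands in for whatever local cluster solution you use).
checked=0
missing=""
for tool in docker go cargo cross make tilt kind helm; do
  checked=$((checked + 1))
  command -v "$tool" >/dev/null 2>&1 || missing="$missing $tool"
done
if [ -n "$missing" ]; then
  echo "missing tools:$missing"
else
  echo "all tools found"
fi
```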
The repository is organized as follows:
- `charts`: contains the Helm charts for managing deployments.
- `cmd`: main entry points for the `audit-scanner` and `kubewarden-controller` executables.
- `crates`: all Rust components of the project.
- `docs`: developer-focused documentation and README files for various components.
- `e2e`: end-to-end tests. A real Kubernetes cluster is created using Docker and Kind.
- `internal`: contains the entire Go codebase.
Format Go code:

```shell
make fmt-go
```

Run the Go linter (golangci-lint):

```shell
make lint-go
```

Automatically fix Go linting issues when possible:

```shell
make lint-go-fix
```

Check Rust code formatting:

```shell
make fmt-rust
```

Run the Rust linter (clippy):

```shell
make lint-rust
```

Automatically fix Rust linting issues when possible:

```shell
make lint-rust-fix
```

Check Rust dependencies for known security vulnerabilities using cargo-deny:

```shell
make advisories-rust
```

Build all components (controller, audit-scanner, policy-server, kwctl):

```shell
make all
```

Build only the Go components:

```shell
make controller
make audit-scanner
```

Build only the Rust components:

```shell
make policy-server
make kwctl
```

Build Docker images for each component:

```shell
make controller-image
make audit-scanner-image
make policy-server-image
```

You can customize the registry, repository, and tag using environment variables:

```shell
make controller-image REGISTRY=ghcr.io REPO=your-username TAG=dev
```

To run the controller for development purposes, you can use Tilt.
The `tilt-settings.yaml.example` file acts as a template for the
`tilt-settings.yaml` file that you need to create in the root of this
repository. Copy the example file and edit it to match your environment. The
`tilt-settings.yaml` file is ignored by git, so you can edit it without
worrying about committing it by mistake.
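The copy step described above can be sketched as a small script; run it from the repository root. It refuses to overwrite an existing `tilt-settings.yaml`, so local edits are preserved:

```shell
# Sketch: bootstrap tilt-settings.yaml from the example template.
# Skips silently if the template is absent or the file already exists.
if [ ! -f tilt-settings.yaml ] && [ -f tilt-settings.yaml.example ]; then
  cp tilt-settings.yaml.example tilt-settings.yaml
  echo "created tilt-settings.yaml; edit it to match your environment"
fi
```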
The following settings can be configured:
- `registry`: the container registry to push the controller image to. If you don't have a private registry, you can use `ghcr.io`, provided your cluster has access to it.
- `audit-scanner`: the name of the audit-scanner image. If you are using `ghcr.io` as your registry, you need to prefix the image name with your GitHub username.
- `controller`: the name of the controller image. If you are using `ghcr.io` as your registry, you need to prefix the image name with your GitHub username.
- `policy-server`: the name of the policy-server image. If you are using `ghcr.io` as your registry, you need to prefix the image name with your GitHub username.
Example:

```yaml
registry: ghcr.io
audit-scanner: your-github-username/kubewarden/audit-scanner
controller: your-github-username/kubewarden/controller
policy-server: your-github-username/kubewarden/policy-server
```

The Tiltfile included in this repository takes care of the following:
- Creates the `kubewarden` namespace.
- Installs the `kubewarden-crds` and `kubewarden-controller` Helm charts from the `charts` folder.
- Injects the development images into the running Pods.
- Automatically reloads the controller/audit-scanner/policy-server when you make changes to the code.
To run the controller, you just need to run the following command against an empty cluster:

```shell
tilt up
```

Use the web interface of Tilt to monitor the log streams of the different components and, if needed, manually trigger restarts.
After changing a CRD, run the following command:

```shell
make generate
```

This will:
- Update all the generated Go code
- Update the CRDs shipped by our Helm chart
Run all unit tests, regardless of the language:

```shell
make test
```

Run e2e tests:

```shell
make test-e2e
```

Run Helm chart unit tests:

```shell
make helm-unittest
```

The controller integration tests are written using the
Ginkgo and
Gomega testing frameworks. The tests are
located in the internal/controller package.
By default, the tests are run using envtest, which sets up an instance of etcd and the Kubernetes API server, without kubelet, controller-manager, or other components.
However, some tests require a real Kubernetes cluster to run. These tests are
defined under the e2e folder using the
e2e-framework.
The suite setup will start a cluster using kind and run the tests against it. It will also stop and remove the container when the tests finish.
Note that the e2e tests are slower than the envtest tests; therefore, it is
recommended to keep their number to a minimum. An example of a test that
requires a real cluster is the AdmissionPolicy test suite, since at the time
of writing, we wait for the PolicyServer Pod to be ready before reconciling
the webhook configuration.
You can focus on a specific test or spec by using a Focused Spec.
Example:
```go
var _ = Describe("Controller test", func() {
	FIt("should do something", func() {
		// This spec will be the only one executed
	})
})
```

The script `scripts/test-sigstore-e2e.sh` allows you to test
Kubewarden with a private Sigstore instance. It runs three sequential stages:
- Setup: spins up a KinD cluster with the full Sigstore stack (Fulcio, Rekor, CTLog, TUF) using sigstore/scaffolding, and generates trust configuration files in the current directory (`trusted_root.json`, `trust_config.json`, `verification_config.yaml`, etc.).
- Sign: copies a test policy to the local registry, signs it with cosign against the private Sigstore instance, then verifies with both cosign and kwctl.
- Deploy: installs Kubewarden from local charts, configures the PolicyServer with the private Sigstore trust root, deploys a ClusterAdmissionPolicy, and exercises the webhook to confirm allow/deny behaviour.
Run all three stages end-to-end:

```shell
./scripts/test-sigstore-e2e.sh
```

Each stage can be skipped independently, which is useful when iterating without recreating the full environment:

```shell
# Cluster already running — skip setup
./scripts/test-sigstore-e2e.sh --skip-setup

# Policy already signed — skip signing
./scripts/test-sigstore-e2e.sh --skip-sign

# Only set up the Sigstore stack, skip Kubewarden
./scripts/test-sigstore-e2e.sh --skip-kubewarden
```

To learn more about the script, refer to its `--help` CLI flag or read the
file.
Commit messages should follow the Conventional Commits standard. For example:

```
type: free form subject
```
Where `type` can be:
- `feat`: used by commits introducing a new feature
- `fix`: used by commits fixing an issue
- `perf`: used by commits improving performance
- `refactor`: used by commits doing some code refactoring
Some examples:

```
feat: this is a new feature
fix: this is fixing a reported bug
```
It's also possible to specify a component if this commit targets one component specifically:

```
feat(resolver): this adds a new solver strategy
```
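For reference, the format above can be checked mechanically. This is a minimal sketch; the regex and the type list are assumptions for illustration, not an official project hook:

```shell
# Sketch: validate a commit subject against the format described above.
# The type list and regex are assumptions, not an official hook.
msg="feat(resolver): this adds a new solver strategy"
if echo "$msg" | grep -Eq '^(feat|fix|perf|refactor)(\([a-z-]+\))?: .+'; then
  result=valid
else
  result=invalid
fi
echo "$result"
```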
- Check that `:latest` builds of kubewarden-controller for main are fine, including kwctl.
- Open an automated release PR with https://github.com/kubewarden/kubewarden-controller/actions/workflows/open-release-pr.yml. Set the desired Kubewarden version.
- Review & merge automated PR
- Tag version in kubewarden-controller repo
- Wait for images to be built, so e2e tests can work
- Trigger automated PR that syncs adm controller charts with helm-chart repo https://github.com/kubewarden/helm-charts/actions/workflows/update-adm-controller.yaml
- Merge automated PR on helm-chart repo
- chart-releaser releases the charts on Helm chart repo.
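The "tag version" step above might look like the following. The version number is hypothetical, and the sketch uses a throwaway repository purely for illustration; in practice you tag the kubewarden-controller checkout and push the tag:

```shell
# Sketch of the tagging step, run in a throwaway repo for illustration.
# In practice you tag the kubewarden-controller checkout and push the tag.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name dev
git commit -q --allow-empty -m "chore: prepare release"
git tag -a v1.2.3 -m "Release v1.2.3"   # hypothetical version number
git tag
```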
- Developer Documentation: The `docs/` folder contains additional documentation for each component (`audit-scanner`, `controller`, `kwctl`, `policy-server`, and `crds`).
- RFCs: Design proposals and architectural decisions are tracked in a separate repository at https://github.com/kubewarden/rfc.