A cluster provider for OpenMCP that uses kind (Kubernetes IN Docker) to provision and manage Kubernetes clusters. This provider enables you to create and manage multiple Kubernetes clusters running as Docker containers, making it ideal for:
- Local Development: Quickly spin up multiple clusters for testing multi-cluster scenarios
- E2E Testing: Automated testing of multi-cluster applications and operators
- CI/CD Pipelines: Lightweight cluster provisioning for testing environments
Before using this cluster provider, ensure you have:
- Docker: a running Docker daemon with an accessible socket
- kind: the kind CLI tool installed
- kubectl: for interacting with Kubernetes clusters
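You can quickly verify these prerequisites with standard commands:

```sh
docker info --format '{{.ServerVersion}}'   # daemon reachable via the socket
kind version
kubectl version --client
```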
To run the cluster-provider-kind on your local machine, you first need to bootstrap an openMCP environment using the openmcp-operator. A comprehensive guide will follow soon.
For the cluster-provider-kind to work properly, the openMCP Platform cluster must be configured with the Docker socket mounted into its nodes. Please use the following kind cluster configuration:
```yaml
apiVersion: kind.x-k8s.io/v1alpha4
kind: Cluster
nodes:
- role: control-plane
  extraMounts:
  - hostPath: /var/run/docker.sock
    containerPath: /var/run/host-docker.sock
```

You can create the Platform cluster using the above configuration by running:
```sh
kind create cluster --name platform --config ./path/to/config
kubectl config use-context kind-platform
```
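To confirm that the cluster is up, the standard kind and kubectl commands suffice:

```sh
kind get clusters                              # should list: platform
kubectl cluster-info --context kind-platform   # API server should respond
```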
For testing purposes, it is currently recommended to run the cluster-provider-kind inside the cluster. To run the latest version of your changes in your local environment, build the image:

```sh
task build:img:build
```

This builds the cluster-provider-kind image and pushes it to your local Docker registry. You can inspect the result with:

```sh
docker images ghcr.io/openmcp-project/images/cluster-provider-kind
```

You can then apply the ClusterProvider resource to your openMCP Platform cluster:
```yaml
apiVersion: openmcp.cloud/v1alpha1
kind: ClusterProvider
metadata:
  name: kind
spec:
  image: ghcr.io/openmcp-project/images/cluster-provider-kind:... # latest local docker image build
  extraVolumeMounts:
  - mountPath: /var/run/docker.sock
    name: docker
  extraVolumes:
  - name: docker
    hostPath:
      path: /var/run/host-docker.sock
      type: Socket
```
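Assuming the manifest is saved as clusterprovider.yaml (a hypothetical filename), apply it and check that the resource is accepted:

```sh
kubectl apply -f clusterprovider.yaml
# the lowercase plural resource name is an assumption derived from the ClusterProvider kind
kubectl get clusterproviders
```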
You can also run the cluster-provider-kind outside the cluster, but this is currently not recommended, since the operator might not work as expected due to IP address issues with your Docker network access.

The following steps will help you run the cluster-provider-kind outside the cluster:

- Initialize the CRDs:

```sh
go run ./cmd/cluster-provider-kind/main.go init
```

- Run the operator:

```sh
KIND_ON_LOCAL_HOST=true go run ./cmd/cluster-provider-kind/main.go run
```

Note: When running the operator outside the cluster (locally), you must set the `KIND_ON_LOCAL_HOST` environment variable to `true`. This tells the operator to use the local Docker socket configuration instead of the in-cluster configuration.
You can configure the cluster-provider-kind to provision clusters that use a local container image registry.

- Follow the official kind documentation on local container image registries to create the registry container; a typical invocation is sketched below.
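A minimal sketch following the kind local-registry guide; the container name and host port (kind-registry, 5001) are chosen to match the examples below:

```sh
# run a plain registry:2 container on localhost:5001
docker run -d --restart=always -p "127.0.0.1:5001:5000" \
  --network bridge --name kind-registry registry:2
```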
- Prepare a kind config and a containerd hosts.toml that must be available in your cluster-provider container, e.g. by injecting these files when initially creating the platform cluster:
```yaml
apiVersion: kind.x-k8s.io/v1alpha4
kind: Cluster
containerdConfigPatches:
- |-
  [plugins."io.containerd.grpc.v1.cri".registry]
    config_path = "/etc/containerd/certs.d"
nodes:
- role: control-plane
  extraMounts:
  - hostPath: /var/run/docker.sock
    containerPath: /var/run/host-docker.sock
  - hostPath: /path/to/containerd/certs.d # on your local machine because of the host docker.sock
    containerPath: /etc/containerd/certs.d
```

It is important to note that the containerd registry configuration expects a certain directory tree in order to pick up a hosts.toml. Following the official docs example, the tree has to look as follows on every node:
```
/etc
└── containerd/
    └── certs.d/
        └── kind-registry:5001/
            └── hosts.toml
```

While kind won't complain when you try to mount the hosts.toml directly via extraMounts, the resulting docker run execution will misinterpret "5001" as a volume option and fail. So instead of providing the hosts.toml directly, you need to mount a config directory containing the certs.d subtree with the hosts.toml.
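One way to prepare that subtree on your local machine (a sketch; the registry endpoint http://kind-registry:5001 is an assumption matching the registry container above):

```sh
# create the directory tree that containerd expects
mkdir -p "/path/to/containerd/config/certs.d/kind-registry:5001"
# minimal hosts.toml pointing containerd at the local registry
cat <<EOF > "/path/to/containerd/config/certs.d/kind-registry:5001/hosts.toml"
[host."http://kind-registry:5001"]
EOF
```

The kind configuration then mounts this directory into the node: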
```yaml
apiVersion: kind.x-k8s.io/v1alpha4
kind: Cluster
containerdConfigPatches:
- |-
  [plugins."io.containerd.grpc.v1.cri".registry]
    config_path = "/etc/containerd/certs.d"
nodes:
- role: control-plane
  extraMounts:
  - hostPath: /var/run/docker.sock
    containerPath: /var/run/host-docker.sock
  - hostPath: /path/to/config.yaml
    containerPath: /etc/kind/config.yaml
  - hostPath: /path/to/containerd/config/certs.d # on your local machine that contains the subtree kind-registry:5001/hosts.toml
    containerPath: /etc/containerd/certs.d
```

- Apply a ClusterProvider resource to your openMCP platform cluster that uses the kind config and hosts.toml from the platform cluster to create new clusters:
```yaml
apiVersion: openmcp.cloud/v1alpha1
kind: ClusterProvider
metadata:
  name: kind
spec:
  image: ghcr.io/openmcp-project/images/cluster-provider-kind:... # latest local docker image build
  extraVolumeMounts:
  - mountPath: /var/run/docker.sock
    name: docker
  - mountPath: /etc/kind/config.yaml
    name: kindconfig
  - mountPath: /etc/containerd/certs.d
    name: registryconfig
  extraVolumes:
  - name: docker
    hostPath:
      path: /var/run/host-docker.sock
      type: Socket
  - name: kindconfig
    hostPath:
      path: /etc/kind/config.yaml
      type: File
  - name: registryconfig
    hostPath:
      path: /etc/containerd/certs.d
      type: Directory
  env:
  - name: KIND_CONFIG_FILE
    value: /etc/kind/config.yaml
```

| Variable | Required | Default | Description |
|---|---|---|---|
| `ACCESS_REQUEST_SERVICE_ACCOUNT_NAMESPACE` | No | `"accessrequests"` | Namespace where AccessRequest service accounts are created |
| `KIND_CONFIG_FILE` | No | `""` | Path to a kind config file that configures kind cluster creation |
Create a new kind cluster by applying a ClusterRequest resource:
```yaml
apiVersion: clusters.openmcp.cloud/v1alpha1
kind: ClusterRequest
metadata:
  name: mcp
  namespace: default
spec:
  purpose: mcp
```

```sh
kubectl apply -f clusterrequest.yaml
```
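You can then watch the request being fulfilled (a sketch; the lowercase plural resource name is an assumption derived from the ClusterRequest kind, and the name of the provisioned cluster depends on the provider):

```sh
kubectl get clusterrequests -n default
kind get clusters   # the newly provisioned cluster should appear here
```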
To quickly set up a complete OpenMCP local development environment, use the provided setup script:

```sh
./hack/local-dev.sh deploy
```

This script automatically:
- Creates a KinD cluster with Docker socket access
- Pulls and loads required Docker images
- Deploys the OpenMCP operator
- Installs the cluster provider for kind
- Deploys service providers (Crossplane, Landscaper, Gateway)
- Installs Flux2
- Waits for all components to be ready
You can control which service providers are deployed using environment variables:
```sh
DEPLOY_SP_CROSSPLANE=false DEPLOY_SP_LANDSCAPER=true ./hack/local-dev.sh deploy
```

Available deployment control variables (all default to `true`):

- `DEPLOY_SP_CROSSPLANE` - Deploy Crossplane service provider
- `DEPLOY_SP_LANDSCAPER` - Deploy Landscaper service provider
- `DEPLOY_SP_GATEWAY` - Deploy Gateway platform service
In case kind fails to load container images from the local Docker store, configure Docker's /etc/docker/daemon.json as follows:
```json
{
  "features": {
    "containerd-snapshotter": false
  }
}
```
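The Docker daemon has to be restarted for this change to take effect, e.g. (assuming a systemd-based host):

```sh
sudo systemctl restart docker
```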
To obtain a kubeconfig for the platform cluster, the script creates an AccessRequest and waits for it to be granted. The resulting kubeconfig is written to a temporary file:

```sh
./hack/local-dev.sh access-platform-cluster
```

Alternatively, you can switch your current kubectl context directly to the platform cluster:

```sh
./hack/local-dev.sh access-platform-cluster --force
```

This remembers your previous context and prints a command to switch back.
To reset your environment and delete all KinD clusters:
```sh
./hack/local-dev.sh reset         # Prompts for confirmation
./hack/local-dev.sh reset --force # Skip confirmation
```

For more options:

```sh
./hack/local-dev.sh help
```

To test your local changes, you can build the images locally and then use them in the deployment script by overriding the image variables:
```sh
# Build your local image
task build:img:build

# Deploy using your local image
OPENMCP_CP_KIND_IMAGE=ghcr.io/openmcp-project/images/cluster-provider-kind:local ./hack/local-dev.sh deploy
```

Available image override variables:

- `OPENMCP_OPERATOR_IMAGE` - Override the OpenMCP operator image
- `OPENMCP_CP_KIND_IMAGE` - Override the cluster provider kind image
To build the binary locally, you can use the following command:
```sh
task build
```

To build the image locally, you can use the following command:

```sh
task build:img:build
```

To run the unit tests locally, you can use the following command:

```sh
task test
```

To generate the CRDs, DeepCopy functions, and other boilerplate code, you can use the following command:

```sh
task generate
```

In order to create new kind clusters from within a kind cluster, the Docker socket (usually /var/run/docker.sock) needs to be available to the cluster-provider-kind pod. As a prerequisite, the Docker socket of the host machine must be mounted into the nodes of the platform kind cluster. In this case, there is only a single node (platform-control-plane). The socket can then be mounted by the cluster-provider-kind pod using a hostPath volume.
```mermaid
flowchart TD
    subgraph HostMachine
        DockerSocket["/var/run/docker.sock"]
        subgraph PlatformControlPlane["platform-control-plane"]
            NodeSocket["/var/run/host-docker.sock"]
            CPK["cluster-provider-kind"]
        end
        DockerSocket -- extraMount --> NodeSocket
        NodeSocket -- volumeMount --> CPK
        subgraph McpControlPlane["mcp-control-plane"]
            SomeResource
        end
        subgraph McpWorkload["mcp-workload"]
            SomePod
        end
        CPK -- creates --> McpControlPlane
        CPK -- creates --> McpWorkload
    end
    style HostMachine fill:#eee
```
The kind configuration for the platform cluster may look like this:
```yaml
apiVersion: kind.x-k8s.io/v1alpha4
kind: Cluster
nodes:
- role: control-plane
  extraMounts:
  - hostPath: /var/run/docker.sock
    containerPath: /var/run/host-docker.sock
```

In order to test that the socket is functional, a simple pod can be deployed:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: docker-test
spec:
  containers:
  - image: docker:29.2.0-cli-alpine3.23
    name: docker-test
    volumeMounts:
    - mountPath: /var/run/docker.sock
      name: docker
    command:
    - sleep
    - "3600"
  volumes:
  - name: docker
    hostPath:
      path: /var/run/host-docker.sock
      type: Socket
```
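To verify that the mounted socket works from inside the pod, run docker against it; `docker ps` talks to the host daemon through the socket:

```sh
kubectl exec -it docker-test -- docker ps
```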
After installing kind inside the pod, it should be possible to create a new kind cluster on the level of the host machine:

```console
$ kind create cluster --name test
Creating cluster "test" ...
 ✓ Ensuring node image (kindest/node:v1.31.0) 🖼
 ✓ Preparing nodes 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
Set kubectl context to "kind-test"
You can now use your cluster with:

kubectl cluster-info --context kind-test

Thanks for using kind! 🙂
```
This can be verified by running `kind get clusters` directly on the host machine:

```console
$ kind get clusters
platform
test
```
If pods fail with "too many open files" errors, the kind nodes have hit the inotify resource limits.

Solution: Increase the inotify resource limits. See https://kind.sigs.k8s.io/docs/user/known-issues/#pod-errors-due-to-too-many-open-files.
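On Linux, the kind known-issues page suggests raising the limits like this (values taken from the kind documentation):

```sh
sudo sysctl fs.inotify.max_user_watches=524288
sudo sysctl fs.inotify.max_user_instances=512
```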
This project is open to feature requests/suggestions, bug reports etc. via GitHub issues. Contribution and feedback are encouraged and always welcome. For more information about how to contribute, the project structure, as well as additional contribution information, see our Contribution Guidelines.
If you find any bug that may be a security problem, please follow the instructions in our security policy on how to report it. Please do not create GitHub issues for security-related doubts or problems.
We as members, contributors, and leaders pledge to make participation in our community a harassment-free experience for everyone. By participating in this project, you agree to abide by its Code of Conduct at all times.
Copyright 2025 SAP SE or an SAP affiliate company and cluster-provider-kind contributors. Please see our LICENSE for copyright and license information. Detailed information including third-party components and their licensing/copyright information is available via the REUSE tool.