
Cluster Provider kind

About this project

A cluster provider for OpenMCP that uses kind (Kubernetes IN Docker) to provision and manage Kubernetes clusters. This provider enables you to create and manage multiple Kubernetes clusters running as Docker containers, making it ideal for:

  • Local Development: Quickly spin up multiple clusters for testing multi-cluster scenarios
  • E2E Testing: Automated testing of multi-cluster applications and operators
  • CI/CD Pipelines: Lightweight cluster provisioning for testing environments

πŸ§ͺ Prerequisites

Before using this cluster provider, ensure you have:

  • Docker: Running Docker daemon with socket accessible
  • kind: kind CLI tool installed
  • kubectl: For interacting with Kubernetes clusters

πŸ—οΈ Installation

Local Development

To run the cluster-provider-kind on your local machine, you first need to bootstrap an openMCP environment using the openmcp-operator. A comprehensive guide will follow soon.

To run the cluster-provider-kind properly, the openMCP Platform cluster must be configured with the Docker socket mounted into its nodes. Please use the following kind cluster configuration:

apiVersion: kind.x-k8s.io/v1alpha4
kind: Cluster
nodes:
- role: control-plane
  extraMounts:
  - hostPath: /var/run/docker.sock
    containerPath: /var/run/host-docker.sock

You can create the Platform cluster using the above configuration by running:

kind create cluster --name platform --config ./path/to/config
kubectl config use-context kind-platform
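
As a quick sanity check that the Platform cluster came up correctly, you can inspect it with standard kubectl commands (assuming the context name kind-platform from above):

```shell
# Sanity-check the freshly created platform cluster
kubectl cluster-info --context kind-platform
kubectl get nodes --context kind-platform
```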

At the moment, it is recommended to run and test the cluster-provider-kind inside the cluster. To build an image containing your latest local changes, run:

task build:img:build

This builds the cluster-provider-kind image and pushes it to your local Docker registry. You can verify the image with:

docker images ghcr.io/openmcp-project/images/cluster-provider-kind

You can then apply the ClusterProvider resource to your openMCP Platform cluster:

apiVersion: openmcp.cloud/v1alpha1
kind: ClusterProvider
metadata:
  name: kind
spec:
  image: ghcr.io/openmcp-project/images/cluster-provider-kind:... # latest local docker image build
  extraVolumeMounts:
    - mountPath: /var/run/docker.sock
      name: docker
  extraVolumes:
    - name: docker
      hostPath:
        path: /var/run/host-docker.sock
        type: Socket
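
The manifest above can be applied like any other Kubernetes resource; a minimal sketch, assuming you saved it as clusterprovider.yaml (a hypothetical filename) and your current context points at the platform cluster:

```shell
# Apply the ClusterProvider and confirm it was registered
kubectl apply -f clusterprovider.yaml
kubectl get clusterprovider kind
```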

Running Cluster Provider kind outside the cluster (not recommended)

You can also run the cluster-provider-kind outside the cluster, but this is not recommended at the moment: the operator may not work as expected due to IP address resolution issues between your host and the Docker network.

The following steps will help you to run the cluster-provider-kind outside the cluster:

  1. Initialize the CRDs:
go run ./cmd/cluster-provider-kind/main.go init
  2. Run the operator:
KIND_ON_LOCAL_HOST=true go run ./cmd/cluster-provider-kind/main.go run

Note: When running the operator outside the cluster (locally), you must set the KIND_ON_LOCAL_HOST environment variable to true. This tells the operator to use the local Docker socket configuration instead of the in-cluster configuration.

Running Cluster Provider kind with a local registry

You can configure cluster provider kind to provision clusters that use a local container image registry.

  1. Follow the official kind documentation to create a local container image registry container.
  2. Prepare a kind config and a containerd hosts.toml and make them available inside your cluster provider container, e.g. by injecting these files when initially creating the platform cluster:
apiVersion: kind.x-k8s.io/v1alpha4
kind: Cluster
containerdConfigPatches:
  - |-
    [plugins."io.containerd.grpc.v1.cri".registry]
      config_path = "/etc/containerd/certs.d"
nodes:
  - role: control-plane
    extraMounts:
      - hostPath: /var/run/docker.sock
        containerPath: /var/run/host-docker.sock
      - hostPath: /path/to/containerd/certs.d # on your local machine because of the host docker.sock
        containerPath: /etc/containerd/certs.d

It is important to note that containerd registry configuration expects a certain directory tree to pick up a hosts.toml. Following the official docs example, the tree has to look as follows on every node:

/etc
β”œβ”€β”€ containerd/
β”‚   └── certs.d/
β”‚       └── kind-registry:5001/
β”‚           └── hosts.toml

While kind won't complain when you try to mount the hosts.toml directly via extraMounts, the resulting docker run invocation will misinterpret :5001 as a volume option and fail. So instead of providing the hosts.toml directly, mount a config directory containing the certs.d subtree with the hosts.toml:

apiVersion: kind.x-k8s.io/v1alpha4
kind: Cluster
containerdConfigPatches:
  - |-
    [plugins."io.containerd.grpc.v1.cri".registry]
      config_path = "/etc/containerd/certs.d"
nodes:
  - role: control-plane
    extraMounts:
      - hostPath: /var/run/docker.sock
        containerPath: /var/run/host-docker.sock
      - hostPath: /path/to/config.yaml
        containerPath: /etc/kind/config.yaml
      - hostPath: /path/to/containerd/config/certs.d # on your local machine that contains the subtree kind-registry:5001/hosts.toml
        containerPath: /etc/containerd/certs.d
  3. Apply a ClusterProvider resource to your openMCP platform cluster that uses the kind-config and hosts.toml from the platform cluster to create new clusters:
apiVersion: openmcp.cloud/v1alpha1
kind: ClusterProvider
metadata:
  name: kind
spec:
  image: ghcr.io/openmcp-project/images/cluster-provider-kind:... # latest local docker image build
  extraVolumeMounts:
    - mountPath: /var/run/docker.sock
      name: docker
    - mountPath: /etc/kind/config.yaml
      name: kindconfig
    - mountPath: /etc/containerd/certs.d
      name: registryconfig
  extraVolumes:
    - name: docker
      hostPath:
        path: /var/run/host-docker.sock
        type: Socket
    - name: kindconfig
      hostPath:
        path: /etc/kind/config.yaml
        type: File
    - name: registryconfig
      hostPath:
        path: /etc/containerd/certs.d
        type: Directory
  env:
  - name: KIND_CONFIG_FILE
    value: /etc/kind/config.yaml

Environment Variables

| Variable | Required | Default | Description |
| --- | --- | --- | --- |
| ACCESS_REQUEST_SERVICE_ACCOUNT_NAMESPACE | No | "accessrequests" | Namespace where AccessRequest service accounts are created |
| KIND_CONFIG_FILE | No | "" | Configure kind cluster creation |

πŸ“– Usage

Creating a Cluster via ClusterRequest

Create a new kind cluster by applying a ClusterRequest resource:

apiVersion: clusters.openmcp.cloud/v1alpha1
kind: ClusterRequest
metadata:
  name: mcp
  namespace: default
spec:
  purpose: mcp

kubectl apply -f clusterrequest.yaml
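
Once the ClusterRequest is applied, you can watch it being reconciled; a quick check, using the resource names from the example above:

```shell
# Watch the request and confirm kind created a matching cluster
kubectl get clusterrequest mcp -n default
kind get clusters
```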

πŸ§‘β€πŸ’» Development

Quick Setup with Local Development Script

To quickly set up a complete OpenMCP local development environment, use the provided setup script:

./hack/local-dev.sh deploy

This script automatically:

  • Creates a KinD cluster with Docker socket access
  • Pulls and loads required Docker images
  • Deploys the OpenMCP operator
  • Installs the cluster provider for kind
  • Deploys service providers (Crossplane, Landscaper, Gateway)
  • Installs Flux2
  • Waits for all components to be ready

You can control which service providers are deployed using environment variables:

DEPLOY_SP_CROSSPLANE=false DEPLOY_SP_LANDSCAPER=true ./hack/local-dev.sh deploy

Available deployment control variables (all default to true):

  • DEPLOY_SP_CROSSPLANE - Deploy Crossplane service provider
  • DEPLOY_SP_LANDSCAPER - Deploy Landscaper service provider
  • DEPLOY_SP_GATEWAY - Deploy Gateway platform service

In case kind fails to load container images from the local Docker image store, configure Docker's /etc/docker/daemon.json as follows:

{
  "features": {
    "containerd-snapshotter": false
  }
}

Accessing the Platform Cluster

To obtain a kubeconfig for the platform cluster, the script creates an AccessRequest and waits for it to be granted. The resulting kubeconfig is written to a temporary file:

./hack/local-dev.sh access-platform-cluster

Alternatively, you can switch your current kubectl context directly to the platform cluster:

./hack/local-dev.sh access-platform-cluster --force

This remembers your previous context and prints a command to switch back.

Resetting the Environment

To reset your environment and delete all KinD clusters:

./hack/local-dev.sh reset         # Prompts for confirmation
./hack/local-dev.sh reset --force # Skip confirmation

For more options:

./hack/local-dev.sh help

Using Locally Built Images

To test your local changes, you can build the images locally and then use them in the deployment script by overriding the image variables:

# Build your local image
task build:img:build

# Deploy using your local image
OPENMCP_CP_KIND_IMAGE=ghcr.io/openmcp-project/images/cluster-provider-kind:local ./hack/local-dev.sh deploy

Available image override variables:

  • OPENMCP_OPERATOR_IMAGE - Override the OpenMCP operator image
  • OPENMCP_CP_KIND_IMAGE - Override the cluster provider kind image

Building and Testing

To build the binary locally, you can use the following command:

task build

Build the image locally

To build the image locally, you can use the following command:

task build:img:build

Run unit tests locally

To run the unit tests locally, you can use the following command:

task test

Generating the CRDs, DeepCopy functions etc.

To generate the CRDs, DeepCopy functions, and other boilerplate code, you can use the following command:

task generate

πŸ‹οΈ How it works

Docker Socket Access

In order to create new kind clusters from within a kind cluster, the Docker socket (usually /var/run/docker.sock) needs to be available to the cluster-provider-kind pod. As a prerequisite, the Docker socket of the host machine must be mounted into the nodes of the platform kind cluster. In this case, there is only a single node (platform-control-plane). The socket can then be mounted by the cluster-provider-kind pod using a hostPath volume.

flowchart TD
    subgraph host["Host Machine"]
        DockerSocket
        subgraph pcp["platform-control-plane"]
            sock["/var/run/docker.sock"]
            cpk["cluster-provider-kind"]
        end

        DockerSocket -- extraMount --> sock
        sock -- volumeMount --> cpk

        subgraph mcpcp["mcp-control-plane"]
            SomeResource
        end

        subgraph mcpwl["mcp-workload"]
            SomePod
        end

        cpk -- creates --> mcpcp
        cpk -- creates --> mcpwl
    end
    style host fill:#eee

Platform Cluster Configuration

The kind configuration for the platform cluster may look like this:

apiVersion: kind.x-k8s.io/v1alpha4
kind: Cluster
nodes:
- role: control-plane
  extraMounts:
  - hostPath: /var/run/docker.sock
    containerPath: /var/run/host-docker.sock

Testing Docker Socket Access

In order to test that the socket is functional, a simple pod can be deployed:

apiVersion: v1
kind: Pod
metadata:
  name: docker-test
spec:
  containers:
  - image: docker:29.2.0-cli-alpine3.23
    name: docker-test
    volumeMounts:
      - mountPath: /var/run/docker.sock
        name: docker
    command:
      - sleep
      - "3600"
  volumes:
    - name: docker
      hostPath:
        path: /var/run/host-docker.sock
        type: Socket
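
With the pod above deployed (assuming the manifest is saved as docker-test.yaml, a hypothetical filename), the socket can be exercised directly; since the pod mounts the host socket at the default /var/run/docker.sock path, the Docker CLI needs no extra configuration:

```shell
# Deploy the test pod and list host containers through the mounted socket
kubectl apply -f docker-test.yaml
kubectl wait --for=condition=Ready pod/docker-test
kubectl exec docker-test -- docker ps
```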

After exec'ing into the pod and installing the kind CLI, it should be possible to create a new kind cluster at the level of the host machine:

$ kind create cluster --name test

Creating cluster "test" ...
 βœ“ Ensuring node image (kindest/node:v1.31.0) πŸ–Ό
 βœ“ Preparing nodes πŸ“¦
 βœ“ Writing configuration πŸ“œ
 βœ“ Starting control-plane πŸ•ΉοΈ
 βœ“ Installing CNI πŸ”Œ
 βœ“ Installing StorageClass πŸ’Ύ
Set kubectl context to "kind-test"
You can now use your cluster with:

kubectl cluster-info --context kind-test

Thanks for using kind! 😊

This can be verified by running kind get clusters directly on the host machine:

$ kind get clusters

platform
test

Troubleshooting

ERROR: failed to create cluster: could not find a log line that matches...

Solution: Increase the inotify resource limits. See https://kind.sigs.k8s.io/docs/user/known-issues/#pod-errors-due-to-too-many-open-files.
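
The steps from the kind known-issues page can be applied on a Linux host with sysctl; the values below are the ones suggested in the kind documentation (persist them via /etc/sysctl.d/ to survive reboots):

```shell
# Raise inotify limits so kind nodes can watch enough files
sudo sysctl fs.inotify.max_user_watches=524288
sudo sysctl fs.inotify.max_user_instances=512
```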

❀️ Support, Feedback, Contributing

This project is open to feature requests/suggestions, bug reports etc. via GitHub issues. Contribution and feedback are encouraged and always welcome. For more information about how to contribute, the project structure, as well as additional contribution information, see our Contribution Guidelines.

πŸ” Security / Disclosure

If you find any bug that may be a security problem, please follow the instructions in our security policy on how to report it. Please do not create GitHub issues for security-related doubts or problems.

🀝 Code of Conduct

We as members, contributors, and leaders pledge to make participation in our community a harassment-free experience for everyone. By participating in this project, you agree to abide by its Code of Conduct at all times.

πŸ“‹ Licensing

Copyright 2025 SAP SE or an SAP affiliate company and cluster-provider-kind contributors. Please see our LICENSE for copyright and license information. Detailed information including third-party components and their licensing/copyright information is available via the REUSE tool.
