
Commit 71f69aa

Merge pull request #50 from elmiko/update-readme

update artifacts for capi v1.+ compatibility

2 parents: a828e55 + 5e21d27

6 files changed: +43 -18 lines

README.md (+17 -6)
````diff
@@ -33,19 +33,22 @@ pretend to run pods scheduled to them. For more information on Kubemark, the
 [Kubemark developer guide][kubemark_docs] has more details.
 
 ## Getting started
+
+**Prerequisites**
+* Ubuntu Server 22.04
+* clusterctl v1.1.4
+
 At this point the Kubemark provider is extremely alpha. To deploy the Kubemark
 provider, you can add the latest release to your clusterctl config file, by
 default located at `~/.cluster-api/clusterctl.yaml`.
 
 ```yaml
 providers:
 - name: "kubemark"
-  url: "https://github.com/kubernetes-sigs/cluster-api-provider-kubemark/releases/v0.3.0/infrastructure-components.yaml"
+  url: "https://github.com/kubernetes-sigs/cluster-api-provider-kubemark/releases/v0.4.0/infrastructure-components.yaml"
   type: "InfrastructureProvider"
 ```
 
-*Note: the `v0.3.0` release of the kubemark provider has been tested with the `v0.1.\*` versions of Cluster API*
-
 For demonstration purposes, we'll use the [CAPD][capd] provider. Other
 providers will also work, but CAPD is supported with a custom
 [template](templates/cluster-template-capd.yaml) that makes deployment super
````
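Between adding the provider entry and generating a workload cluster, the README's flow assumes an initialized management cluster. A minimal sketch of that step (not shown in this diff), assuming a kind/CAPD management cluster is already running:

```bash
# Install the Docker (CAPD) and Kubemark infrastructure providers;
# "kubemark" resolves via the providers entry in ~/.cluster-api/clusterctl.yaml.
clusterctl init --infrastructure docker,kubemark
```
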
````diff
@@ -61,16 +64,24 @@ Once initialized, you'll need to deploy your workload cluster using the `capd`
 flavor to get a hybrid CAPD/CAPK cluster:
 
 ```bash
-clusterctl config cluster wow --infrastructure kubemark --flavor capd --kubernetes-version 1.21.1 --control-plane-machine-count=1 --worker-machine-count=4 | kubectl apply -f-
+export SERVICE_CIDR="172.17.0.0/16"
+export POD_CIDR="192.168.122.0/24"
+clusterctl generate cluster wow --infrastructure kubemark --flavor capd --kubernetes-version 1.23.6 --control-plane-machine-count=1 --worker-machine-count=4 | kubectl apply -f-
 ```
 
+*Note: these CIDR values are specific to Ubuntu Server 22.04*
+
 You should see your cluster come up and quickly become available with 4 Kubemark machines connected to your CAPD control plane.
 
+To bring all the cluster nodes into a ready state you will need to deploy a CNI
+solution into the kubemark cluster. Please see the [Cluster API Book](https://cluster-api.sigs.k8s.io/user/quick-start.html?highlight=cni#deploy-a-cni-solution)
+for more information.
+
 For other providers, you can either create a custom hybrid cluster template, or deploy the control plane and worker machines separately, specifying the same cluster name:
 
 ```bash
-clusterctl config cluster wow --infrastructure aws --kubernetes-version 1.21.1 --control-plane-machine-count=1 | kubectl apply -f-
-clusterctl config cluster wow --infrastructure kubemark --kubernetes-version 1.21.1 --worker-machine-count=4 | kubectl apply -f-
+clusterctl generate cluster wow --infrastructure aws --kubernetes-version 1.23.6 --control-plane-machine-count=1 | kubectl apply -f-
+clusterctl generate cluster wow --infrastructure kubemark --kubernetes-version 1.23.6 --worker-machine-count=4 | kubectl apply -f-
 ```
 
 ## Using tilt
````
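The added CNI paragraph defers the actual commands to the Cluster API book. A minimal sketch of that step, assuming the workload cluster is named `wow` as above and using Calico as the book's quick start does (the manifest URL is illustrative and version-dependent):

```bash
# Fetch the workload cluster's kubeconfig, then apply a CNI into it.
clusterctl get kubeconfig wow > wow.kubeconfig
kubectl --kubeconfig=wow.kubeconfig apply -f https://docs.projectcalico.org/manifests/calico.yaml
```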

clusterctl-settings.json (+2 -1)
```diff
@@ -2,7 +2,8 @@
   "name": "infrastructure-kubemark",
   "config": {
     "componentsFile": "infrastructure-components.yaml",
-    "nextVersion": "v0.3.99"
+    "nextVersion": "v0.4.99",
+    "configFolder": "config"
   }
 }
 
```
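clusterctl-settings.json is consumed by cluster-api's local development tooling rather than by released clusterctl binaries; as best I can tell, `nextVersion` names the in-development release and `configFolder` points the tooling at this repo's kustomize config. A sketch of how it is typically picked up, assuming a sibling cluster-api checkout whose own clusterctl-settings.json lists this provider (paths are illustrative):

```bash
# Build local override manifests (under ~/.cluster-api) that include the
# kubemark provider at its development version, v0.4.99.
cd ../cluster-api
./cmd/clusterctl/hack/create-local-repository.py
```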

config/manager/manager_image_patch.yaml (+2 -2)
```diff
@@ -7,5 +7,5 @@ spec:
   template:
     spec:
       containers:
-      - image: gcr.io/cf-london-servces-k8s/bmo/cluster-api-kubemark/cluster-api-kubemark-controller:dev
-        name: manager
+      - image: quay.io/cluster-api-provider-kubemark/cluster-api-kubemark-controller-amd64:latest
+        name: manager
```
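With the default manager image moved off the stale gcr.io path, pulling the new default is a quick sanity check (a verification sketch, not part of the commit):

```bash
# Confirm the updated controller image is publicly pullable.
docker pull quay.io/cluster-api-provider-kubemark/cluster-api-kubemark-controller-amd64:latest
```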

metadata.yaml (+3)
```diff
@@ -11,3 +11,6 @@ releaseSeries:
   - major: 0
     minor: 3
     contract: v1beta1
+  - major: 0
+    minor: 4
+    contract: v1beta1
```
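The new releaseSeries entry is what tells clusterctl that the v0.4.x line implements the v1beta1 contract; without it, installs and upgrade planning for v0.4.0 would fail. The mapping can be observed with a standard command (a sketch, run against a management cluster with the provider installed):

```bash
# clusterctl reads each installed provider's metadata.yaml releaseSeries
# to propose upgrades that keep every provider on one API contract.
clusterctl upgrade plan
```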

templates/cluster-template-capd.yaml (+18 -8)
```diff
@@ -46,11 +46,12 @@ metadata:
   namespace: "${NAMESPACE}"
 spec:
   replicas: ${CONTROL_PLANE_MACHINE_COUNT}
-  infrastructureTemplate:
-    kind: DockerMachineTemplate
-    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
-    name: "${CLUSTER_NAME}-control-plane"
-    namespace: "${NAMESPACE}"
+  machineTemplate:
+    infrastructureRef:
+      kind: DockerMachineTemplate
+      apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
+      name: "${CLUSTER_NAME}-control-plane"
+      namespace: "${NAMESPACE}"
   kubeadmConfigSpec:
     clusterConfiguration:
       controllerManager:
@@ -60,11 +61,15 @@ spec:
     initConfiguration:
       nodeRegistration:
         criSocket: /var/run/containerd/containerd.sock
-        kubeletExtraArgs: {eviction-hard: 'nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%'}
+        kubeletExtraArgs:
+          cgroup-driver: cgroupfs
+          eviction-hard: nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%
     joinConfiguration:
       nodeRegistration:
         criSocket: /var/run/containerd/containerd.sock
-        kubeletExtraArgs: {eviction-hard: 'nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%'}
+        kubeletExtraArgs:
+          cgroup-driver: cgroupfs
+          eviction-hard: nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%
   version: "${KUBERNETES_VERSION}"
 ---
 apiVersion: cluster.x-k8s.io/v1beta1
@@ -109,7 +114,12 @@ metadata:
   namespace: default
 spec:
   template:
-    spec: {}
+    spec:
+      extraMounts:
+      - name: containerd-sock
+        containerPath: /run/containerd/containerd.sock
+        hostPath: /run/containerd/containerd.sock
+        type: Socket
 ---
 apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
 kind: KubeadmConfigTemplate
```
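The reworked template consumes the `SERVICE_CIDR` and `POD_CIDR` variables that the README now exports. To check which variables a flavor expects before applying anything, clusterctl's `--list-variables` flag serves as a dry run of sorts (a sketch using the cluster name from the README):

```bash
# Print the variables required by the capd flavor template, without
# generating or applying any manifests.
clusterctl generate cluster wow --infrastructure kubemark --flavor capd --list-variables
```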

tilt-provider.json (+1 -1)
```diff
@@ -1,7 +1,7 @@
 {
   "name": "kubemark",
   "config": {
-    "image": "gcr.io/cf-london-servces-k8s/bmo/cluster-api-kubemark/cluster-api-kubemark-controller",
+    "image": "quay.io/cluster-api-provider-kubemark/cluster-api-kubemark-controller-amd64:latest",
     "live_reload_deps": [
       "main.go",
       "go.mod",
```
