```
$ kubectl get ds -n kube-system openebs-lvm-node -o yaml
...
      env:
        - name: OPENEBS_NODE_ID
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: OPENEBS_CSI_ENDPOINT
          value: unix:///plugin/csi.sock
        - name: OPENEBS_NODE_DRIVER
          value: agent
        - name: LVM_NAMESPACE
          value: openebs
        - name: ALLOWED_TOPOLOGIES
          value: "openebs.io/rack"
```
It is recommended to label all the nodes with the same keys; the nodes can have different values for the given keys, but every key should be present on all the worker nodes.
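For example, assuming the `openebs.io/rack` key used later in this document, the labels could be applied like this (the node names and values are illustrative):

```sh
# same key on every worker node, values may differ per node (illustrative names/values)
$ kubectl label node worker-1 openebs.io/rack=rack1
$ kubectl label node worker-2 openebs.io/rack=rack2
```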
Once we have labeled the nodes, we can install the LVM driver. The driver picks the keys from the `ALLOWED_TOPOLOGIES` env and adds them as the supported topology keys. If the driver is already installed and you want to add new topology information, you can edit the LVM-LocalPV CSI driver daemon set (openebs-lvm-node).
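The daemon set can be edited in place, for example:

```sh
$ kubectl edit ds -n kube-system openebs-lvm-node
```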
Note that a restart of the LVM-LocalPV CSI driver daemon set is a must if we are going to use `WaitForFirstConsumer` as the `volumeBindingMode` in the storage class. With the `Immediate` volume binding mode, restarting the daemon set is not strictly required, irrespective of whether the nodes are labeled before or after installing the LVM driver. However, it is recommended to restart the daemon set if we are labeling the nodes after the installation.
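A restart can be done by deleting the node-agent pods and letting the daemon set recreate them:

```sh
# delete the LVM-LocalPV node-agent pods; the daemon set recreates them with the new config
$ kubectl delete po -n kube-system -l role=openebs-lvm
```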
The node-agent pods can be listed to confirm that they are back up:

```sh
$ kubectl get pods -n kube-system -l role=openebs-lvm
```

We can verify that the key has been registered successfully with the LVM-LocalPV CSI driver by checking the CSI node object:

```
...
spec:
  drivers:
  - name: local.csi.openebs.io
    nodeID: k8s-node-1
    topologyKeys:
    - openebs.io/nodename
    - openebs.io/rack
```
The LVM LocalPV CSI driver will schedule the PV to the nodes where label "openebs.io/rack" is set to "rack1".
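For instance, the `allowedTopologies` stanza of such a storage class might look like the following sketch (only the topology section is shown; the rest of the storage class is assumed):

```yaml
allowedTopologies:
- matchLabelExpressions:
  - key: openebs.io/rack
    values:
    - rack1
```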
Note that if the storage class is using the `Immediate` binding mode and `allowedTopologies` is not mentioned in the storage class, then all the nodes should be labeled using the `ALLOWED_TOPOLOGIES` keys; that is, the `ALLOWED_TOPOLOGIES` keys should be present on all nodes, though the nodes can have different values for those keys. If some nodes don't have those keys, then the LVM-LocalPV default scheduler cannot effectively do volume-capacity-based scheduling. In that case the CSI provisioner will pick the keys from a random node, prepare the preferred topology list from the nodes that have those keys defined, and the LVM-LocalPV scheduler will schedule the PV among those nodes only.
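To confirm that every worker node carries the expected key, the label can be listed per node, for example with the `openebs.io/rack` key used above (an empty column means the key is missing on that node):

```sh
$ kubectl get nodes --label-columns=openebs.io/rack
```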
docs/storageclasses.md
```
...
    - node-2
```
At the same time, you must set env variables in the LVM-LocalPV CSI driver daemon set (openebs-lvm-node) so that it can pick the node labels as the supported topology. The driver adds "openebs.io/nodename" as the default topology key. If a key doesn't exist in the node labels when the CSI LVM driver registers, that key will not be added to the topologyKeys. Multiple keys can be set, separated by commas.
```yaml
env:
  - name: OPENEBS_NODE_ID
    valueFrom:
      fieldRef:
        fieldPath: spec.nodeName
  - name: OPENEBS_CSI_ENDPOINT
    value: unix:///plugin/csi.sock
  - name: OPENEBS_NODE_DRIVER
    value: agent
  - name: LVM_NAMESPACE
    value: openebs
  - name: ALLOWED_TOPOLOGIES
    value: "test1,test2"
```
We can verify that the key has been registered successfully with the LVM-LocalPV CSI driver by checking the CSI node object yaml:
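A minimal sketch of that check, assuming a node named `k8s-node-1`, the illustrative keys `test1`/`test2` from the snippet above, and matching labels present on the node:

```sh
# the topologyKeys list in the output should now include the configured keys, e.g.:
#   topologyKeys:
#   - openebs.io/nodename
#   - test1
#   - test2
$ kubectl get csinode k8s-node-1 -o yaml
```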
If you want to change the topology keys, just set the new value in the `ALLOWED_TOPOLOGIES` env. Check the [faq](./faq.md#1-how-to-add-custom-topology-key) for more details.
```
$ kubectl edit ds -n kube-system openebs-lvm-node
```
Here we can have a volume group named “lvmvg” created on the NVMe disks and want to use this high-performing LVM volume group for the applications that need higher IOPS. We can use the above StorageClass to create the PVC and deploy the application using that.
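A PVC using such a storage class could look like the following sketch (the PVC name, storage class name, and size are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-lvmpvc                  # illustrative name
spec:
  storageClassName: openebs-lvmpv   # illustrative; use the storage class described above
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
```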
The LVM-LocalPV driver will create the volume in the volume group “lvmvg” present on the node with the fewest provisioned volumes among the given node list. In the above StorageClass, if node-1 has fewer provisioned volumes, it will create the volume on node-1 only. Alternatively, we can use `volumeBindingMode: WaitForFirstConsumer` to let Kubernetes select the node where the volume should be provisioned.
Add "openebs.io/lvmvg" to the LVM-LocalPV CSI driver daemon set env `ALLOWED_TOPOLOGIES`. Now, we can create the StorageClass like this: