Commit f4b8613

Address frisso comments
1 parent acdd4bd commit f4b8613

docs/advanced/peering/peering-via-cr.md

Lines changed: 74 additions & 51 deletions
@@ -3,7 +3,7 @@
 Declarative peerings are supported starting from Liqo 1.0: you can create a set of CRs describing the peering with a remote cluster, and the peering is automatically set up once the CRs are applied on both sides.
 This simplifies automation, GitOps and continuous delivery: for example, you might have a Git repository with the manifests describing the peerings, and an instance of [ArgoCD](https://argo-cd.readthedocs.io) that synchronizes the changes on the clusters, creating and destroying the peerings.
 
-The following lines analizes how to describe the configuration of each of the Liqo modules:
+This documentation page describes how to declaratively configure each of the Liqo modules:
 
 - Networking
 - Authentication
@@ -17,13 +17,31 @@ This tenant namespace must **refer to the peering with a specific cluster**, hen
 A tenant namespace **can have an arbitrary name**, but **it must have the following labels**:
 
 ```text
-liqo.io/remote-cluster-id: <HERE_THE_CLUSTER_ID_OF_PEER_CLUSTER>
+liqo.io/remote-cluster-id: <PEER_CLUSTER_ID>
 liqo.io/tenant-namespace: "true"
 ```
 
-### Configuring the tenant namespace on cluster consumer
+Where `PEER_CLUSTER_ID` is the cluster ID of the peer cluster, defined at installation time; it can be obtained by running the following command on the **remote cluster**:
+
+`````{tab-set}
+````{tab-item} liqoctl
+
+```bash
+liqoctl info --get clusterid
+```
+````
+````{tab-item} kubectl
+
+```bash
+kubectl get configmaps -n liqo liqo-clusterid-configmap \
+  --template {{.data.CLUSTER_ID}}
+```
+````
+`````
+
+### Configuring the tenant namespace on the consumer cluster
 
-The following is an example of tenant namespace referring to the peering with a cluster with id `cl-provider`:
+The following is an example of a tenant namespace named `liqo-tenant-cl-provider`, which refers to the peering with a cluster with ID `cl-provider`:
 
 ```yaml
 apiVersion: v1
@@ -36,10 +54,9 @@ metadata:
 spec: {}
 ```
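
Equivalently, the tenant namespace can be created and labeled imperatively. The following is a minimal sketch using the example name `liqo-tenant-cl-provider` and the peer cluster ID `cl-provider` from above:

```bash
# Create the tenant namespace on the consumer cluster and attach the labels Liqo expects.
kubectl create namespace liqo-tenant-cl-provider
kubectl label namespace liqo-tenant-cl-provider \
  liqo.io/remote-cluster-id=cl-provider \
  liqo.io/tenant-namespace=true
```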
 
-### Configuring the tenant namespace on the cluster provider
+### Configuring the tenant namespace on the provider cluster
 
-**You will need a similar namespace on the provider cluster** pointing to the cluster ID of your local cluster (cluster consumer).
-For example, if the cluster consumer has cluster ID `cl-consumer`, its tenant namespace might look like the following:
+The following is an example of a tenant namespace on the provider cluster, named `liqo-tenant-cl-consumer`, which refers to the peering with the consumer cluster with ID `cl-consumer`:
 
 ```yaml
 apiVersion: v1
@@ -52,15 +69,18 @@ metadata:
 spec: {}
 ```
 
+```{admonition} Note
+When configuring the peering, the names of the tenant namespaces on the provider and consumer clusters do not have to match or follow any naming pattern.
+```
+
 ## Declarative network configuration
 
 By default, the network connection between clusters is established using a secure channel created via [Wireguard](https://www.wireguard.com/).
-To allow the creation of the tunnel, one cluster, typically the provider, needs to host a server gateway that exposes a port for the client gateway (usually running on the consumer cluster) to connect to.
-
-Therefore, it is required that the server gateway is reachable from the peer cluster, to enable the creation of the tunnel between the two clusters.
+In this case, one cluster (usually the provider) needs to host a server gateway exposing a UDP port, which must be reachable from the client gateway (usually running on the consumer cluster).
 
 In this guide, **we will configure the client gateway on the consumer cluster and the server gateway on the provider cluster**, which is the most common setup.
-However, depending on your requirements, you may choose to reverse these roles.
+
+However, since the setup of the network peering is independent from the offloading role of the cluster (i.e., consumer vs. provider), you may invert the client/server roles if this is more convenient for your setup.
 
 ### Creating and exchanging the network configurations (both clusters)
@@ -97,7 +117,7 @@ echo "Private key:"; cat private.der | tail -c 32 | base64
 echo "Public key:"; cat public.der | tail -c 32 | base64
 ```
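
If the WireGuard `wg` tool is installed, the same kind of key pair can also be generated directly in base64 form; this is just an alternative sketch to the `openssl` commands above:

```bash
# Generate a WireGuard private key and derive its public key (both base64-encoded).
wg genkey | tee private.key | wg pubkey > public.key
cat private.key public.key
```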
 
-At this point, **in each cluster**, we will need to create a secret containing this pair of keys:
+At this point, you need to create **in each cluster** a secret containing this key pair:
 
 ```{code} yaml
 apiVersion: v1
@@ -137,19 +157,19 @@ liqo.io/remote-cluster-id: <HERE_THE_CLUSTER_ID_OF_PEER_CLUSTER>
 networking.liqo.io/gateway-resource: "true"
 ```
 
-### Configuring the server gateway (cluster provider)
+### Configuring the server gateway (provider cluster)
 
-As mentioned before, a Wireguard tunnel connects the clusters peered with Liqo.
-In this section we are going to configure the gateway server, where the client will connect to.
+By default, a Wireguard tunnel connects the clusters peered with Liqo.
+This section shows how to configure the gateway server, to which the client will connect.
 
 The `GatewayServer` resource describes the configuration of the gateway server, and should be applied on the cluster acting as the server for the tunnel creation.
 You can find [here](./inter-cluster-network.md#creation-of-a-gateway-server) an example of the `GatewayServer` CR.
 
 When you create the `GatewayServer` resource, **make sure to specify the `secretRef`** pointing to the key pair we created before.
 
-Note that under `.spec.endpoint` of the `GatewayServer` resource you can configure a fixed `nodePort` or `loadBalancerIP` (if supported by your provider) to have a deterministic port or ip address for the gateway, so that the configuration of the client that connects to it can be defined in advance.
+Note that under `.spec.endpoint` of the `GatewayServer` resource you can configure a fixed `nodePort` or `loadBalancerIP` (if supported by your provider) to have a predictable UDP port or IP address for the gateway, so that the configuration of the client that connects to it can be defined in advance.
 
-The following is an example of the `GatewayServer` resource, configured in the cluster provider, exposing the gateway using `NodePort` on port `30742`:
+The following is an example of the `GatewayServer` resource, configured in the provider cluster, exposing the gateway using `NodePort` on port `30742`:
 
 ```{code} yaml
 apiVersion: networking.liqo.io/v1beta1
@@ -177,12 +197,12 @@ spec:
 Where:
 
 - `CONSUMER_CLUSTER_ID` is the cluster ID of the consumer, where the gateway client runs;
-- `PROVIDER_TENANT_NAMESPACE` is the tenant namespace on the cluster provider, where, in thie case, we are configuring the gateway server;
-- `WIREGUARD_KEYS_SECRET_NAME` is the name of the secret with the Wireguard key paris we created before.
+- `PROVIDER_TENANT_NAMESPACE` is the tenant namespace on the provider cluster, where, in this case, we are configuring the gateway server;
+- `WIREGUARD_KEYS_SECRET_NAME` is the name of the secret with the Wireguard key pair we created before.
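
Once the `GatewayServer` has been applied, you can quickly check that the resource exists and that a service exposes the WireGuard endpoint; a minimal sketch, assuming the default plural name of the Liqo CRD:

```bash
# On the provider cluster: check the GatewayServer and the service exposing the WireGuard endpoint.
kubectl get gatewayservers.networking.liqo.io -n <PROVIDER_TENANT_NAMESPACE>
kubectl get svc -n <PROVIDER_TENANT_NAMESPACE>
```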
 
-### Configuring the client gateway (cluster consumer)
+### Configuring the client gateway (consumer cluster)
 
-The other cluster, in this case the consumer, needs to run the client gateway connecting to the service exposed by the cluster provider.
+The other cluster, in this case the consumer, needs to run the client gateway connecting to the service exposed by the provider cluster.
 
 The `GatewayClient` resource describes the configuration of the gateway client; **it should contain the parameters** to connect to the service exposed by the gateway server.
 
@@ -216,9 +236,9 @@ spec:
 Where:
 
 - `PROVIDER_CLUSTER_ID` is the cluster ID of the provider, where the gateway server is running;
-- `CONSUMER_TENANT_NAMESPACE` is the tenant namespace on the cluster consumer, where, in thie case, we are configuring the gateway client;
-- `WIREGUARD_KEYS_SECRET_NAME` is the name of the secret with the Wireguard key paris we created before;
-- `REMOTE_IP`: is the IP address of one of the nodes of the cluster provider, as we configured a `NodePort` service. If the service was a `LoadBalancer` the ip would be the one of the load balancer.
+- `CONSUMER_TENANT_NAMESPACE` is the tenant namespace on the consumer cluster, where, in this case, we are configuring the gateway client;
+- `WIREGUARD_KEYS_SECRET_NAME` is the name of the secret with the Wireguard key pair we created before;
+- `REMOTE_IP` is the IP address of one of the nodes of the provider cluster, since we configured a `NodePort` service; if the service were a `LoadBalancer`, the IP would be that of the load balancer (see the sketch below on how to retrieve a node address).
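
Since `REMOTE_IP` must be an address of the provider cluster reachable from the consumer, a quick way to list candidate node addresses (run against the provider cluster) is:

```bash
# Show the provider nodes with their internal/external IP addresses.
kubectl get nodes -o wide
```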
 
 ### Summary of network configuration
 
@@ -228,19 +248,20 @@ To sum up, to set up the network, **both clusters need**:
 - a `Secret` containing the Wireguard public and private keys
 - a `PublicKey` with the Wireguard public key **of the peer cluster**
 
-The **cluster provider will have a `GatewayServer`** resource
+In addition:
 
-The **cluster consumer a `GatewayClient` resource** connecting to the peer Gateway server.
+- the **provider cluster will have a `GatewayServer`** resource;
+- the **consumer cluster will have a `GatewayClient`** resource connecting to the peer gateway server.
 
 Once you have applied all the required resources, the client should be able to connect to the server and create the tunnel.
 
 You can get the `Connection` resource to check the status of the tunnel, as shown [here](./inter-cluster-network.md#connection-crds).
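
For a quick check from either cluster, something like the following can be used; the `connections` plural is assumed from the default naming of the Liqo CRDs:

```bash
# The Connection resource reports the status of the tunnel towards the peer cluster.
kubectl get connections.networking.liqo.io -A
```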
 
 ## Declarative configuration of clusters authentication
 
-In this section we are going to configure the authentication between the clusters, allowing the cluster consumer to ask for resources.
+This section shows how to configure the authentication between the clusters, allowing the consumer cluster to ask for resources.
 
-When authentication is manually configured, **the user is in charge of providing the credentials with the permission required** by the cluster consumer to operate.
+When authentication is manually configured, **the user is in charge of providing the credentials with the permissions required** by the consumer cluster to operate.
 
 If you are not familiar with how authentication works in Kubernetes, you can check [this documentation page](https://kubernetes.io/docs/reference/access-authn-authz/authentication/).
 You can also check [how to issue a certificate for a user](https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/#normal-user).
@@ -250,13 +271,13 @@ Note that with EKS authentication via client certificate [is not directly suppor
 You can check [here](https://docs.aws.amazon.com/eks/latest/userguide/cluster-auth.html) how access control works in EKS.
 ```
 
-### Cluster consumer role binding (cluster provider)
+### Consumer cluster role binding (provider cluster)
 
-Once we created the credentials the consumer can work with, we will need to provide the minimum permission required by the consumer to operate.
-Note that **the cluster consumer will never directly create workloads on the remote cluster** and, at this stage, it should have only the permissions to create the liqo resources to ask for the approval of a `ResourceSlice`.
+Once we have created (in the provider cluster) the credentials the consumer will use, we need to grant the minimum permissions required by the consumer to operate.
+Note that **the consumer cluster will never directly create workloads on the remote cluster** and, at this stage, it should only have the permissions to create the Liqo resources needed to ask for the approval of a `ResourceSlice`.
 
-To do so, we will need to bind the newly created user to the `liqo-remote-controlpane` role.
-This can be done **by creating the following `RoleBinding` resource in the tenant namespace of the cluster provider**:
+To do so, you need to bind the newly created user to the `liqo-remote-controlplane` role.
+This can be done **by creating the following `RoleBinding` resource in the tenant namespace of the provider cluster**:
 
 ```{code} yaml
 apiVersion: rbac.authorization.k8s.io/v1
@@ -276,14 +297,14 @@ subjects:
 name: <USER_COMMON_NAME>
 ```
 
-Where, when the user authenticate via certificate signed by the cluster CA, `USER_COMMON_NAME` is the CN field of the certificate.
+where, when the user authenticates via a certificate signed by the cluster CA, `USER_COMMON_NAME` is the `CN` field of the certificate.
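
The same binding can also be sketched imperatively, assuming `liqo-remote-controlplane` is defined as a `ClusterRole` (use `--role` instead if it is a namespaced `Role`); the binding name below is an arbitrary placeholder:

```bash
# Bind the consumer's user to the Liqo remote control-plane role in the
# provider's tenant namespace (binding name is a placeholder).
kubectl create rolebinding liqo-remote-controlplane-binding \
  --clusterrole=liqo-remote-controlplane \
  --user=<USER_COMMON_NAME> \
  -n <PROVIDER_TENANT_NAMESPACE>
```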
 
-### Creation of a tenant for the cluster consumer (cluster provider)
+### Creation of a tenant for the consumer cluster (provider cluster)
 
 On the provider side, to allow the authentication of a consumer, we will need to create a `Tenant` resource for it.
-This resource is useful to control the remote consumer (e.g. if we the provider wants to prevent a remote consumer to negotiate more resources, it can set the tenant condition to `Cordoned`, stopping any other resources negotiation).
+This resource is useful to control the remote consumer (e.g., if the provider wants to prevent a remote consumer from negotiating more resources, it can set the tenant condition to `Cordoned`, stopping any further resource negotiation).
 
-Note that, in the case of declarative configuration, there won't be any handshake between the clusters, so we will need to configure the tenant so that it accepts the `ResourceSlice` of the given consumer, even though no handshake occurred.
+Note that, in the case of declarative configuration, there will not be any handshake between the clusters, so we need to configure the tenant so that it accepts the `ResourceSlice` of the given consumer, even though no handshake occurred.
 This can be done by setting `TolerateNoHandshake` as `authzPolicy`, as in the following example:
 
 ```{code} yaml
@@ -300,10 +321,10 @@ spec:
 tenantCondition: Active
 ```
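
If the provider later wants to stop further negotiations with this consumer, the `tenantCondition` can be switched to `Cordoned`. The following is a rough sketch, assuming the `Tenant` lives in the provider's tenant namespace and using a placeholder resource name:

```bash
# Cordon the tenant to stop any further resource negotiation from this consumer.
# NOTE: resource name and namespace scope are assumptions; adjust as needed.
kubectl patch tenant <TENANT_NAME> -n <PROVIDER_TENANT_NAMESPACE> \
  --type merge -p '{"spec":{"tenantCondition":"Cordoned"}}'
```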
 
-### Add the credentials on the cluster consumer (cluster consumer)
+### Add the credentials on the consumer cluster (consumer cluster)
 
-The previously created credentials on the cluster provider, should be provided to the cluster consumer.
-To do so, you should create a `Secret` containing the kubeconfig with the credentials to operate on the cluster provider, having the following labels:
+The credentials previously created on the provider cluster should be given to the consumer cluster.
+To do so, you should create a `Secret` containing the kubeconfig with the credentials to operate on the provider cluster, with the following labels:
 
 ```{code} yaml
 liqo.io/identity-type: ControlPlane
@@ -316,7 +337,7 @@ and annotation:
 liqo.io/remote-tenant-namespace: <PROVIDER_TENANT_NAMESPACE>
 ```
 
-Where the `PROVIDER_TENANT_NAMESPACE` is the tenant namespace that we created on the cluster provider for the peering with this consumer.
+Where `PROVIDER_TENANT_NAMESPACE` is the tenant namespace that we created on the provider cluster for the peering with this consumer.
 
 The following is an example of an identity secret:
 
@@ -351,23 +372,25 @@ I1107 10:05:29.355741 1 reflector.go:163] [cl02] Reflection of authenticat
 
 ### Summary of authentication configuration
 
-To sum up, to set up the authentication, **on the cluster provider** you will need to:
+To sum up, to set up the authentication, **on the provider cluster** you will need to:
 
 - Create the credentials to be used by the consumer
 - Bind the credentials to the `liqo-remote-controlplane` role
 - Create a `Tenant` resource for the consumer
 
 While, **on the consumer side**, you will need to:
 
-- Create a new secret containing the kubeconfig with the credentials to access to the cluster provider
+- Create a new secret containing the kubeconfig with the credentials to access the provider cluster (see the sketch below)
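
A minimal imperative sketch for creating that secret is shown below; the secret name is a placeholder, and the kubeconfig data key and the full set of labels are assumptions that must match the identity secret example shown above:

```bash
# On the consumer cluster: store the provider kubeconfig in the tenant namespace.
# NOTE: the data key ("kubeconfig") and any extra labels are assumptions; align
# them with the identity secret example above.
kubectl create secret generic provider-identity \
  -n <CONSUMER_TENANT_NAMESPACE> \
  --from-file=kubeconfig=./provider-kubeconfig.yaml
kubectl label secret provider-identity -n <CONSUMER_TENANT_NAMESPACE> \
  liqo.io/identity-type=ControlPlane
kubectl annotate secret provider-identity -n <CONSUMER_TENANT_NAMESPACE> \
  liqo.io/remote-tenant-namespace=<PROVIDER_TENANT_NAMESPACE>
```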
 
 ## Declarative configuration of namespace offloading
 
 While offloading is independent of the network, which means that it is possible to negotiate resources and configure a namespace offloading without the inter-cluster network enabled, **a [working authentication configuration](#declarative-configuration-of-clusters-authentication) is a prerequisite to enable offloading**.
 
-### Ask for resources: configure a ResourceSlice
+### Ask for resources: configure a ResourceSlice (consumer cluster)
+
+The `ResourceSlice` resource is the CR that defines the computational resources requested by the consumer from the provider cluster.
+It should be created in the tenant namespace of the consumer cluster, and it is automatically forwarded to the provider cluster, which can accept or reject it.
 
-The `ResourceSlice` resource is the one to be created to ask a cluster provider for resources.
 The following is an example of a `ResourceSlice`:
 
 ```{code} yaml
@@ -381,7 +404,7 @@ metadata:
 liqo.io/remote-cluster-id: <PROVIDER_CLUSTER_ID>
 liqo.io/remoteID: <PROVIDER_CLUSTER_ID>
 name: test
-namespace: liqo-tenant-cl02
+namespace: <CONSUMER_TENANT_NAMESPACE>
 spec:
 class: default
 providerClusterID: <PROVIDER_CLUSTER_ID>
@@ -390,7 +413,7 @@ spec:
 ram 128Gi
 ```
 
-With the configuration above, once the resources are accepted from the provider side, a new (virtual) node, impersonating the provider cluster, will make available the requested resorces on the consumer cluster.
+If the request above is successfully accepted by the provider, a new (virtual) node, impersonating the provider cluster, will make the requested resources available on the consumer cluster.
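
To verify the outcome on the consumer cluster, a quick sketch (the `resourceslices` and `virtualnodes` plurals are assumed from the default CRD naming):

```bash
# Check that the ResourceSlice has been accepted and that the virtual node appeared.
kubectl get resourceslices -A
kubectl get virtualnodes -A
kubectl get nodes
```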
 
 To know more about `ResourceSlice` and `VirtualNode`, check [this section of the documentation](./offloading-in-depth.md#create-resourceslice).
 
@@ -412,17 +435,17 @@ spec:
 podOffloadingStrategy: LocalAndRemote
 ```
 
-The `NamespaceOffloading` resource should be created in the namespace that we want to extend on the remote clusters.
+The `NamespaceOffloading` resource should be created in the namespace that we would like to extend to the remote clusters.
 
 For example, the resource above extends the `demo` namespace on all the configured provider clusters.
-As the policy is `LocalAndRemote` the pod will be executed locally or remotely and it depends on the choice made by the the Kubernetes scheduler.
+Since the `podOffloadingStrategy` policy is `LocalAndRemote`, a new pod can be executed either locally or remotely, depending on the choice made by the vanilla Kubernetes scheduler (e.g., if the remote virtual node has plenty of free resources, it may be preferred over local nodes that are already used by other pods).
 
 [Check here](../../usage/namespace-offloading.md#namespace-mapping-strategy) to know more about namespace offloading.
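
To check that the offloading is in place before creating pods, a quick sketch (the `namespaceoffloadings` plural is assumed from the default CRD naming):

```bash
# Inspect the offloading status of the namespace and where its pods get scheduled.
kubectl get namespaceoffloadings -n demo
kubectl get pods -n demo -o wide
```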
 
 ```{warning}
-Currently, **there is a caveat with the namespace offloading preventing the pods created before the creation of the `NamespaceOffloading` resource to be scheduled on a remote cluster**.
+Currently, the `NamespaceOffloading` resource **must be created before scheduling a pod on a remote cluster**.
 
-For example, if we configure a pod to run on the remote cluster, but pod is created before the `NamespaceOffloading`, that pod remains in a `Pending` state, also after the namespace is actually offloaded.
+For example, if we configure a pod to run on the remote cluster, but the pod is created before the `NamespaceOffloading` resource, that pod will remain in a `Pending` state forever, even after the namespace is actually offloaded.
 
 Therefore, **make sure to offload a namespace** before scheduling pods on it.
 ```
