
Commit 0c8c92d

claudiolorcheina97 authored and committed
docs: add liqoctl info documentation
1 parent 20d9d8f commit 0c8c92d

File tree: 9 files changed, +181 −25 lines changed

docs/advanced/peering/inter-cluster-authentication.md

Lines changed: 5 additions & 3 deletions
@@ -259,20 +259,20 @@ Once the Identity resource is correctly applied, the clusters are able to negoti
 All in all, these are the steps to be followed by the administrators of each of the clusters to manually complete the authentication process:
 
 1. **Cluster provider**: creates the nonce to be provided to the **cluster consumer** administrator:
-
+
 ```bash
 liqoctl create nonce --remote-cluster-id $CLUSTER_CONSUMER_ID
 liqoctl get nonce --remote-cluster-id $CLUSTER_CONSUMER_ID > nonce.txt
 ```
 
 2. **Cluster consumer**: generates the `Tenant` resource to be applied by the **cluster provider**:
-
+
 ```bash
 liqoctl generate tenant --remote-cluster-id $CLUSTER_PROVIDER_ID --nonce $(cat nonce.txt) > tenant.yaml
 ```
 
 3. **Cluster provider**: applies `tenant.yaml` and generates the `Identity` resource to be applied by the consumer:
-
+
 ```bash
 kubectl apply -f tenant.yaml
 liqoctl generate identity --remote-cluster-id $CLUSTER_CONSUMER_ID > identity.yaml
@@ -283,3 +283,5 @@ All in all, these are the steps to be followed by the administrators of each of
 ```bash
 kubectl apply -f identity.yaml
 ```
+
+You can check whether the procedure completed successfully by checking [the peering status](../../usage/peer.md#check-status-of-peerings).
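For a quick scripted check of the result, the `liqoctl info peer` subcommand documented elsewhere in this commit can be used from the **cluster consumer**; a minimal sketch, assuming `$CLUSTER_PROVIDER_ID` is still set as in the steps above:

```bash
# Sketch: verify the newly established peering from the consumer cluster.
# Assumes $CLUSTER_PROVIDER_ID is set as in the steps above.
liqoctl info peer $CLUSTER_PROVIDER_ID
```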

docs/advanced/peering/inter-cluster-network.md

Lines changed: 27 additions & 4 deletions
@@ -227,14 +227,23 @@ spec:
   pod: 10.243.0.0/16 # the pod CIDR of the remote cluster
 ```
 
-You can find *REMOTE_CLUSTER_ID* these parameters in the output of the
+You can find the value of the *REMOTE_CLUSTER_ID* by launching the following command on the **remote cluster**:
+
+`````{tab-set}
+````{tab-item} liqoctl
+
+```bash
+liqoctl info --get clusterid
+```
+````
+````{tab-item} kubectl
 
 ```bash
 kubectl get configmaps -n liqo liqo-clusterid-configmap \
   --template {{.data.CLUSTER_ID}}
 ```
-
-command in the remote cluster.
+````
+`````
 
 ```{admonition} Tip
 You can generate this file with the command `liqoctl generate configuration` executed in the remote cluster.
@@ -291,13 +300,25 @@ NAMESPACE NAME TEMPLATE NAME IP PORT AGE
 default server wireguard-server 10.42.3.54 32133 84s
 ```
 
+`````{tab-set}
+````{tab-item} liqoctl
+
 ```bash
-kubectl get gatewayservers --template {{.status.endpoint}}
+liqoctl info peer <REMOTE_CLUSTER_ID> --get network.gateway
+```
+````
+
+````{tab-item} kubectl
+
+```bash
+kubectl get gatewayservers --template {{.status.endpoint}} -n <GATEWAY_NS> <GATEWAY_NAME>
 ```
 
 ```text
 map[addresses:[172.19.0.9] port:32701 protocol:UDP]
 ```
+````
+`````
 
 #### Creation of a gateway client
 
@@ -475,6 +496,8 @@ Resuming, these are the steps to be followed by the administrators of each of th
 kubectl apply -f publickey-client.yaml
 ```
 
+You can check whether the procedure completed successfully by checking [the peering status](../../usage/peer.md#check-status-of-peerings).
+
 ## Custom templates
 
 Gateway resources (i.e., `GatewayServer` and `GatewayClient`) contain a reference to the template CR implementing the inter-cluster network technology.
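Because `--get network.gateway` prints a single field, the endpoint can be captured directly in scripts; a minimal sketch, assuming the peer's cluster ID is stored in `$REMOTE_CLUSTER_ID` (variable names are ours):

```bash
# Sketch: store the remote gateway endpoint retrieved above for later use.
GATEWAY_ENDPOINT=$(liqoctl info peer "$REMOTE_CLUSTER_ID" --get network.gateway)
echo "Gateway endpoint: ${GATEWAY_ENDPOINT}"
```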

docs/advanced/peering/offloading-in-depth.md

Lines changed: 10 additions & 6 deletions
@@ -68,7 +68,7 @@ To add other resources like `ephemeral-storage`, `gpu` or any other custom resou
 :caption: "Cluster consumer"
 kubectl get resourceslices.authentication.liqo.io -A
 ```
-
+
 ```text
 NAMESPACE NAME AUTHENTICATION RESOURCES AGE
 liqo-tenant-cool-firefly mypool Accepted Accepted 19s
@@ -80,7 +80,7 @@ At the same time, in the **provider cluster**, a `Quota` will be created to limi
 :caption: "Cluster provider"
 kubectl get quotas.offloading.liqo.io -A
 ```
-
+
 ```text
 NAMESPACE NAME ENFORCEMENT CORDONED AGE
 liqo-tenant-wispy-firefly mypool-c34af51dd912 None 36s
@@ -92,7 +92,7 @@ After a few seconds, in the **consumer cluster**, a new `VirtualNode` will be cr
 :caption: "Cluster consumer"
 kubectl get virtualnodes.offloading.liqo.io -A
 ```
-
+
 ```text
 NAMESPACE NAME CLUSTERID CREATE NODE AGE
 liqo-tenant-cool-firefly mypool cool-firefly true 59s
@@ -104,7 +104,7 @@ A new `Node` will be available in the consumer cluster with the name `mypool` pr
 :caption: "Cluster consumer"
 kubectl get node
 ```
-
+
 ```text
 NAME STATUS ROLES AGE VERSION
 cluster-1-control-plane-fsvkj Ready control-plane 30m v1.27.4
@@ -177,7 +177,7 @@ This command will create a `VirtualNode` named `mynode` in the consumer cluster,
 :caption: "Cluster consumer"
 kubectl get virtualnodes.offloading.liqo.io -A
 ```
-
+
 ```text
 NAMESPACE NAME CLUSTERID CREATE NODE AGE
 liqo-tenant-cool-firefly mynode cool-firefly true 7s
@@ -189,7 +189,7 @@ A new `Node` will be available in the consumer cluster with the name `mynode` pr
 :caption: "Cluster consumer"
 kubectl get node
 ```
-
+
 ```text
 NAME STATUS ROLES AGE VERSION
 cluster-1-control-plane-fsvkj Ready control-plane 52m v1.27.4
@@ -258,6 +258,10 @@ metadata:
 type: Opaque
 ```
 
+### Check shared resources and virtual nodes
+
+Via `liqoctl` it is possible to check the amount of shared resources and the virtual nodes configured for a specific peering by looking at [the peering status](../../usage/peer.md#check-status-of-peerings).
+
 ### Delete VirtualNode
 
 You can revert the process by deleting the `VirtualNode` in the consumer cluster.
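A concrete invocation of the check described in the new section would mirror the quick-start example; a sketch, assuming the `cool-firefly` peering shown in the listings above:

```bash
# Sketch: inspect the peering backing the virtual node listed above,
# including shared resources and offloading status.
liqoctl info peer cool-firefly
```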

docs/conf.py

Lines changed: 3 additions & 1 deletion
@@ -45,6 +45,8 @@
 myst_enable_extensions = [
     "substitution",
 ]
+# Enable slug generation for headings to reference them in markdown links
+myst_heading_anchors = 3
 
 # Add any paths that contain templates here, relative to this directory.
 templates_path = ['_templates']
@@ -208,7 +210,7 @@ def generate_liqoctl_install(platform: str, arch: str) -> str:
 curl --fail -LS \"{file}\" | tar -xz\n\
 sudo install -o root -g root -m 0755 liqoctl /usr/local/bin/liqoctl\n\
 ```\n"
-
+
 def generate_helm_install() -> str:
     version=generate_semantic_version()
     return f"```bash\n\
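For context, `myst_heading_anchors = 3` makes MyST generate slugs for headings up to level 3, which is what allows the cross-references added in this commit, such as `[the peering status](../../usage/peer.md#check-status-of-peerings)`, to resolve.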

docs/examples/quick-start.md

Lines changed: 59 additions & 3 deletions
@@ -122,6 +122,28 @@ liqo-proxy-599958d9b8-6fzfc 1/1 Running 0 8m15s
 liqo-webhook-8fbd8c664-pxrfh 1/1 Running 0 8m15s
 ```
 
+At this point, it is possible to check status and information about the current Liqo instance by running:
+
+```bash
+liqoctl info
+```
+
+```text
+─ Local installation info ────────────────────────────────────────────────────────
+Cluster ID: milan
+Version: v1.0.0-rc.2
+K8s API server: https://172.19.0.10:6443
+Cluster labels
+    liqo.io/provider: kind
+──────────────────────────────────────────────────────────────────────────────────
+─ Installation health ────────────────────────────────────────────────────────────
+✔ Liqo is healthy
+──────────────────────────────────────────────────────────────────────────────────
+─ Active peerings ────────────────────────────────────────────────────────────────
+
+──────────────────────────────────────────────────────────────────────────────────
+```
+
 ## Peer two clusters
 
 Once Liqo is installed in your clusters, you can establish new *peerings*.
@@ -182,17 +204,51 @@ The output should look like this:
 You can check the peering status by running:
 
 ```bash
-kubectl get foreignclusters
+liqoctl info
 ```
 
-The output should look like the following, indicating the relationship the foreign cluster has with the local cluster:
+In the output, you should see that a new peer has appeared in the "Active peerings" section:
+
+```text
+─ Local installation info ────────────────────────────────────────────────────────
+Cluster ID: rome
+Version: v1.0.0-rc.2
+K8s API server: https://172.19.0.9:6443
+Cluster labels
+    liqo.io/provider: kind
+──────────────────────────────────────────────────────────────────────────────────
+─ Installation health ────────────────────────────────────────────────────────────
+✔ Liqo is healthy
+──────────────────────────────────────────────────────────────────────────────────
+─ Active peerings ────────────────────────────────────────────────────────────────
+milan
+    Role: Provider
+    Networking status: Healthy
+    Authentication status: Healthy
+    Offloading status: Healthy
+──────────────────────────────────────────────────────────────────────────────────
+```
+
+````{admonition} Tip
+To get additional information about a specific peering, you can run:
+
+```bash
+liqoctl info peer milan
+```
+````
+
+Additionally, you should be able to see a new CR describing the relationship with the foreign cluster:
+
+```bash
+kubectl get foreignclusters
+```
 
 ```text
 NAME ROLE AGE
 milan Provider 52s
 ```
 
-At the same time, you should see a virtual node (`milan`) in addition to your physical nodes:
+Moreover, you should be able to see a new virtual node (`milan`) among the nodes of the cluster:
 
 ```bash
 kubectl get nodes
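The per-peer report also accepts the `--get` selector used elsewhere in this commit; a sketch, reusing the `network.gateway` path documented in the inter-cluster network page:

```bash
# Sketch: extract a single field from the "milan" peering report.
liqoctl info peer milan --get network.gateway
```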

docs/examples/service-offloading.md

Lines changed: 2 additions & 2 deletions
@@ -111,14 +111,14 @@ Let's now consume the Service from both clusters from a different pod (e.g., a t
 Starting from the *London* cluster:
 
 ```bash
-kubectl run consumer --image=curlimages/curl --rm --restart=Never \
+kubectl run consumer -it --image=curlimages/curl --rm --restart=Never \
   -- curl -s -H 'accept: application/json' http://flights-service.liqo-demo:7999/schedule
 ```
 
 A similar result is obtained by executing the same command in a shell running in the *New York* cluster, although the backend pod is effectively running in the *London* cluster:
 
 ```bash
-kubectl run consumer --image=curlimages/curl --rm --restart=Never \
+kubectl run consumer -it --image=curlimages/curl --rm --restart=Never \
   --kubeconfig $KUBECONFIG_NEWYORK \
   -- curl -s -H 'accept: application/json' http://flights-service.liqo-demo:7999/schedule
 ```

docs/features/network-fabric.md

Lines changed: 1 addition & 1 deletion
@@ -15,7 +15,7 @@ The figure below represents at a high level the network fabric established betwe
 The **controller-manager** (not shown in the figure) contains the **control plane** of the Liqo network fabric.
 It runs as a pod (**liqo-controller-manager**) and is responsible for **setting up the network CRDs** during the connection process to a remote cluster.
 This includes the management of potential **network conflicts** through the definition of high-level NAT rules (enforced by the data plane components).
-Specifically, network CRDs are used to handle the [Translation of Pod IPs] (usageReflectionPods) (i.e. during the synchronisation process from the remote to the local cluster), as well as during the [EndpointSlices reflection] (usageReflectionEndpointSlices) (i.e. propagation from the local to the remote cluster).
+Specifically, network CRDs are used to handle the [Translation of Pod IPs](usageReflectionPods) (i.e. during the synchronisation process from the remote to the local cluster), as well as during the [EndpointSlices reflection](usageReflectionEndpointSlices) (i.e. propagation from the local to the remote cluster).
 
 An **IP Address Management (IPAM) plugin** is included in another pod (**liqo-ipam**).
 It exposes an interface that is consumed by the **controller-manager** to handle **IPs acquisitions**.

docs/installation/install.md

Lines changed: 32 additions & 2 deletions
@@ -97,7 +97,7 @@ If the private cluster uses private link, you can set the `--private-link` *liqo
 
 ```{admonition} Virtual Network Resource Group
 By default, it is assumed the Virtual Network Resource for the AKS Subnet is located in the same Resource Group
-as the AKS Resource. If that is not the case, you will need to use the `--vnet-resource-group-name` flag to provide the
+as the AKS Resource. If that is not the case, you will need to use the `--vnet-resource-group-name` flag to provide the
 correct Resource Group name where the Virtual Network Resource is located.
 ```
 ````
@@ -201,7 +201,7 @@ Liqo supports GKE clusters using the default CNI: [Google GKE - VPC-Native](http
 Liqo does NOT support:
 
 * GKE Autopilot Clusters
-* Intranode visibility: make sure this option is disabled or use the `--no-enable-intra-node-visibility` flag.
+* Intranode visibility: make sure this option is disabled or use the `--no-enable-intra-node-visibility` flag.
 * Accessing offloaded pods from NodePort/LoadBalancer services [**only on Dataplane V2**].
 ```
@@ -484,6 +484,36 @@ liqoctl install <provider> --version <commit-sha> --local-chart-path <path-to-lo
 
 (InstallationCNIConfiguration)=
 
+## Check installation
+
+After the installation, you can check the status of, and information about, the current Liqo instance via `liqoctl`:
+
+```bash
+liqoctl info
+```
+
+The info command is a good way to check:
+
+* The health of the installation
+* The current configuration
+* The status of and information about active peerings
+
+By default, the output is presented in a human-readable form.
+However, to simplify automated retrieval of the data, the `-o` option can be used to format the output in **JSON or YAML**.
+Moreover, via the `--get field.subfield` argument, each field of the report can be retrieved individually.
+
+For example:
+
+```{code-block} bash
+:caption: Get the output in JSON format
+liqoctl info -o json
+```
+
+```{code-block} bash
+:caption: Get the podCIDR of the local Liqo instance
+liqoctl info --get network.podcidr
+```
+
 ## CNIs
 
 ### Cilium
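Both options lend themselves to scripting; a minimal sketch combining the `--get` paths documented above (variable names are ours):

```bash
# Sketch: capture report fields for use in shell scripts.
# The --get paths (clusterid, network.podcidr) are the documented ones.
CLUSTER_ID=$(liqoctl info --get clusterid)
POD_CIDR=$(liqoctl info --get network.podcidr)
echo "Cluster ${CLUSTER_ID} uses pod CIDR ${POD_CIDR}"
```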
