49 changes: 31 additions & 18 deletions docs/installation/install.md
@@ -614,13 +614,13 @@ Those fields and labels have the following meaning:

### Cilium

Liqo creates a new node for each remote cluster; however, it does not schedule any daemonset on these nodes.

Starting from version **1.14.2**, Cilium adds a taint to the nodes where its daemonset is not scheduled, so that pods are not scheduled on them.
This taint also prevents Liqo pods from being scheduled on the remote nodes.

To solve this issue, you need to configure the Cilium daemonsets to ignore the Liqo nodes.
This can be done by adding the following Helm values to the Cilium installation:

```yaml
affinity:
@@ -634,28 +634,41 @@ affinity:

#### Device Configuration

In some configurations, Cilium automatically detects and manages all the network interfaces of a node (see the reasons listed below).
However, this conflicts with the interfaces created by Liqo (e.g., `liqo.*`), which must be kept out of Cilium's visibility.

To prevent this, use the `devices` parameter in the Cilium Helm values to explicitly configure which network interfaces are visible to Cilium.

For more details about the `devices` parameter, refer to the [Cilium Helm Reference](https://github.com/cilium/cilium/blob/v1.18.4/install/kubernetes/cilium/values.yaml#L854).
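
For example, a minimal sketch of the corresponding Helm values, assuming the nodes' primary interface is named `eth0` (Cilium also accepts a trailing `+` as a prefix wildcard, e.g. `eth+`):

```yaml
# Only the interfaces listed here are visible to Cilium;
# the liqo.* interfaces are intentionally left out.
devices:
  - eth0   # assumption: the nodes' primary/uplink interface
```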

Below we list some of the reasons why Cilium's network interface autodiscovery conflicts with Liqo.

<!-- markdownlint-disable MD036 -->
**Kube-proxy Replacement and eBPF Features**

When using **kube-proxy replacement** or other **advanced Cilium eBPF features** (such as eBPF-based host routing, host firewall, or BPF masquerading), Cilium automatically attaches eBPF programs to all detected network interfaces.

If the `devices` parameter is not set, Cilium will attach eBPF programs to Liqo interfaces as well.
This can cause packet drops or unexpected behavior, as Cilium's eBPF programs will intercept traffic before it reaches the kernel's network stack where Liqo expects to handle it.

To avoid this, explicitly specify the devices Cilium should manage, excluding Liqo interfaces.
This ensures that Cilium eBPF programs (for NodePort, masquerading, and host firewall) are only attached to the specified devices, leaving Liqo interfaces free to handle cross-cluster traffic.
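
As a quick sanity check (a sketch; the exact output format varies across Cilium versions), you can inspect which devices the agent actually selected:

```bash
# List the devices the Cilium agent attached to: once `devices` is set,
# the liqo.* interfaces should no longer appear here.
kubectl -n kube-system exec ds/cilium -c cilium-agent -- \
  cilium status --verbose | grep -i device
```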

**MTU Autodiscovery**

<!-- markdownlint-enable MD036 -->
When using **MTU autodiscovery** (i.e., `mtu: 0` in the Cilium Helm values), Cilium probes all detected interfaces to determine the MTU value to use for pod networking.

If the `devices` parameter is not set, Cilium may detect Liqo interfaces and use their MTU value.
Since Liqo interfaces may have a different MTU than your primary network interfaces, this can result in an incorrect MTU being applied to all pods in the cluster.

To avoid this, explicitly specify the devices Cilium should use for MTU detection, excluding Liqo interfaces.
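
As an illustration (a sketch; interface names depend on your environment and on the Cilium datapath mode), you can compare the MTUs of the node interfaces with the one Cilium selected:

```bash
# Interface MTUs on a node: the liqo.* tunnel interfaces typically carry a
# lower MTU than the primary NIC and can skew the autodiscovery.
ip -o link show | awk '{print $2, $4, $5}'

# MTU Cilium selected for pod networking (the cilium_host interface name is
# an assumption; it may differ depending on the datapath configuration).
kubectl -n kube-system exec ds/cilium -c cilium-agent -- ip link show cilium_host
```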

#### Kube-proxy Replacement Limitations

Liqo networking presents a limitation when used with Cilium's *kube-proxy replacement*.
In particular, you will not be able to expose an **offloaded pod** with a *NodePort* or *LoadBalancer* service.
This does not limit the ability to expose **non-offloaded pods** as you would normally do.

```{admonition} Note
Please consider that in Kubernetes multi-cluster environments, the use of *NodePort* and *LoadBalancer*
28 changes: 28 additions & 0 deletions docs/usage/peer.md
@@ -117,6 +117,34 @@ You should see the following output:
INFO (local) ResourceSlice resources: Accepted
```

#### MTU

Liqo uses a **WireGuard tunnel** to securely connect two clusters.
The tunnel introduces additional overhead due to encapsulation, which reduces the effective Maximum Transmission Unit (MTU) available for application traffic.
You can configure the MTU used by the Liqo network interfaces that handle the WireGuard tunnel with the `--mtu` flag in the `liqoctl peer` command.

If not specified, Liqo uses a **default MTU of 1340 bytes**, which is a conservative value designed to work in most network environments, including those with additional encapsulation (e.g., cloud providers, VPNs).

The MTU value should be set to the **(physical) link MTU minus the WireGuard overhead** (typically **60 bytes** for IPv4 or **80 bytes** for IPv6):

```text
WireGuard MTU = Link MTU - WireGuard Overhead
```

**Example**: If the network link between the two clusters has an MTU of **1500 bytes** (standard Ethernet), the WireGuard MTU should be:

- **IPv4**: `1500 - 60 = 1440`
- **IPv6**: `1500 - 80 = 1420`

To set the MTU when setting up a new peering:

```bash
liqoctl peer \
--kubeconfig=$CONSUMER_KUBECONFIG_PATH \
--remote-kubeconfig $PROVIDER_KUBECONFIG_PATH \
--mtu 1440
```
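
To double-check that the chosen value fits the actual path between the clusters, a simple test (a sketch, assuming Linux `ping`, IPv4, and a reachable remote pod at the hypothetical address `10.81.0.10`) is to send non-fragmentable packets of the corresponding size:

```bash
# 1412 bytes of ICMP payload + 8 bytes ICMP header + 20 bytes IPv4 header = 1440 bytes.
# If these pings succeed without fragmentation errors, an MTU of 1440 is safe.
ping -M do -s 1412 -c 3 10.81.0.10
```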

(UsagePeeringInBand)=

### In-Band
Expand Down