Closed
Labels
bug: Report a bug encountered while operating Liqo
Description
Is there an existing issue for this?
- I have searched the existing issues
Version
1.0.0
What happened?
I came up with the following script to automate declarative peering:
#!/usr/bin/env bash
set -exuo pipefail
IFS=$'\n\t'
#---------------------------------------
# Usage & Argument Validation
#---------------------------------------
usage() {
  cat <<EOF
Usage: $0 <provider_cluster> <consumer_cluster>

  provider_cluster   Name of the provider EKS cluster (the name should match a directory here).
  consumer_cluster   Name of the consumer EKS cluster (the name should match a directory here).
EOF
  exit 1
}
[[ $# -eq 2 ]] || usage
provider_cluster=$1
consumer_cluster=$2
#---------------------------------------
# Extract region from cluster name suffix
# - if it ends with e.g. "us-west-2", use that
# - otherwise default to us-east-1
#---------------------------------------
extract_region() {
  local name=$1
  if [[ $name =~ ([a-z]{2}-[a-z]+-[0-9]+)$ ]]; then
    echo "${BASH_REMATCH[1]}"
  else
    echo "us-east-1"
  fi
}
provider_region=$(extract_region "$provider_cluster")
consumer_region=$(extract_region "$consumer_cluster")
provider_context=arn:aws:eks:${provider_region}:${AWS_ACCOUNT_ID_DEV}:cluster/${provider_cluster}
consumer_context=arn:aws:eks:${consumer_region}:${AWS_ACCOUNT_ID_DEV}:cluster/${consumer_cluster}
#---------------------------------------
# Environment Variables
#---------------------------------------
export env=dev
#---------------------------------------
# Main Execution
#---------------------------------------
echo "Updating kubeconfig for consumer cluster"
aws eks update-kubeconfig \
  --region "${consumer_region}" \
  --name "${consumer_cluster}" \
  --alias "${consumer_cluster}" \
  --role-arn "${aws_role_arn}"
consumer_cluster_id=$(kubectl get configmap -n "$sg_cust_cluster_namespace" liqo-clusterid-configmap --template '{{.data.CLUSTER_ID}}')
kubectl create ns "$peer_ns_consumer" --dry-run=client -o yaml | kubectl apply -f -
kubectl label ns "$peer_ns_consumer" liqo.io/tenant-namespace='true'
consumer_cluster_liqo_cidr=$(kubectl get networks.ipam.liqo.io -n "$sg_cust_cluster_namespace" external-cidr -o jsonpath='{.status.cidr}')
consumer_cluster_pod_cidr=$(kubectl get networks.ipam.liqo.io -n "$sg_cust_cluster_namespace" pod-cidr -o jsonpath='{.status.cidr}')
echo "Updating kubeconfig for provider cluster"
aws eks update-kubeconfig \
  --region "${provider_region}" \
  --name "${provider_cluster}" \
  --alias "${provider_cluster}" \
  --role-arn "${aws_role_arn}"
provider_cluster_id=$(kubectl get configmap -n "$sg_cust_cluster_namespace" liqo-clusterid-configmap --template '{{.data.CLUSTER_ID}}')
provider_cluster_liqo_cidr=$(kubectl get networks.ipam.liqo.io -n "$sg_cust_cluster_namespace" external-cidr -o jsonpath='{.status.cidr}')
provider_cluster_pod_cidr=$(kubectl get networks.ipam.liqo.io -n "$sg_cust_cluster_namespace" pod-cidr -o jsonpath='{.status.cidr}')
kubectl create ns "$peer_ns_provider" --dry-run=client -o yaml | kubectl apply -f -
kubectl label ns "$peer_ns_provider" liqo.io/tenant-namespace='true'
kubectl label ns "$peer_ns_provider" liqo.io/remote-cluster-id="$consumer_cluster_id"
kubectl apply -n "$peer_ns_provider" -f - <<EOF
apiVersion: networking.liqo.io/v1beta1
kind: Configuration
metadata:
  labels:
    liqo.io/remote-cluster-id: $consumer_cluster_id
  name: $peer_ns_provider
spec:
  remote:
    cidr:
      external: ["$consumer_cluster_liqo_cidr"]
      pod: ["$consumer_cluster_pod_cidr"]
EOF
kubectl config use-context "$consumer_cluster"
kubectl label ns "$peer_ns_consumer" liqo.io/remote-cluster-id="$provider_cluster_id"
kubectl apply -n "$peer_ns_consumer" -f - <<EOF
apiVersion: networking.liqo.io/v1beta1
kind: Configuration
metadata:
  labels:
    liqo.io/remote-cluster-id: $provider_cluster_id
  name: $peer_ns_consumer
spec:
  remote:
    cidr:
      external: ["$provider_cluster_liqo_cidr"]
      pod: ["$provider_cluster_pod_cidr"]
EOF
# Generate WireGuard key pairs: the raw X25519 key material is the last
# 32 bytes of the DER encoding.
openssl genpkey -algorithm X25519 -outform der -out private_consumer.der
openssl pkey -inform der -in private_consumer.der -pubout -outform der -out public_consumer.der
wg_private_key_consumer=$(tail -c 32 private_consumer.der | base64)
wg_public_key_consumer=$(tail -c 32 public_consumer.der | base64)
openssl genpkey -algorithm X25519 -outform der -out private_provider.der
openssl pkey -inform der -in private_provider.der -pubout -outform der -out public_provider.der
wg_private_key_provider=$(tail -c 32 private_provider.der | base64)
wg_public_key_provider=$(tail -c 32 public_provider.der | base64)
kubectl apply -n "$peer_ns_consumer" -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  labels:
    liqo.io/remote-cluster-id: $provider_cluster_id
  name: gw-keys
type: Opaque
data:
  privateKey: $wg_private_key_consumer
  publicKey: $wg_public_key_consumer
EOF
kubectl apply -n "$peer_ns_consumer" -f - <<EOF
apiVersion: networking.liqo.io/v1beta1
kind: GatewayClient
metadata:
  creationTimestamp: null
  labels:
    liqo.io/remote-cluster-id: $provider_cluster_id
  name: client
spec:
  clientTemplateRef:
    apiVersion: networking.liqo.io/v1beta1
    kind: WgGatewayClientTemplate
    name: wireguard-client
    namespace: $sg_cust_cluster_namespace
  secretRef:
    name: gw-keys
  endpoint:
    addresses:
    - <REMOTE_IP>
    port: 30742
    protocol: UDP
  mtu: 1340
EOF
kubectl apply -n "$peer_ns_consumer" -f - <<EOF
apiVersion: networking.liqo.io/v1beta1
kind: PublicKey
metadata:
  labels:
    liqo.io/remote-cluster-id: $provider_cluster_id
    networking.liqo.io/gateway-resource: "true"
  name: gw-publickey
spec:
  publicKey: $wg_public_key_provider
EOF
kubectl config use-context "$provider_cluster"
kubectl apply -n "$peer_ns_provider" -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  labels:
    liqo.io/remote-cluster-id: $consumer_cluster_id
  name: gw-keys
type: Opaque
data:
  privateKey: $wg_private_key_provider
  publicKey: $wg_public_key_provider
EOF
kubectl apply -n "$peer_ns_provider" -f - <<EOF
apiVersion: networking.liqo.io/v1beta1
kind: GatewayServer
metadata:
  labels:
    liqo.io/remote-cluster-id: $consumer_cluster_id
  name: server
spec:
  endpoint:
    port: 51840
    serviceType: LoadBalancer
  mtu: 1340
  secretRef:
    name: gw-keys
  serverTemplateRef:
    apiVersion: networking.liqo.io/v1beta1
    kind: WgGatewayServerTemplate
    name: wireguard-server
    namespace: $sg_cust_cluster_namespace
EOF
kubectl apply -n "$peer_ns_provider" -f - <<EOF
apiVersion: networking.liqo.io/v1beta1
kind: PublicKey
metadata:
  labels:
    liqo.io/remote-cluster-id: $consumer_cluster_id
    networking.liqo.io/gateway-resource: "true"
  name: gw-publickey
spec:
  publicKey: $wg_public_key_consumer
EOF
kubectl apply -n "$peer_ns_provider" -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    liqo.io/remote-cluster-id: $consumer_cluster_id
  name: liqo-binding-liqo-remote-controlplane
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: liqo-remote-controlplane
subjects:
- kind: User
  name: $iam_user
  apiGroup: rbac.authorization.k8s.io
EOF
kubectl apply -n "$peer_ns_provider" -f - <<EOF
apiVersion: authentication.liqo.io/v1beta1
kind: Tenant
metadata:
  labels:
    liqo.io/remote-cluster-id: $consumer_cluster_id
  name: $peer_ns_provider
spec:
  clusterID: $consumer_cluster_id
  authzPolicy: TolerateNoHandshake
  tenantCondition: Active
EOF
TOKEN=$(aws eks get-token \
  --cluster-name "$provider_cluster" \
  --query 'status.token' --output text)
kubectl config set-credentials "$iam_user" --token="$TOKEN"
kubectl config set-context "$iam_user" \
  --cluster="$provider_context" \
  --user="$iam_user"
kubectl config use-context "$iam_user"
kubectl auth can-i get resourceslice -n "$peer_ns_consumer"
kubecfg=$(kubectl config view --raw --minify | base64 | tr -d '\r\n')
kubectl config use-context "$consumer_cluster"
kubectl apply -n "$peer_ns_consumer" -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  labels:
    liqo.io/identity-type: ControlPlane
    liqo.io/remote-cluster-id: $provider_cluster_id
  annotations:
    liqo.io/remote-tenant-namespace: $peer_ns_consumer
  name: cplane-secret
data:
  kubeconfig: ${kubecfg}
EOF
kubectl apply -n "$peer_ns_consumer" -f - <<EOF
apiVersion: authentication.liqo.io/v1beta1
kind: ResourceSlice
metadata:
  annotations:
    liqo.io/create-virtual-node: "true"
  creationTimestamp: null
  labels:
    liqo.io/remote-cluster-id: $provider_cluster_id
    liqo.io/remoteID: $provider_cluster_id
  name: test
spec:
  class: default
  providerClusterID: $provider_cluster_id
  resources:
    cpu: 16
    memory: 56Gi
EOF
The CRD-replicator pod logs show the same output the docs exemplify, but the virtual node is never created and the ResourceSlice CR status is never updated. What am I missing?
Also, unrelated: it would be great if you could suggest a better way to embed an EKS token that does not expire into the kubeconfig stored on the consumer cluster, since the token returned by `aws eks get-token` is short-lived and the secret stops working about 15 minutes after the script runs.
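For reference, the alternative I considered was generating the kubeconfig with an exec credential plugin instead of a static token, so a fresh token is minted on every request. A sketch of the `users` entry (the user name and `<PROVIDER_CLUSTER_NAME>` are placeholders); this only works where the `aws` CLI and IAM credentials are available, which I don't believe is the case inside the Liqo pods consuming the secret:

```yaml
# Hypothetical kubeconfig fragment: exec credential plugin in place of a
# static token; re-runs `aws eks get-token` whenever a credential is needed.
users:
- name: liqo-remote-controlplane   # placeholder user name
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: aws
      args:
      - eks
      - get-token
      - --cluster-name
      - <PROVIDER_CLUSTER_NAME>    # placeholder
```

So a pointer to a supported pattern for long-lived credentials here would be appreciated.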
Relevant log output
How can we reproduce the issue?
- Run the provided script, passing the two cluster names as arguments and setting the AWS_ACCOUNT_ID_DEV, aws_role_arn, sg_cust_cluster_namespace, iam_user, peer_ns_consumer, and peer_ns_provider environment variables; the IAM user being used must have access to both EKS clusters.
Provider or distribution
EKS
CNI version
No response
Kernel Version
No response
Kubernetes Version
1.31
Code of Conduct
- I agree to follow this project's Code of Conduct