diff --git a/gloo-mesh/core/2-5/default/README.md b/gloo-mesh/core/2-5/default/README.md
index adf36f3a52..252b8370a8 100644
--- a/gloo-mesh/core/2-5/default/README.md
+++ b/gloo-mesh/core/2-5/default/README.md
@@ -15,7 +15,7 @@ source ./scripts/assert.sh
 ## Table of Contents
 
 * [Introduction](#introduction)
-* [Lab 1 - Deploy KinD clusters](#lab-1---deploy-kind-clusters-)
+* [Lab 1 - Deploy KinD Cluster(s)](#lab-1---deploy-kind-clusters-)
 * [Lab 2 - Deploy and register Gloo Mesh](#lab-2---deploy-and-register-gloo-mesh-)
 * [Lab 3 - Deploy Istio using Gloo Mesh Lifecycle Manager](#lab-3---deploy-istio-using-gloo-mesh-lifecycle-manager-)
 * [Lab 4 - Deploy the Bookinfo demo app](#lab-4---deploy-the-bookinfo-demo-app-)
@@ -68,7 +68,7 @@ You can find more information about Gloo Mesh Core in the official documentation
 
 
 
-## Lab 1 - Deploy KinD clusters
+## Lab 1 - Deploy KinD Cluster(s)
 
 Clone this repository and go to the directory where this `README.md` file is.
 
@@ -81,14 +81,13 @@ export CLUSTER1=cluster1
 export CLUSTER2=cluster2
 ```
 
-Run the following commands to deploy three Kubernetes clusters using [Kind](https://kind.sigs.k8s.io/):
+Deploy the KinD clusters:
 
 ```bash
-./scripts/deploy-aws.sh 1 mgmt
-./scripts/deploy-aws.sh 2 cluster1 us-west us-west-1
-./scripts/deploy-aws.sh 3 cluster2 us-west us-west-2
+bash ./data/steps/deploy-kind-clusters/deploy-mgmt.sh
+bash ./data/steps/deploy-kind-clusters/deploy-cluster1.sh
+bash ./data/steps/deploy-kind-clusters/deploy-cluster2.sh
 ```
-
 Then run the following commands to wait for all the Pods to be ready:
 
 ```bash
@@ -99,27 +98,8 @@ Then run the following commands to wait for all the Pods to be ready:
 
 **Note:** If you run the `check.sh` script immediately after the `deploy.sh` script, you may see a jsonpath error. If that happens, simply wait a few seconds and try again.
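The "wait a few seconds and try again" advice above is the generic wait-until-ready pattern that `check.sh` relies on. A minimal sketch of such a retry loop (`retry_until` is a hypothetical helper for illustration, not part of the workshop scripts):

```shell
#!/usr/bin/env bash
# retry_until <attempts> <command...>: re-run a command until it succeeds,
# sleeping one second between attempts; returns non-zero if it never does.
# Hypothetical illustration of the wait loop pattern used by check.sh-style scripts.
retry_until() {
  local attempts=$1
  shift
  local i
  for ((i = 1; i <= attempts; i++)); do
    if "$@"; then
      return 0
    fi
    sleep 1
  done
  return 1
}

# Example against a live cluster (assumes kubectl access):
#   retry_until 60 kubectl get nodes
```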
-Once the `check.sh` script completes, when you execute the `kubectl get pods -A` command, you should see the following: - -``` -NAMESPACE NAME READY STATUS RESTARTS AGE -kube-system calico-kube-controllers-59d85c5c84-sbk4k 1/1 Running 0 4h26m -kube-system calico-node-przxs 1/1 Running 0 4h26m -kube-system coredns-6955765f44-ln8f5 1/1 Running 0 4h26m -kube-system coredns-6955765f44-s7xxx 1/1 Running 0 4h26m -kube-system etcd-cluster1-control-plane 1/1 Running 0 4h27m -kube-system kube-apiserver-cluster1-control-plane 1/1 Running 0 4h27m -kube-system kube-controller-manager-cluster1-control-plane1/1 Running 0 4h27m -kube-system kube-proxy-ksvzw 1/1 Running 0 4h26m -kube-system kube-scheduler-cluster1-control-plane 1/1 Running 0 4h27m -local-path-storage local-path-provisioner-58f6947c7-lfmdx 1/1 Running 0 4h26m -metallb-system controller-5c9894b5cd-cn9x2 1/1 Running 0 4h26m -metallb-system speaker-d7jkp 1/1 Running 0 4h26m -``` - -**Note:** The CNI pods might be different, depending on which CNI you have deployed. - -You can see that your currently connected to this cluster by executing the `kubectl config get-contexts` command: +Once the `check.sh` script completes, execute the `kubectl get pods -A` command, and verify that all pods are in a running state. 
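To make that "all pods running" check scriptable rather than visual, you can count pods whose STATUS column is neither `Running` nor `Completed`. A small sketch (`not_ready_count` is a hypothetical helper; it assumes `kubectl get pods -A --no-headers` output on stdin, where STATUS is the fourth column):

```shell
#!/usr/bin/env bash
# not_ready_count: read `kubectl get pods -A --no-headers` output on stdin and
# print how many pods report a STATUS (column 4) other than Running/Completed.
not_ready_count() {
  awk '$4 != "Running" && $4 != "Completed" { n++ } END { print n + 0 }'
}

# Against a live cluster (assumes kubectl access):
#   kubectl get pods -A --no-headers | not_ready_count
```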
+ You can see that you're currently connected to this cluster by executing the `kubectl config get-contexts` command:
 
 ```
 CURRENT NAME CLUSTER AUTHINFO NAMESPACE
@@ -138,7 +118,8 @@ cat <<'EOF' > ./test.js
 const helpers = require('./tests/chai-exec');
 
 describe("Clusters are healthy", () => {
-  const clusters = [process.env.MGMT, process.env.CLUSTER1, process.env.CLUSTER2];
+  const clusters = ["mgmt", "cluster1", "cluster2"];
+
   clusters.forEach(cluster => {
     it(`Cluster ${cluster} is healthy`, () => helpers.k8sObjectIsPresent({ context: cluster, namespace: "default", k8sType: "service", k8sObj: "kubernetes" }));
   });
@@ -150,6 +131,7 @@ timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail ||
 
 
+
 ## Lab 2 - Deploy and register Gloo Mesh
 [VIDEO LINK](https://youtu.be/djfFiepK4GY "Video Link")
@@ -190,6 +172,7 @@ EOF
 echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-and-register-gloo-mesh/tests/environment-variables.test.js.liquid"
 timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
 -->
+
 Run the following commands to deploy the Gloo Mesh management plane:
 
 ```bash
@@ -490,6 +473,7 @@ timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail ||
 
+
 ## Lab 3 - Deploy Istio using Gloo Mesh Lifecycle Manager
 [VIDEO LINK](https://youtu.be/f76-KOEjqHs "Video Link")
diff --git a/gloo-mesh/core/2-5/default/data/steps/deploy-kind-clusters/deploy-cluster1.sh b/gloo-mesh/core/2-5/default/data/steps/deploy-kind-clusters/deploy-cluster1.sh
new file mode 100644
index 0000000000..31b0806b9b
--- /dev/null
+++ b/gloo-mesh/core/2-5/default/data/steps/deploy-kind-clusters/deploy-cluster1.sh
@@ -0,0 +1,289 @@
+#!/usr/bin/env bash
+set -o errexit
+
+number="2"
+name="cluster1"
+region=""
+zone=""
+twodigits=$(printf "%02d\n" $number)
+
+kindest_node=${KINDEST_NODE}
+
+if [ -z "$kindest_node" ]; then
+  export k8s_version="1.28.0"
+
+  [[ 
${k8s_version::1} != 'v' ]] && export k8s_version=v${k8s_version} + kindest_node_ver=$(curl --silent "https://registry.hub.docker.com/v2/repositories/kindest/node/tags?page_size=100" \ + | jq -r '.results | .[] | select(.name==env.k8s_version) | .name+"@"+.digest') + + if [ -z "$kindest_node_ver" ]; then + echo "Incorrect Kubernetes version provided: ${k8s_version}." + exit 1 + fi + kindest_node=kindest/node:${kindest_node_ver} +fi +echo "Using KinD image: ${kindest_node}" + +if [ -z "$3" ]; then + case $name in + cluster1) + region=us-west-1 + ;; + cluster2) + region=us-west-2 + ;; + *) + region=us-east-1 + ;; + esac +fi + +if [ -z "$4" ]; then + case $name in + cluster1) + zone=us-west-1a + ;; + cluster2) + zone=us-west-2a + ;; + *) + zone=us-east-1a + ;; + esac +fi + +if hostname -I 2>/dev/null; then + myip=$(hostname -I | awk '{ print $1 }') +else + myip=$(ipconfig getifaddr en0) +fi + +# Function to determine the next available cluster number +get_next_cluster_number() { + if ! kind get clusters 2>&1 | grep "^kind" > /dev/null; then + echo 1 + else + highest_num=$(kind get clusters | grep "^kind" | tail -1 | cut -c 5-) + echo $((highest_num + 1)) + fi +} + +if [ -f /.dockerenv ]; then +myip=$HOST_IP +container=$(docker inspect $(docker ps -q) | jq -r ".[] | select(.Config.Hostname == \"$HOSTNAME\") | .Name" | cut -d/ -f2) +docker network connect "kind" $container || true +number=$(get_next_cluster_number) +twodigits=$(printf "%02d\n" $number) +fi + +reg_name='kind-registry' +reg_port='5000' +docker start "${reg_name}" 2>/dev/null || \ +docker run -d --restart=always -p "0.0.0.0:${reg_port}:5000" --name "${reg_name}" registry:2 + +cache_port='5000' +cat > registries < ${HOME}/.${cache_name}-config.yml </dev/null || \ +docker run -d --restart=always ${DEPLOY_EXTRA_PARAMS} -v ${HOME}/.${cache_name}-config.yml:/etc/docker/registry/config.yml --name "${cache_name}" registry:2 +done +mkdir -p /tmp/oidc + +cat <<'EOF' >/tmp/oidc/sa-signer-pkcs8.pub +-----BEGIN PUBLIC 
KEY----- +MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA53YiBcrn7+ZK0Vb4odeA +1riYdvEb8To4H6/HtF+OKzuCIXFQ+bRy7yMrDGITYpfYPrTZOgfdeTLZqOiAj+cL +395nvxdly83SUrdh7ItfOPRluuuiPHnFn111wpyjBw5nut4Kx+M5MksNfA1hU0Zw +zIM9OviX8iEF8xHWUtz4BAMDG8N6+zpLo0pAzaei5hKuLZ9dZOzHBC8VOW82cQMm +5X5uOKsCHMtNSjqYUNB1DxN6xxM+odGWT/6xthPGk6YCxmO28YHPFZfiS2eAIpD8 +2p/16KQKU6TkZSrldkYxiHIPhu+5f9faZJG7dB9pLN1SfdTBio4PK5Mz9muLUCv9 +ywIDAQAB +-----END PUBLIC KEY----- +EOF + +cat <<'EOF' >/tmp/oidc/sa-signer.key +-----BEGIN RSA PRIVATE KEY----- +MIIEpAIBAAKCAQEA53YiBcrn7+ZK0Vb4odeA1riYdvEb8To4H6/HtF+OKzuCIXFQ ++bRy7yMrDGITYpfYPrTZOgfdeTLZqOiAj+cL395nvxdly83SUrdh7ItfOPRluuui +PHnFn111wpyjBw5nut4Kx+M5MksNfA1hU0ZwzIM9OviX8iEF8xHWUtz4BAMDG8N6 ++zpLo0pAzaei5hKuLZ9dZOzHBC8VOW82cQMm5X5uOKsCHMtNSjqYUNB1DxN6xxM+ +odGWT/6xthPGk6YCxmO28YHPFZfiS2eAIpD82p/16KQKU6TkZSrldkYxiHIPhu+5 +f9faZJG7dB9pLN1SfdTBio4PK5Mz9muLUCv9ywIDAQABAoIBAB8tro+RMYUDRHjG +el9ypAxIeWEsQVNRQFYkW4ZUiNYSAgl3Ni0svX6xAg989peFVL+9pLVIcfDthJxY +FVlNCjBxyQ/YmwHFC9vQkARJEd6eLUXsj8INtS0ubbp1VxCQRDDL0C/0z7OSoJJh +SwboqjEiTJExA2a+RArmEDTBRzdi3t+kT8G23JcqOivrITt17K6bQYyJXw7/vUdc +r/R+hfd5TqVq92VddzDT7RNJAxsbPPXjGnESlq1GALBDs+uBGYsP0fiEJb2nicSv +z9fBnBeERhut1gcE0C0iLRQZb+3r8TitBtxrZv+0BHgXrkKtXDwWTqGEKOwC4dBn +7nxkH2ECgYEA6+/DOTABGYOWOQftFkJMjcugzDrjoGpuXuVOTb65T+3FHAzU93zy +3bt3wQxrlugluyy9Sc/PL3ck2LgUsPHZ+s7zsdGvvGALBD6bOSSKATz9JgjwifO8 +PgqUz1kXRwez2CtKLOOCFFtcIzEdWIzsa1ubNqLzgN7rD+XBkUc2uEcCgYEA+yTy +72EDMQVoIZOygytHsDNdy0iS2RsBbdurT27wkYuFpFUVWdbNSL+8haE+wJHseHcw +BD4WIMpU+hnS4p4OO8+6V7PiXOS5E/se91EJigZAoixgDUiC8ihojWgK9PYEavUo +hULWbayO59SxYWeUI4Ze0GP8Jw8vdB86ib4ulF0CgYEAgyzRuLjk05+iZODwQyDn +WSquov3W0rh51s7cw0LX2wWSQm8r9NGGYhs5kJ5sLwGxAKj2MNSWF4jBdrCZ6Gr+ +y4BGY0X209/+IAUC3jlfdSLIiF4OBlT6AvB1HfclhvtUVUp0OhLfnpvQ1UwYScRI +KcRLvovIoIzP2g3emfwjAz8CgYEAxUHhOhm1mwRHJNBQTuxok0HVMrze8n1eov39 +0RcvBvJSVp+pdHXdqX1HwqHCmxhCZuAeq8ZkNP8WvZYY6HwCbAIdt5MHgbT4lXQR +f2l8F5gPnhFCpExG5ZLNg/urV3oAQE4stHap21zEpdyOMhZb6Yc5424U+EzaFdgN 
+b3EcPtUCgYAkKvUlSnBbgiJz1iaN6fuTqH0efavuFGMhjNmG7GtpNXdgyl1OWIuc +Yu+tZtHXtKYf3B99GwPrFzw/7yfDwae5YeWmi2/pFTH96wv3brJBqkAWY8G5Rsmd +qF50p34vIFqUBniNRwSArx8t2dq/CuAMgLAtSjh70Q6ZAnCF85PD8Q== +-----END RSA PRIVATE KEY----- +EOF + +echo Contents of kind${number}.yaml +cat << EOF | tee kind${number}.yaml +kind: Cluster +apiVersion: kind.x-k8s.io/v1alpha4 +nodes: +- role: control-plane + image: ${kindest_node} + extraPortMappings: + - containerPort: 6443 + hostPort: 70${twodigits} + extraMounts: + - containerPath: /etc/kubernetes/oidc + hostPath: /tmp/oidc + labels: + ingress-ready: true + topology.kubernetes.io/region: ${region} + topology.kubernetes.io/zone: ${zone} +networking: + serviceSubnet: "10.$(echo $twodigits | sed 's/^0*//').0.0/16" + podSubnet: "10.1${twodigits}.0.0/16" +kubeadmConfigPatches: +- | + kind: ClusterConfiguration + apiServer: + extraArgs: + service-account-key-file: /etc/kubernetes/pki/sa.pub + service-account-key-file: /etc/kubernetes/oidc/sa-signer-pkcs8.pub + service-account-signing-key-file: /etc/kubernetes/oidc/sa-signer.key + service-account-issuer: https://solo-workshop-oidc.s3.us-east-1.amazonaws.com + api-audiences: sts.amazonaws.com + extraVolumes: + - name: oidc + hostPath: /etc/kubernetes/oidc + mountPath: /etc/kubernetes/oidc + readOnly: true + metadata: + name: config +containerdConfigPatches: +- |- + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:${reg_port}"] + endpoint = ["http://${reg_name}:${reg_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"] + endpoint = ["http://docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-docker.pkg.dev"] + endpoint = ["http://us-docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-central1-docker.pkg.dev"] + endpoint = ["http://us-central1-docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."quay.io"] + endpoint = ["http://quay:${cache_port}"] + 
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"] + endpoint = ["http://gcr:${cache_port}"] +EOF +echo ----------------------------------------------------- + +kind create cluster --name kind${number} --config kind${number}.yaml +ipkind=$(docker inspect kind${number}-control-plane | jq -r '.[0].NetworkSettings.Networks[].IPAddress') +networkkind=$(echo ${ipkind} | awk -F. '{ print $1"."$2 }') +kubectl config set-cluster kind-kind${number} --server=https://${myip}:70${twodigits} --insecure-skip-tls-verify=true + +# Preload images +cat << EOF >> images.txt +quay.io/metallb/controller:v0.13.12 +quay.io/metallb/speaker:v0.13.12 +EOF +cat images.txt | while read image; do + docker pull $image || true + kind load docker-image $image --name kind${number} || true +done + +docker network connect "kind" "${reg_name}" || true +docker network connect "kind" docker || true +docker network connect "kind" us-docker || true +docker network connect "kind" us-central1-docker || true +docker network connect "kind" quay || true +docker network connect "kind" gcr || true + +for i in 1 2 3 4 5; do kubectl --context=kind-kind${number} apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml && break || sleep 15; done +kubectl --context=kind-kind${number} create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)" +kubectl --context=kind-kind${number} -n metallb-system rollout status deploy controller || true + +cat << EOF | tee metallb${number}.yaml +apiVersion: metallb.io/v1beta1 +kind: IPAddressPool +metadata: + name: first-pool + namespace: metallb-system +spec: + addresses: + - ${networkkind}.1${twodigits}.1-${networkkind}.1${twodigits}.254 +--- +apiVersion: metallb.io/v1beta1 +kind: L2Advertisement +metadata: + name: empty + namespace: metallb-system +EOF + +printf "Create IPAddressPool in kind-kind${number}\n" +for i in {1..10}; do +kubectl --context=kind-kind${number} apply 
-f metallb${number}.yaml && break +sleep 2 +done + +# connect the registry to the cluster network if not already connected +printf "Renaming context kind-kind${number} to ${name}\n" +for i in {1..100}; do + (kubectl config get-contexts -oname | grep ${name}) && break + kubectl config rename-context kind-kind${number} ${name} && break + printf " $i"/100 + sleep 2 + [ $i -lt 100 ] || exit 1 +done + +# Document the local registry +# https://github.com/kubernetes/enhancements/tree/master/keps/sig-cluster-lifecycle/generic/1755-communicating-a-local-registry +cat </dev/null; then + myip=$(hostname -I | awk '{ print $1 }') +else + myip=$(ipconfig getifaddr en0) +fi + +# Function to determine the next available cluster number +get_next_cluster_number() { + if ! kind get clusters 2>&1 | grep "^kind" > /dev/null; then + echo 1 + else + highest_num=$(kind get clusters | grep "^kind" | tail -1 | cut -c 5-) + echo $((highest_num + 1)) + fi +} + +if [ -f /.dockerenv ]; then +myip=$HOST_IP +container=$(docker inspect $(docker ps -q) | jq -r ".[] | select(.Config.Hostname == \"$HOSTNAME\") | .Name" | cut -d/ -f2) +docker network connect "kind" $container || true +number=$(get_next_cluster_number) +twodigits=$(printf "%02d\n" $number) +fi + +reg_name='kind-registry' +reg_port='5000' +docker start "${reg_name}" 2>/dev/null || \ +docker run -d --restart=always -p "0.0.0.0:${reg_port}:5000" --name "${reg_name}" registry:2 + +cache_port='5000' +cat > registries < ${HOME}/.${cache_name}-config.yml </dev/null || \ +docker run -d --restart=always ${DEPLOY_EXTRA_PARAMS} -v ${HOME}/.${cache_name}-config.yml:/etc/docker/registry/config.yml --name "${cache_name}" registry:2 +done +mkdir -p /tmp/oidc + +cat <<'EOF' >/tmp/oidc/sa-signer-pkcs8.pub +-----BEGIN PUBLIC KEY----- +MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA53YiBcrn7+ZK0Vb4odeA +1riYdvEb8To4H6/HtF+OKzuCIXFQ+bRy7yMrDGITYpfYPrTZOgfdeTLZqOiAj+cL +395nvxdly83SUrdh7ItfOPRluuuiPHnFn111wpyjBw5nut4Kx+M5MksNfA1hU0Zw 
+zIM9OviX8iEF8xHWUtz4BAMDG8N6+zpLo0pAzaei5hKuLZ9dZOzHBC8VOW82cQMm +5X5uOKsCHMtNSjqYUNB1DxN6xxM+odGWT/6xthPGk6YCxmO28YHPFZfiS2eAIpD8 +2p/16KQKU6TkZSrldkYxiHIPhu+5f9faZJG7dB9pLN1SfdTBio4PK5Mz9muLUCv9 +ywIDAQAB +-----END PUBLIC KEY----- +EOF + +cat <<'EOF' >/tmp/oidc/sa-signer.key +-----BEGIN RSA PRIVATE KEY----- +MIIEpAIBAAKCAQEA53YiBcrn7+ZK0Vb4odeA1riYdvEb8To4H6/HtF+OKzuCIXFQ ++bRy7yMrDGITYpfYPrTZOgfdeTLZqOiAj+cL395nvxdly83SUrdh7ItfOPRluuui +PHnFn111wpyjBw5nut4Kx+M5MksNfA1hU0ZwzIM9OviX8iEF8xHWUtz4BAMDG8N6 ++zpLo0pAzaei5hKuLZ9dZOzHBC8VOW82cQMm5X5uOKsCHMtNSjqYUNB1DxN6xxM+ +odGWT/6xthPGk6YCxmO28YHPFZfiS2eAIpD82p/16KQKU6TkZSrldkYxiHIPhu+5 +f9faZJG7dB9pLN1SfdTBio4PK5Mz9muLUCv9ywIDAQABAoIBAB8tro+RMYUDRHjG +el9ypAxIeWEsQVNRQFYkW4ZUiNYSAgl3Ni0svX6xAg989peFVL+9pLVIcfDthJxY +FVlNCjBxyQ/YmwHFC9vQkARJEd6eLUXsj8INtS0ubbp1VxCQRDDL0C/0z7OSoJJh +SwboqjEiTJExA2a+RArmEDTBRzdi3t+kT8G23JcqOivrITt17K6bQYyJXw7/vUdc +r/R+hfd5TqVq92VddzDT7RNJAxsbPPXjGnESlq1GALBDs+uBGYsP0fiEJb2nicSv +z9fBnBeERhut1gcE0C0iLRQZb+3r8TitBtxrZv+0BHgXrkKtXDwWTqGEKOwC4dBn +7nxkH2ECgYEA6+/DOTABGYOWOQftFkJMjcugzDrjoGpuXuVOTb65T+3FHAzU93zy +3bt3wQxrlugluyy9Sc/PL3ck2LgUsPHZ+s7zsdGvvGALBD6bOSSKATz9JgjwifO8 +PgqUz1kXRwez2CtKLOOCFFtcIzEdWIzsa1ubNqLzgN7rD+XBkUc2uEcCgYEA+yTy +72EDMQVoIZOygytHsDNdy0iS2RsBbdurT27wkYuFpFUVWdbNSL+8haE+wJHseHcw +BD4WIMpU+hnS4p4OO8+6V7PiXOS5E/se91EJigZAoixgDUiC8ihojWgK9PYEavUo +hULWbayO59SxYWeUI4Ze0GP8Jw8vdB86ib4ulF0CgYEAgyzRuLjk05+iZODwQyDn +WSquov3W0rh51s7cw0LX2wWSQm8r9NGGYhs5kJ5sLwGxAKj2MNSWF4jBdrCZ6Gr+ +y4BGY0X209/+IAUC3jlfdSLIiF4OBlT6AvB1HfclhvtUVUp0OhLfnpvQ1UwYScRI +KcRLvovIoIzP2g3emfwjAz8CgYEAxUHhOhm1mwRHJNBQTuxok0HVMrze8n1eov39 +0RcvBvJSVp+pdHXdqX1HwqHCmxhCZuAeq8ZkNP8WvZYY6HwCbAIdt5MHgbT4lXQR +f2l8F5gPnhFCpExG5ZLNg/urV3oAQE4stHap21zEpdyOMhZb6Yc5424U+EzaFdgN +b3EcPtUCgYAkKvUlSnBbgiJz1iaN6fuTqH0efavuFGMhjNmG7GtpNXdgyl1OWIuc +Yu+tZtHXtKYf3B99GwPrFzw/7yfDwae5YeWmi2/pFTH96wv3brJBqkAWY8G5Rsmd +qF50p34vIFqUBniNRwSArx8t2dq/CuAMgLAtSjh70Q6ZAnCF85PD8Q== +-----END RSA PRIVATE KEY----- +EOF + +echo 
Contents of kind${number}.yaml +cat << EOF | tee kind${number}.yaml +kind: Cluster +apiVersion: kind.x-k8s.io/v1alpha4 +nodes: +- role: control-plane + image: ${kindest_node} + extraPortMappings: + - containerPort: 6443 + hostPort: 70${twodigits} + extraMounts: + - containerPath: /etc/kubernetes/oidc + hostPath: /tmp/oidc + labels: + ingress-ready: true + topology.kubernetes.io/region: ${region} + topology.kubernetes.io/zone: ${zone} +networking: + serviceSubnet: "10.$(echo $twodigits | sed 's/^0*//').0.0/16" + podSubnet: "10.1${twodigits}.0.0/16" +kubeadmConfigPatches: +- | + kind: ClusterConfiguration + apiServer: + extraArgs: + service-account-key-file: /etc/kubernetes/pki/sa.pub + service-account-key-file: /etc/kubernetes/oidc/sa-signer-pkcs8.pub + service-account-signing-key-file: /etc/kubernetes/oidc/sa-signer.key + service-account-issuer: https://solo-workshop-oidc.s3.us-east-1.amazonaws.com + api-audiences: sts.amazonaws.com + extraVolumes: + - name: oidc + hostPath: /etc/kubernetes/oidc + mountPath: /etc/kubernetes/oidc + readOnly: true + metadata: + name: config +containerdConfigPatches: +- |- + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:${reg_port}"] + endpoint = ["http://${reg_name}:${reg_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"] + endpoint = ["http://docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-docker.pkg.dev"] + endpoint = ["http://us-docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-central1-docker.pkg.dev"] + endpoint = ["http://us-central1-docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."quay.io"] + endpoint = ["http://quay:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"] + endpoint = ["http://gcr:${cache_port}"] +EOF +echo ----------------------------------------------------- + +kind create cluster --name kind${number} --config kind${number}.yaml 
+ipkind=$(docker inspect kind${number}-control-plane | jq -r '.[0].NetworkSettings.Networks[].IPAddress') +networkkind=$(echo ${ipkind} | awk -F. '{ print $1"."$2 }') +kubectl config set-cluster kind-kind${number} --server=https://${myip}:70${twodigits} --insecure-skip-tls-verify=true + +# Preload images +cat << EOF >> images.txt +quay.io/metallb/controller:v0.13.12 +quay.io/metallb/speaker:v0.13.12 +EOF +cat images.txt | while read image; do + docker pull $image || true + kind load docker-image $image --name kind${number} || true +done + +docker network connect "kind" "${reg_name}" || true +docker network connect "kind" docker || true +docker network connect "kind" us-docker || true +docker network connect "kind" us-central1-docker || true +docker network connect "kind" quay || true +docker network connect "kind" gcr || true + +for i in 1 2 3 4 5; do kubectl --context=kind-kind${number} apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml && break || sleep 15; done +kubectl --context=kind-kind${number} create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)" +kubectl --context=kind-kind${number} -n metallb-system rollout status deploy controller || true + +cat << EOF | tee metallb${number}.yaml +apiVersion: metallb.io/v1beta1 +kind: IPAddressPool +metadata: + name: first-pool + namespace: metallb-system +spec: + addresses: + - ${networkkind}.1${twodigits}.1-${networkkind}.1${twodigits}.254 +--- +apiVersion: metallb.io/v1beta1 +kind: L2Advertisement +metadata: + name: empty + namespace: metallb-system +EOF + +printf "Create IPAddressPool in kind-kind${number}\n" +for i in {1..10}; do +kubectl --context=kind-kind${number} apply -f metallb${number}.yaml && break +sleep 2 +done + +# connect the registry to the cluster network if not already connected +printf "Renaming context kind-kind${number} to ${name}\n" +for i in {1..100}; do + (kubectl config get-contexts -oname 
| grep ${name}) && break + kubectl config rename-context kind-kind${number} ${name} && break + printf " $i"/100 + sleep 2 + [ $i -lt 100 ] || exit 1 +done + +# Document the local registry +# https://github.com/kubernetes/enhancements/tree/master/keps/sig-cluster-lifecycle/generic/1755-communicating-a-local-registry +cat </dev/null; then + myip=$(hostname -I | awk '{ print $1 }') +else + myip=$(ipconfig getifaddr en0) +fi + +# Function to determine the next available cluster number +get_next_cluster_number() { + if ! kind get clusters 2>&1 | grep "^kind" > /dev/null; then + echo 1 + else + highest_num=$(kind get clusters | grep "^kind" | tail -1 | cut -c 5-) + echo $((highest_num + 1)) + fi +} + +if [ -f /.dockerenv ]; then +myip=$HOST_IP +container=$(docker inspect $(docker ps -q) | jq -r ".[] | select(.Config.Hostname == \"$HOSTNAME\") | .Name" | cut -d/ -f2) +docker network connect "kind" $container || true +number=$(get_next_cluster_number) +twodigits=$(printf "%02d\n" $number) +fi + +reg_name='kind-registry' +reg_port='5000' +docker start "${reg_name}" 2>/dev/null || \ +docker run -d --restart=always -p "0.0.0.0:${reg_port}:5000" --name "${reg_name}" registry:2 + +cache_port='5000' +cat > registries < ${HOME}/.${cache_name}-config.yml </dev/null || \ +docker run -d --restart=always ${DEPLOY_EXTRA_PARAMS} -v ${HOME}/.${cache_name}-config.yml:/etc/docker/registry/config.yml --name "${cache_name}" registry:2 +done +mkdir -p /tmp/oidc + +cat <<'EOF' >/tmp/oidc/sa-signer-pkcs8.pub +-----BEGIN PUBLIC KEY----- +MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA53YiBcrn7+ZK0Vb4odeA +1riYdvEb8To4H6/HtF+OKzuCIXFQ+bRy7yMrDGITYpfYPrTZOgfdeTLZqOiAj+cL +395nvxdly83SUrdh7ItfOPRluuuiPHnFn111wpyjBw5nut4Kx+M5MksNfA1hU0Zw +zIM9OviX8iEF8xHWUtz4BAMDG8N6+zpLo0pAzaei5hKuLZ9dZOzHBC8VOW82cQMm +5X5uOKsCHMtNSjqYUNB1DxN6xxM+odGWT/6xthPGk6YCxmO28YHPFZfiS2eAIpD8 +2p/16KQKU6TkZSrldkYxiHIPhu+5f9faZJG7dB9pLN1SfdTBio4PK5Mz9muLUCv9 +ywIDAQAB +-----END PUBLIC KEY----- +EOF + +cat <<'EOF' 
>/tmp/oidc/sa-signer.key +-----BEGIN RSA PRIVATE KEY----- +MIIEpAIBAAKCAQEA53YiBcrn7+ZK0Vb4odeA1riYdvEb8To4H6/HtF+OKzuCIXFQ ++bRy7yMrDGITYpfYPrTZOgfdeTLZqOiAj+cL395nvxdly83SUrdh7ItfOPRluuui +PHnFn111wpyjBw5nut4Kx+M5MksNfA1hU0ZwzIM9OviX8iEF8xHWUtz4BAMDG8N6 ++zpLo0pAzaei5hKuLZ9dZOzHBC8VOW82cQMm5X5uOKsCHMtNSjqYUNB1DxN6xxM+ +odGWT/6xthPGk6YCxmO28YHPFZfiS2eAIpD82p/16KQKU6TkZSrldkYxiHIPhu+5 +f9faZJG7dB9pLN1SfdTBio4PK5Mz9muLUCv9ywIDAQABAoIBAB8tro+RMYUDRHjG +el9ypAxIeWEsQVNRQFYkW4ZUiNYSAgl3Ni0svX6xAg989peFVL+9pLVIcfDthJxY +FVlNCjBxyQ/YmwHFC9vQkARJEd6eLUXsj8INtS0ubbp1VxCQRDDL0C/0z7OSoJJh +SwboqjEiTJExA2a+RArmEDTBRzdi3t+kT8G23JcqOivrITt17K6bQYyJXw7/vUdc +r/R+hfd5TqVq92VddzDT7RNJAxsbPPXjGnESlq1GALBDs+uBGYsP0fiEJb2nicSv +z9fBnBeERhut1gcE0C0iLRQZb+3r8TitBtxrZv+0BHgXrkKtXDwWTqGEKOwC4dBn +7nxkH2ECgYEA6+/DOTABGYOWOQftFkJMjcugzDrjoGpuXuVOTb65T+3FHAzU93zy +3bt3wQxrlugluyy9Sc/PL3ck2LgUsPHZ+s7zsdGvvGALBD6bOSSKATz9JgjwifO8 +PgqUz1kXRwez2CtKLOOCFFtcIzEdWIzsa1ubNqLzgN7rD+XBkUc2uEcCgYEA+yTy +72EDMQVoIZOygytHsDNdy0iS2RsBbdurT27wkYuFpFUVWdbNSL+8haE+wJHseHcw +BD4WIMpU+hnS4p4OO8+6V7PiXOS5E/se91EJigZAoixgDUiC8ihojWgK9PYEavUo +hULWbayO59SxYWeUI4Ze0GP8Jw8vdB86ib4ulF0CgYEAgyzRuLjk05+iZODwQyDn +WSquov3W0rh51s7cw0LX2wWSQm8r9NGGYhs5kJ5sLwGxAKj2MNSWF4jBdrCZ6Gr+ +y4BGY0X209/+IAUC3jlfdSLIiF4OBlT6AvB1HfclhvtUVUp0OhLfnpvQ1UwYScRI +KcRLvovIoIzP2g3emfwjAz8CgYEAxUHhOhm1mwRHJNBQTuxok0HVMrze8n1eov39 +0RcvBvJSVp+pdHXdqX1HwqHCmxhCZuAeq8ZkNP8WvZYY6HwCbAIdt5MHgbT4lXQR +f2l8F5gPnhFCpExG5ZLNg/urV3oAQE4stHap21zEpdyOMhZb6Yc5424U+EzaFdgN +b3EcPtUCgYAkKvUlSnBbgiJz1iaN6fuTqH0efavuFGMhjNmG7GtpNXdgyl1OWIuc +Yu+tZtHXtKYf3B99GwPrFzw/7yfDwae5YeWmi2/pFTH96wv3brJBqkAWY8G5Rsmd +qF50p34vIFqUBniNRwSArx8t2dq/CuAMgLAtSjh70Q6ZAnCF85PD8Q== +-----END RSA PRIVATE KEY----- +EOF + +echo Contents of kind${number}.yaml +cat << EOF | tee kind${number}.yaml +kind: Cluster +apiVersion: kind.x-k8s.io/v1alpha4 +nodes: +- role: control-plane + image: ${kindest_node} + extraPortMappings: + - containerPort: 6443 + hostPort: 70${twodigits} + 
extraMounts: + - containerPath: /etc/kubernetes/oidc + hostPath: /tmp/oidc + labels: + ingress-ready: true + topology.kubernetes.io/region: ${region} + topology.kubernetes.io/zone: ${zone} +networking: + serviceSubnet: "10.$(echo $twodigits | sed 's/^0*//').0.0/16" + podSubnet: "10.1${twodigits}.0.0/16" +kubeadmConfigPatches: +- | + kind: ClusterConfiguration + apiServer: + extraArgs: + service-account-key-file: /etc/kubernetes/pki/sa.pub + service-account-key-file: /etc/kubernetes/oidc/sa-signer-pkcs8.pub + service-account-signing-key-file: /etc/kubernetes/oidc/sa-signer.key + service-account-issuer: https://solo-workshop-oidc.s3.us-east-1.amazonaws.com + api-audiences: sts.amazonaws.com + extraVolumes: + - name: oidc + hostPath: /etc/kubernetes/oidc + mountPath: /etc/kubernetes/oidc + readOnly: true + metadata: + name: config +containerdConfigPatches: +- |- + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:${reg_port}"] + endpoint = ["http://${reg_name}:${reg_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"] + endpoint = ["http://docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-docker.pkg.dev"] + endpoint = ["http://us-docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-central1-docker.pkg.dev"] + endpoint = ["http://us-central1-docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."quay.io"] + endpoint = ["http://quay:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"] + endpoint = ["http://gcr:${cache_port}"] +EOF +echo ----------------------------------------------------- + +kind create cluster --name kind${number} --config kind${number}.yaml +ipkind=$(docker inspect kind${number}-control-plane | jq -r '.[0].NetworkSettings.Networks[].IPAddress') +networkkind=$(echo ${ipkind} | awk -F. 
'{ print $1"."$2 }') +kubectl config set-cluster kind-kind${number} --server=https://${myip}:70${twodigits} --insecure-skip-tls-verify=true + +# Preload images +cat << EOF >> images.txt +quay.io/metallb/controller:v0.13.12 +quay.io/metallb/speaker:v0.13.12 +EOF +cat images.txt | while read image; do + docker pull $image || true + kind load docker-image $image --name kind${number} || true +done + +docker network connect "kind" "${reg_name}" || true +docker network connect "kind" docker || true +docker network connect "kind" us-docker || true +docker network connect "kind" us-central1-docker || true +docker network connect "kind" quay || true +docker network connect "kind" gcr || true + +for i in 1 2 3 4 5; do kubectl --context=kind-kind${number} apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml && break || sleep 15; done +kubectl --context=kind-kind${number} create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)" +kubectl --context=kind-kind${number} -n metallb-system rollout status deploy controller || true + +cat << EOF | tee metallb${number}.yaml +apiVersion: metallb.io/v1beta1 +kind: IPAddressPool +metadata: + name: first-pool + namespace: metallb-system +spec: + addresses: + - ${networkkind}.1${twodigits}.1-${networkkind}.1${twodigits}.254 +--- +apiVersion: metallb.io/v1beta1 +kind: L2Advertisement +metadata: + name: empty + namespace: metallb-system +EOF + +printf "Create IPAddressPool in kind-kind${number}\n" +for i in {1..10}; do +kubectl --context=kind-kind${number} apply -f metallb${number}.yaml && break +sleep 2 +done + +# connect the registry to the cluster network if not already connected +printf "Renaming context kind-kind${number} to ${name}\n" +for i in {1..100}; do + (kubectl config get-contexts -oname | grep ${name}) && break + kubectl config rename-context kind-kind${number} ${name} && break + printf " $i"/100 + sleep 2 + [ $i -lt 100 ] || exit 
1 +done + +# Document the local registry +# https://github.com/kubernetes/enhancements/tree/master/keps/sig-cluster-lifecycle/generic/1755-communicating-a-local-registry +cat </dev/null || true +source ./scripts/assert.sh +export MGMT=mgmt +export CLUSTER1=cluster1 +export CLUSTER2=cluster2 +bash ./data/steps/deploy-kind-clusters/deploy-mgmt.sh +bash ./data/steps/deploy-kind-clusters/deploy-cluster1.sh +bash ./data/steps/deploy-kind-clusters/deploy-cluster2.sh +./scripts/check.sh mgmt +./scripts/check.sh cluster1 +./scripts/check.sh cluster2 +kubectl config use-context ${MGMT} +cat <<'EOF' > ./test.js +const helpers = require('./tests/chai-exec'); + +describe("Clusters are healthy", () => { + const clusters = ["mgmt", "cluster1", "cluster2"]; + + clusters.forEach(cluster => { + it(`Cluster ${cluster} is healthy`, () => helpers.k8sObjectIsPresent({ context: cluster, namespace: "default", k8sType: "service", k8sObj: "kubernetes" })); + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-kind-clusters/tests/cluster-healthy.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +export GLOO_MESH_VERSION=v2.5.12 +curl -sL https://run.solo.io/meshctl/install | sh - +export PATH=$HOME/.gloo-mesh/bin:$PATH +cat <<'EOF' > ./test.js +var chai = require('chai'); +var expect = chai.expect; + +describe("Required environment variables should contain value", () => { + afterEach(function(done){ + if(this.currentTest.currentRetry() > 0){ + process.stdout.write("."); + setTimeout(done, 1000); + } else { + done(); + } + }); + + it("Context environment variables should not be empty", () => { + expect(process.env.MGMT).not.to.be.empty + expect(process.env.CLUSTER1).not.to.be.empty + expect(process.env.CLUSTER2).not.to.be.empty + }); + + it("Gloo Mesh licence environment variables should not be empty", () => { + 
expect(process.env.GLOO_MESH_LICENSE_KEY).not.to.be.empty + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-and-register-gloo-mesh/tests/environment-variables.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context ${MGMT} create ns gloo-mesh + +helm upgrade --install gloo-platform-crds gloo-platform-crds \ + --repo https://storage.googleapis.com/gloo-platform/helm-charts \ + --namespace gloo-mesh \ + --kube-context ${MGMT} \ + --set featureGates.insightsConfiguration=true \ + --version 2.5.12 + +helm upgrade --install gloo-platform gloo-platform \ + --repo https://storage.googleapis.com/gloo-platform/helm-charts \ + --namespace gloo-mesh \ + --kube-context ${MGMT} \ + --version 2.5.12 \ + -f -< ./test.js + +const helpers = require('./tests/chai-exec'); + +describe("MGMT server is healthy", () => { + let cluster = process.env.MGMT; + let deployments = ["gloo-mesh-mgmt-server","gloo-mesh-redis","gloo-telemetry-gateway","prometheus-server"]; + deployments.forEach(deploy => { + it(deploy + ' pods are ready in ' + cluster, () => helpers.checkDeployment({ context: cluster, namespace: "gloo-mesh", k8sObj: deploy })); + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-and-register-gloo-mesh/tests/check-deployment.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +cat <<'EOF' > ./test.js +const chaiExec = require("@jsdevtools/chai-exec"); +var chai = require('chai'); +var expect = chai.expect; +chai.use(chaiExec); + +afterEach(function (done) { + if (this.currentTest.currentRetry() > 0) { + process.stdout.write("."); + setTimeout(done, 1000); + } else { + done(); + } +}); +EOF +echo "executing test 
dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-and-register-gloo-mesh/tests/get-gloo-mesh-mgmt-server-ip.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +export ENDPOINT_GLOO_MESH=$(kubectl --context ${MGMT} -n gloo-mesh get svc gloo-mesh-mgmt-server -o jsonpath='{.status.loadBalancer.ingress[0].*}'):9900 +export HOST_GLOO_MESH=$(echo ${ENDPOINT_GLOO_MESH%:*}) +export ENDPOINT_TELEMETRY_GATEWAY=$(kubectl --context ${MGMT} -n gloo-mesh get svc gloo-telemetry-gateway -o jsonpath='{.status.loadBalancer.ingress[0].*}'):4317 +export ENDPOINT_GLOO_MESH_UI=$(kubectl --context ${MGMT} -n gloo-mesh get svc gloo-mesh-ui -o jsonpath='{.status.loadBalancer.ingress[0].*}'):8090 +cat <<'EOF' > ./test.js +const dns = require('dns'); +const chaiHttp = require("chai-http"); +const chai = require("chai"); +const expect = chai.expect; +chai.use(chaiHttp); +const { waitOnFailedTest } = require('./tests/utils'); + +afterEach(function(done) { waitOnFailedTest(done, this.currentTest.currentRetry())}); + +describe("Address '" + process.env.HOST_GLOO_MESH + "' can be resolved in DNS", () => { + it(process.env.HOST_GLOO_MESH + ' can be resolved', (done) => { + return dns.lookup(process.env.HOST_GLOO_MESH, (err, address, family) => { + expect(address).to.be.an.ip; + done(); + }); + }); +}); +EOF +echo "executing test ./gloo-mesh-2-0/tests/can-resolve.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl apply --context ${MGMT} -f - < ca.crt +kubectl create secret generic relay-root-tls-secret -n gloo-mesh --context ${CLUSTER1} --from-file ca.crt=ca.crt +rm ca.crt + +kubectl get secret relay-identity-token-secret -n gloo-mesh --context ${MGMT} -o jsonpath='{.data.token}' | base64 -d > token +kubectl create secret generic relay-identity-token-secret -n gloo-mesh 
--context ${CLUSTER1} --from-file token=token +rm token + +helm upgrade --install gloo-platform-crds gloo-platform-crds \ + --repo https://storage.googleapis.com/gloo-platform/helm-charts \ + --namespace gloo-mesh \ + --kube-context ${CLUSTER1} \ + --version 2.5.12 + +helm upgrade --install gloo-platform gloo-platform \ + --repo https://storage.googleapis.com/gloo-platform/helm-charts \ + --namespace gloo-mesh \ + --kube-context ${CLUSTER1} \ + --version 2.5.12 \ + -f -< ca.crt +kubectl create secret generic relay-root-tls-secret -n gloo-mesh --context ${CLUSTER2} --from-file ca.crt=ca.crt +rm ca.crt + +kubectl get secret relay-identity-token-secret -n gloo-mesh --context ${MGMT} -o jsonpath='{.data.token}' | base64 -d > token +kubectl create secret generic relay-identity-token-secret -n gloo-mesh --context ${CLUSTER2} --from-file token=token +rm token + +helm upgrade --install gloo-platform-crds gloo-platform-crds \ + --repo https://storage.googleapis.com/gloo-platform/helm-charts \ + --namespace gloo-mesh \ + --kube-context ${CLUSTER2} \ + --version 2.5.12 + +helm upgrade --install gloo-platform gloo-platform \ + --repo https://storage.googleapis.com/gloo-platform/helm-charts \ + --namespace gloo-mesh \ + --kube-context ${CLUSTER2} \ + --version 2.5.12 \ + -f -< ./test.js +var chai = require('chai'); +var expect = chai.expect; +const helpers = require('./tests/chai-exec'); +describe("Cluster registration", () => { + it("cluster1 is registered", () => { + podName = helpers.getOutputForCommand({ command: "kubectl -n gloo-mesh get pods -l app=gloo-mesh-mgmt-server -o jsonpath='{.items[0].metadata.name}' --context " + process.env.MGMT }).replaceAll("'", ""); + command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh debug -q -i " + podName + " --image=curlimages/curl -- curl -s http://localhost:9091/metrics" }).replaceAll("'", ""); + expect(command).to.contain("cluster1"); + }); + it("cluster2 is registered", () => { + 
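The relay secret copy above is just a base64 round-trip: Secret data is stored base64-encoded, so `-o jsonpath='{.data.token}' | base64 -d` recovers the raw bytes before the secret is re-created on the workload cluster. A standalone sketch with a made-up token value:

```shell
# Illustration only (made-up token value, no cluster needed): decode what a
# Secret would store, write it to a file, and clean up — the same round-trip
# used when copying the relay identity token between clusters.
token_b64=$(printf 'example-relay-token' | base64)
printf '%s' "$token_b64" | base64 -d > token
cat token; echo
rm token
```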
podName = helpers.getOutputForCommand({ command: "kubectl -n gloo-mesh get pods -l app=gloo-mesh-mgmt-server -o jsonpath='{.items[0].metadata.name}' --context " + process.env.MGMT }).replaceAll("'", ""); + command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh debug -q -i " + podName + " --image=curlimages/curl -- curl -s http://localhost:9091/metrics" }).replaceAll("'", ""); + expect(command).to.contain("cluster2"); + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-and-register-gloo-mesh/tests/cluster-registration.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +curl -L https://istio.io/downloadIstio | sh - + +if [ -d "istio-"*/ ]; then + cd istio-*/ + export PATH=$PWD/bin:$PATH + cd .. +fi +cat <<'EOF' > ./test.js +const chaiExec = require("@jsdevtools/chai-exec"); +var chai = require('chai'); +var expect = chai.expect; +chai.use(chaiExec); + +afterEach(function (done) { + if (this.currentTest.currentRetry() > 0) { + process.stdout.write("."); + setTimeout(done, 1000); + } else { + done(); + } +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/istio-lifecycle-manager-install/tests/istio-version.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context ${CLUSTER1} create ns istio-gateways + +kubectl apply --context ${CLUSTER1} -f - < ./test.js + +const helpers = require('./tests/chai-exec'); + +const chaiExec = require("@jsdevtools/chai-exec"); +const helpersHttp = require('./tests/chai-http'); +const chai = require("chai"); +const expect = chai.expect; + +afterEach(function (done) { + if (this.currentTest.currentRetry() > 0) { + process.stdout.write("."); + setTimeout(done, 1000); + } else { + done(); + } +}); + 
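The registration test above boils down to scraping the mgmt server's metrics endpoint and asserting each cluster name appears as a label. A toy version of that check, with a hand-written sample (the metric name here is illustrative, not an exact Gloo Mesh metric):

```shell
# Sample metrics text standing in for the mgmt server's /metrics output.
metrics='relay_clients_connected{cluster="cluster1"} 1
relay_clients_connected{cluster="cluster2"} 1'
# Assert each expected cluster label is present, as the mocha test does.
for c in cluster1 cluster2; do
  echo "$metrics" | grep -q "cluster=\"$c\"" && echo "$c registered"
done
```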
+describe("Checking Istio installation", function() { + it('istiod pods are ready in cluster ' + process.env.CLUSTER1, () => helpers.checkDeploymentsWithLabels({ context: process.env.CLUSTER1, namespace: "istio-system", labels: "app=istiod", instances: 1 })); + it('gateway pods are ready in cluster ' + process.env.CLUSTER1, () => helpers.checkDeploymentsWithLabels({ context: process.env.CLUSTER1, namespace: "istio-gateways", labels: "app=istio-ingressgateway", instances: 2 })); + it('istiod pods are ready in cluster ' + process.env.CLUSTER2, () => helpers.checkDeploymentsWithLabels({ context: process.env.CLUSTER2, namespace: "istio-system", labels: "app=istiod", instances: 1 })); + it('gateway pods are ready in cluster ' + process.env.CLUSTER2, () => helpers.checkDeploymentsWithLabels({ context: process.env.CLUSTER2, namespace: "istio-gateways", labels: "app=istio-ingressgateway", instances: 2 })); + it("Gateways have an ip attached in cluster " + process.env.CLUSTER1, () => { + let cli = chaiExec("kubectl --context " + process.env.CLUSTER1 + " -n istio-gateways get svc -l app=istio-ingressgateway -o jsonpath='{.items}'"); + cli.stderr.should.be.empty; + let deployments = JSON.parse(cli.stdout.slice(1,-1)); + expect(deployments).to.have.lengthOf(2); + deployments.forEach((deployment) => { + expect(deployment.status.loadBalancer).to.have.property("ingress"); + }); + }); + it("Gateways have an ip attached in cluster " + process.env.CLUSTER2, () => { + let cli = chaiExec("kubectl --context " + process.env.CLUSTER2 + " -n istio-gateways get svc -l app=istio-ingressgateway -o jsonpath='{.items}'"); + cli.stderr.should.be.empty; + let deployments = JSON.parse(cli.stdout.slice(1,-1)); + expect(deployments).to.have.lengthOf(2); + deployments.forEach((deployment) => { + expect(deployment.status.loadBalancer).to.have.property("ingress"); + }); + }); +}); + +EOF +echo "executing test 
dist/gloo-mesh-2-0-workshop/build/templates/steps/istio-lifecycle-manager-install/tests/istio-ready.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +timeout 2m bash -c "until [[ \$(kubectl --context ${CLUSTER1} -n istio-gateways get svc -l istio=ingressgateway -o json | jq '.items[0].status.loadBalancer | length') -gt 0 ]]; do + sleep 1 +done" +export HOST_GW_CLUSTER1="$(kubectl --context ${CLUSTER1} -n istio-gateways get svc -l istio=ingressgateway -o jsonpath='{.items[0].status.loadBalancer.ingress[0].*}')" +export HOST_GW_CLUSTER2="$(kubectl --context ${CLUSTER2} -n istio-gateways get svc -l istio=ingressgateway -o jsonpath='{.items[0].status.loadBalancer.ingress[0].*}')" +cat <<'EOF' > ./test.js +const dns = require('dns'); +const chaiHttp = require("chai-http"); +const chai = require("chai"); +const expect = chai.expect; +chai.use(chaiHttp); +const { waitOnFailedTest } = require('./tests/utils'); + +afterEach(function(done) { waitOnFailedTest(done, this.currentTest.currentRetry())}); + +describe("Address '" + process.env.HOST_GW_CLUSTER1 + "' can be resolved in DNS", () => { + it(process.env.HOST_GW_CLUSTER1 + ' can be resolved', (done) => { + return dns.lookup(process.env.HOST_GW_CLUSTER1, (err, address, family) => { + expect(address).to.be.an.ip; + done(); + }); + }); +}); +EOF +echo "executing test ./gloo-mesh-2-0/tests/can-resolve.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +cat <<'EOF' > ./test.js +const dns = require('dns'); +const chaiHttp = require("chai-http"); +const chai = require("chai"); +const expect = chai.expect; +chai.use(chaiHttp); +const { waitOnFailedTest } = require('./tests/utils'); + +afterEach(function(done) { waitOnFailedTest(done, this.currentTest.currentRetry())}); + +describe("Address '" + 
process.env.HOST_GW_CLUSTER2 + "' can be resolved in DNS", () => { + it(process.env.HOST_GW_CLUSTER2 + ' can be resolved', (done) => { + return dns.lookup(process.env.HOST_GW_CLUSTER2, (err, address, family) => { + expect(address).to.be.an.ip; + done(); + }); + }); +}); +EOF +echo "executing test ./gloo-mesh-2-0/tests/can-resolve.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context ${CLUSTER1} create ns bookinfo-frontends +kubectl --context ${CLUSTER1} create ns bookinfo-backends +kubectl --context ${CLUSTER1} label namespace bookinfo-frontends istio.io/rev=1-20 --overwrite +kubectl --context ${CLUSTER1} label namespace bookinfo-backends istio.io/rev=1-20 --overwrite + +# Deploy the frontend bookinfo service in the bookinfo-frontends namespace +kubectl --context ${CLUSTER1} -n bookinfo-frontends apply -f data/steps/deploy-bookinfo/productpage-v1.yaml + +# Deploy the backend bookinfo services in the bookinfo-backends namespace for all versions less than v3 +kubectl --context ${CLUSTER1} -n bookinfo-backends apply \ + -f data/steps/deploy-bookinfo/details-v1.yaml \ + -f data/steps/deploy-bookinfo/ratings-v1.yaml \ + -f data/steps/deploy-bookinfo/reviews-v1-v2.yaml + +# Update the reviews service to display where it is coming from +kubectl --context ${CLUSTER1} -n bookinfo-backends set env deploy/reviews-v1 CLUSTER_NAME=${CLUSTER1} +kubectl --context ${CLUSTER1} -n bookinfo-backends set env deploy/reviews-v2 CLUSTER_NAME=${CLUSTER1} +echo -n Waiting for bookinfo pods to be ready... +timeout -v 5m bash -c " +until [[ \$(kubectl --context ${CLUSTER1} -n bookinfo-frontends get deploy -o json | jq '[.items[].status.readyReplicas] | add') -eq 1 && \\ + \$(kubectl --context ${CLUSTER1} -n bookinfo-backends get deploy -o json | jq '[.items[].status.readyReplicas] | add') -eq 4 ]] 2>/dev/null +do + sleep 1 + echo -n . 
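The readiness condition in the wait loop above sums `readyReplicas` across deployments with jq (`[.items[].status.readyReplicas] | add`) and compares it with the expected pod count. The same sum in plain bash, with stand-in values for the four backend deployments:

```shell
# Stand-in values for the per-deployment readyReplicas that jq extracts.
total=0
for r in 1 1 1 1; do
  total=$((total + r))
done
[ "$total" -eq 4 ] && echo "all 4 backend deployments ready"
```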
+done" +echo +kubectl --context ${CLUSTER2} create ns bookinfo-frontends +kubectl --context ${CLUSTER2} create ns bookinfo-backends +kubectl --context ${CLUSTER2} label namespace bookinfo-frontends istio.io/rev=1-20 --overwrite +kubectl --context ${CLUSTER2} label namespace bookinfo-backends istio.io/rev=1-20 --overwrite + +# Deploy the frontend bookinfo service in the bookinfo-frontends namespace +kubectl --context ${CLUSTER2} -n bookinfo-frontends apply -f data/steps/deploy-bookinfo/productpage-v1.yaml +# Deploy the backend bookinfo services in the bookinfo-backends namespace for all versions +kubectl --context ${CLUSTER2} -n bookinfo-backends apply \ + -f data/steps/deploy-bookinfo/details-v1.yaml \ + -f data/steps/deploy-bookinfo/ratings-v1.yaml \ + -f data/steps/deploy-bookinfo/reviews-v1-v2.yaml \ + -f data/steps/deploy-bookinfo/reviews-v3.yaml +# Update the reviews service to display where it is coming from +kubectl --context ${CLUSTER2} -n bookinfo-backends set env deploy/reviews-v1 CLUSTER_NAME=${CLUSTER2} +kubectl --context ${CLUSTER2} -n bookinfo-backends set env deploy/reviews-v2 CLUSTER_NAME=${CLUSTER2} +kubectl --context ${CLUSTER2} -n bookinfo-backends set env deploy/reviews-v3 CLUSTER_NAME=${CLUSTER2} + +echo -n Waiting for bookinfo pods to be ready... +timeout -v 5m bash -c " +until [[ \$(kubectl --context ${CLUSTER2} -n bookinfo-frontends get deploy -o json | jq '[.items[].status.readyReplicas] | add') -eq 1 && \\ + \$(kubectl --context ${CLUSTER2} -n bookinfo-backends get deploy -o json | jq '[.items[].status.readyReplicas] | add') -eq 5 ]] 2>/dev/null +do + sleep 1 + echo -n . 
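The `timeout -v 5m bash -c "until ...; do sleep 1; done"` pattern used above is worth seeing in isolation: `timeout` bounds the polling loop so a stuck rollout fails the step instead of hanging forever. This throwaway loop polls for a file that never appears and is killed after two seconds:

```shell
# Poll for a file that never appears; `timeout` kills the loop (exit 124),
# so the failure branch runs instead of the script hanging.
if ! timeout 2 sh -c 'until [ -f /tmp/never-appears.flag ]; do sleep 1; done'; then
  echo "timed out waiting"
fi
```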
+done" +echo +kubectl --context ${CLUSTER2} -n bookinfo-frontends get pods && kubectl --context ${CLUSTER2} -n bookinfo-backends get pods +cat <<'EOF' > ./test.js +const helpers = require('./tests/chai-exec'); + +describe("Bookinfo app", () => { + let cluster = process.env.CLUSTER1 + let deployments = ["productpage-v1"]; + deployments.forEach(deploy => { + it(deploy + ' pods are ready in ' + cluster, () => helpers.checkDeployment({ context: cluster, namespace: "bookinfo-frontends", k8sObj: deploy })); + }); + deployments = ["ratings-v1", "details-v1", "reviews-v1", "reviews-v2"]; + deployments.forEach(deploy => { + it(deploy + ' pods are ready in ' + cluster, () => helpers.checkDeployment({ context: cluster, namespace: "bookinfo-backends", k8sObj: deploy })); + }); + cluster = process.env.CLUSTER2 + deployments = ["productpage-v1"]; + deployments.forEach(deploy => { + it(deploy + ' pods are ready in ' + cluster, () => helpers.checkDeployment({ context: cluster, namespace: "bookinfo-frontends", k8sObj: deploy })); + }); + deployments = ["ratings-v1", "details-v1", "reviews-v1", "reviews-v2", "reviews-v3"]; + deployments.forEach(deploy => { + it(deploy + ' pods are ready in ' + cluster, () => helpers.checkDeployment({ context: cluster, namespace: "bookinfo-backends", k8sObj: deploy })); + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/deploy-bookinfo/tests/check-bookinfo.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context ${CLUSTER1} create ns httpbin +kubectl apply --context ${CLUSTER1} -f - </dev/null +do + sleep 1 + echo -n . 
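The `helpers.checkDeployment` calls in these tests reduce to reading a deployment's status block and comparing `readyReplicas` with the desired count. A rough stand-in over a hand-written sample document (no cluster needed):

```shell
# Sample deployment status JSON; in the real check this comes from
# `kubectl get deploy -o json`.
status='{"replicas":1,"readyReplicas":1}'
ready=$(printf '%s' "$status" | sed -n 's/.*"readyReplicas":\([0-9]*\).*/\1/p')
[ "$ready" -ge 1 ] && echo "in-mesh deployment ready"
```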
+done" +echo +kubectl --context ${CLUSTER1} -n httpbin get pods +cat <<'EOF' > ./test.js +const helpers = require('./tests/chai-exec'); + +describe("httpbin app", () => { + let cluster = process.env.CLUSTER1 + + let deployments = ["not-in-mesh", "in-mesh"]; + + deployments.forEach(deploy => { + it(deploy + ' pods are ready in ' + cluster, () => helpers.checkDeployment({ context: cluster, namespace: "httpbin", k8sObj: deploy })); + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/httpbin/deploy-httpbin/tests/check-httpbin.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl apply --context ${CLUSTER1} -f - < ./test.js +const helpers = require('./tests/chai-http'); + +describe("productpage is available (HTTP)", () => { + it('/productpage is available in cluster1', () => helpers.checkURL({ host: `http://cluster1-bookinfo.example.com`, path: '/productpage', retCode: 200 })); +}) +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/gateway-expose-istio/tests/productpage-available.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +openssl req -x509 -nodes -days 365 -newkey rsa:2048 \ + -keyout tls.key -out tls.crt -subj "/CN=*" +kubectl --context ${CLUSTER1} -n istio-gateways create secret generic tls-secret \ +--from-file=tls.key=tls.key \ +--from-file=tls.crt=tls.crt + +kubectl --context ${CLUSTER2} -n istio-gateways create secret generic tls-secret \ +--from-file=tls.key=tls.key \ +--from-file=tls.crt=tls.crt +kubectl apply --context ${CLUSTER1} -f - < ./test.js +const helpers = require('./tests/chai-http'); + +describe("productpage is available (HTTPS)", () => { + it('/productpage is available in cluster1', () => helpers.checkURL({ host: 
`https://cluster1-bookinfo.example.com`, path: '/productpage', retCode: 200 })); +}) +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/gateway-expose-istio/tests/productpage-available-secure.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +cat <<'EOF' > ./test.js +var chai = require('chai'); +var expect = chai.expect; +const helpers = require('./tests/chai-exec'); + +describe("Otel metrics", () => { + it("cluster1 is sending metrics to telemetryGateway", () => { + podName = helpers.getOutputForCommand({ command: "kubectl -n gloo-mesh get pods -l app.kubernetes.io/name=prometheus -o jsonpath='{.items[0].metadata.name}' --context " + process.env.MGMT }).replaceAll("'", ""); + command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh debug -q -i " + podName + " --image=curlimages/curl -- curl -s http://localhost:9090/api/v1/query?query=istio_requests_total" }).replaceAll("'", ""); + expect(command).to.contain("cluster\":\"cluster1"); + }); +}); + + +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/gateway-expose-istio/tests/otel-metrics.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=150 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +cat <<'EOF' > ./test.js +const helpers = require('./tests/chai-http'); +const puppeteer = require('puppeteer'); +const chai = require('chai'); +const expect = chai.expect; +const GraphPage = require('./tests/pages/gloo-ui/graph-page'); +const { recognizeTextFromScreenshot } = require('./tests/utils/image-ocr-processor'); +const { enhanceBrowser } = require('./tests/utils/enhance-browser'); + +afterEach(function (done) { + if (this.currentTest.currentRetry() > 0) { + process.stdout.write("."); + setTimeout(done, 4000); + } else { + done(); + } 
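The Otel metrics test above has a simple shape: query Prometheus' HTTP API for `istio_requests_total` and check that some series carries the workload cluster's label. The same check over an abridged, hand-written response body:

```shell
# Abridged sample of a Prometheus /api/v1/query response (illustrative only).
resp='{"status":"success","data":{"result":[{"metric":{"cluster":"cluster1","destination_workload":"productpage-v1"}}]}}'
printf '%s' "$resp" | grep -q '"cluster":"cluster1"' && echo "cluster1 metrics flowing"
```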
+}); + +describe("graph page", function () { + // UI tests often require a longer timeout. + // So here we force it to a minimum of 30 seconds. + const currentTimeout = this.timeout(); + this.timeout(Math.max(currentTimeout, 30000)); + + let browser; + let page; + let graphPage; + + beforeEach(async function () { + browser = await puppeteer.launch({ + headless: "new", + slowMo: 40, + ignoreHTTPSErrors: true, + args: ['--no-sandbox', '--disable-setuid-sandbox'], + }); + browser = enhanceBrowser(browser, this.currentTest.title); + page = await browser.newPage(); + graphPage = new GraphPage(page); + await Promise.all(Array.from({ length: 20 }, () => + helpers.checkURL({ host: `https://cluster1-bookinfo.example.com`, path: '/productpage', retCode: 200 }))); + }); + + afterEach(async function () { + await browser.close(); + }); + + it("should show ingress gateway and product page", async function () { + await graphPage.navigateTo(`http://${process.env.ENDPOINT_GLOO_MESH_UI}/graph`); + + // Select the clusters and namespaces so that the graph shows + await graphPage.selectClusters(['cluster1', 'cluster2']); + await graphPage.selectNamespaces(['istio-gateways', 'bookinfo-backends', 'bookinfo-frontends']); + // Disabling Cilium nodes due to this issue: https://github.com/solo-io/gloo-mesh-enterprise/issues/18623 + await graphPage.toggleLayoutSettings(); + await graphPage.disableCiliumNodes(); + await graphPage.toggleLayoutSettings(); + + // Capture a screenshot of the canvas and run text recognition + await graphPage.fullscreenGraph(); + await graphPage.centerGraph(); + const screenshotPath = 'ui-test-data/canvas.png'; + await graphPage.captureCanvasScreenshot(screenshotPath); + + const recognizedTexts = await recognizeTextFromScreenshot( + screenshotPath, + ["istio-ingressgateway", "productpage-v1", "details-v1", "ratings-v1", "reviews-v1", "reviews-v2"]); + + const flattenedRecognizedText = recognizedTexts.join(",").replace(/\n/g, ''); + console.log("Flattened recognized 
text:", flattenedRecognizedText); + + // Validate recognized texts + expect(flattenedRecognizedText).to.include("istio-ingressgateway"); + expect(flattenedRecognizedText).to.include("productpage-v1"); + expect(flattenedRecognizedText).to.include("details-v1"); + expect(flattenedRecognizedText).to.include("ratings-v1"); + expect(flattenedRecognizedText).to.include("reviews-v1"); + expect(flattenedRecognizedText).to.include("reviews-v2"); + }); +}); + +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/gateway-expose-istio/tests/graph-shows-traffic.test.js.liquid" +timeout --signal=INT 7m mocha ./test.js --timeout 120000 --retries=3 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +cat <<'EOF' > ./test.js +const helpersHttp = require('./tests/chai-http'); +const InsightsPage = require('./tests/pages/insights-page'); +const constants = require('./tests/pages/constants'); +const puppeteer = require('puppeteer'); +var chai = require('chai'); +var expect = chai.expect; +const { enhanceBrowser } = require('./tests/utils/enhance-browser'); + +afterEach(function (done) { + if (this.currentTest.currentRetry() > 0) { + process.stdout.write("."); + setTimeout(done, 4000); + } else { + done(); + } +}); + +describe("Insights UI", function() { + // UI tests often require a longer timeout. + // So here we force it to a minimum of 30 seconds. 
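The insight tests in this lab all share one pattern: scrape `/metrics` from the mgmt server and match a line of the form `solo_io_insights{...<CODE>...} 1`. With a hand-written sample line (labels illustrative):

```shell
# Sample insight metric line as exposed on the mgmt server's metrics port.
metrics='solo_io_insights{cluster="cluster1",code="BP0002",severity="WARNING"} 1'
echo "$metrics" | grep -Eq 'solo_io_insights\{[^}]*BP0002[^}]*\} 1' && echo "insight BP0002 active"
```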
+ const currentTimeout = this.timeout(); + this.timeout(Math.max(currentTimeout, 30000)); + + let browser; + let insightsPage; + + // Use Mocha's 'before' hook to set up Puppeteer + beforeEach(async function() { + browser = await puppeteer.launch({ + headless: "new", + slowMo: 40, + ignoreHTTPSErrors: true, + args: ['--no-sandbox', '--disable-setuid-sandbox'], + }); + browser = enhanceBrowser(browser, this.currentTest.title); + let page = await browser.newPage(); + insightsPage = new InsightsPage(page); + }); + + // Use Mocha's 'after' hook to close Puppeteer + afterEach(async function() { + await browser.close(); + }); + + it("should displays BP0001 warning with text 'Globally scoped routing'", async () => { + await insightsPage.navigateTo(`http://${process.env.ENDPOINT_GLOO_MESH_UI}/insights`); + await insightsPage.selectClusters(['cluster1', 'cluster2']); + await insightsPage.selectInsightTypes([constants.InsightType.BP]); + const data = await insightsPage.getTableDataRows() + expect(data.some(item => item.includes("Globally scoped routing"))).to.be.true; + }); + + it("should have quick resource state filters", async () => { + await insightsPage.navigateTo(`http://${process.env.ENDPOINT_GLOO_MESH_UI}/insights`); + const healthy = await insightsPage.getHealthyResourcesCount(); + const warning = await insightsPage.getWarningResourcesCount(); + const error = await insightsPage.getErrorResourcesCount(); + expect(healthy).to.be.greaterThan(0); + expect(warning).to.be.greaterThan(0); + expect(error).to.be.a('number'); + }); +}); + +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/insights-intro/tests/insight-ui-BP0001.test.js.liquid" +timeout --signal=INT 5m mocha ./test.js --timeout 120000 --retries=20 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +cat <<'EOF' > ./test.js +var chai = require('chai'); +var expect = chai.expect; +const helpers = require('./tests/chai-exec'); + +describe("Insight 
generation", () => { + it("Insight BP0002 has been triggered in the source (MGMT)", () => { + helpers.getOutputForCommand({ command: `kubectl --context ${process.env.MGMT} -n gloo-mesh patch svc gloo-mesh-mgmt-server -p '{"spec":{"ports": [{"port": 9094,"name":"http-insights"}]}}'` }); + helpers.getOutputForCommand({ command: "kubectl -n gloo-mesh run debug --image=nginx:1.25.3 --context " + process.env.MGMT }); + command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh exec debug -- curl -s http://gloo-mesh-mgmt-server.gloo-mesh:9094/metrics" }).replaceAll("'", ""); + const regex = /solo_io_insights{.*BP0002.*} 1/; + const match = command.match(regex); + expect(match).to.not.be.null; + }); + + it("Insight BP0002 has been triggered in PROMETHEUS", () => { + helpers.getOutputForCommand({ command: `kubectl --context ${process.env.MGMT} -n gloo-mesh patch svc prometheus-server -p '{"spec":{"ports": [{"port": 9090,"name":"http-metrics"}]}}'` }); + command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh exec debug -- curl -s 'http://prometheus-server.gloo-mesh:9090/api/v1/query?query=solo_io_insights'" }).replaceAll("'", ""); + let result = JSON.parse(command); + let active = false; + result.data.result.forEach(item => { + if(item.metric.code == "BP0002" && item.value[1] > 0) { + active = true + } + }); + expect(active).to.be.true; + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/insights-intro/tests/insight-metrics.test.js.liquid" +timeout --signal=INT 5m mocha ./test.js --timeout 120000 --retries=20 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl apply --context ${MGMT} -f - < ./test.js +const helpersHttp = require('./tests/chai-http'); +const InsightsPage = require('./tests/pages/insights-page'); +const constants = require('./tests/pages/constants'); +const puppeteer = 
require('puppeteer'); +const { enhanceBrowser } = require('./tests/utils/enhance-browser'); +var chai = require('chai'); +var expect = chai.expect; + +afterEach(function (done) { + if (this.currentTest.currentRetry() > 0) { + process.stdout.write("."); + setTimeout(done, 4000); + } else { + done(); + } +}); + +describe("Insights UI", function() { + // UI tests often require a longer timeout. + // So here we force it to a minimum of 30 seconds. + const currentTimeout = this.timeout(); + this.timeout(Math.max(currentTimeout, 30000)); + + let browser; + let insightsPage; + + // Use Mocha's 'before' hook to set up Puppeteer + beforeEach(async function() { + browser = await puppeteer.launch({ + headless: "new", + slowMo: 40, + ignoreHTTPSErrors: true, + args: ['--no-sandbox', '--disable-setuid-sandbox'], + }); + browser = enhanceBrowser(browser, this.currentTest.title); + let page = await browser.newPage(); + await page.setViewport({ width: 1500, height: 1000 }); + insightsPage = new InsightsPage(page); + }); + + // Use Mocha's 'after' hook to close Puppeteer + afterEach(async function() { + await browser.close(); + }); + + it("should not display BP0002 in the UI", async () => { + await insightsPage.navigateTo(`http://${process.env.ENDPOINT_GLOO_MESH_UI}/insights`); + await insightsPage.selectClusters(['cluster1', 'cluster2']); + await insightsPage.selectInsightTypes([constants.InsightType.BP]); + const data = await insightsPage.getTableDataRows() + expect(data.some(item => item.includes("is not namespaced"))).to.be.false; + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/insights-intro/tests/insight-not-ui-BP0002.test.js.liquid" +timeout --signal=INT 5m mocha ./test.js --timeout 120000 --retries=20 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl apply --context ${CLUSTER1} -f - < ./test.js +const helpersHttp = require('./tests/chai-http'); +const InsightsPage = 
require('./tests/pages/insights-page'); +const constants = require('./tests/pages/constants'); +const puppeteer = require('puppeteer'); +const { enhanceBrowser } = require('./tests/utils/enhance-browser'); +var chai = require('chai'); +var expect = chai.expect; + +afterEach(function (done) { + if (this.currentTest.currentRetry() > 0) { + process.stdout.write("."); + setTimeout(done, 4000); + } else { + done(); + } +}); + +describe("Insights UI", function() { + // UI tests often require a longer timeout. + // So here we force it to a minimum of 30 seconds. + const currentTimeout = this.timeout(); + this.timeout(Math.max(currentTimeout, 30000)); + + let browser; + let insightsPage; + + // Use Mocha's 'before' hook to set up Puppeteer + beforeEach(async function() { + browser = await puppeteer.launch({ + headless: "new", + slowMo: 40, + ignoreHTTPSErrors: true, + args: ['--no-sandbox', '--disable-setuid-sandbox'], + }); + browser = enhanceBrowser(browser, this.currentTest.title); + let page = await browser.newPage(); + await page.setViewport({ width: 1500, height: 1000 }); + insightsPage = new InsightsPage(page); + }); + + // Use Mocha's 'after' hook to close Puppeteer + afterEach(async function() { + await browser.close(); + }); + + it("should not display BP0001 in the UI", async () => { + await insightsPage.navigateTo(`http://${process.env.ENDPOINT_GLOO_MESH_UI}/insights`); + await insightsPage.selectClusters(['cluster1', 'cluster2']); + await insightsPage.selectInsightTypes([constants.InsightType.BP]); + const data = await insightsPage.getTableDataRows() + expect(data.some(item => item.includes("is not namespaced"))).to.be.false; + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/insights-intro/tests/insight-not-ui-BP0001.test.js.liquid" +timeout --signal=INT 5m mocha ./test.js --timeout 120000 --retries=20 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl apply --context ${CLUSTER1} 
-f - < ./test.js +var chai = require('chai'); +var expect = chai.expect; +const helpers = require('./tests/chai-exec'); + +describe("Insight generation", () => { + it("Insight CFG0001 has been triggered in the source (MGMT)", () => { + helpers.getOutputForCommand({ command: `kubectl --context ${process.env.MGMT} -n gloo-mesh patch svc gloo-mesh-mgmt-server -p '{"spec":{"ports": [{"port": 9094,"name":"http-insights"}]}}'` }); + helpers.getOutputForCommand({ command: "kubectl -n gloo-mesh run debug --image=nginx: --context " + process.env.MGMT }); + command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh exec debug -- curl -s http://gloo-mesh-mgmt-server.gloo-mesh:9094/metrics" }).replaceAll("'", ""); + const regex = /solo_io_insights{.*CFG0001.*} 1/; + const match = command.match(regex); + expect(match).to.not.be.null; + }); + + it("Insight CFG0001 has been triggered in PROMETHEUS", () => { + helpers.getOutputForCommand({ command: `kubectl --context ${process.env.MGMT} -n gloo-mesh patch svc prometheus-server -p '{"spec":{"ports": [{"port": 9090,"name":"http-metrics"}]}}'` }); + command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh exec debug -- curl -s 'http://prometheus-server.gloo-mesh:9090/api/v1/query?query=solo_io_insights'" }).replaceAll("'", ""); + let result = JSON.parse(command); + let active = false; + result.data.result.forEach(item => { + if(item.metric.code == "CFG0001" && item.value[1] > 0) { + active = true + } + }); + expect(active).to.be.true; + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/insights-config/../insights-intro/tests/insight-metrics.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl apply --context ${CLUSTER1} -f - < ./test.js +var chai = require('chai'); +var expect 
= chai.expect; +const helpers = require('./tests/chai-exec'); + +describe("Insight generation", () => { + it("Insight CFG0001 has not been triggered in the source (MGMT)", () => { + helpers.getOutputForCommand({ command: `kubectl --context ${process.env.MGMT} -n gloo-mesh patch svc gloo-mesh-mgmt-server -p '{"spec":{"ports": [{"port": 9094,"name":"http-insights"}]}}'` }); + helpers.getOutputForCommand({ command: "kubectl -n gloo-mesh run debug --image=nginx: --context " + process.env.MGMT }); + command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh exec debug -- curl -s http://gloo-mesh-mgmt-server.gloo-mesh:9094/metrics" }).replaceAll("'", ""); + const regex = /solo_io_insights{.*CFG0001.*} 1/; + const match = command.match(regex); + expect(match).to.be.null; + }); + + it("Insight CFG0001 has not been triggered in PROMETHEUS", () => { + helpers.getOutputForCommand({ command: `kubectl --context ${process.env.MGMT} -n gloo-mesh patch svc prometheus-server -p '{"spec":{"ports": [{"port": 9090,"name":"http-metrics"}]}}'` }); + command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh exec debug -- curl -s 'http://prometheus-server.gloo-mesh:9090/api/v1/query?query=solo_io_insights'" }).replaceAll("'", ""); + let result = JSON.parse(command); + let active = false; + result.data.result.forEach(item => { + if(item.metric.code == "CFG0001" && item.value[1] > 0) { + active = true + } + }); + expect(active).to.be.false; + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/insights-config/../insights-intro/tests/insight-metrics.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context ${CLUSTER1} -n bookinfo-backends delete virtualservice reviews +kubectl --context ${CLUSTER1} -n bookinfo-backends delete 
destinationrule reviews +kubectl apply --context ${CLUSTER1} -f - < ./test.js +var chai = require('chai'); +var expect = chai.expect; +const helpers = require('./tests/chai-exec'); + +describe("Insight generation", () => { + it("Insight SEC0008 has been triggered in the source (MGMT)", () => { + helpers.getOutputForCommand({ command: `kubectl --context ${process.env.MGMT} -n gloo-mesh patch svc gloo-mesh-mgmt-server -p '{"spec":{"ports": [{"port": 9094,"name":"http-insights"}]}}'` }); + helpers.getOutputForCommand({ command: "kubectl -n gloo-mesh run debug --image=nginx: --context " + process.env.MGMT }); + command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh exec debug -- curl -s http://gloo-mesh-mgmt-server.gloo-mesh:9094/metrics" }).replaceAll("'", ""); + const regex = /solo_io_insights{.*SEC0008.*} 1/; + const match = command.match(regex); + expect(match).to.not.be.null; + }); + + it("Insight SEC0008 has been triggered in PROMETHEUS", () => { + helpers.getOutputForCommand({ command: `kubectl --context ${process.env.MGMT} -n gloo-mesh patch svc prometheus-server -p '{"spec":{"ports": [{"port": 9090,"name":"http-metrics"}]}}'` }); + command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh exec debug -- curl -s 'http://prometheus-server.gloo-mesh:9090/api/v1/query?query=solo_io_insights'" }).replaceAll("'", ""); + let result = JSON.parse(command); + let active = false; + result.data.result.forEach(item => { + if(item.metric.code == "SEC0008" && item.value[1] > 0) { + active = true + } + }); + expect(active).to.be.true; + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/insights-security/../insights-intro/tests/insight-metrics.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl apply --context 
${CLUSTER1} -f - < ./test.js +var chai = require('chai'); +var expect = chai.expect; +const helpers = require('./tests/chai-exec'); + +describe("Insight generation", () => { + it("Insight SEC0008 has not been triggered in the source (MGMT)", () => { + helpers.getOutputForCommand({ command: `kubectl --context ${process.env.MGMT} -n gloo-mesh patch svc gloo-mesh-mgmt-server -p '{"spec":{"ports": [{"port": 9094,"name":"http-insights"}]}}'` }); + helpers.getOutputForCommand({ command: "kubectl -n gloo-mesh run debug --image=nginx: --context " + process.env.MGMT }); + command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh exec debug -- curl -s http://gloo-mesh-mgmt-server.gloo-mesh:9094/metrics" }).replaceAll("'", ""); + const regex = /solo_io_insights{.*SEC0008.*} 1/; + const match = command.match(regex); + expect(match).to.be.null; + }); + + it("Insight SEC0008 has not been triggered in PROMETHEUS", () => { + helpers.getOutputForCommand({ command: `kubectl --context ${process.env.MGMT} -n gloo-mesh patch svc prometheus-server -p '{"spec":{"ports": [{"port": 9090,"name":"http-metrics"}]}}'` }); + command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh exec debug -- curl -s 'http://prometheus-server.gloo-mesh:9090/api/v1/query?query=solo_io_insights'" }).replaceAll("'", ""); + let result = JSON.parse(command); + let active = false; + result.data.result.forEach(item => { + if(item.metric.code == "SEC0008" && item.value[1] > 0) { + active = true + } + }); + expect(active).to.be.false; + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/insights-security/../insights-intro/tests/insight-metrics.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context ${CLUSTER1} -n bookinfo-backends delete 
authorizationpolicy reviews +kubectl --context ${CLUSTER1} -n istio-system delete peerauthentication default diff --git a/gloo-mesh/core/2-5/default/scripts/configure-domain-rewrite.sh b/gloo-mesh/core/2-5/default/scripts/configure-domain-rewrite.sh index be6dbd6d8b..d6e684c9da 100755 --- a/gloo-mesh/core/2-5/default/scripts/configure-domain-rewrite.sh +++ b/gloo-mesh/core/2-5/default/scripts/configure-domain-rewrite.sh @@ -90,4 +90,4 @@ done # If the loop exits, it means the check failed consistently for 1 minute echo "DNS rewrite rule verification failed." -exit 1 +exit 1 \ No newline at end of file diff --git a/gloo-mesh/core/2-5/default/scripts/register-domain.sh b/gloo-mesh/core/2-5/default/scripts/register-domain.sh index f9084487e8..1cb84cd86a 100755 --- a/gloo-mesh/core/2-5/default/scripts/register-domain.sh +++ b/gloo-mesh/core/2-5/default/scripts/register-domain.sh @@ -14,7 +14,9 @@ hosts_file="/etc/hosts" # Function to check if the input is a valid IP address is_ip() { if [[ $1 =~ ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$ ]]; then - return 0 # 0 = true + return 0 # 0 = true - valid IPv4 address + elif [[ $1 =~ ^[0-9a-f]+[:]+[0-9a-f]*[:]*[0-9a-f]*[:]*[0-9a-f]*[:]*[0-9a-f]*[:]*[0-9a-f]*[:]*[0-9]*$ ]]; then + return 0 # 0 = true - valid IPv6 address else return 1 # 1 = false fi @@ -38,14 +40,15 @@ else fi # Check if the entry already exists -if grep -q "$hostname" "$hosts_file"; then +if grep -q "$hostname\$" "$hosts_file"; then # Update the existing entry with the new IP tempfile=$(mktemp) - sed "s/^.*$hostname/$new_ip $hostname/" "$hosts_file" > "$tempfile" + sed "s/^.*$hostname\$/$new_ip $hostname/" "$hosts_file" > "$tempfile" sudo cp "$tempfile" "$hosts_file" + rm "$tempfile" echo "Updated $hostname in $hosts_file with new IP: $new_ip" else # Add a new entry if it doesn't exist echo "$new_ip $hostname" | sudo tee -a "$hosts_file" > /dev/null echo "Added $hostname to $hosts_file with IP: $new_ip" -fi \ No newline at end of file +fi diff --git 
a/gloo-mesh/core/2-5/default/tests/chai-exec.js b/gloo-mesh/core/2-5/default/tests/chai-exec.js index 67ba62f095..020262437f 100644 --- a/gloo-mesh/core/2-5/default/tests/chai-exec.js +++ b/gloo-mesh/core/2-5/default/tests/chai-exec.js @@ -139,7 +139,11 @@ global = { }, k8sObjectIsPresent: ({ context, namespace, k8sType, k8sObj }) => { - let command = "kubectl --context " + context + " -n " + namespace + " get " + k8sType + " " + k8sObj + " -o name"; + // covers both namespace scoped and cluster scoped objects + let command = "kubectl --context " + context + " get " + k8sType + " " + k8sObj + " -o name"; + if (namespace) { + command = "kubectl --context " + context + " -n " + namespace + " get " + k8sType + " " + k8sObj + " -o name"; + } debugLog(`Executing command: ${command}`); let cli = chaiExec(command); @@ -176,7 +180,6 @@ global = { debugLog(`Command output (stdout): ${cli.stdout}`); return cli.stdout; }, - curlInPod: ({ curlCommand, podName, namespace }) => { debugLog(`Executing curl command: ${curlCommand} on pod: ${podName} in namespace: ${namespace}`); const cli = chaiExec(curlCommand); diff --git a/gloo-mesh/core/2-5/default/tests/chai-http.js b/gloo-mesh/core/2-5/default/tests/chai-http.js index 67f43db003..92bf579690 100644 --- a/gloo-mesh/core/2-5/default/tests/chai-http.js +++ b/gloo-mesh/core/2-5/default/tests/chai-http.js @@ -25,7 +25,30 @@ global = { .send() .then(async function (res) { debugLog(`Response status code: ${res.status}`); - expect(res).to.have.status(retCode); + expect(res).to.have.property('status', retCode); + }); + }, + + checkURLWithIP: ({ ip, host, protocol = "http", path = "", headers = [], certFile = '', keyFile = '', retCode }) => { + debugLog(`Checking URL with IP: ${ip}, Host: ${host}, Path: ${path} with expected return code: ${retCode}`); + + let cert = certFile ? fs.readFileSync(certFile) : ''; + let key = keyFile ? 
fs.readFileSync(keyFile) : ''; + + let url = `${protocol}://${ip}`; + + // Use chai-http to make a request to the IP address, but set the Host header + let request = chai.request(url).head(path).redirects(0).cert(cert).key(key).set('Host', host); + + debugLog(`Setting headers: ${JSON.stringify(headers)}`); + headers.forEach(header => request.set(header.key, header.value)); + + return request + .send() + .then(async function (res) { + debugLog(`Response status code: ${res.status}`); + debugLog(`Response ${JSON.stringify(res)}`); + expect(res).to.have.property('status', retCode); }); }, @@ -124,7 +147,7 @@ global = { .send() .then(async function (res) { debugLog(`Response status code: ${res.status}`); - expect(res).to.have.status(retCode); + expect(res).to.have.property('status', retCode); }); } }; diff --git a/gloo-mesh/core/2-5/default/tests/proxies-changes.test.js.liquid b/gloo-mesh/core/2-5/default/tests/proxies-changes.test.js.liquid new file mode 100644 index 0000000000..1934ea13b6 --- /dev/null +++ b/gloo-mesh/core/2-5/default/tests/proxies-changes.test.js.liquid @@ -0,0 +1,58 @@ +{%- assign version_1_18_or_after = "1.18.0" | minimumGlooGatewayVersion %} +const { execSync } = require('child_process'); +const { expect } = require('chai'); +const { diff } = require('jest-diff'); + +function delay(ms) { + return new Promise(resolve => setTimeout(resolve, ms)); +} + +describe('Gloo snapshot stability test', function() { + let contextName = process.env.{{ context | default: "CLUSTER1" }}; + let delaySeconds = {{ delay | default: 5 }}; + + let firstSnapshot; + + it('should retrieve initial snapshot', function() { + const output = execSync( + `kubectl --context ${contextName} -n gloo-system exec deploy/gloo -- wget -O - localhost:{% if version_1_18_or_after %}9095{% else %}9091{% endif %}/snapshots/proxies -q`, + { encoding: 'utf8' } + ); + + try { + firstSnapshot = JSON.parse(output); + } catch (err) { + throw new Error('Failed to parse JSON output from initial 
snapshot: ' + err.message); + } + expect(firstSnapshot).to.be.an('object'); + }); + + it('should not change after the given delay', async function() { + await delay(delaySeconds * 1000); + + let secondSnapshot; + try { + const output2 = execSync( + `kubectl --context ${contextName} -n gloo-system exec deploy/gloo -- wget -O - localhost:{% if version_1_18_or_after %}9095{% else %}9091{% endif %}/snapshots/proxies -q`, + { encoding: 'utf8' } + ); + secondSnapshot = JSON.parse(output2); + } catch (err) { + throw new Error('Failed to retrieve or parse the second snapshot: ' + err.message); + } + + const firstJson = JSON.stringify(firstSnapshot, null, 2); + const secondJson = JSON.stringify(secondSnapshot, null, 2); + + // Show only 2 lines of context around each change + const diffOutput = diff(firstJson, secondJson, { contextLines: 2, expand: false }); + + if (! diffOutput.includes("Compared values have no visual difference.")) { + console.error('Differences found between snapshots:\n' + diffOutput); + throw new Error('Snapshots differ after the delay.'); + } else { + console.log('No differences found. The snapshots are stable.'); + } + }); +}); + diff --git a/gloo-mesh/core/2-6/ambient-interoperability/README.md b/gloo-mesh/core/2-6/ambient-interoperability/README.md index 84a621c817..ba8096bfd0 100644 --- a/gloo-mesh/core/2-6/ambient-interoperability/README.md +++ b/gloo-mesh/core/2-6/ambient-interoperability/README.md @@ -9,13 +9,13 @@ source ./scripts/assert.sh -#
Gloo Mesh Core (2.6.6) Ambient Interoperability
+#
Gloo Mesh Core (2.6.7) Ambient Interoperability
## Table of Contents * [Introduction](#introduction) -* [Lab 1 - Deploy a KinD cluster](#lab-1---deploy-a-kind-cluster-) +* [Lab 1 - Deploy KinD Cluster(s)](#lab-1---deploy-kind-clusters-) * [Lab 2 - Deploy and register Gloo Mesh](#lab-2---deploy-and-register-gloo-mesh-) * [Lab 3 - Deploy Istio using Helm](#lab-3---deploy-istio-using-helm-) * [Lab 4 - Deploy the Bookinfo demo app](#lab-4---deploy-the-bookinfo-demo-app-) @@ -72,7 +72,7 @@ You can find more information about Gloo Mesh Core in the official documentation -## Lab 1 - Deploy a KinD cluster +## Lab 1 - Deploy KinD Cluster(s) Clone this repository and go to the directory where this `README.md` file is. @@ -84,12 +84,11 @@ export MGMT=cluster1 export CLUSTER1=cluster1 ``` -Run the following commands to deploy a Kubernetes cluster using [Kind](https://kind.sigs.k8s.io/): +Deploy the KinD clusters: ```bash -./scripts/deploy-multi-with-calico.sh 1 cluster1 us-west us-west-1 +bash ./data/steps/deploy-kind-clusters/deploy-cluster1.sh ``` - Then run the following commands to wait for all the Pods to be ready: ```bash @@ -98,38 +97,20 @@ Then run the following commands to wait for all the Pods to be ready: **Note:** If you run the `check.sh` script immediately after the `deploy.sh` script, you may see a jsonpath error. If that happens, simply wait a few seconds and try again. 
-Once the `check.sh` script completes, when you execute the `kubectl get pods -A` command, you should see the following: - -``` -NAMESPACE NAME READY STATUS RESTARTS AGE -kube-system calico-kube-controllers-59d85c5c84-sbk4k 1/1 Running 0 4h26m -kube-system calico-node-przxs 1/1 Running 0 4h26m -kube-system coredns-6955765f44-ln8f5 1/1 Running 0 4h26m -kube-system coredns-6955765f44-s7xxx 1/1 Running 0 4h26m -kube-system etcd-cluster1-control-plane 1/1 Running 0 4h27m -kube-system kube-apiserver-cluster1-control-plane 1/1 Running 0 4h27m -kube-system kube-controller-manager-cluster1-control-plane1/1 Running 0 4h27m -kube-system kube-proxy-ksvzw 1/1 Running 0 4h26m -kube-system kube-scheduler-cluster1-control-plane 1/1 Running 0 4h27m -local-path-storage local-path-provisioner-58f6947c7-lfmdx 1/1 Running 0 4h26m -metallb-system controller-5c9894b5cd-cn9x2 1/1 Running 0 4h26m -metallb-system speaker-d7jkp 1/1 Running 0 4h26m -``` - -**Note:** The CNI pods might be different, depending on which CNI you have deployed. - +Once the `check.sh` script completes, execute the `kubectl get pods -A` command, and verify that all pods are in a running state. 
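The "all pods are in a running state" condition that `check.sh` waits for can be expressed as a small predicate over `kubectl get pods -A -o json` output. A minimal sketch in the style of this repo's Node test helpers — `allPodsRunning` is a hypothetical name, the payload is mocked, and this is not how `check.sh` is actually implemented:

```javascript
// Hypothetical predicate over `kubectl get pods -A -o json` output:
// every pod must be Running (or Succeeded, for completed jobs).
function allPodsRunning(podListJson) {
  const pods = JSON.parse(podListJson).items;
  return pods.every(pod =>
    pod.status.phase === "Running" || pod.status.phase === "Succeeded");
}

// Mocked payload standing in for real kubectl output:
const mock = JSON.stringify({
  items: [
    { metadata: { name: "coredns" }, status: { phase: "Running" } },
    { metadata: { name: "helm-job" }, status: { phase: "Succeeded" } },
  ],
});

console.log(allPodsRunning(mock)); // true
```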
@@ -143,7 +124,7 @@ timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || Before we get started, let's install the `meshctl` CLI: ```bash -export GLOO_MESH_VERSION=v2.6.6 +export GLOO_MESH_VERSION=v2.6.7 curl -sL https://run.solo.io/meshctl/install | sh - export PATH=$HOME/.gloo-mesh/bin:$PATH ``` @@ -175,6 +156,7 @@ EOF echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-and-register-gloo-mesh/tests/environment-variables.test.js.liquid" timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } --> + Run the following commands to deploy the Gloo Mesh management plane: ```bash @@ -185,13 +167,13 @@ helm upgrade --install gloo-platform-crds gloo-platform-crds \ --namespace gloo-mesh \ --kube-context ${MGMT} \ --set featureGates.insightsConfiguration=true \ - --version 2.6.6 + --version 2.6.7 helm upgrade --install gloo-platform-mgmt gloo-platform \ --repo https://storage.googleapis.com/gloo-platform/helm-charts \ --namespace gloo-mesh \ --kube-context ${MGMT} \ - --version 2.6.6 \ + --version 2.6.7 \ -f -< ./test.js var chai = require('chai'); @@ -255,6 +241,7 @@ timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || + ## Lab 3 - Deploy Istio using Helm @@ -535,8 +522,6 @@ timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || - - ## Lab 4 - Deploy the Bookinfo demo app [VIDEO LINK](https://youtu.be/nzYcrjalY5A "Video Link") @@ -1129,7 +1114,7 @@ describe("gateway API", function() { }); EOF echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/ambient/l7-authz-interoperability/tests/is-waypoint-created.test.js.liquid" -timeout --signal=INT 3m mocha ./test.js --timeout 1000 --retries=60 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +timeout --signal=INT 3m mocha ./test.js --timeout 60000 --retries=60 --bail || { DEBUG_MODE=true mocha ./test.js 
--timeout 120000; exit 1; } --> + Run the following commands to deploy the Gloo Mesh management plane: ```bash @@ -198,13 +168,13 @@ helm upgrade --install gloo-platform-crds gloo-platform-crds \ --namespace gloo-mesh \ --kube-context ${MGMT} \ --set featureGates.insightsConfiguration=true \ - --version 2.6.6 + --version 2.6.7 helm upgrade --install gloo-platform-mgmt gloo-platform \ --repo https://storage.googleapis.com/gloo-platform/helm-charts \ --namespace gloo-mesh \ --kube-context ${MGMT} \ - --version 2.6.6 \ + --version 2.6.7 \ -f -< Create intermediate CAs in both clusters and the Root CA. @@ -551,7 +522,7 @@ describe("istio_version is at least 1.23.0", () => { it("version should be at least 1.23.0", () => { // Compare the string istio_version to the number 1.23.0 // example 1.23.0-patch0 is valid, but 1.22.6 is not - let version = "1.24-alpha.a2295ca05a358e7c8e9edbbd3f500c8b4eb11864"; + let version = "1.23.1"; let versionParts = version.split('-')[0].split('.'); let major = parseInt(versionParts[0]); let minor = parseInt(versionParts[1]); @@ -745,42 +716,30 @@ EOF ``` Let's deploy Istio using Helm in cluster1. We'll install the base Istio components, the Istiod control plane, the Istio CNI, the ztunnel, and the ingress/eastwest gateways. 
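The `istio_version is at least 1.23.0` test above strips any `-suffix` (so `1.23.1-solo` is treated as `1.23.1`) before comparing the numeric parts. A standalone sketch of that comparison — `atLeast` is a hypothetical helper name; the split/parseInt logic mirrors the test:

```javascript
// Strip any "-suffix" (e.g. "1.23.1-solo" -> "1.23.1"), then compare
// major/minor/patch numerically against a required minimum version.
function atLeast(version, major, minor, patch) {
  const [maj, min, pat = "0"] = version.split("-")[0].split(".");
  const v = [parseInt(maj, 10), parseInt(min, 10), parseInt(pat, 10)];
  const want = [major, minor, patch];
  for (let i = 0; i < 3; i++) {
    if (v[i] > want[i]) return true;
    if (v[i] < want[i]) return false;
  }
  return true; // exactly the minimum version
}

console.log(atLeast("1.23.1-solo", 1, 23, 0)); // true
console.log(atLeast("1.22.6", 1, 23, 0));      // false
```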
-For private registries, let's first load the images into kind: -```bash -KIND_NAME=$(kubectl config get-contexts ${CLUSTER1} | grep ${CLUSTER1} | awk '{printf $3}' | cut -d'-' -f2) - -for image in pilot install-cni ztunnel proxyv2; do - docker pull "us-docker.pkg.dev/istio-enterprise-private/internal-istio-builds/${image}:1.24-alpha.a2295ca05a358e7c8e9edbbd3f500c8b4eb11864" - docker pull "us-docker.pkg.dev/istio-enterprise-private/internal-istio-builds/${image}:1.24-alpha.a2295ca05a358e7c8e9edbbd3f500c8b4eb11864-distroless" - kind load docker-image --name "$KIND_NAME" "us-docker.pkg.dev/istio-enterprise-private/internal-istio-builds/${image}:1.24-alpha.a2295ca05a358e7c8e9edbbd3f500c8b4eb11864" - kind load docker-image --name "$KIND_NAME" "us-docker.pkg.dev/istio-enterprise-private/internal-istio-builds/${image}:1.24-alpha.a2295ca05a358e7c8e9edbbd3f500c8b4eb11864-distroless" -done -``` - ```bash -helm upgrade --install istio-base oci://us-docker.pkg.dev/istio-enterprise-private/internal-istio-helm/base \ +helm upgrade --install istio-base oci://us-docker.pkg.dev/gloo-mesh/istio-helm-/base \ --namespace istio-system \ --kube-context=${CLUSTER1} \ ---version 1.24-alpha.a2295ca05a358e7c8e9edbbd3f500c8b4eb11864 \ +--version 1.23.1-solo \ --create-namespace \ -f - </istiod \ --namespace istio-system \ --kube-context=${CLUSTER1} \ ---version 1.24-alpha.a2295ca05a358e7c8e9edbbd3f500c8b4eb11864 \ +--version 1.23.1-solo \ --create-namespace \ -f - < proxy: clusterDomain: cluster.local - tag: 1.24-alpha.a2295ca05a358e7c8e9edbbd3f500c8b4eb11864 + tag: 1.23.1-solo multiCluster: clusterName: cluster1 profile: ambient @@ -806,15 +765,15 @@ pilot: enabled: true EOF -helm upgrade --install istio-cni oci://us-docker.pkg.dev/istio-enterprise-private/internal-istio-helm/cni \ +helm upgrade --install istio-cni oci://us-docker.pkg.dev/gloo-mesh/istio-helm-/cni \ --namespace kube-system \ --kube-context=${CLUSTER1} \ ---version 1.24-alpha.a2295ca05a358e7c8e9edbbd3f500c8b4eb11864 \ 
+--version 1.23.1-solo \ --create-namespace \ -f - < + proxy: 1.23.1-solo profile: ambient cni: ambient: @@ -824,10 +783,10 @@ cni: - kube-system EOF -helm upgrade --install ztunnel oci://us-docker.pkg.dev/istio-enterprise-private/internal-istio-helm/ztunnel \ +helm upgrade --install ztunnel oci://us-docker.pkg.dev/gloo-mesh/istio-helm-/ztunnel \ --namespace istio-system \ --kube-context=${CLUSTER1} \ ---version 1.24-alpha.a2295ca05a358e7c8e9edbbd3f500c8b4eb11864 \ +--version 1.23.1-solo \ --create-namespace \ -f - < istioNamespace: istio-system multiCluster: clusterName: cluster1 @@ -843,15 +802,15 @@ namespace: istio-system profile: ambient proxy: clusterDomain: cluster.local -tag: 1.24-alpha.a2295ca05a358e7c8e9edbbd3f500c8b4eb11864 +tag: 1.23.1-solo terminationGracePeriodSeconds: 29 variant: distroless EOF -helm upgrade --install istio-ingressgateway- oci://us-docker.pkg.dev/istio-enterprise-private/internal-istio-helm/gateway \ +helm upgrade --install istio-ingressgateway-1-23 oci://us-docker.pkg.dev/gloo-mesh/istio-helm-/gateway \ --namespace istio-gateways \ --kube-context=${CLUSTER1} \ ---version 1.24-alpha.a2295ca05a358e7c8e9edbbd3f500c8b4eb11864 \ +--version 1.23.1-solo \ --create-namespace \ -f - </gateway \ --namespace istio-gateways \ --kube-context=${CLUSTER1} \ ---version 1.24-alpha.a2295ca05a358e7c8e9edbbd3f500c8b4eb11864 \ +--version 1.23.1-solo \ --create-namespace \ -f - < /dev ``` Let's deploy Istio using Helm in cluster2. We'll install the base Istio components, the Istiod control plane, the Istio CNI, the ztunnel, and the ingress/eastwest gateways. 
-For private registries, let's first load the images into kind: -```bash -KIND_NAME=$(kubectl config get-contexts ${CLUSTER2} | grep ${CLUSTER2} | awk '{printf $3}' | cut -d'-' -f2) - -for image in pilot install-cni ztunnel proxyv2; do - docker pull "us-docker.pkg.dev/istio-enterprise-private/internal-istio-builds/${image}:1.24-alpha.a2295ca05a358e7c8e9edbbd3f500c8b4eb11864" - docker pull "us-docker.pkg.dev/istio-enterprise-private/internal-istio-builds/${image}:1.24-alpha.a2295ca05a358e7c8e9edbbd3f500c8b4eb11864-distroless" - kind load docker-image --name "$KIND_NAME" "us-docker.pkg.dev/istio-enterprise-private/internal-istio-builds/${image}:1.24-alpha.a2295ca05a358e7c8e9edbbd3f500c8b4eb11864" - kind load docker-image --name "$KIND_NAME" "us-docker.pkg.dev/istio-enterprise-private/internal-istio-builds/${image}:1.24-alpha.a2295ca05a358e7c8e9edbbd3f500c8b4eb11864-distroless" -done -``` - ```bash -helm upgrade --install istio-base oci://us-docker.pkg.dev/istio-enterprise-private/internal-istio-helm/base \ +helm upgrade --install istio-base oci://us-docker.pkg.dev/gloo-mesh/istio-helm-/base \ --namespace istio-system \ --kube-context=${CLUSTER2} \ ---version 1.24-alpha.a2295ca05a358e7c8e9edbbd3f500c8b4eb11864 \ +--version 1.23.1-solo \ --create-namespace \ -f - </istiod \ --namespace istio-system \ --kube-context=${CLUSTER2} \ ---version 1.24-alpha.a2295ca05a358e7c8e9edbbd3f500c8b4eb11864 \ +--version 1.23.1-solo \ --create-namespace \ -f - < proxy: clusterDomain: cluster.local - tag: 1.24-alpha.a2295ca05a358e7c8e9edbbd3f500c8b4eb11864 + tag: 1.23.1-solo multiCluster: clusterName: cluster2 profile: ambient @@ -956,15 +903,15 @@ pilot: enabled: true EOF -helm upgrade --install istio-cni oci://us-docker.pkg.dev/istio-enterprise-private/internal-istio-helm/cni \ +helm upgrade --install istio-cni oci://us-docker.pkg.dev/gloo-mesh/istio-helm-/cni \ --namespace kube-system \ --kube-context=${CLUSTER2} \ ---version 1.24-alpha.a2295ca05a358e7c8e9edbbd3f500c8b4eb11864 \ 
+--version 1.23.1-solo \ --create-namespace \ -f - < + proxy: 1.23.1-solo profile: ambient cni: ambient: @@ -974,10 +921,10 @@ cni: - kube-system EOF -helm upgrade --install ztunnel oci://us-docker.pkg.dev/istio-enterprise-private/internal-istio-helm/ztunnel \ +helm upgrade --install ztunnel oci://us-docker.pkg.dev/gloo-mesh/istio-helm-/ztunnel \ --namespace istio-system \ --kube-context=${CLUSTER2} \ ---version 1.24-alpha.a2295ca05a358e7c8e9edbbd3f500c8b4eb11864 \ +--version 1.23.1-solo \ --create-namespace \ -f - < istioNamespace: istio-system multiCluster: clusterName: cluster2 @@ -993,15 +940,15 @@ namespace: istio-system profile: ambient proxy: clusterDomain: cluster.local -tag: 1.24-alpha.a2295ca05a358e7c8e9edbbd3f500c8b4eb11864 +tag: 1.23.1-solo terminationGracePeriodSeconds: 29 variant: distroless EOF -helm upgrade --install istio-ingressgateway- oci://us-docker.pkg.dev/istio-enterprise-private/internal-istio-helm/gateway \ +helm upgrade --install istio-ingressgateway-1-23 oci://us-docker.pkg.dev/gloo-mesh/istio-helm-/gateway \ --namespace istio-gateways \ --kube-context=${CLUSTER2} \ ---version 1.24-alpha.a2295ca05a358e7c8e9edbbd3f500c8b4eb11864 \ +--version 1.23.1-solo \ --create-namespace \ -f - </gateway \ --namespace istio-gateways \ --kube-context=${CLUSTER2} \ ---version 1.24-alpha.a2295ca05a358e7c8e9edbbd3f500c8b4eb11864 \ +--version 1.23.1-solo \ --create-namespace \ -f - < [VIDEO LINK](https://youtu.be/w1xB-o_gHs0 "Video Link") @@ -1170,6 +1115,12 @@ kubectl --context ${CLUSTER1} create ns httpbin kubectl --context ${CLUSTER1} label namespace httpbin istio.io/dataplane-mode=ambient kubectl apply --context ${CLUSTER1} -f - < [!IMPORTANT] -> Limitations: +> Limitation: > -> * Workloads have to use the default service account. > * Multi-Network traffic is currently not supported by Istio Gateways, Sidecars, and Waypoints. 
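The `checkURLWithIP` helper added to `tests/chai-http.js` earlier in this diff relies on a common trick for testing gateways: connect to the LoadBalancer IP directly while presenting the expected virtual host via the `Host` header, so routing rules still match. A minimal sketch of just the request construction — the IP and hostname below are illustrative, not values from the workshop:

```javascript
// Build the request a Host-header override check would send:
// the TCP connection targets the raw IP, while the HTTP-level
// Host header carries the gateway's virtual hostname.
function buildHostOverrideRequest({ ip, host, protocol = "http", path = "/" }) {
  return {
    url: `${protocol}://${ip}${path}`, // connection target: the raw IP
    headers: { Host: host },           // virtual host used for routing
  };
}

const req = buildHostOverrideRequest({
  ip: "172.18.1.1",                       // illustrative LoadBalancer IP
  host: "cluster1-bookinfo.example.com",  // illustrative gateway hostname
  path: "/productpage",
});

console.log(req.url);          // http://172.18.1.1/productpage
console.log(req.headers.Host); // cluster1-bookinfo.example.com
```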
Next, let's send some traffic across the clusters: diff --git a/gloo-mesh/core/2-6/ambient-multi-cluster/data/steps/deploy-kind-clusters/deploy-cluster1.sh b/gloo-mesh/core/2-6/ambient-multi-cluster/data/steps/deploy-kind-clusters/deploy-cluster1.sh new file mode 100644 index 0000000000..1c6e42eb5e --- /dev/null +++ b/gloo-mesh/core/2-6/ambient-multi-cluster/data/steps/deploy-kind-clusters/deploy-cluster1.sh @@ -0,0 +1,292 @@ +#!/usr/bin/env bash +set -o errexit + +number="1" +name="cluster1" +region="" +zone="" +twodigits=$(printf "%02d\n" $number) + +kindest_node=${KINDEST_NODE} + +if [ -z "$kindest_node" ]; then + export k8s_version="1.28.0" + + [[ ${k8s_version::1} != 'v' ]] && export k8s_version=v${k8s_version} + kindest_node_ver=$(curl --silent "https://registry.hub.docker.com/v2/repositories/kindest/node/tags?page_size=100" \ + | jq -r '.results | .[] | select(.name==env.k8s_version) | .name+"@"+.digest') + + if [ -z "$kindest_node_ver" ]; then + echo "Incorrect Kubernetes version provided: ${k8s_version}." + exit 1 + fi + kindest_node=kindest/node:${kindest_node_ver} +fi +echo "Using KinD image: ${kindest_node}" + +if [ -z "$3" ]; then + case $name in + cluster1) + region=us-west-1 + ;; + cluster2) + region=us-west-2 + ;; + *) + region=us-east-1 + ;; + esac +fi + +if [ -z "$4" ]; then + case $name in + cluster1) + zone=us-west-1a + ;; + cluster2) + zone=us-west-2a + ;; + *) + zone=us-east-1a + ;; + esac +fi + +if hostname -I 2>/dev/null; then + myip=$(hostname -I | awk '{ print $1 }') +else + myip=$(ipconfig getifaddr en0) +fi + +# Function to determine the next available cluster number +get_next_cluster_number() { + if ! 
kind get clusters 2>&1 | grep "^kind" > /dev/null; then + echo 1 + else + highest_num=$(kind get clusters | grep "^kind" | tail -1 | cut -c 5-) + echo $((highest_num + 1)) + fi +} + +if [ -f /.dockerenv ]; then +myip=$HOST_IP +container=$(docker inspect $(docker ps -q) | jq -r ".[] | select(.Config.Hostname == \"$HOSTNAME\") | .Name" | cut -d/ -f2) +docker network connect "kind" $container || true +number=$(get_next_cluster_number) +twodigits=$(printf "%02d\n" $number) +fi + +reg_name='kind-registry' +reg_port='5000' +docker start "${reg_name}" 2>/dev/null || \ +docker run -d --restart=always -p "0.0.0.0:${reg_port}:5000" --name "${reg_name}" registry:2 + +cache_port='5000' +cat > registries < ${HOME}/.${cache_name}-config.yml </dev/null || \ +docker run -d --restart=always ${DEPLOY_EXTRA_PARAMS} -v ${HOME}/.${cache_name}-config.yml:/etc/docker/registry/config.yml --name "${cache_name}" registry:2 +done +mkdir -p /tmp/oidc + +cat <<'EOF' >/tmp/oidc/sa-signer-pkcs8.pub +-----BEGIN PUBLIC KEY----- +MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA53YiBcrn7+ZK0Vb4odeA +1riYdvEb8To4H6/HtF+OKzuCIXFQ+bRy7yMrDGITYpfYPrTZOgfdeTLZqOiAj+cL +395nvxdly83SUrdh7ItfOPRluuuiPHnFn111wpyjBw5nut4Kx+M5MksNfA1hU0Zw +zIM9OviX8iEF8xHWUtz4BAMDG8N6+zpLo0pAzaei5hKuLZ9dZOzHBC8VOW82cQMm +5X5uOKsCHMtNSjqYUNB1DxN6xxM+odGWT/6xthPGk6YCxmO28YHPFZfiS2eAIpD8 +2p/16KQKU6TkZSrldkYxiHIPhu+5f9faZJG7dB9pLN1SfdTBio4PK5Mz9muLUCv9 +ywIDAQAB +-----END PUBLIC KEY----- +EOF + +cat <<'EOF' >/tmp/oidc/sa-signer.key +-----BEGIN RSA PRIVATE KEY----- +MIIEpAIBAAKCAQEA53YiBcrn7+ZK0Vb4odeA1riYdvEb8To4H6/HtF+OKzuCIXFQ ++bRy7yMrDGITYpfYPrTZOgfdeTLZqOiAj+cL395nvxdly83SUrdh7ItfOPRluuui +PHnFn111wpyjBw5nut4Kx+M5MksNfA1hU0ZwzIM9OviX8iEF8xHWUtz4BAMDG8N6 ++zpLo0pAzaei5hKuLZ9dZOzHBC8VOW82cQMm5X5uOKsCHMtNSjqYUNB1DxN6xxM+ +odGWT/6xthPGk6YCxmO28YHPFZfiS2eAIpD82p/16KQKU6TkZSrldkYxiHIPhu+5 +f9faZJG7dB9pLN1SfdTBio4PK5Mz9muLUCv9ywIDAQABAoIBAB8tro+RMYUDRHjG +el9ypAxIeWEsQVNRQFYkW4ZUiNYSAgl3Ni0svX6xAg989peFVL+9pLVIcfDthJxY 
+FVlNCjBxyQ/YmwHFC9vQkARJEd6eLUXsj8INtS0ubbp1VxCQRDDL0C/0z7OSoJJh +SwboqjEiTJExA2a+RArmEDTBRzdi3t+kT8G23JcqOivrITt17K6bQYyJXw7/vUdc +r/R+hfd5TqVq92VddzDT7RNJAxsbPPXjGnESlq1GALBDs+uBGYsP0fiEJb2nicSv +z9fBnBeERhut1gcE0C0iLRQZb+3r8TitBtxrZv+0BHgXrkKtXDwWTqGEKOwC4dBn +7nxkH2ECgYEA6+/DOTABGYOWOQftFkJMjcugzDrjoGpuXuVOTb65T+3FHAzU93zy +3bt3wQxrlugluyy9Sc/PL3ck2LgUsPHZ+s7zsdGvvGALBD6bOSSKATz9JgjwifO8 +PgqUz1kXRwez2CtKLOOCFFtcIzEdWIzsa1ubNqLzgN7rD+XBkUc2uEcCgYEA+yTy +72EDMQVoIZOygytHsDNdy0iS2RsBbdurT27wkYuFpFUVWdbNSL+8haE+wJHseHcw +BD4WIMpU+hnS4p4OO8+6V7PiXOS5E/se91EJigZAoixgDUiC8ihojWgK9PYEavUo +hULWbayO59SxYWeUI4Ze0GP8Jw8vdB86ib4ulF0CgYEAgyzRuLjk05+iZODwQyDn +WSquov3W0rh51s7cw0LX2wWSQm8r9NGGYhs5kJ5sLwGxAKj2MNSWF4jBdrCZ6Gr+ +y4BGY0X209/+IAUC3jlfdSLIiF4OBlT6AvB1HfclhvtUVUp0OhLfnpvQ1UwYScRI +KcRLvovIoIzP2g3emfwjAz8CgYEAxUHhOhm1mwRHJNBQTuxok0HVMrze8n1eov39 +0RcvBvJSVp+pdHXdqX1HwqHCmxhCZuAeq8ZkNP8WvZYY6HwCbAIdt5MHgbT4lXQR +f2l8F5gPnhFCpExG5ZLNg/urV3oAQE4stHap21zEpdyOMhZb6Yc5424U+EzaFdgN +b3EcPtUCgYAkKvUlSnBbgiJz1iaN6fuTqH0efavuFGMhjNmG7GtpNXdgyl1OWIuc +Yu+tZtHXtKYf3B99GwPrFzw/7yfDwae5YeWmi2/pFTH96wv3brJBqkAWY8G5Rsmd +qF50p34vIFqUBniNRwSArx8t2dq/CuAMgLAtSjh70Q6ZAnCF85PD8Q== +-----END RSA PRIVATE KEY----- +EOF + +echo Contents of kind${number}.yaml +cat << EOF | tee kind${number}.yaml +kind: Cluster +apiVersion: kind.x-k8s.io/v1alpha4 +nodes: +- role: control-plane + image: ${kindest_node} + extraPortMappings: + - containerPort: 6443 + hostPort: 70${twodigits} + extraMounts: + - containerPath: /etc/kubernetes/oidc + hostPath: /tmp/oidc + labels: + ingress-ready: true + topology.kubernetes.io/region: ${region} + topology.kubernetes.io/zone: ${zone} +networking: + disableDefaultCNI: true + serviceSubnet: "10.$(echo $twodigits | sed 's/^0*//').0.0/16" + podSubnet: "10.1${twodigits}.0.0/16" +kubeadmConfigPatches: +- | + kind: ClusterConfiguration + apiServer: + extraArgs: + service-account-key-file: /etc/kubernetes/pki/sa.pub + service-account-key-file: 
/etc/kubernetes/oidc/sa-signer-pkcs8.pub + service-account-signing-key-file: /etc/kubernetes/oidc/sa-signer.key + service-account-issuer: https://solo-workshop-oidc.s3.us-east-1.amazonaws.com + api-audiences: sts.amazonaws.com + extraVolumes: + - name: oidc + hostPath: /etc/kubernetes/oidc + mountPath: /etc/kubernetes/oidc + readOnly: true + metadata: + name: config +containerdConfigPatches: +- |- + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:${reg_port}"] + endpoint = ["http://${reg_name}:${reg_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"] + endpoint = ["http://docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-docker.pkg.dev"] + endpoint = ["http://us-docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-central1-docker.pkg.dev"] + endpoint = ["http://us-central1-docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."quay.io"] + endpoint = ["http://quay:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"] + endpoint = ["http://gcr:${cache_port}"] +EOF +echo ----------------------------------------------------- + +kind create cluster --name kind${number} --config kind${number}.yaml +ipkind=$(docker inspect kind${number}-control-plane | jq -r '.[0].NetworkSettings.Networks[].IPAddress') +networkkind=$(echo ${ipkind} | awk -F. 
'{ print $1"."$2 }') +kubectl config set-cluster kind-kind${number} --server=https://${myip}:70${twodigits} --insecure-skip-tls-verify=true + +# Preload images +cat << EOF >> images.txt +quay.io/metallb/controller:v0.13.12 +quay.io/metallb/speaker:v0.13.12 +EOF +cat images.txt | while read image; do + docker pull $image || true + kind load docker-image $image --name kind${number} || true +done + +docker network connect "kind" "${reg_name}" || true +docker network connect "kind" docker || true +docker network connect "kind" us-docker || true +docker network connect "kind" us-central1-docker || true +docker network connect "kind" quay || true +docker network connect "kind" gcr || true +# Calico for ipv4 +curl -sL https://raw.githubusercontent.com/projectcalico/calico/v3.28.1/manifests/calico.yaml | sed 's/250m/50m/g' | kubectl --context kind-kind${number} apply -f - + +for i in 1 2 3 4 5; do kubectl --context=kind-kind${number} apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml && break || sleep 15; done +kubectl --context=kind-kind${number} create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)" +kubectl --context=kind-kind${number} -n metallb-system rollout status deploy controller || true + +cat << EOF | tee metallb${number}.yaml +apiVersion: metallb.io/v1beta1 +kind: IPAddressPool +metadata: + name: first-pool + namespace: metallb-system +spec: + addresses: + - ${networkkind}.1${twodigits}.1-${networkkind}.1${twodigits}.254 +--- +apiVersion: metallb.io/v1beta1 +kind: L2Advertisement +metadata: + name: empty + namespace: metallb-system +EOF + +printf "Create IPAddressPool in kind-kind${number}\n" +for i in {1..10}; do +kubectl --context=kind-kind${number} apply -f metallb${number}.yaml && break +sleep 2 +done + +# connect the registry to the cluster network if not already connected +printf "Renaming context kind-kind${number} to ${name}\n" +for i in {1..100}; do 
+ (kubectl config get-contexts -oname | grep ${name}) && break + kubectl config rename-context kind-kind${number} ${name} && break + printf " $i"/100 + sleep 2 + [ $i -lt 100 ] || exit 1 +done + +# Document the local registry +# https://github.com/kubernetes/enhancements/tree/master/keps/sig-cluster-lifecycle/generic/1755-communicating-a-local-registry +cat </dev/null; then + myip=$(hostname -I | awk '{ print $1 }') +else + myip=$(ipconfig getifaddr en0) +fi + +# Function to determine the next available cluster number +get_next_cluster_number() { + if ! kind get clusters 2>&1 | grep "^kind" > /dev/null; then + echo 1 + else + highest_num=$(kind get clusters | grep "^kind" | tail -1 | cut -c 5-) + echo $((highest_num + 1)) + fi +} + +if [ -f /.dockerenv ]; then +myip=$HOST_IP +container=$(docker inspect $(docker ps -q) | jq -r ".[] | select(.Config.Hostname == \"$HOSTNAME\") | .Name" | cut -d/ -f2) +docker network connect "kind" $container || true +number=$(get_next_cluster_number) +twodigits=$(printf "%02d\n" $number) +fi + +reg_name='kind-registry' +reg_port='5000' +docker start "${reg_name}" 2>/dev/null || \ +docker run -d --restart=always -p "0.0.0.0:${reg_port}:5000" --name "${reg_name}" registry:2 + +cache_port='5000' +cat > registries < ${HOME}/.${cache_name}-config.yml </dev/null || \ +docker run -d --restart=always ${DEPLOY_EXTRA_PARAMS} -v ${HOME}/.${cache_name}-config.yml:/etc/docker/registry/config.yml --name "${cache_name}" registry:2 +done +mkdir -p /tmp/oidc + +cat <<'EOF' >/tmp/oidc/sa-signer-pkcs8.pub +-----BEGIN PUBLIC KEY----- +MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA53YiBcrn7+ZK0Vb4odeA +1riYdvEb8To4H6/HtF+OKzuCIXFQ+bRy7yMrDGITYpfYPrTZOgfdeTLZqOiAj+cL +395nvxdly83SUrdh7ItfOPRluuuiPHnFn111wpyjBw5nut4Kx+M5MksNfA1hU0Zw +zIM9OviX8iEF8xHWUtz4BAMDG8N6+zpLo0pAzaei5hKuLZ9dZOzHBC8VOW82cQMm +5X5uOKsCHMtNSjqYUNB1DxN6xxM+odGWT/6xthPGk6YCxmO28YHPFZfiS2eAIpD8 +2p/16KQKU6TkZSrldkYxiHIPhu+5f9faZJG7dB9pLN1SfdTBio4PK5Mz9muLUCv9 +ywIDAQAB +-----END PUBLIC 
KEY----- +EOF + +cat <<'EOF' >/tmp/oidc/sa-signer.key +-----BEGIN RSA PRIVATE KEY----- +MIIEpAIBAAKCAQEA53YiBcrn7+ZK0Vb4odeA1riYdvEb8To4H6/HtF+OKzuCIXFQ ++bRy7yMrDGITYpfYPrTZOgfdeTLZqOiAj+cL395nvxdly83SUrdh7ItfOPRluuui +PHnFn111wpyjBw5nut4Kx+M5MksNfA1hU0ZwzIM9OviX8iEF8xHWUtz4BAMDG8N6 ++zpLo0pAzaei5hKuLZ9dZOzHBC8VOW82cQMm5X5uOKsCHMtNSjqYUNB1DxN6xxM+ +odGWT/6xthPGk6YCxmO28YHPFZfiS2eAIpD82p/16KQKU6TkZSrldkYxiHIPhu+5 +f9faZJG7dB9pLN1SfdTBio4PK5Mz9muLUCv9ywIDAQABAoIBAB8tro+RMYUDRHjG +el9ypAxIeWEsQVNRQFYkW4ZUiNYSAgl3Ni0svX6xAg989peFVL+9pLVIcfDthJxY +FVlNCjBxyQ/YmwHFC9vQkARJEd6eLUXsj8INtS0ubbp1VxCQRDDL0C/0z7OSoJJh +SwboqjEiTJExA2a+RArmEDTBRzdi3t+kT8G23JcqOivrITt17K6bQYyJXw7/vUdc +r/R+hfd5TqVq92VddzDT7RNJAxsbPPXjGnESlq1GALBDs+uBGYsP0fiEJb2nicSv +z9fBnBeERhut1gcE0C0iLRQZb+3r8TitBtxrZv+0BHgXrkKtXDwWTqGEKOwC4dBn +7nxkH2ECgYEA6+/DOTABGYOWOQftFkJMjcugzDrjoGpuXuVOTb65T+3FHAzU93zy +3bt3wQxrlugluyy9Sc/PL3ck2LgUsPHZ+s7zsdGvvGALBD6bOSSKATz9JgjwifO8 +PgqUz1kXRwez2CtKLOOCFFtcIzEdWIzsa1ubNqLzgN7rD+XBkUc2uEcCgYEA+yTy +72EDMQVoIZOygytHsDNdy0iS2RsBbdurT27wkYuFpFUVWdbNSL+8haE+wJHseHcw +BD4WIMpU+hnS4p4OO8+6V7PiXOS5E/se91EJigZAoixgDUiC8ihojWgK9PYEavUo +hULWbayO59SxYWeUI4Ze0GP8Jw8vdB86ib4ulF0CgYEAgyzRuLjk05+iZODwQyDn +WSquov3W0rh51s7cw0LX2wWSQm8r9NGGYhs5kJ5sLwGxAKj2MNSWF4jBdrCZ6Gr+ +y4BGY0X209/+IAUC3jlfdSLIiF4OBlT6AvB1HfclhvtUVUp0OhLfnpvQ1UwYScRI +KcRLvovIoIzP2g3emfwjAz8CgYEAxUHhOhm1mwRHJNBQTuxok0HVMrze8n1eov39 +0RcvBvJSVp+pdHXdqX1HwqHCmxhCZuAeq8ZkNP8WvZYY6HwCbAIdt5MHgbT4lXQR +f2l8F5gPnhFCpExG5ZLNg/urV3oAQE4stHap21zEpdyOMhZb6Yc5424U+EzaFdgN +b3EcPtUCgYAkKvUlSnBbgiJz1iaN6fuTqH0efavuFGMhjNmG7GtpNXdgyl1OWIuc +Yu+tZtHXtKYf3B99GwPrFzw/7yfDwae5YeWmi2/pFTH96wv3brJBqkAWY8G5Rsmd +qF50p34vIFqUBniNRwSArx8t2dq/CuAMgLAtSjh70Q6ZAnCF85PD8Q== +-----END RSA PRIVATE KEY----- +EOF + +echo Contents of kind${number}.yaml +cat << EOF | tee kind${number}.yaml +kind: Cluster +apiVersion: kind.x-k8s.io/v1alpha4 +nodes: +- role: control-plane + image: ${kindest_node} + extraPortMappings: + - containerPort: 6443 + 
hostPort: 70${twodigits} + extraMounts: + - containerPath: /etc/kubernetes/oidc + hostPath: /tmp/oidc + labels: + ingress-ready: true + topology.kubernetes.io/region: ${region} + topology.kubernetes.io/zone: ${zone} +networking: + disableDefaultCNI: true + serviceSubnet: "10.$(echo $twodigits | sed 's/^0*//').0.0/16" + podSubnet: "10.1${twodigits}.0.0/16" +kubeadmConfigPatches: +- | + kind: ClusterConfiguration + apiServer: + extraArgs: + service-account-key-file: /etc/kubernetes/pki/sa.pub + service-account-key-file: /etc/kubernetes/oidc/sa-signer-pkcs8.pub + service-account-signing-key-file: /etc/kubernetes/oidc/sa-signer.key + service-account-issuer: https://solo-workshop-oidc.s3.us-east-1.amazonaws.com + api-audiences: sts.amazonaws.com + extraVolumes: + - name: oidc + hostPath: /etc/kubernetes/oidc + mountPath: /etc/kubernetes/oidc + readOnly: true + metadata: + name: config +containerdConfigPatches: +- |- + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:${reg_port}"] + endpoint = ["http://${reg_name}:${reg_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"] + endpoint = ["http://docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-docker.pkg.dev"] + endpoint = ["http://us-docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-central1-docker.pkg.dev"] + endpoint = ["http://us-central1-docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."quay.io"] + endpoint = ["http://quay:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"] + endpoint = ["http://gcr:${cache_port}"] +EOF +echo ----------------------------------------------------- + +kind create cluster --name kind${number} --config kind${number}.yaml +ipkind=$(docker inspect kind${number}-control-plane | jq -r '.[0].NetworkSettings.Networks[].IPAddress') +networkkind=$(echo ${ipkind} | awk -F. 
'{ print $1"."$2 }') +kubectl config set-cluster kind-kind${number} --server=https://${myip}:70${twodigits} --insecure-skip-tls-verify=true + +# Preload images +cat << EOF >> images.txt +quay.io/metallb/controller:v0.13.12 +quay.io/metallb/speaker:v0.13.12 +EOF +cat images.txt | while read image; do + docker pull $image || true + kind load docker-image $image --name kind${number} || true +done + +docker network connect "kind" "${reg_name}" || true +docker network connect "kind" docker || true +docker network connect "kind" us-docker || true +docker network connect "kind" us-central1-docker || true +docker network connect "kind" quay || true +docker network connect "kind" gcr || true +# Calico for ipv4 +curl -sL https://raw.githubusercontent.com/projectcalico/calico/v3.28.1/manifests/calico.yaml | sed 's/250m/50m/g' | kubectl --context kind-kind${number} apply -f - + +for i in 1 2 3 4 5; do kubectl --context=kind-kind${number} apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml && break || sleep 15; done +kubectl --context=kind-kind${number} create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)" +kubectl --context=kind-kind${number} -n metallb-system rollout status deploy controller || true + +cat << EOF | tee metallb${number}.yaml +apiVersion: metallb.io/v1beta1 +kind: IPAddressPool +metadata: + name: first-pool + namespace: metallb-system +spec: + addresses: + - ${networkkind}.1${twodigits}.1-${networkkind}.1${twodigits}.254 +--- +apiVersion: metallb.io/v1beta1 +kind: L2Advertisement +metadata: + name: empty + namespace: metallb-system +EOF + +printf "Create IPAddressPool in kind-kind${number}\n" +for i in {1..10}; do +kubectl --context=kind-kind${number} apply -f metallb${number}.yaml && break +sleep 2 +done + +# connect the registry to the cluster network if not already connected +printf "Renaming context kind-kind${number} to ${name}\n" +for i in {1..100}; do 
+ (kubectl config get-contexts -oname | grep ${name}) && break + kubectl config rename-context kind-kind${number} ${name} && break + printf " $i"/100 + sleep 2 + [ $i -lt 100 ] || exit 1 +done + +# Document the local registry +# https://github.com/kubernetes/enhancements/tree/master/keps/sig-cluster-lifecycle/generic/1755-communicating-a-local-registry +cat </dev/null || true +source ./scripts/assert.sh +export MGMT=cluster1 +export CLUSTER1=cluster1 +export CLUSTER2=cluster2 +bash ./data/steps/deploy-kind-clusters/deploy-cluster1.sh +bash ./data/steps/deploy-kind-clusters/deploy-cluster2.sh +./scripts/check.sh cluster1 +./scripts/check.sh cluster2 +cat <<'EOF' > ./test.js +const helpers = require('./tests/chai-exec'); + +describe("Clusters are healthy", () => { + const clusters = ["cluster1", "cluster2"]; + + clusters.forEach(cluster => { + it(`Cluster ${cluster} is healthy`, () => helpers.k8sObjectIsPresent({ context: cluster, namespace: "default", k8sType: "service", k8sObj: "kubernetes" })); + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-kind-clusters/tests/cluster-healthy.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +export GLOO_MESH_VERSION=v2.6.7 +curl -sL https://run.solo.io/meshctl/install | sh - +export PATH=$HOME/.gloo-mesh/bin:$PATH +cat <<'EOF' > ./test.js +var chai = require('chai'); +var expect = chai.expect; + +describe("Required environment variables should contain value", () => { + afterEach(function(done){ + if(this.currentTest.currentRetry() > 0){ + process.stdout.write("."); + setTimeout(done, 1000); + } else { + done(); + } + }); + + it("Context environment variables should not be empty", () => { + expect(process.env.MGMT).not.to.be.empty + expect(process.env.CLUSTER1).not.to.be.empty + expect(process.env.CLUSTER2).not.to.be.empty + }); + + it("Gloo Mesh licence environment variables 
should not be empty", () => { + expect(process.env.GLOO_MESH_LICENSE_KEY).not.to.be.empty + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-and-register-gloo-mesh/tests/environment-variables.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context ${MGMT} create ns gloo-mesh + +helm upgrade --install gloo-platform-crds gloo-platform-crds \ + --repo https://storage.googleapis.com/gloo-platform/helm-charts \ + --namespace gloo-mesh \ + --kube-context ${MGMT} \ + --set featureGates.insightsConfiguration=true \ + --version 2.6.7 + +helm upgrade --install gloo-platform-mgmt gloo-platform \ + --repo https://storage.googleapis.com/gloo-platform/helm-charts \ + --namespace gloo-mesh \ + --kube-context ${MGMT} \ + --version 2.6.7 \ + -f -< ./test.js + +const helpers = require('./tests/chai-exec'); + +describe("MGMT server is healthy", () => { + let cluster = process.env.MGMT; + let deployments = ["gloo-mesh-mgmt-server","gloo-mesh-redis","gloo-telemetry-gateway","prometheus-server"]; + deployments.forEach(deploy => { + it(deploy + ' pods are ready in ' + cluster, () => helpers.checkDeployment({ context: cluster, namespace: "gloo-mesh", k8sObj: deploy })); + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-and-register-gloo-mesh/tests/check-deployment.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +cat <<'EOF' > ./test.js +const chaiExec = require("@jsdevtools/chai-exec"); +var chai = require('chai'); +var expect = chai.expect; +chai.use(chaiExec); + +afterEach(function (done) { + if (this.currentTest.currentRetry() > 0) { + process.stdout.write("."); + setTimeout(done, 1000); + } else { + done(); + } +}); +EOF +echo "executing test 
dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-and-register-gloo-mesh/tests/get-gloo-mesh-mgmt-server-ip.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +export ENDPOINT_GLOO_MESH=$(kubectl --context ${MGMT} -n gloo-mesh get svc gloo-mesh-mgmt-server -o jsonpath='{.status.loadBalancer.ingress[0].*}'):9900 +export HOST_GLOO_MESH=$(echo ${ENDPOINT_GLOO_MESH%:*}) +export ENDPOINT_TELEMETRY_GATEWAY=$(kubectl --context ${MGMT} -n gloo-mesh get svc gloo-telemetry-gateway -o jsonpath='{.status.loadBalancer.ingress[0].*}'):4317 +export ENDPOINT_GLOO_MESH_UI=$(kubectl --context ${MGMT} -n gloo-mesh get svc gloo-mesh-ui -o jsonpath='{.status.loadBalancer.ingress[0].*}'):8090 +cat <<'EOF' > ./test.js +const dns = require('dns'); +const chaiHttp = require("chai-http"); +const chai = require("chai"); +const expect = chai.expect; +chai.use(chaiHttp); +const { waitOnFailedTest } = require('./tests/utils'); + +afterEach(function(done) { waitOnFailedTest(done, this.currentTest.currentRetry())}); + +describe("Address '" + process.env.HOST_GLOO_MESH + "' can be resolved in DNS", () => { + it(process.env.HOST_GLOO_MESH + ' can be resolved', (done) => { + return dns.lookup(process.env.HOST_GLOO_MESH, (err, address, family) => { + expect(address).to.be.an.ip; + done(); + }); + }); +}); +EOF +echo "executing test ./gloo-mesh-2-0/tests/can-resolve.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl apply --context ${MGMT} -f - < ca.crt +kubectl create secret generic relay-root-tls-secret -n gloo-mesh --context ${CLUSTER2} --from-file ca.crt=ca.crt +rm ca.crt + +kubectl get secret relay-identity-token-secret -n gloo-mesh --context ${MGMT} -o jsonpath='{.data.token}' | base64 -d > token +kubectl create secret generic relay-identity-token-secret -n gloo-mesh 
--context ${CLUSTER2} --from-file token=token +rm token + +helm upgrade --install gloo-platform-crds gloo-platform-crds \ + --repo https://storage.googleapis.com/gloo-platform/helm-charts \ + --namespace gloo-mesh \ + --kube-context ${CLUSTER2} \ + --version 2.6.7 + +helm upgrade --install gloo-platform-agent gloo-platform \ + --repo https://storage.googleapis.com/gloo-platform/helm-charts \ + --namespace gloo-mesh \ + --kube-context ${CLUSTER2} \ + --version 2.6.7 \ + -f -< ./test.js +var chai = require('chai'); +var expect = chai.expect; +const helpers = require('./tests/chai-exec'); +describe("Cluster registration", () => { + it("cluster1 is registered", () => { + podName = helpers.getOutputForCommand({ command: "kubectl -n gloo-mesh get pods -l app=gloo-mesh-mgmt-server -o jsonpath='{.items[0].metadata.name}' --context " + process.env.MGMT }).replaceAll("'", ""); + command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh debug -q -i " + podName + " --image=curlimages/curl -- curl -s http://localhost:9091/metrics" }).replaceAll("'", ""); + expect(command).to.contain("cluster1"); + }); + it("cluster2 is registered", () => { + podName = helpers.getOutputForCommand({ command: "kubectl -n gloo-mesh get pods -l app=gloo-mesh-mgmt-server -o jsonpath='{.items[0].metadata.name}' --context " + process.env.MGMT }).replaceAll("'", ""); + command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh debug -q -i " + podName + " --image=curlimages/curl -- curl -s http://localhost:9091/metrics" }).replaceAll("'", ""); + expect(command).to.contain("cluster2"); + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-and-register-gloo-mesh/tests/cluster-registration.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +echo "Generating new 
certificates" +mkdir -p "./certs/${CLUSTER1}" +mkdir -p "./certs/${CLUSTER2}" + +if ! [ -x "$(command -v step)" ]; then + echo 'Error: Install the smallstep cli (https://github.com/smallstep/cli)' + exit 1 +fi + +step certificate create root.istio.ca ./certs/root-cert.pem ./certs/root-ca.key \ + --profile root-ca --no-password --insecure --san root.istio.ca \ + --not-after 87600h --kty RSA + +step certificate create $CLUSTER1 \ + ./certs/$CLUSTER1/ca-cert.pem \ + ./certs/$CLUSTER1/ca-key.pem \ + --ca ./certs/root-cert.pem \ + --ca-key ./certs/root-ca.key \ + --profile intermediate-ca \ + --not-after 87600h \ + --no-password \ + --san $CLUSTER1 \ + --kty RSA \ + --insecure + +step certificate create $CLUSTER2 \ + ./certs/$CLUSTER2/ca-cert.pem \ + ./certs/$CLUSTER2/ca-key.pem \ + --ca ./certs/root-cert.pem \ + --ca-key ./certs/root-ca.key \ + --profile intermediate-ca \ + --not-after 87600h \ + --no-password \ + --san $CLUSTER2 \ + --kty RSA \ + --insecure + +cat ./certs/$CLUSTER1/ca-cert.pem ./certs/root-cert.pem > ./certs/$CLUSTER1/cert-chain.pem +cat ./certs/$CLUSTER2/ca-cert.pem ./certs/root-cert.pem > ./certs/$CLUSTER2/cert-chain.pem +kubectl --context="${CLUSTER1}" create namespace istio-system || true +kubectl --context="${CLUSTER1}" create secret generic cacerts -n istio-system \ + --from-file=./certs/$CLUSTER1/ca-cert.pem \ + --from-file=./certs/$CLUSTER1/ca-key.pem \ + --from-file=./certs/root-cert.pem \ + --from-file=./certs/$CLUSTER1/cert-chain.pem + +kubectl --context="${CLUSTER2}" create namespace istio-system || true +kubectl --context="${CLUSTER2}" create secret generic cacerts -n istio-system \ + --from-file=./certs/$CLUSTER2/ca-cert.pem \ + --from-file=./certs/$CLUSTER2/ca-key.pem \ + --from-file=./certs/root-cert.pem \ + --from-file=./certs/$CLUSTER2/cert-chain.pem +curl -L https://istio.io/downloadIstio | sh - + +if [ -d "istio-"*/ ]; then + cd istio-*/ + export PATH=$PWD/bin:$PATH + cd .. 
+fi +cat <<'EOF' > ./test.js +const chaiExec = require("@jsdevtools/chai-exec"); +var chai = require('chai'); +var expect = chai.expect; +chai.use(chaiExec); + +afterEach(function (done) { + if (this.currentTest.currentRetry() > 0) { + process.stdout.write("."); + setTimeout(done, 1000); + } else { + done(); + } +}); +describe("istio_version is at least 1.23.0", () => { + it("version should be at least 1.23.0", () => { + // Compare the string istio_version to the number 1.23.0 + // example 1.23.0-patch0 is valid, but 1.22.6 is not + let version = "1.23.1"; + let versionParts = version.split('-')[0].split('.'); + let major = parseInt(versionParts[0]); + let minor = parseInt(versionParts[1]); + let patch = parseInt(versionParts[2]); + let minMajor = 1; + let minMinor = 23; + let minPatch = 0; + expect(major).to.be.at.least(minMajor); + if (major === minMajor) { + expect(minor).to.be.at.least(minMinor); + if (minor === minMinor) { + expect(patch).to.be.at.least(minPatch); + } + } + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-istio-helm/tests/istio-version.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context ${CLUSTER1} create ns istio-gateways + +kubectl apply --context ${CLUSTER1} -f - </base \ +--namespace istio-system \ +--kube-context=${CLUSTER1} \ +--version 1.23.1-solo \ +--create-namespace \ +-f - </istiod \ +--namespace istio-system \ +--kube-context=${CLUSTER1} \ +--version 1.23.1-solo \ +--create-namespace \ +-f - < + proxy: + clusterDomain: cluster.local + tag: 1.23.1-solo + multiCluster: + clusterName: cluster1 +profile: ambient +istio_cni: + enabled: true +meshConfig: + accessLogFile: /dev/stdout + defaultConfig: + proxyMetadata: + ISTIO_META_DNS_AUTO_ALLOCATE: "true" + ISTIO_META_DNS_CAPTURE: "true" + trustDomain: cluster.local +pilot: + enabled: true + env: + 
PILOT_ENABLE_IP_AUTOALLOCATE: "true" + PILOT_ENABLE_K8S_SELECT_WORKLOAD_ENTRIES: "false" + PILOT_SKIP_VALIDATE_TRUST_DOMAIN: "true" + podLabels: + hack: eastwest + platforms: + peering: + enabled: true +EOF + +helm upgrade --install istio-cni oci://us-docker.pkg.dev/gloo-mesh/istio-helm-/cni \ +--namespace kube-system \ +--kube-context=${CLUSTER1} \ +--version 1.23.1-solo \ +--create-namespace \ +-f - < + proxy: 1.23.1-solo +profile: ambient +cni: + ambient: + dnsCapture: true + excludeNamespaces: + - istio-system + - kube-system +EOF + +helm upgrade --install ztunnel oci://us-docker.pkg.dev/gloo-mesh/istio-helm-/ztunnel \ +--namespace istio-system \ +--kube-context=${CLUSTER1} \ +--version 1.23.1-solo \ +--create-namespace \ +-f - < +istioNamespace: istio-system +multiCluster: + clusterName: cluster1 +namespace: istio-system +profile: ambient +proxy: + clusterDomain: cluster.local +tag: 1.23.1-solo +terminationGracePeriodSeconds: 29 +variant: distroless +EOF + +helm upgrade --install istio-ingressgateway-1-23 oci://us-docker.pkg.dev/gloo-mesh/istio-helm-/gateway \ +--namespace istio-gateways \ +--kube-context=${CLUSTER1} \ +--version 1.23.1-solo \ +--create-namespace \ +-f - </gateway \ +--namespace istio-gateways \ +--kube-context=${CLUSTER1} \ +--version 1.23.1-solo \ +--create-namespace \ +-f - < /dev/null || \ + { kubectl kustomize "github.com/kubernetes-sigs/gateway-api/config/crd?ref=v1.1.0" | kubectl --context ${CLUSTER1} apply -f -; } +kubectl --context ${CLUSTER2} get crd gateways.gateway.networking.k8s.io &> /dev/null || \ + { kubectl kustomize "github.com/kubernetes-sigs/gateway-api/config/crd?ref=v1.1.0" | kubectl --context ${CLUSTER2} apply -f -; } +helm upgrade --install istio-base oci://us-docker.pkg.dev/gloo-mesh/istio-helm-/base \ +--namespace istio-system \ +--kube-context=${CLUSTER2} \ +--version 1.23.1-solo \ +--create-namespace \ +-f - </istiod \ +--namespace istio-system \ +--kube-context=${CLUSTER2} \ +--version 1.23.1-solo \ 
+--create-namespace \ +-f - < + proxy: + clusterDomain: cluster.local + tag: 1.23.1-solo + multiCluster: + clusterName: cluster2 +profile: ambient +istio_cni: + enabled: true +meshConfig: + accessLogFile: /dev/stdout + defaultConfig: + proxyMetadata: + ISTIO_META_DNS_AUTO_ALLOCATE: "true" + ISTIO_META_DNS_CAPTURE: "true" + trustDomain: cluster.local +pilot: + enabled: true + env: + PILOT_ENABLE_IP_AUTOALLOCATE: "true" + PILOT_ENABLE_K8S_SELECT_WORKLOAD_ENTRIES: "false" + PILOT_SKIP_VALIDATE_TRUST_DOMAIN: "true" + podLabels: + hack: eastwest + platforms: + peering: + enabled: true +EOF + +helm upgrade --install istio-cni oci://us-docker.pkg.dev/gloo-mesh/istio-helm-/cni \ +--namespace kube-system \ +--kube-context=${CLUSTER2} \ +--version 1.23.1-solo \ +--create-namespace \ +-f - < + proxy: 1.23.1-solo +profile: ambient +cni: + ambient: + dnsCapture: true + excludeNamespaces: + - istio-system + - kube-system +EOF + +helm upgrade --install ztunnel oci://us-docker.pkg.dev/gloo-mesh/istio-helm-/ztunnel \ +--namespace istio-system \ +--kube-context=${CLUSTER2} \ +--version 1.23.1-solo \ +--create-namespace \ +-f - < +istioNamespace: istio-system +multiCluster: + clusterName: cluster2 +namespace: istio-system +profile: ambient +proxy: + clusterDomain: cluster.local +tag: 1.23.1-solo +terminationGracePeriodSeconds: 29 +variant: distroless +EOF + +helm upgrade --install istio-ingressgateway-1-23 oci://us-docker.pkg.dev/gloo-mesh/istio-helm-/gateway \ +--namespace istio-gateways \ +--kube-context=${CLUSTER2} \ +--version 1.23.1-solo \ +--create-namespace \ +-f - </gateway \ +--namespace istio-gateways \ +--kube-context=${CLUSTER2} \ +--version 1.23.1-solo \ +--create-namespace \ +-f - < /dev/null || \ + { kubectl kustomize "github.com/kubernetes-sigs/gateway-api/config/crd?ref=v1.1.0" | kubectl --context ${CLUSTER1} apply -f -; } +kubectl --context ${CLUSTER2} get crd gateways.gateway.networking.k8s.io &> /dev/null || \ + { kubectl kustomize 
"github.com/kubernetes-sigs/gateway-api/config/crd?ref=v1.1.0" | kubectl --context ${CLUSTER2} apply -f -; } +cat <<'EOF' > ./test.js + +const helpers = require('./tests/chai-exec'); + +const chaiExec = require("@jsdevtools/chai-exec"); +const helpersHttp = require('./tests/chai-http'); +const chai = require("chai"); +const expect = chai.expect; + +afterEach(function (done) { + if (this.currentTest.currentRetry() > 0) { + process.stdout.write("."); + setTimeout(done, 1000); + } else { + done(); + } +}); + +describe("Checking Istio installation", function() { + it('istiod pods are ready in cluster ' + process.env.CLUSTER1, () => helpers.checkDeploymentsWithLabels({ context: process.env.CLUSTER1, namespace: "istio-system", labels: "app=istiod", instances: 1 })); + it('gateway pods are ready in cluster ' + process.env.CLUSTER1, () => helpers.checkDeploymentsWithLabels({ context: process.env.CLUSTER1, namespace: "istio-gateways", labels: "app=istio-ingressgateway", instances: 2 })); + it('istiod pods are ready in cluster ' + process.env.CLUSTER2, () => helpers.checkDeploymentsWithLabels({ context: process.env.CLUSTER2, namespace: "istio-system", labels: "app=istiod", instances: 1 })); + it('gateway pods are ready in cluster ' + process.env.CLUSTER2, () => helpers.checkDeploymentsWithLabels({ context: process.env.CLUSTER2, namespace: "istio-gateways", labels: "app=istio-ingressgateway", instances: 2 })); + it("Gateways have an ip attached in cluster " + process.env.CLUSTER1, () => { + let cli = chaiExec("kubectl --context " + process.env.CLUSTER1 + " -n istio-gateways get svc -l app=istio-ingressgateway -o jsonpath='{.items}'"); + cli.stderr.should.be.empty; + let deployments = JSON.parse(cli.stdout.slice(1,-1)); + expect(deployments).to.have.lengthOf(2); + deployments.forEach((deployment) => { + expect(deployment.status.loadBalancer).to.have.property("ingress"); + }); + }); + it("Gateways have an ip attached in cluster " + process.env.CLUSTER2, () => { + let cli = 
chaiExec("kubectl --context " + process.env.CLUSTER2 + " -n istio-gateways get svc -l app=istio-ingressgateway -o jsonpath='{.items}'"); + cli.stderr.should.be.empty; + let deployments = JSON.parse(cli.stdout.slice(1,-1)); + expect(deployments).to.have.lengthOf(2); + deployments.forEach((deployment) => { + expect(deployment.status.loadBalancer).to.have.property("ingress"); + }); + }); +}); + +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-istio-helm/tests/istio-ready.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +timeout 2m bash -c "until [[ \$(kubectl --context ${CLUSTER1} -n istio-gateways get svc -l istio=ingressgateway -o json | jq '.items[0].status.loadBalancer | length') -gt 0 ]]; do + sleep 1 +done" +export HOST_GW_CLUSTER1="$(kubectl --context ${CLUSTER1} -n istio-gateways get svc -l istio=ingressgateway -o jsonpath='{.items[0].status.loadBalancer.ingress[0].*}')" +export HOST_GW_CLUSTER2="$(kubectl --context ${CLUSTER2} -n istio-gateways get svc -l istio=ingressgateway -o jsonpath='{.items[0].status.loadBalancer.ingress[0].*}')" +cat <<'EOF' > ./test.js +const dns = require('dns'); +const chaiHttp = require("chai-http"); +const chai = require("chai"); +const expect = chai.expect; +chai.use(chaiHttp); +const { waitOnFailedTest } = require('./tests/utils'); + +afterEach(function(done) { waitOnFailedTest(done, this.currentTest.currentRetry())}); + +describe("Address '" + process.env.HOST_GW_CLUSTER1 + "' can be resolved in DNS", () => { + it(process.env.HOST_GW_CLUSTER1 + ' can be resolved', (done) => { + return dns.lookup(process.env.HOST_GW_CLUSTER1, (err, address, family) => { + expect(address).to.be.an.ip; + done(); + }); + }); +}); +EOF +echo "executing test ./default/tests/can-resolve.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha 
./test.js --timeout 120000; exit 1; } +cat <<'EOF' > ./test.js +const dns = require('dns'); +const chaiHttp = require("chai-http"); +const chai = require("chai"); +const expect = chai.expect; +chai.use(chaiHttp); +const { waitOnFailedTest } = require('./tests/utils'); + +afterEach(function(done) { waitOnFailedTest(done, this.currentTest.currentRetry())}); + +describe("Address '" + process.env.HOST_GW_CLUSTER2 + "' can be resolved in DNS", () => { + it(process.env.HOST_GW_CLUSTER2 + ' can be resolved', (done) => { + return dns.lookup(process.env.HOST_GW_CLUSTER2, (err, address, family) => { + expect(address).to.be.an.ip; + done(); + }); + }); +}); +EOF +echo "executing test ./default/tests/can-resolve.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context ${CLUSTER1} create ns httpbin +kubectl --context ${CLUSTER1} label namespace httpbin istio.io/dataplane-mode=ambient +kubectl apply --context ${CLUSTER1} -f - </dev/null +do + sleep 1 + echo -n . 
+done" +echo +kubectl apply --context ${CLUSTER1} -f - < ./test.js +const helpers = require('./tests/chai-exec'); + +describe("httpbin app", () => { + let cluster = process.env.CLUSTER1 + + let deployments = ["not-in-mesh", "in-mesh"]; + + deployments.forEach(deploy => { + it(deploy + ' pods are ready in ' + cluster, () => helpers.checkDeployment({ context: cluster, namespace: "httpbin", k8sObj: deploy })); + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/httpbin/deploy-httpbin/tests/check-httpbin.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context ${CLUSTER2} create ns httpbin +kubectl --context ${CLUSTER2} label namespace httpbin istio.io/dataplane-mode=ambient +kubectl apply --context ${CLUSTER2} -f - </dev/null +do + sleep 1 + echo -n . +done" +echo +kubectl apply --context ${CLUSTER2} -f - < ./test.js +const helpers = require('./tests/chai-exec'); + +describe("httpbin app", () => { + let cluster = process.env.CLUSTER1 + + let deployments = ["not-in-mesh", "in-mesh"]; + + deployments.forEach(deploy => { + it(deploy + ' pods are ready in ' + cluster, () => helpers.checkDeployment({ context: cluster, namespace: "httpbin", k8sObj: deploy })); + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/httpbin/deploy-httpbin/tests/check-httpbin.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context ${CLUSTER1} create ns clients + +kubectl apply --context ${CLUSTER1} -f - </dev/null +do + sleep 1 + echo -n . 
+done" +echo +kubectl apply --context ${CLUSTER1} -f - < ./test.js +const helpers = require('./tests/chai-exec'); + +describe("client apps", () => { + let cluster = process.env.CLUSTER1 + + let deployments = ["not-in-mesh", "in-mesh-with-sidecar", "in-ambient"]; + + deployments.forEach(deploy => { + it(deploy + ' pods are ready in ' + cluster, () => helpers.checkDeployment({ context: cluster, namespace: "clients", k8sObj: deploy })); + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/clients/deploy-clients/tests/check-clients.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context $CLUSTER1 label namespace istio-system topology.istio.io/network=$CLUSTER1 +kubectl --context $CLUSTER2 label namespace istio-system topology.istio.io/network=$CLUSTER2 + cat < ./test.js +const chaiExec = require("@jsdevtools/chai-exec"); +var chai = require('chai'); +var expect = chai.expect; +chai.use(chaiExec); +const helpers = require('./tests/chai-exec'); + + +describe("ensure traffic goes to workloads in both clusters", () => { + it('should have two origins', async () => { + const origins = new Set(); + for (let i = 0; i < 10; i++) { + const command = await helpers.curlInDeployment({ + curlCommand: 'curl in-ambient.httpbin.global:8000/get', + deploymentName: 'in-ambient', + namespace: 'clients', + context: `${process.env.CLUSTER1}` + }); + const origin = JSON.parse(command).origin; + origins.add(origin); + } + expect(origins.size).to.equal(2); + }); +}); + +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/link-clusters/tests/check-cross-cluster-traffic.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } diff --git a/gloo-mesh/core/2-6/ambient-multi-cluster/scripts/configure-domain-rewrite.sh 
b/gloo-mesh/core/2-6/ambient-multi-cluster/scripts/configure-domain-rewrite.sh index be6dbd6d8b..d6e684c9da 100755 --- a/gloo-mesh/core/2-6/ambient-multi-cluster/scripts/configure-domain-rewrite.sh +++ b/gloo-mesh/core/2-6/ambient-multi-cluster/scripts/configure-domain-rewrite.sh @@ -90,4 +90,4 @@ done # If the loop exits, it means the check failed consistently for 1 minute echo "DNS rewrite rule verification failed." -exit 1 +exit 1 \ No newline at end of file diff --git a/gloo-mesh/core/2-6/ambient-multi-cluster/scripts/register-domain.sh b/gloo-mesh/core/2-6/ambient-multi-cluster/scripts/register-domain.sh index f9084487e8..1cb84cd86a 100755 --- a/gloo-mesh/core/2-6/ambient-multi-cluster/scripts/register-domain.sh +++ b/gloo-mesh/core/2-6/ambient-multi-cluster/scripts/register-domain.sh @@ -14,7 +14,9 @@ hosts_file="/etc/hosts" # Function to check if the input is a valid IP address is_ip() { if [[ $1 =~ ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$ ]]; then - return 0 # 0 = true + return 0 # 0 = true - valid IPv4 address + elif [[ $1 =~ ^[0-9a-f]+[:]+[0-9a-f]*[:]*[0-9a-f]*[:]*[0-9a-f]*[:]*[0-9a-f]*[:]*[0-9a-f]*[:]*[0-9]*$ ]]; then + return 0 # 0 = true - valid IPv6 address else return 1 # 1 = false fi @@ -38,14 +40,15 @@ else fi # Check if the entry already exists -if grep -q "$hostname" "$hosts_file"; then +if grep -q "$hostname\$" "$hosts_file"; then # Update the existing entry with the new IP tempfile=$(mktemp) - sed "s/^.*$hostname/$new_ip $hostname/" "$hosts_file" > "$tempfile" + sed "s/^.*$hostname\$/$new_ip $hostname/" "$hosts_file" > "$tempfile" sudo cp "$tempfile" "$hosts_file" + rm "$tempfile" echo "Updated $hostname in $hosts_file with new IP: $new_ip" else # Add a new entry if it doesn't exist echo "$new_ip $hostname" | sudo tee -a "$hosts_file" > /dev/null echo "Added $hostname to $hosts_file with IP: $new_ip" -fi \ No newline at end of file +fi diff --git a/gloo-mesh/core/2-6/ambient-multi-cluster/tests/chai-exec.js 
b/gloo-mesh/core/2-6/ambient-multi-cluster/tests/chai-exec.js index 67ba62f095..020262437f 100644 --- a/gloo-mesh/core/2-6/ambient-multi-cluster/tests/chai-exec.js +++ b/gloo-mesh/core/2-6/ambient-multi-cluster/tests/chai-exec.js @@ -139,7 +139,11 @@ global = { }, k8sObjectIsPresent: ({ context, namespace, k8sType, k8sObj }) => { - let command = "kubectl --context " + context + " -n " + namespace + " get " + k8sType + " " + k8sObj + " -o name"; + // covers both namespace scoped and cluster scoped objects + let command = "kubectl --context " + context + " get " + k8sType + " " + k8sObj + " -o name"; + if (namespace) { + command = "kubectl --context " + context + " -n " + namespace + " get " + k8sType + " " + k8sObj + " -o name"; + } debugLog(`Executing command: ${command}`); let cli = chaiExec(command); @@ -176,7 +180,6 @@ global = { debugLog(`Command output (stdout): ${cli.stdout}`); return cli.stdout; }, - curlInPod: ({ curlCommand, podName, namespace }) => { debugLog(`Executing curl command: ${curlCommand} on pod: ${podName} in namespace: ${namespace}`); const cli = chaiExec(curlCommand); diff --git a/gloo-mesh/core/2-6/ambient-multi-cluster/tests/chai-http.js b/gloo-mesh/core/2-6/ambient-multi-cluster/tests/chai-http.js index 67f43db003..92bf579690 100644 --- a/gloo-mesh/core/2-6/ambient-multi-cluster/tests/chai-http.js +++ b/gloo-mesh/core/2-6/ambient-multi-cluster/tests/chai-http.js @@ -25,7 +25,30 @@ global = { .send() .then(async function (res) { debugLog(`Response status code: ${res.status}`); - expect(res).to.have.status(retCode); + expect(res).to.have.property('status', retCode); + }); + }, + + checkURLWithIP: ({ ip, host, protocol = "http", path = "", headers = [], certFile = '', keyFile = '', retCode }) => { + debugLog(`Checking URL with IP: ${ip}, Host: ${host}, Path: ${path} with expected return code: ${retCode}`); + + let cert = certFile ? fs.readFileSync(certFile) : ''; + let key = keyFile ? 
fs.readFileSync(keyFile) : ''; + + let url = `${protocol}://${ip}`; + + // Use chai-http to make a request to the IP address, but set the Host header + let request = chai.request(url).head(path).redirects(0).cert(cert).key(key).set('Host', host); + + debugLog(`Setting headers: ${JSON.stringify(headers)}`); + headers.forEach(header => request.set(header.key, header.value)); + + return request + .send() + .then(async function (res) { + debugLog(`Response status code: ${res.status}`); + debugLog(`Response ${JSON.stringify(res)}`); + expect(res).to.have.property('status', retCode); }); }, @@ -124,7 +147,7 @@ global = { .send() .then(async function (res) { debugLog(`Response status code: ${res.status}`); - expect(res).to.have.status(retCode); + expect(res).to.have.property('status', retCode); }); } }; diff --git a/gloo-mesh/core/2-6/ambient-multi-cluster/tests/proxies-changes.test.js.liquid b/gloo-mesh/core/2-6/ambient-multi-cluster/tests/proxies-changes.test.js.liquid new file mode 100644 index 0000000000..1934ea13b6 --- /dev/null +++ b/gloo-mesh/core/2-6/ambient-multi-cluster/tests/proxies-changes.test.js.liquid @@ -0,0 +1,58 @@ +{%- assign version_1_18_or_after = "1.18.0" | minimumGlooGatewayVersion %} +const { execSync } = require('child_process'); +const { expect } = require('chai'); +const { diff } = require('jest-diff'); + +function delay(ms) { + return new Promise(resolve => setTimeout(resolve, ms)); +} + +describe('Gloo snapshot stability test', function() { + let contextName = process.env.{{ context | default: "CLUSTER1" }}; + let delaySeconds = {{ delay | default: 5 }}; + + let firstSnapshot; + + it('should retrieve initial snapshot', function() { + const output = execSync( + `kubectl --context ${contextName} -n gloo-system exec deploy/gloo -- wget -O - localhost:{% if version_1_18_or_after %}9095{% else %}9091{% endif %}/snapshots/proxies -q`, + { encoding: 'utf8' } + ); + + try { + firstSnapshot = JSON.parse(output); + } catch (err) { + throw new 
Error('Failed to parse JSON output from initial snapshot: ' + err.message); + } + expect(firstSnapshot).to.be.an('object'); + }); + + it('should not change after the given delay', async function() { + await delay(delaySeconds * 1000); + + let secondSnapshot; + try { + const output2 = execSync( + `kubectl --context ${contextName} -n gloo-system exec deploy/gloo -- wget -O - localhost:{% if version_1_18_or_after %}9095{% else %}9091{% endif %}/snapshots/proxies -q`, + { encoding: 'utf8' } + ); + secondSnapshot = JSON.parse(output2); + } catch (err) { + throw new Error('Failed to retrieve or parse the second snapshot: ' + err.message); + } + + const firstJson = JSON.stringify(firstSnapshot, null, 2); + const secondJson = JSON.stringify(secondSnapshot, null, 2); + + // Show only 2 lines of context around each change + const diffOutput = diff(firstJson, secondJson, { contextLines: 2, expand: false }); + + if (! diffOutput.includes("Compared values have no visual difference.")) { + console.error('Differences found between snapshots:\n' + diffOutput); + throw new Error('Snapshots differ after the delay.'); + } else { + console.log('No differences found. The snapshots are stable.'); + } + }); +}); + diff --git a/gloo-mesh/core/2-6/ambient/README.md b/gloo-mesh/core/2-6/ambient/README.md index 0c6e2bd447..26086a569b 100644 --- a/gloo-mesh/core/2-6/ambient/README.md +++ b/gloo-mesh/core/2-6/ambient/README.md @@ -9,15 +9,15 @@ source ./scripts/assert.sh -#
Gloo Mesh Core (2.6.6) Ambient
+# Gloo Mesh Core (2.6.7) Ambient
## Table of Contents * [Introduction](#introduction) -* [Lab 1 - Deploy KinD clusters](#lab-1---deploy-kind-clusters-) +* [Lab 1 - Deploy KinD Cluster(s)](#lab-1---deploy-kind-cluster(s)-) * [Lab 2 - Deploy and register Gloo Mesh](#lab-2---deploy-and-register-gloo-mesh-) -* [Lab 3 - Deploy Istio using Helm](#lab-3---deploy-istio-using-helm-) +* [Lab 3 - Deploy Istio v1.23.1](#lab-3---deploy-istio-v1.23.1-) * [Lab 4 - Deploy the Bookinfo demo app](#lab-4---deploy-the-bookinfo-demo-app-) * [Lab 5 - Deploy the httpbin demo app](#lab-5---deploy-the-httpbin-demo-app-) * [Lab 6 - Deploy the clients to make requests to other services](#lab-6---deploy-the-clients-to-make-requests-to-other-services-) @@ -27,9 +27,11 @@ source ./scripts/assert.sh * [Lab 10 - Introduction to Insights](#lab-10---introduction-to-insights-) * [Lab 11 - Insights related to configuration errors](#lab-11---insights-related-to-configuration-errors-) * [Lab 12 - Insights related to security issues](#lab-12---insights-related-to-security-issues-) -* [Lab 13 - Deploy Istio using Helm](#lab-13---deploy-istio-using-helm-) -* [Lab 14 - Ambient Egress Traffic with Waypoint](#lab-14---ambient-egress-traffic-with-waypoint-) -* [Lab 15 - Waypoint Deployment Options](#lab-15---waypoint-deployment-options-) +* [Lab 13 - Upgrade Istio to v1.23.0-patch1](#lab-13---upgrade-istio-to-v1.23.0-patch1-) +* [Lab 14 - Migrate workloads to a new Istio revision](#lab-14---migrate-workloads-to-a-new-istio-revision-) +* [Lab 15 - Helm Cleanup Istio Revision](#lab-15---helm-cleanup-istio-revision-) +* [Lab 16 - Ambient Egress Traffic with Waypoint](#lab-16---ambient-egress-traffic-with-waypoint-) +* [Lab 17 - Waypoint Deployment Options](#lab-17---waypoint-deployment-options-) @@ -76,7 +78,7 @@ You can find more information about Gloo Mesh Core in the official documentation -## Lab 1 - Deploy KinD clusters +## Lab 1 - Deploy KinD Cluster(s) Clone this repository and go to the directory where this `README.md` file is. 
@@ -89,14 +91,13 @@ export CLUSTER1=cluster1 export CLUSTER2=cluster2 ``` -Run the following commands to deploy three Kubernetes clusters using [Kind](https://kind.sigs.k8s.io/): +Deploy the KinD clusters: ```bash -./scripts/deploy-aws-with-calico.sh 1 mgmt -./scripts/deploy-aws-with-calico.sh 2 cluster1 us-west us-west-1 -./scripts/deploy-aws-with-calico.sh 3 cluster2 us-west us-west-2 +bash ./data/steps/deploy-kind-clusters/deploy-mgmt.sh +bash ./data/steps/deploy-kind-clusters/deploy-cluster1.sh +bash ./data/steps/deploy-kind-clusters/deploy-cluster2.sh ``` - Then run the following commands to wait for all the Pods to be ready: ```bash @@ -107,27 +108,8 @@ Then run the following commands to wait for all the Pods to be ready: **Note:** If you run the `check.sh` script immediately after the `deploy.sh` script, you may see a jsonpath error. If that happens, simply wait a few seconds and try again. -Once the `check.sh` script completes, when you execute the `kubectl get pods -A` command, you should see the following: - -``` -NAMESPACE NAME READY STATUS RESTARTS AGE -kube-system calico-kube-controllers-59d85c5c84-sbk4k 1/1 Running 0 4h26m -kube-system calico-node-przxs 1/1 Running 0 4h26m -kube-system coredns-6955765f44-ln8f5 1/1 Running 0 4h26m -kube-system coredns-6955765f44-s7xxx 1/1 Running 0 4h26m -kube-system etcd-cluster1-control-plane 1/1 Running 0 4h27m -kube-system kube-apiserver-cluster1-control-plane 1/1 Running 0 4h27m -kube-system kube-controller-manager-cluster1-control-plane1/1 Running 0 4h27m -kube-system kube-proxy-ksvzw 1/1 Running 0 4h26m -kube-system kube-scheduler-cluster1-control-plane 1/1 Running 0 4h27m -local-path-storage local-path-provisioner-58f6947c7-lfmdx 1/1 Running 0 4h26m -metallb-system controller-5c9894b5cd-cn9x2 1/1 Running 0 4h26m -metallb-system speaker-d7jkp 1/1 Running 0 4h26m -``` - -**Note:** The CNI pods might be different, depending on which CNI you have deployed. 
- -You can see that your currently connected to this cluster by executing the `kubectl config get-contexts` command: +Once the `check.sh` script completes, execute the `kubectl get pods -A` command, and verify that all pods are in a running state. + You can see that you're currently connected to this cluster by executing the `kubectl config get-contexts` command: ``` CURRENT NAME CLUSTER AUTHINFO NAMESPACE @@ -146,7 +128,8 @@ cat <<'EOF' > ./test.js const helpers = require('./tests/chai-exec'); describe("Clusters are healthy", () => { - const clusters = [process.env.MGMT, process.env.CLUSTER1, process.env.CLUSTER2]; + const clusters = ["mgmt", "cluster1", "cluster2"]; + clusters.forEach(cluster => { it(`Cluster ${cluster} is healthy`, () => helpers.k8sObjectIsPresent({ context: cluster, namespace: "default", k8sType: "service", k8sObj: "kubernetes" })); }); @@ -158,6 +141,7 @@ timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || + ## Lab 2 - Deploy and register Gloo Mesh [VIDEO LINK](https://youtu.be/djfFiepK4GY "Video Link") @@ -165,7 +149,7 @@ timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || Before we get started, let's install the `meshctl` CLI: ```bash -export GLOO_MESH_VERSION=v2.6.6 +export GLOO_MESH_VERSION=v2.6.7 curl -sL https://run.solo.io/meshctl/install | sh - export PATH=$HOME/.gloo-mesh/bin:$PATH ``` @@ -198,6 +182,7 @@ EOF echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-and-register-gloo-mesh/tests/environment-variables.test.js.liquid" timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } --> + Run the following commands to deploy the Gloo Mesh management plane: ```bash @@ -208,13 +193,13 @@ helm upgrade --install gloo-platform-crds gloo-platform-crds \ --namespace gloo-mesh \ --kube-context ${MGMT} \ --set featureGates.insightsConfiguration=true \ - --version 2.6.6 + --version
2.6.7 helm upgrade --install gloo-platform gloo-platform \ --repo https://storage.googleapis.com/gloo-platform/helm-charts \ --namespace gloo-mesh \ --kube-context ${MGMT} \ - --version 2.6.6 \ + --version 2.6.7 \ -f -< + +## Lab 3 - Deploy Istio v1.23.1 It is convenient to have the `istioctl` command line tool installed on your local machine. If you don't have it installed, you can install it by following the instructions below. @@ -593,6 +579,7 @@ spec: selector: app: istio-ingressgateway istio: ingressgateway + revision: 1-23 type: LoadBalancer EOF @@ -650,6 +637,7 @@ spec: selector: app: istio-ingressgateway istio: eastwestgateway + revision: 1-23 type: LoadBalancer EOF kubectl --context ${CLUSTER2} create ns istio-gateways @@ -676,6 +664,7 @@ spec: selector: app: istio-ingressgateway istio: ingressgateway + revision: 1-23 type: LoadBalancer EOF @@ -733,6 +722,7 @@ spec: selector: app: istio-ingressgateway istio: eastwestgateway + revision: 1-23 type: LoadBalancer EOF ``` @@ -749,6 +739,7 @@ helm upgrade --install istio-base oci://us-docker.pkg.dev/gloo-mesh/istio-helm-< -f - </istiod \ @@ -765,6 +756,7 @@ global: multiCluster: clusterName: cluster1 profile: ambient +revision: 1-23 istio_cni: enabled: true meshConfig: @@ -792,6 +784,7 @@ global: hub: us-docker.pkg.dev/gloo-mesh/istio- proxy: 1.23.1-solo profile: ambient +revision: 1-23 cni: ambient: dnsCapture: true @@ -808,6 +801,7 @@ helm upgrade --install ztunnel oci://us-docker.pkg.dev/gloo-mesh/istio-helm- @@ -832,10 +826,12 @@ helm upgrade --install istio-ingressgateway-1-23 oci://us-docker.pkg.dev/gloo-me autoscaling: enabled: false profile: ambient +revision: 1-23 imagePullPolicy: IfNotPresent labels: app: istio-ingressgateway istio: ingressgateway + revision: 1-23 service: type: None EOF @@ -849,6 +845,7 @@ helm upgrade --install istio-eastwestgateway-1-23 oci://us-docker.pkg.dev/gloo-m autoscaling: enabled: false profile: ambient +revision: 1-23 imagePullPolicy: IfNotPresent env: 
ISTIO_META_REQUESTED_NETWORK_VIEW: cluster1 @@ -856,6 +853,7 @@ env: labels: app: istio-ingressgateway istio: eastwestgateway + revision: 1-23 topology.istio.io/network: cluster1 service: type: None @@ -881,6 +879,7 @@ helm upgrade --install istio-base oci://us-docker.pkg.dev/gloo-mesh/istio-helm-< -f - </istiod \ @@ -897,6 +896,7 @@ global: multiCluster: clusterName: cluster2 profile: ambient +revision: 1-23 istio_cni: enabled: true meshConfig: @@ -924,6 +924,7 @@ global: hub: us-docker.pkg.dev/gloo-mesh/istio- proxy: 1.23.1-solo profile: ambient +revision: 1-23 cni: ambient: dnsCapture: true @@ -940,6 +941,7 @@ helm upgrade --install ztunnel oci://us-docker.pkg.dev/gloo-mesh/istio-helm- @@ -964,10 +966,12 @@ helm upgrade --install istio-ingressgateway-1-23 oci://us-docker.pkg.dev/gloo-me autoscaling: enabled: false profile: ambient +revision: 1-23 imagePullPolicy: IfNotPresent labels: app: istio-ingressgateway istio: ingressgateway + revision: 1-23 service: type: None EOF @@ -981,6 +985,7 @@ helm upgrade --install istio-eastwestgateway-1-23 oci://us-docker.pkg.dev/gloo-m autoscaling: enabled: false profile: ambient +revision: 1-23 imagePullPolicy: IfNotPresent env: ISTIO_META_REQUESTED_NETWORK_VIEW: cluster2 @@ -988,6 +993,7 @@ env: labels: app: istio-ingressgateway istio: eastwestgateway + revision: 1-23 topology.istio.io/network: cluster2 service: type: None @@ -1110,8 +1116,6 @@ timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || - - ## Lab 4 - Deploy the Bookinfo demo app [VIDEO LINK](https://youtu.be/nzYcrjalY5A "Video Link") @@ -1128,6 +1132,8 @@ kubectl --context ${CLUSTER1} label namespace bookinfo-frontends istio.io/datapl kubectl --context ${CLUSTER1} label namespace bookinfo-backends istio.io/dataplane-mode=ambient kubectl --context ${CLUSTER1} label namespace bookinfo-frontends istio-injection=disabled kubectl --context ${CLUSTER1} label namespace bookinfo-backends istio-injection=disabled +kubectl --context ${CLUSTER1} 
label namespace bookinfo-frontends istio.io/rev=1-23 --overwrite +kubectl --context ${CLUSTER1} label namespace bookinfo-backends istio.io/rev=1-23 --overwrite # Deploy the frontend bookinfo service in the bookinfo-frontends namespace kubectl --context ${CLUSTER1} -n bookinfo-frontends apply -f data/steps/deploy-bookinfo/productpage-v1.yaml @@ -1174,6 +1180,8 @@ kubectl --context ${CLUSTER2} label namespace bookinfo-frontends istio.io/datapl kubectl --context ${CLUSTER2} label namespace bookinfo-backends istio.io/dataplane-mode=ambient kubectl --context ${CLUSTER2} label namespace bookinfo-frontends istio-injection=disabled kubectl --context ${CLUSTER2} label namespace bookinfo-backends istio-injection=disabled +kubectl --context ${CLUSTER2} label namespace bookinfo-frontends istio.io/rev=1-23 --overwrite +kubectl --context ${CLUSTER2} label namespace bookinfo-backends istio.io/rev=1-23 --overwrite # Deploy the frontend bookinfo service in the bookinfo-frontends namespace kubectl --context ${CLUSTER2} -n bookinfo-frontends apply -f data/steps/deploy-bookinfo/productpage-v1.yaml @@ -1254,6 +1262,7 @@ Run the following commands to deploy the httpbin app on `cluster1`. 
The deployme ```bash kubectl --context ${CLUSTER1} create ns httpbin kubectl --context ${CLUSTER1} label namespace httpbin istio.io/dataplane-mode=ambient +kubectl --context ${CLUSTER1} label namespace httpbin istio.io/rev=1-23 kubectl apply --context ${CLUSTER1} -f - < +## Lab 13 - Upgrade Istio to v1.23.0-patch1 @@ -3031,9 +3040,10 @@ helm upgrade --install istio-base oci://us-docker.pkg.dev/gloo-mesh/istio-helm-< -f - </istiod \ +helm upgrade --install istiod-1-23-0-patch1 oci://us-docker.pkg.dev/gloo-mesh/istio-helm-/istiod \ --namespace istio-system \ --kube-context=${CLUSTER1} \ --version 1.23.0-patch1-solo \ @@ -3047,6 +3057,7 @@ global: multiCluster: clusterName: cluster1 profile: ambient +revision: 1-23-0-patch1 istio_cni: enabled: true meshConfig: @@ -3074,6 +3085,7 @@ global: hub: us-docker.pkg.dev/gloo-mesh/istio- proxy: 1.23.0-patch1-solo profile: ambient +revision: 1-23-0-patch1 cni: ambient: dnsCapture: true @@ -3090,6 +3102,7 @@ helm upgrade --install ztunnel oci://us-docker.pkg.dev/gloo-mesh/istio-helm- @@ -3105,7 +3118,7 @@ terminationGracePeriodSeconds: 29 variant: distroless EOF -helm upgrade --install istio-ingressgateway-1-23 oci://us-docker.pkg.dev/gloo-mesh/istio-helm-/gateway \ +helm upgrade --install istio-ingressgateway-1-23-0-patch1 oci://us-docker.pkg.dev/gloo-mesh/istio-helm-/gateway \ --namespace istio-gateways \ --kube-context=${CLUSTER1} \ --version 1.23.0-patch1-solo \ @@ -3114,15 +3127,17 @@ helm upgrade --install istio-ingressgateway-1-23 oci://us-docker.pkg.dev/gloo-me autoscaling: enabled: false profile: ambient +revision: 1-23-0-patch1 imagePullPolicy: IfNotPresent labels: app: istio-ingressgateway istio: ingressgateway + revision: 1-23-0-patch1 service: type: None EOF -helm upgrade --install istio-eastwestgateway-1-23 oci://us-docker.pkg.dev/gloo-mesh/istio-helm-/gateway \ +helm upgrade --install istio-eastwestgateway-1-23-0-patch1 oci://us-docker.pkg.dev/gloo-mesh/istio-helm-/gateway \ --namespace istio-gateways \ 
--kube-context=${CLUSTER1} \ --version 1.23.0-patch1-solo \ @@ -3131,6 +3146,7 @@ helm upgrade --install istio-eastwestgateway-1-23 oci://us-docker.pkg.dev/gloo-m autoscaling: enabled: false profile: ambient +revision: 1-23-0-patch1 imagePullPolicy: IfNotPresent env: ISTIO_META_REQUESTED_NETWORK_VIEW: cluster1 @@ -3138,6 +3154,7 @@ env: labels: app: istio-ingressgateway istio: eastwestgateway + revision: 1-23-0-patch1 topology.istio.io/network: cluster1 service: type: None @@ -3156,9 +3173,10 @@ helm upgrade --install istio-base oci://us-docker.pkg.dev/gloo-mesh/istio-helm-< -f - </istiod \ +helm upgrade --install istiod-1-23-0-patch1 oci://us-docker.pkg.dev/gloo-mesh/istio-helm-/istiod \ --namespace istio-system \ --kube-context=${CLUSTER2} \ --version 1.23.0-patch1-solo \ @@ -3172,6 +3190,7 @@ global: multiCluster: clusterName: cluster2 profile: ambient +revision: 1-23-0-patch1 istio_cni: enabled: true meshConfig: @@ -3199,6 +3218,7 @@ global: hub: us-docker.pkg.dev/gloo-mesh/istio- proxy: 1.23.0-patch1-solo profile: ambient +revision: 1-23-0-patch1 cni: ambient: dnsCapture: true @@ -3215,6 +3235,7 @@ helm upgrade --install ztunnel oci://us-docker.pkg.dev/gloo-mesh/istio-helm- @@ -3230,7 +3251,7 @@ terminationGracePeriodSeconds: 29 variant: distroless EOF -helm upgrade --install istio-ingressgateway-1-23 oci://us-docker.pkg.dev/gloo-mesh/istio-helm-/gateway \ +helm upgrade --install istio-ingressgateway-1-23-0-patch1 oci://us-docker.pkg.dev/gloo-mesh/istio-helm-/gateway \ --namespace istio-gateways \ --kube-context=${CLUSTER2} \ --version 1.23.0-patch1-solo \ @@ -3239,15 +3260,17 @@ helm upgrade --install istio-ingressgateway-1-23 oci://us-docker.pkg.dev/gloo-me autoscaling: enabled: false profile: ambient +revision: 1-23-0-patch1 imagePullPolicy: IfNotPresent labels: app: istio-ingressgateway istio: ingressgateway + revision: 1-23-0-patch1 service: type: None EOF -helm upgrade --install istio-eastwestgateway-1-23 
oci://us-docker.pkg.dev/gloo-mesh/istio-helm-/gateway \ +helm upgrade --install istio-eastwestgateway-1-23-0-patch1 oci://us-docker.pkg.dev/gloo-mesh/istio-helm-/gateway \ --namespace istio-gateways \ --kube-context=${CLUSTER2} \ --version 1.23.0-patch1-solo \ @@ -3256,6 +3279,7 @@ helm upgrade --install istio-eastwestgateway-1-23 oci://us-docker.pkg.dev/gloo-m autoscaling: enabled: false profile: ambient +revision: 1-23-0-patch1 imagePullPolicy: IfNotPresent env: ISTIO_META_REQUESTED_NETWORK_VIEW: cluster2 @@ -3263,6 +3287,7 @@ env: labels: app: istio-ingressgateway istio: eastwestgateway + revision: 1-23-0-patch1 topology.istio.io/network: cluster2 service: type: None @@ -3289,10 +3314,10 @@ afterEach(function (done) { }); describe("Checking Istio installation", function() { - it('istiod pods are ready in cluster ' + process.env.CLUSTER1, () => helpers.checkDeploymentsWithLabels({ context: process.env.CLUSTER1, namespace: "istio-system", labels: "app=istiod", instances: 1 })); - it('gateway pods are ready in cluster ' + process.env.CLUSTER1, () => helpers.checkDeploymentsWithLabels({ context: process.env.CLUSTER1, namespace: "istio-gateways", labels: "app=istio-ingressgateway", instances: 2 })); - it('istiod pods are ready in cluster ' + process.env.CLUSTER2, () => helpers.checkDeploymentsWithLabels({ context: process.env.CLUSTER2, namespace: "istio-system", labels: "app=istiod", instances: 1 })); - it('gateway pods are ready in cluster ' + process.env.CLUSTER2, () => helpers.checkDeploymentsWithLabels({ context: process.env.CLUSTER2, namespace: "istio-gateways", labels: "app=istio-ingressgateway", instances: 2 })); + it('istiod pods are ready in cluster ' + process.env.CLUSTER1, () => helpers.checkDeploymentsWithLabels({ context: process.env.CLUSTER1, namespace: "istio-system", labels: "app=istiod", instances: 2 })); + it('gateway pods are ready in cluster ' + process.env.CLUSTER1, () => helpers.checkDeploymentsWithLabels({ context: process.env.CLUSTER1, 
namespace: "istio-gateways", labels: "app=istio-ingressgateway", instances: 4 })); + it('istiod pods are ready in cluster ' + process.env.CLUSTER2, () => helpers.checkDeploymentsWithLabels({ context: process.env.CLUSTER2, namespace: "istio-system", labels: "app=istiod", instances: 2 })); + it('gateway pods are ready in cluster ' + process.env.CLUSTER2, () => helpers.checkDeploymentsWithLabels({ context: process.env.CLUSTER2, namespace: "istio-gateways", labels: "app=istio-ingressgateway", instances: 4 })); it("Gateways have an ip attached in cluster " + process.env.CLUSTER1, () => { let cli = chaiExec("kubectl --context " + process.env.CLUSTER1 + " -n istio-gateways get svc -l app=istio-ingressgateway -o jsonpath='{.items}'"); cli.stderr.should.be.empty; @@ -3367,6 +3392,113 @@ timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || + +## Lab 14 - Migrate workloads to a new Istio revision + +Now, let's label all namespaces to use the new revision and rollout all deployments so that their proxies connect to the new revision: + +```bash +kubectl --context ${CLUSTER1} get ns -l istio.io/rev=1-23 -o json | jq -r '.items[].metadata.name' | while read ns; do + kubectl --context ${CLUSTER1} label ns ${ns} istio.io/rev=1-23-0-patch1 --overwrite +done +kubectl --context ${CLUSTER2} get ns -l istio.io/rev=1-23 -o json | jq -r '.items[].metadata.name' | while read ns; do + kubectl --context ${CLUSTER2} label ns ${ns} istio.io/rev=1-23-0-patch1 --overwrite +done +kubectl --context ${CLUSTER1} -n httpbin patch deploy in-mesh --patch "{\"spec\": {\"template\": {\"metadata\": {\"labels\": {\"istio.io/rev\": \"1-23-0-patch1\" }}}}}" +kubectl --context ${CLUSTER1} -n clients patch deploy in-mesh-with-sidecar --patch "{\"spec\": {\"template\": {\"metadata\": {\"labels\": {\"istio.io/rev\": \"1-23-0-patch1\" }}}}}" +``` + + +Test that you can still access the `productpage` service through the Istio Ingress Gateway corresponding to the old revision using the 
command below: + +```bash +curl -k "https:///productpage" -I +``` + +You should get a response similar to the following one: + +``` +HTTP/2 200 +server: istio-envoy +date: Wed, 24 Aug 2022 14:58:22 GMT +content-type: application/json +content-length: 670 +access-control-allow-origin: * +access-control-allow-credentials: true +x-envoy-upstream-service-time: 7 +``` + + + +All good, so we can now configure the Istio gateway service(s) to use both revisions: + +```bash +kubectl --context ${CLUSTER1} -n istio-gateways patch svc istio-ingressgateway --type=json --patch '[{"op": "remove", "path": "/spec/selector/revision"}]' +kubectl --context ${CLUSTER1} -n istio-gateways patch svc istio-eastwestgateway --type=json --patch '[{"op": "remove", "path": "/spec/selector/revision"}]' +kubectl --context ${CLUSTER2} -n istio-gateways patch svc istio-ingressgateway --type=json --patch '[{"op": "remove", "path": "/spec/selector/revision"}]' +kubectl --context ${CLUSTER2} -n istio-gateways patch svc istio-eastwestgateway --type=json --patch '[{"op": "remove", "path": "/spec/selector/revision"}]' +``` + +We don't switch the selector directly from the old revision to the new one, to avoid any requests being dropped. + +Test that you can still access the `productpage` service: + +```bash +curl -k "https:///productpage" -I +``` + +You should get a response similar to the following one: + +``` +HTTP/2 200 +server: istio-envoy +date: Wed, 24 Aug 2022 14:58:22 GMT +content-type: application/json +content-length: 670 +access-control-allow-origin: * +access-control-allow-credentials: true +``` + + + + + +
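To make the effect of the patch commands above concrete, here is a sketch of the gateway Service selector before and after the change (selector values taken from the gateway Services defined earlier in this README; all other Service fields omitted):

```yaml
# Before: the Service only selects gateway pods from the old revision.
spec:
  selector:
    app: istio-ingressgateway
    istio: ingressgateway
    revision: 1-23
---
# After the JSON patch removes the `revision` key, the remaining labels
# match the gateway pods of both revisions, so the Service keeps healthy
# endpoints throughout the cutover.
spec:
  selector:
    app: istio-ingressgateway
    istio: ingressgateway
```

The east-west gateway Service follows the same pattern, with `istio: eastwestgateway` as the second label.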
Waypoints are upgraded automatically The waypoints are upgraded by Istiod's Gateway Controller, so if you check the status you will see that it is on the newest "1.23.0-patch1" version: @@ -3403,46 +3535,150 @@ describe("istio in place upgrades", function() { }); }); EOF -echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-istio-helm/tests/waypoint-upgraded.test.js.liquid" +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/helm-migrate-workloads-to-revision/tests/waypoint-upgraded.test.js.liquid" timeout --signal=INT 1m mocha ./test.js --timeout 10000 --retries=60 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } --> -Test that you can still access the `productpage` service through the Istio Ingress Gateway corresponding to the old revision using the command below: -```shell -curl -k "https:///productpage" -I + + +## Lab 15 - Helm Cleanup Istio Revision + +Everything is working well with the new version, so we can uninstall the previous version. 
+
+Let's start with the gateways:
+
+```bash
+helm uninstall istio-ingressgateway-1-23 \
+--namespace istio-gateways \
+--kube-context=${CLUSTER1}
+
+helm uninstall istio-eastwestgateway-1-23 \
+--namespace istio-gateways \
+--kube-context=${CLUSTER1}
+
+helm uninstall istio-ingressgateway-1-23 \
+--namespace istio-gateways \
+--kube-context=${CLUSTER2}
+
+helm uninstall istio-eastwestgateway-1-23 \
+--namespace istio-gateways \
+--kube-context=${CLUSTER2}
+```
+
+
-```http,nocopy
-HTTP/2 200
-server: istio-envoy
-date: Wed, 24 Aug 2022 14:58:22 GMT
-content-type: application/json
-content-length: 670
-access-control-allow-origin: *
-access-control-allow-credentials: true
-x-envoy-upstream-service-time: 7
+And then the control plane:
+
+```bash
+helm uninstall istiod-1-23 \
+--namespace istio-system \
+--kube-context=${CLUSTER1}
+
+helm uninstall istiod-1-23 \
+--namespace istio-system \
+--kube-context=${CLUSTER2}
+```
+
+Run the following command:
+
+```bash
+kubectl --context ${CLUSTER1} -n istio-system get pods && kubectl --context ${CLUSTER1} -n istio-gateways get pods
+```
+
+You should get the following output:
+
+```
+NAME READY STATUS RESTARTS AGE
+istiod-1-23-1-796fffbdf5-n6xc9 1/1 Running 0 25m
+NAME READY STATUS RESTARTS AGE
+istio-eastwestgateway-1-23-1-546446c77b-zg5hd 1/1 Running 0 25m
+istio-ingressgateway-1-23-1-784f69b4bb-lcfk9 1/1 Running 0 25m
+```
+
+This confirms that only the new version is running.

-
-## Lab 14 - Ambient Egress Traffic with Waypoint
+## Lab 16 - Ambient Egress Traffic with Waypoint

In this lab, we'll explore how to control and secure outbound traffic from your Ambient Mesh using Waypoints.

We'll start by restricting all outgoing traffic from a specific namespace, then set up a shared Waypoint to manage egress traffic centrally. This approach allows for consistent policy enforcement across multiple services and namespaces.
@@ -3735,7 +3971,7 @@ kubectl --context ${CLUSTER1} delete authorizationpolicy httpbin -n egress -## Lab 15 - Waypoint Deployment Options +## Lab 17 - Waypoint Deployment Options This lab explores different ways to deploy Waypoints in Istio's Ambient Mesh. We'll learn about deploying Waypoints for services and for workloads. diff --git a/gloo-mesh/core/2-6/ambient/data/steps/deploy-kind-clusters/deploy-cluster1.sh b/gloo-mesh/core/2-6/ambient/data/steps/deploy-kind-clusters/deploy-cluster1.sh new file mode 100644 index 0000000000..3fda068282 --- /dev/null +++ b/gloo-mesh/core/2-6/ambient/data/steps/deploy-kind-clusters/deploy-cluster1.sh @@ -0,0 +1,292 @@ +#!/usr/bin/env bash +set -o errexit + +number="2" +name="cluster1" +region="" +zone="" +twodigits=$(printf "%02d\n" $number) + +kindest_node=${KINDEST_NODE} + +if [ -z "$kindest_node" ]; then + export k8s_version="1.28.0" + + [[ ${k8s_version::1} != 'v' ]] && export k8s_version=v${k8s_version} + kindest_node_ver=$(curl --silent "https://registry.hub.docker.com/v2/repositories/kindest/node/tags?page_size=100" \ + | jq -r '.results | .[] | select(.name==env.k8s_version) | .name+"@"+.digest') + + if [ -z "$kindest_node_ver" ]; then + echo "Incorrect Kubernetes version provided: ${k8s_version}." + exit 1 + fi + kindest_node=kindest/node:${kindest_node_ver} +fi +echo "Using KinD image: ${kindest_node}" + +if [ -z "$3" ]; then + case $name in + cluster1) + region=us-west-1 + ;; + cluster2) + region=us-west-2 + ;; + *) + region=us-east-1 + ;; + esac +fi + +if [ -z "$4" ]; then + case $name in + cluster1) + zone=us-west-1a + ;; + cluster2) + zone=us-west-2a + ;; + *) + zone=us-east-1a + ;; + esac +fi + +if hostname -I 2>/dev/null; then + myip=$(hostname -I | awk '{ print $1 }') +else + myip=$(ipconfig getifaddr en0) +fi + +# Function to determine the next available cluster number +get_next_cluster_number() { + if ! 
kind get clusters 2>&1 | grep "^kind" > /dev/null; then + echo 1 + else + highest_num=$(kind get clusters | grep "^kind" | tail -1 | cut -c 5-) + echo $((highest_num + 1)) + fi +} + +if [ -f /.dockerenv ]; then +myip=$HOST_IP +container=$(docker inspect $(docker ps -q) | jq -r ".[] | select(.Config.Hostname == \"$HOSTNAME\") | .Name" | cut -d/ -f2) +docker network connect "kind" $container || true +number=$(get_next_cluster_number) +twodigits=$(printf "%02d\n" $number) +fi + +reg_name='kind-registry' +reg_port='5000' +docker start "${reg_name}" 2>/dev/null || \ +docker run -d --restart=always -p "0.0.0.0:${reg_port}:5000" --name "${reg_name}" registry:2 + +cache_port='5000' +cat > registries < ${HOME}/.${cache_name}-config.yml </dev/null || \ +docker run -d --restart=always ${DEPLOY_EXTRA_PARAMS} -v ${HOME}/.${cache_name}-config.yml:/etc/docker/registry/config.yml --name "${cache_name}" registry:2 +done +mkdir -p /tmp/oidc + +cat <<'EOF' >/tmp/oidc/sa-signer-pkcs8.pub +-----BEGIN PUBLIC KEY----- +MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA53YiBcrn7+ZK0Vb4odeA +1riYdvEb8To4H6/HtF+OKzuCIXFQ+bRy7yMrDGITYpfYPrTZOgfdeTLZqOiAj+cL +395nvxdly83SUrdh7ItfOPRluuuiPHnFn111wpyjBw5nut4Kx+M5MksNfA1hU0Zw +zIM9OviX8iEF8xHWUtz4BAMDG8N6+zpLo0pAzaei5hKuLZ9dZOzHBC8VOW82cQMm +5X5uOKsCHMtNSjqYUNB1DxN6xxM+odGWT/6xthPGk6YCxmO28YHPFZfiS2eAIpD8 +2p/16KQKU6TkZSrldkYxiHIPhu+5f9faZJG7dB9pLN1SfdTBio4PK5Mz9muLUCv9 +ywIDAQAB +-----END PUBLIC KEY----- +EOF + +cat <<'EOF' >/tmp/oidc/sa-signer.key +-----BEGIN RSA PRIVATE KEY----- +MIIEpAIBAAKCAQEA53YiBcrn7+ZK0Vb4odeA1riYdvEb8To4H6/HtF+OKzuCIXFQ ++bRy7yMrDGITYpfYPrTZOgfdeTLZqOiAj+cL395nvxdly83SUrdh7ItfOPRluuui +PHnFn111wpyjBw5nut4Kx+M5MksNfA1hU0ZwzIM9OviX8iEF8xHWUtz4BAMDG8N6 ++zpLo0pAzaei5hKuLZ9dZOzHBC8VOW82cQMm5X5uOKsCHMtNSjqYUNB1DxN6xxM+ +odGWT/6xthPGk6YCxmO28YHPFZfiS2eAIpD82p/16KQKU6TkZSrldkYxiHIPhu+5 +f9faZJG7dB9pLN1SfdTBio4PK5Mz9muLUCv9ywIDAQABAoIBAB8tro+RMYUDRHjG +el9ypAxIeWEsQVNRQFYkW4ZUiNYSAgl3Ni0svX6xAg989peFVL+9pLVIcfDthJxY 
+FVlNCjBxyQ/YmwHFC9vQkARJEd6eLUXsj8INtS0ubbp1VxCQRDDL0C/0z7OSoJJh +SwboqjEiTJExA2a+RArmEDTBRzdi3t+kT8G23JcqOivrITt17K6bQYyJXw7/vUdc +r/R+hfd5TqVq92VddzDT7RNJAxsbPPXjGnESlq1GALBDs+uBGYsP0fiEJb2nicSv +z9fBnBeERhut1gcE0C0iLRQZb+3r8TitBtxrZv+0BHgXrkKtXDwWTqGEKOwC4dBn +7nxkH2ECgYEA6+/DOTABGYOWOQftFkJMjcugzDrjoGpuXuVOTb65T+3FHAzU93zy +3bt3wQxrlugluyy9Sc/PL3ck2LgUsPHZ+s7zsdGvvGALBD6bOSSKATz9JgjwifO8 +PgqUz1kXRwez2CtKLOOCFFtcIzEdWIzsa1ubNqLzgN7rD+XBkUc2uEcCgYEA+yTy +72EDMQVoIZOygytHsDNdy0iS2RsBbdurT27wkYuFpFUVWdbNSL+8haE+wJHseHcw +BD4WIMpU+hnS4p4OO8+6V7PiXOS5E/se91EJigZAoixgDUiC8ihojWgK9PYEavUo +hULWbayO59SxYWeUI4Ze0GP8Jw8vdB86ib4ulF0CgYEAgyzRuLjk05+iZODwQyDn +WSquov3W0rh51s7cw0LX2wWSQm8r9NGGYhs5kJ5sLwGxAKj2MNSWF4jBdrCZ6Gr+ +y4BGY0X209/+IAUC3jlfdSLIiF4OBlT6AvB1HfclhvtUVUp0OhLfnpvQ1UwYScRI +KcRLvovIoIzP2g3emfwjAz8CgYEAxUHhOhm1mwRHJNBQTuxok0HVMrze8n1eov39 +0RcvBvJSVp+pdHXdqX1HwqHCmxhCZuAeq8ZkNP8WvZYY6HwCbAIdt5MHgbT4lXQR +f2l8F5gPnhFCpExG5ZLNg/urV3oAQE4stHap21zEpdyOMhZb6Yc5424U+EzaFdgN +b3EcPtUCgYAkKvUlSnBbgiJz1iaN6fuTqH0efavuFGMhjNmG7GtpNXdgyl1OWIuc +Yu+tZtHXtKYf3B99GwPrFzw/7yfDwae5YeWmi2/pFTH96wv3brJBqkAWY8G5Rsmd +qF50p34vIFqUBniNRwSArx8t2dq/CuAMgLAtSjh70Q6ZAnCF85PD8Q== +-----END RSA PRIVATE KEY----- +EOF + +echo Contents of kind${number}.yaml +cat << EOF | tee kind${number}.yaml +kind: Cluster +apiVersion: kind.x-k8s.io/v1alpha4 +nodes: +- role: control-plane + image: ${kindest_node} + extraPortMappings: + - containerPort: 6443 + hostPort: 70${twodigits} + extraMounts: + - containerPath: /etc/kubernetes/oidc + hostPath: /tmp/oidc + labels: + ingress-ready: true + topology.kubernetes.io/region: ${region} + topology.kubernetes.io/zone: ${zone} +networking: + disableDefaultCNI: true + serviceSubnet: "10.$(echo $twodigits | sed 's/^0*//').0.0/16" + podSubnet: "10.1${twodigits}.0.0/16" +kubeadmConfigPatches: +- | + kind: ClusterConfiguration + apiServer: + extraArgs: + service-account-key-file: /etc/kubernetes/pki/sa.pub + service-account-key-file: 
/etc/kubernetes/oidc/sa-signer-pkcs8.pub + service-account-signing-key-file: /etc/kubernetes/oidc/sa-signer.key + service-account-issuer: https://solo-workshop-oidc.s3.us-east-1.amazonaws.com + api-audiences: sts.amazonaws.com + extraVolumes: + - name: oidc + hostPath: /etc/kubernetes/oidc + mountPath: /etc/kubernetes/oidc + readOnly: true + metadata: + name: config +containerdConfigPatches: +- |- + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:${reg_port}"] + endpoint = ["http://${reg_name}:${reg_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"] + endpoint = ["http://docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-docker.pkg.dev"] + endpoint = ["http://us-docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-central1-docker.pkg.dev"] + endpoint = ["http://us-central1-docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."quay.io"] + endpoint = ["http://quay:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"] + endpoint = ["http://gcr:${cache_port}"] +EOF +echo ----------------------------------------------------- + +kind create cluster --name kind${number} --config kind${number}.yaml +ipkind=$(docker inspect kind${number}-control-plane | jq -r '.[0].NetworkSettings.Networks[].IPAddress') +networkkind=$(echo ${ipkind} | awk -F. 
'{ print $1"."$2 }') +kubectl config set-cluster kind-kind${number} --server=https://${myip}:70${twodigits} --insecure-skip-tls-verify=true + +# Preload images +cat << EOF >> images.txt +quay.io/metallb/controller:v0.13.12 +quay.io/metallb/speaker:v0.13.12 +EOF +cat images.txt | while read image; do + docker pull $image || true + kind load docker-image $image --name kind${number} || true +done + +docker network connect "kind" "${reg_name}" || true +docker network connect "kind" docker || true +docker network connect "kind" us-docker || true +docker network connect "kind" us-central1-docker || true +docker network connect "kind" quay || true +docker network connect "kind" gcr || true +# Calico for ipv4 +curl -sL https://raw.githubusercontent.com/projectcalico/calico/v3.28.1/manifests/calico.yaml | sed 's/250m/50m/g' | kubectl --context kind-kind${number} apply -f - + +for i in 1 2 3 4 5; do kubectl --context=kind-kind${number} apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml && break || sleep 15; done +kubectl --context=kind-kind${number} create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)" +kubectl --context=kind-kind${number} -n metallb-system rollout status deploy controller || true + +cat << EOF | tee metallb${number}.yaml +apiVersion: metallb.io/v1beta1 +kind: IPAddressPool +metadata: + name: first-pool + namespace: metallb-system +spec: + addresses: + - ${networkkind}.1${twodigits}.1-${networkkind}.1${twodigits}.254 +--- +apiVersion: metallb.io/v1beta1 +kind: L2Advertisement +metadata: + name: empty + namespace: metallb-system +EOF + +printf "Create IPAddressPool in kind-kind${number}\n" +for i in {1..10}; do +kubectl --context=kind-kind${number} apply -f metallb${number}.yaml && break +sleep 2 +done + +# connect the registry to the cluster network if not already connected +printf "Renaming context kind-kind${number} to ${name}\n" +for i in {1..100}; do 
+ (kubectl config get-contexts -oname | grep ${name}) && break + kubectl config rename-context kind-kind${number} ${name} && break + printf " $i"/100 + sleep 2 + [ $i -lt 100 ] || exit 1 +done + +# Document the local registry +# https://github.com/kubernetes/enhancements/tree/master/keps/sig-cluster-lifecycle/generic/1755-communicating-a-local-registry +cat </dev/null; then + myip=$(hostname -I | awk '{ print $1 }') +else + myip=$(ipconfig getifaddr en0) +fi + +# Function to determine the next available cluster number +get_next_cluster_number() { + if ! kind get clusters 2>&1 | grep "^kind" > /dev/null; then + echo 1 + else + highest_num=$(kind get clusters | grep "^kind" | tail -1 | cut -c 5-) + echo $((highest_num + 1)) + fi +} + +if [ -f /.dockerenv ]; then +myip=$HOST_IP +container=$(docker inspect $(docker ps -q) | jq -r ".[] | select(.Config.Hostname == \"$HOSTNAME\") | .Name" | cut -d/ -f2) +docker network connect "kind" $container || true +number=$(get_next_cluster_number) +twodigits=$(printf "%02d\n" $number) +fi + +reg_name='kind-registry' +reg_port='5000' +docker start "${reg_name}" 2>/dev/null || \ +docker run -d --restart=always -p "0.0.0.0:${reg_port}:5000" --name "${reg_name}" registry:2 + +cache_port='5000' +cat > registries < ${HOME}/.${cache_name}-config.yml </dev/null || \ +docker run -d --restart=always ${DEPLOY_EXTRA_PARAMS} -v ${HOME}/.${cache_name}-config.yml:/etc/docker/registry/config.yml --name "${cache_name}" registry:2 +done +mkdir -p /tmp/oidc + +cat <<'EOF' >/tmp/oidc/sa-signer-pkcs8.pub +-----BEGIN PUBLIC KEY----- +MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA53YiBcrn7+ZK0Vb4odeA +1riYdvEb8To4H6/HtF+OKzuCIXFQ+bRy7yMrDGITYpfYPrTZOgfdeTLZqOiAj+cL +395nvxdly83SUrdh7ItfOPRluuuiPHnFn111wpyjBw5nut4Kx+M5MksNfA1hU0Zw +zIM9OviX8iEF8xHWUtz4BAMDG8N6+zpLo0pAzaei5hKuLZ9dZOzHBC8VOW82cQMm +5X5uOKsCHMtNSjqYUNB1DxN6xxM+odGWT/6xthPGk6YCxmO28YHPFZfiS2eAIpD8 +2p/16KQKU6TkZSrldkYxiHIPhu+5f9faZJG7dB9pLN1SfdTBio4PK5Mz9muLUCv9 +ywIDAQAB +-----END PUBLIC 
KEY----- +EOF + +cat <<'EOF' >/tmp/oidc/sa-signer.key +-----BEGIN RSA PRIVATE KEY----- +MIIEpAIBAAKCAQEA53YiBcrn7+ZK0Vb4odeA1riYdvEb8To4H6/HtF+OKzuCIXFQ ++bRy7yMrDGITYpfYPrTZOgfdeTLZqOiAj+cL395nvxdly83SUrdh7ItfOPRluuui +PHnFn111wpyjBw5nut4Kx+M5MksNfA1hU0ZwzIM9OviX8iEF8xHWUtz4BAMDG8N6 ++zpLo0pAzaei5hKuLZ9dZOzHBC8VOW82cQMm5X5uOKsCHMtNSjqYUNB1DxN6xxM+ +odGWT/6xthPGk6YCxmO28YHPFZfiS2eAIpD82p/16KQKU6TkZSrldkYxiHIPhu+5 +f9faZJG7dB9pLN1SfdTBio4PK5Mz9muLUCv9ywIDAQABAoIBAB8tro+RMYUDRHjG +el9ypAxIeWEsQVNRQFYkW4ZUiNYSAgl3Ni0svX6xAg989peFVL+9pLVIcfDthJxY +FVlNCjBxyQ/YmwHFC9vQkARJEd6eLUXsj8INtS0ubbp1VxCQRDDL0C/0z7OSoJJh +SwboqjEiTJExA2a+RArmEDTBRzdi3t+kT8G23JcqOivrITt17K6bQYyJXw7/vUdc +r/R+hfd5TqVq92VddzDT7RNJAxsbPPXjGnESlq1GALBDs+uBGYsP0fiEJb2nicSv +z9fBnBeERhut1gcE0C0iLRQZb+3r8TitBtxrZv+0BHgXrkKtXDwWTqGEKOwC4dBn +7nxkH2ECgYEA6+/DOTABGYOWOQftFkJMjcugzDrjoGpuXuVOTb65T+3FHAzU93zy +3bt3wQxrlugluyy9Sc/PL3ck2LgUsPHZ+s7zsdGvvGALBD6bOSSKATz9JgjwifO8 +PgqUz1kXRwez2CtKLOOCFFtcIzEdWIzsa1ubNqLzgN7rD+XBkUc2uEcCgYEA+yTy +72EDMQVoIZOygytHsDNdy0iS2RsBbdurT27wkYuFpFUVWdbNSL+8haE+wJHseHcw +BD4WIMpU+hnS4p4OO8+6V7PiXOS5E/se91EJigZAoixgDUiC8ihojWgK9PYEavUo +hULWbayO59SxYWeUI4Ze0GP8Jw8vdB86ib4ulF0CgYEAgyzRuLjk05+iZODwQyDn +WSquov3W0rh51s7cw0LX2wWSQm8r9NGGYhs5kJ5sLwGxAKj2MNSWF4jBdrCZ6Gr+ +y4BGY0X209/+IAUC3jlfdSLIiF4OBlT6AvB1HfclhvtUVUp0OhLfnpvQ1UwYScRI +KcRLvovIoIzP2g3emfwjAz8CgYEAxUHhOhm1mwRHJNBQTuxok0HVMrze8n1eov39 +0RcvBvJSVp+pdHXdqX1HwqHCmxhCZuAeq8ZkNP8WvZYY6HwCbAIdt5MHgbT4lXQR +f2l8F5gPnhFCpExG5ZLNg/urV3oAQE4stHap21zEpdyOMhZb6Yc5424U+EzaFdgN +b3EcPtUCgYAkKvUlSnBbgiJz1iaN6fuTqH0efavuFGMhjNmG7GtpNXdgyl1OWIuc +Yu+tZtHXtKYf3B99GwPrFzw/7yfDwae5YeWmi2/pFTH96wv3brJBqkAWY8G5Rsmd +qF50p34vIFqUBniNRwSArx8t2dq/CuAMgLAtSjh70Q6ZAnCF85PD8Q== +-----END RSA PRIVATE KEY----- +EOF + +echo Contents of kind${number}.yaml +cat << EOF | tee kind${number}.yaml +kind: Cluster +apiVersion: kind.x-k8s.io/v1alpha4 +nodes: +- role: control-plane + image: ${kindest_node} + extraPortMappings: + - containerPort: 6443 + 
hostPort: 70${twodigits} + extraMounts: + - containerPath: /etc/kubernetes/oidc + hostPath: /tmp/oidc + labels: + ingress-ready: true + topology.kubernetes.io/region: ${region} + topology.kubernetes.io/zone: ${zone} +networking: + disableDefaultCNI: true + serviceSubnet: "10.$(echo $twodigits | sed 's/^0*//').0.0/16" + podSubnet: "10.1${twodigits}.0.0/16" +kubeadmConfigPatches: +- | + kind: ClusterConfiguration + apiServer: + extraArgs: + service-account-key-file: /etc/kubernetes/pki/sa.pub + service-account-key-file: /etc/kubernetes/oidc/sa-signer-pkcs8.pub + service-account-signing-key-file: /etc/kubernetes/oidc/sa-signer.key + service-account-issuer: https://solo-workshop-oidc.s3.us-east-1.amazonaws.com + api-audiences: sts.amazonaws.com + extraVolumes: + - name: oidc + hostPath: /etc/kubernetes/oidc + mountPath: /etc/kubernetes/oidc + readOnly: true + metadata: + name: config +containerdConfigPatches: +- |- + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:${reg_port}"] + endpoint = ["http://${reg_name}:${reg_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"] + endpoint = ["http://docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-docker.pkg.dev"] + endpoint = ["http://us-docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-central1-docker.pkg.dev"] + endpoint = ["http://us-central1-docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."quay.io"] + endpoint = ["http://quay:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"] + endpoint = ["http://gcr:${cache_port}"] +EOF +echo ----------------------------------------------------- + +kind create cluster --name kind${number} --config kind${number}.yaml +ipkind=$(docker inspect kind${number}-control-plane | jq -r '.[0].NetworkSettings.Networks[].IPAddress') +networkkind=$(echo ${ipkind} | awk -F. 
'{ print $1"."$2 }') +kubectl config set-cluster kind-kind${number} --server=https://${myip}:70${twodigits} --insecure-skip-tls-verify=true + +# Preload images +cat << EOF >> images.txt +quay.io/metallb/controller:v0.13.12 +quay.io/metallb/speaker:v0.13.12 +EOF +cat images.txt | while read image; do + docker pull $image || true + kind load docker-image $image --name kind${number} || true +done + +docker network connect "kind" "${reg_name}" || true +docker network connect "kind" docker || true +docker network connect "kind" us-docker || true +docker network connect "kind" us-central1-docker || true +docker network connect "kind" quay || true +docker network connect "kind" gcr || true +# Calico for ipv4 +curl -sL https://raw.githubusercontent.com/projectcalico/calico/v3.28.1/manifests/calico.yaml | sed 's/250m/50m/g' | kubectl --context kind-kind${number} apply -f - + +for i in 1 2 3 4 5; do kubectl --context=kind-kind${number} apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml && break || sleep 15; done +kubectl --context=kind-kind${number} create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)" +kubectl --context=kind-kind${number} -n metallb-system rollout status deploy controller || true + +cat << EOF | tee metallb${number}.yaml +apiVersion: metallb.io/v1beta1 +kind: IPAddressPool +metadata: + name: first-pool + namespace: metallb-system +spec: + addresses: + - ${networkkind}.1${twodigits}.1-${networkkind}.1${twodigits}.254 +--- +apiVersion: metallb.io/v1beta1 +kind: L2Advertisement +metadata: + name: empty + namespace: metallb-system +EOF + +printf "Create IPAddressPool in kind-kind${number}\n" +for i in {1..10}; do +kubectl --context=kind-kind${number} apply -f metallb${number}.yaml && break +sleep 2 +done + +# connect the registry to the cluster network if not already connected +printf "Renaming context kind-kind${number} to ${name}\n" +for i in {1..100}; do 
+ (kubectl config get-contexts -oname | grep ${name}) && break + kubectl config rename-context kind-kind${number} ${name} && break + printf " $i"/100 + sleep 2 + [ $i -lt 100 ] || exit 1 +done + +# Document the local registry +# https://github.com/kubernetes/enhancements/tree/master/keps/sig-cluster-lifecycle/generic/1755-communicating-a-local-registry +cat </dev/null; then + myip=$(hostname -I | awk '{ print $1 }') +else + myip=$(ipconfig getifaddr en0) +fi + +# Function to determine the next available cluster number +get_next_cluster_number() { + if ! kind get clusters 2>&1 | grep "^kind" > /dev/null; then + echo 1 + else + highest_num=$(kind get clusters | grep "^kind" | tail -1 | cut -c 5-) + echo $((highest_num + 1)) + fi +} + +if [ -f /.dockerenv ]; then +myip=$HOST_IP +container=$(docker inspect $(docker ps -q) | jq -r ".[] | select(.Config.Hostname == \"$HOSTNAME\") | .Name" | cut -d/ -f2) +docker network connect "kind" $container || true +number=$(get_next_cluster_number) +twodigits=$(printf "%02d\n" $number) +fi + +reg_name='kind-registry' +reg_port='5000' +docker start "${reg_name}" 2>/dev/null || \ +docker run -d --restart=always -p "0.0.0.0:${reg_port}:5000" --name "${reg_name}" registry:2 + +cache_port='5000' +cat > registries < ${HOME}/.${cache_name}-config.yml </dev/null || \ +docker run -d --restart=always ${DEPLOY_EXTRA_PARAMS} -v ${HOME}/.${cache_name}-config.yml:/etc/docker/registry/config.yml --name "${cache_name}" registry:2 +done +mkdir -p /tmp/oidc + +cat <<'EOF' >/tmp/oidc/sa-signer-pkcs8.pub +-----BEGIN PUBLIC KEY----- +MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA53YiBcrn7+ZK0Vb4odeA +1riYdvEb8To4H6/HtF+OKzuCIXFQ+bRy7yMrDGITYpfYPrTZOgfdeTLZqOiAj+cL +395nvxdly83SUrdh7ItfOPRluuuiPHnFn111wpyjBw5nut4Kx+M5MksNfA1hU0Zw +zIM9OviX8iEF8xHWUtz4BAMDG8N6+zpLo0pAzaei5hKuLZ9dZOzHBC8VOW82cQMm +5X5uOKsCHMtNSjqYUNB1DxN6xxM+odGWT/6xthPGk6YCxmO28YHPFZfiS2eAIpD8 +2p/16KQKU6TkZSrldkYxiHIPhu+5f9faZJG7dB9pLN1SfdTBio4PK5Mz9muLUCv9 +ywIDAQAB +-----END PUBLIC 
KEY----- +EOF + +cat <<'EOF' >/tmp/oidc/sa-signer.key +-----BEGIN RSA PRIVATE KEY----- +MIIEpAIBAAKCAQEA53YiBcrn7+ZK0Vb4odeA1riYdvEb8To4H6/HtF+OKzuCIXFQ ++bRy7yMrDGITYpfYPrTZOgfdeTLZqOiAj+cL395nvxdly83SUrdh7ItfOPRluuui +PHnFn111wpyjBw5nut4Kx+M5MksNfA1hU0ZwzIM9OviX8iEF8xHWUtz4BAMDG8N6 ++zpLo0pAzaei5hKuLZ9dZOzHBC8VOW82cQMm5X5uOKsCHMtNSjqYUNB1DxN6xxM+ +odGWT/6xthPGk6YCxmO28YHPFZfiS2eAIpD82p/16KQKU6TkZSrldkYxiHIPhu+5 +f9faZJG7dB9pLN1SfdTBio4PK5Mz9muLUCv9ywIDAQABAoIBAB8tro+RMYUDRHjG +el9ypAxIeWEsQVNRQFYkW4ZUiNYSAgl3Ni0svX6xAg989peFVL+9pLVIcfDthJxY +FVlNCjBxyQ/YmwHFC9vQkARJEd6eLUXsj8INtS0ubbp1VxCQRDDL0C/0z7OSoJJh +SwboqjEiTJExA2a+RArmEDTBRzdi3t+kT8G23JcqOivrITt17K6bQYyJXw7/vUdc +r/R+hfd5TqVq92VddzDT7RNJAxsbPPXjGnESlq1GALBDs+uBGYsP0fiEJb2nicSv +z9fBnBeERhut1gcE0C0iLRQZb+3r8TitBtxrZv+0BHgXrkKtXDwWTqGEKOwC4dBn +7nxkH2ECgYEA6+/DOTABGYOWOQftFkJMjcugzDrjoGpuXuVOTb65T+3FHAzU93zy +3bt3wQxrlugluyy9Sc/PL3ck2LgUsPHZ+s7zsdGvvGALBD6bOSSKATz9JgjwifO8 +PgqUz1kXRwez2CtKLOOCFFtcIzEdWIzsa1ubNqLzgN7rD+XBkUc2uEcCgYEA+yTy +72EDMQVoIZOygytHsDNdy0iS2RsBbdurT27wkYuFpFUVWdbNSL+8haE+wJHseHcw +BD4WIMpU+hnS4p4OO8+6V7PiXOS5E/se91EJigZAoixgDUiC8ihojWgK9PYEavUo +hULWbayO59SxYWeUI4Ze0GP8Jw8vdB86ib4ulF0CgYEAgyzRuLjk05+iZODwQyDn +WSquov3W0rh51s7cw0LX2wWSQm8r9NGGYhs5kJ5sLwGxAKj2MNSWF4jBdrCZ6Gr+ +y4BGY0X209/+IAUC3jlfdSLIiF4OBlT6AvB1HfclhvtUVUp0OhLfnpvQ1UwYScRI +KcRLvovIoIzP2g3emfwjAz8CgYEAxUHhOhm1mwRHJNBQTuxok0HVMrze8n1eov39 +0RcvBvJSVp+pdHXdqX1HwqHCmxhCZuAeq8ZkNP8WvZYY6HwCbAIdt5MHgbT4lXQR +f2l8F5gPnhFCpExG5ZLNg/urV3oAQE4stHap21zEpdyOMhZb6Yc5424U+EzaFdgN +b3EcPtUCgYAkKvUlSnBbgiJz1iaN6fuTqH0efavuFGMhjNmG7GtpNXdgyl1OWIuc +Yu+tZtHXtKYf3B99GwPrFzw/7yfDwae5YeWmi2/pFTH96wv3brJBqkAWY8G5Rsmd +qF50p34vIFqUBniNRwSArx8t2dq/CuAMgLAtSjh70Q6ZAnCF85PD8Q== +-----END RSA PRIVATE KEY----- +EOF + +echo Contents of kind${number}.yaml +cat << EOF | tee kind${number}.yaml +kind: Cluster +apiVersion: kind.x-k8s.io/v1alpha4 +nodes: +- role: control-plane + image: ${kindest_node} + extraPortMappings: + - containerPort: 6443 + 
hostPort: 70${twodigits} + extraMounts: + - containerPath: /etc/kubernetes/oidc + hostPath: /tmp/oidc + labels: + ingress-ready: true + topology.kubernetes.io/region: ${region} + topology.kubernetes.io/zone: ${zone} +networking: + disableDefaultCNI: true + serviceSubnet: "10.$(echo $twodigits | sed 's/^0*//').0.0/16" + podSubnet: "10.1${twodigits}.0.0/16" +kubeadmConfigPatches: +- | + kind: ClusterConfiguration + apiServer: + extraArgs: + service-account-key-file: /etc/kubernetes/pki/sa.pub + service-account-key-file: /etc/kubernetes/oidc/sa-signer-pkcs8.pub + service-account-signing-key-file: /etc/kubernetes/oidc/sa-signer.key + service-account-issuer: https://solo-workshop-oidc.s3.us-east-1.amazonaws.com + api-audiences: sts.amazonaws.com + extraVolumes: + - name: oidc + hostPath: /etc/kubernetes/oidc + mountPath: /etc/kubernetes/oidc + readOnly: true + metadata: + name: config +containerdConfigPatches: +- |- + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:${reg_port}"] + endpoint = ["http://${reg_name}:${reg_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"] + endpoint = ["http://docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-docker.pkg.dev"] + endpoint = ["http://us-docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-central1-docker.pkg.dev"] + endpoint = ["http://us-central1-docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."quay.io"] + endpoint = ["http://quay:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"] + endpoint = ["http://gcr:${cache_port}"] +EOF +echo ----------------------------------------------------- + +kind create cluster --name kind${number} --config kind${number}.yaml +ipkind=$(docker inspect kind${number}-control-plane | jq -r '.[0].NetworkSettings.Networks[].IPAddress') +networkkind=$(echo ${ipkind} | awk -F. 
'{ print $1"."$2 }') +kubectl config set-cluster kind-kind${number} --server=https://${myip}:70${twodigits} --insecure-skip-tls-verify=true + +# Preload images +cat << EOF >> images.txt +quay.io/metallb/controller:v0.13.12 +quay.io/metallb/speaker:v0.13.12 +EOF +cat images.txt | while read image; do + docker pull $image || true + kind load docker-image $image --name kind${number} || true +done + +docker network connect "kind" "${reg_name}" || true +docker network connect "kind" docker || true +docker network connect "kind" us-docker || true +docker network connect "kind" us-central1-docker || true +docker network connect "kind" quay || true +docker network connect "kind" gcr || true +# Calico for ipv4 +curl -sL https://raw.githubusercontent.com/projectcalico/calico/v3.28.1/manifests/calico.yaml | sed 's/250m/50m/g' | kubectl --context kind-kind${number} apply -f - + +for i in 1 2 3 4 5; do kubectl --context=kind-kind${number} apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml && break || sleep 15; done +kubectl --context=kind-kind${number} create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)" +kubectl --context=kind-kind${number} -n metallb-system rollout status deploy controller || true + +cat << EOF | tee metallb${number}.yaml +apiVersion: metallb.io/v1beta1 +kind: IPAddressPool +metadata: + name: first-pool + namespace: metallb-system +spec: + addresses: + - ${networkkind}.1${twodigits}.1-${networkkind}.1${twodigits}.254 +--- +apiVersion: metallb.io/v1beta1 +kind: L2Advertisement +metadata: + name: empty + namespace: metallb-system +EOF + +printf "Create IPAddressPool in kind-kind${number}\n" +for i in {1..10}; do +kubectl --context=kind-kind${number} apply -f metallb${number}.yaml && break +sleep 2 +done + +# connect the registry to the cluster network if not already connected +printf "Renaming context kind-kind${number} to ${name}\n" +for i in {1..100}; do 
+ (kubectl config get-contexts -oname | grep ${name}) && break + kubectl config rename-context kind-kind${number} ${name} && break + printf " $i"/100 + sleep 2 + [ $i -lt 100 ] || exit 1 +done + +# Document the local registry +# https://github.com/kubernetes/enhancements/tree/master/keps/sig-cluster-lifecycle/generic/1755-communicating-a-local-registry +cat </dev/null || true +source ./scripts/assert.sh +export MGMT=mgmt +export CLUSTER1=cluster1 +export CLUSTER2=cluster2 +bash ./data/steps/deploy-kind-clusters/deploy-mgmt.sh +bash ./data/steps/deploy-kind-clusters/deploy-cluster1.sh +bash ./data/steps/deploy-kind-clusters/deploy-cluster2.sh +./scripts/check.sh mgmt +./scripts/check.sh cluster1 +./scripts/check.sh cluster2 +kubectl config use-context ${MGMT} +cat <<'EOF' > ./test.js +const helpers = require('./tests/chai-exec'); + +describe("Clusters are healthy", () => { + const clusters = ["mgmt", "cluster1", "cluster2"]; + + clusters.forEach(cluster => { + it(`Cluster ${cluster} is healthy`, () => helpers.k8sObjectIsPresent({ context: cluster, namespace: "default", k8sType: "service", k8sObj: "kubernetes" })); + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-kind-clusters/tests/cluster-healthy.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +export GLOO_MESH_VERSION=v2.6.7 +curl -sL https://run.solo.io/meshctl/install | sh - +export PATH=$HOME/.gloo-mesh/bin:$PATH +cat <<'EOF' > ./test.js +var chai = require('chai'); +var expect = chai.expect; + +describe("Required environment variables should contain value", () => { + afterEach(function(done){ + if(this.currentTest.currentRetry() > 0){ + process.stdout.write("."); + setTimeout(done, 1000); + } else { + done(); + } + }); + + it("Context environment variables should not be empty", () => { + expect(process.env.MGMT).not.to.be.empty + 
expect(process.env.CLUSTER1).not.to.be.empty + expect(process.env.CLUSTER2).not.to.be.empty + }); + + it("Gloo Mesh licence environment variables should not be empty", () => { + expect(process.env.GLOO_MESH_LICENSE_KEY).not.to.be.empty + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-and-register-gloo-mesh/tests/environment-variables.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context ${MGMT} create ns gloo-mesh + +helm upgrade --install gloo-platform-crds gloo-platform-crds \ + --repo https://storage.googleapis.com/gloo-platform/helm-charts \ + --namespace gloo-mesh \ + --kube-context ${MGMT} \ + --set featureGates.insightsConfiguration=true \ + --version 2.6.7 + +helm upgrade --install gloo-platform gloo-platform \ + --repo https://storage.googleapis.com/gloo-platform/helm-charts \ + --namespace gloo-mesh \ + --kube-context ${MGMT} \ + --version 2.6.7 \ + -f -< ./test.js + +const helpers = require('./tests/chai-exec'); + +describe("MGMT server is healthy", () => { + let cluster = process.env.MGMT; + let deployments = ["gloo-mesh-mgmt-server","gloo-mesh-redis","gloo-telemetry-gateway","prometheus-server"]; + deployments.forEach(deploy => { + it(deploy + ' pods are ready in ' + cluster, () => helpers.checkDeployment({ context: cluster, namespace: "gloo-mesh", k8sObj: deploy })); + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-and-register-gloo-mesh/tests/check-deployment.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +cat <<'EOF' > ./test.js +const chaiExec = require("@jsdevtools/chai-exec"); +var chai = require('chai'); +var expect = chai.expect; +chai.use(chaiExec); + +afterEach(function (done) { + if (this.currentTest.currentRetry() > 0) 
{ + process.stdout.write("."); + setTimeout(done, 1000); + } else { + done(); + } +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-and-register-gloo-mesh/tests/get-gloo-mesh-mgmt-server-ip.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +export ENDPOINT_GLOO_MESH=$(kubectl --context ${MGMT} -n gloo-mesh get svc gloo-mesh-mgmt-server -o jsonpath='{.status.loadBalancer.ingress[0].*}'):9900 +export HOST_GLOO_MESH=$(echo ${ENDPOINT_GLOO_MESH%:*}) +export ENDPOINT_TELEMETRY_GATEWAY=$(kubectl --context ${MGMT} -n gloo-mesh get svc gloo-telemetry-gateway -o jsonpath='{.status.loadBalancer.ingress[0].*}'):4317 +export ENDPOINT_GLOO_MESH_UI=$(kubectl --context ${MGMT} -n gloo-mesh get svc gloo-mesh-ui -o jsonpath='{.status.loadBalancer.ingress[0].*}'):8090 +cat <<'EOF' > ./test.js +const dns = require('dns'); +const chaiHttp = require("chai-http"); +const chai = require("chai"); +const expect = chai.expect; +chai.use(chaiHttp); +const { waitOnFailedTest } = require('./tests/utils'); + +afterEach(function(done) { waitOnFailedTest(done, this.currentTest.currentRetry())}); + +describe("Address '" + process.env.HOST_GLOO_MESH + "' can be resolved in DNS", () => { + it(process.env.HOST_GLOO_MESH + ' can be resolved', (done) => { + return dns.lookup(process.env.HOST_GLOO_MESH, (err, address, family) => { + expect(address).to.be.an.ip; + done(); + }); + }); +}); +EOF +echo "executing test ./gloo-mesh-2-0/tests/can-resolve.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl apply --context ${MGMT} -f - < ca.crt +kubectl create secret generic relay-root-tls-secret -n gloo-mesh --context ${CLUSTER1} --from-file ca.crt=ca.crt +rm ca.crt + +kubectl get secret relay-identity-token-secret -n gloo-mesh --context ${MGMT} -o 
jsonpath='{.data.token}' | base64 -d > token
+kubectl create secret generic relay-identity-token-secret -n gloo-mesh --context ${CLUSTER1} --from-file token=token
+rm token
+
+helm upgrade --install gloo-platform-crds gloo-platform-crds \
+  --repo https://storage.googleapis.com/gloo-platform/helm-charts \
+  --namespace gloo-mesh \
+  --kube-context ${CLUSTER1} \
+  --version 2.6.7
+
+helm upgrade --install gloo-platform gloo-platform \
+  --repo https://storage.googleapis.com/gloo-platform/helm-charts \
+  --namespace gloo-mesh \
+  --kube-context ${CLUSTER1} \
+  --version 2.6.7 \
+  -f -< ca.crt
+kubectl create secret generic relay-root-tls-secret -n gloo-mesh --context ${CLUSTER2} --from-file ca.crt=ca.crt
+rm ca.crt
+
+kubectl get secret relay-identity-token-secret -n gloo-mesh --context ${MGMT} -o jsonpath='{.data.token}' | base64 -d > token
+kubectl create secret generic relay-identity-token-secret -n gloo-mesh --context ${CLUSTER2} --from-file token=token
+rm token
+
+helm upgrade --install gloo-platform-crds gloo-platform-crds \
+  --repo https://storage.googleapis.com/gloo-platform/helm-charts \
+  --namespace gloo-mesh \
+  --kube-context ${CLUSTER2} \
+  --version 2.6.7
+
+helm upgrade --install gloo-platform gloo-platform \
+  --repo https://storage.googleapis.com/gloo-platform/helm-charts \
+  --namespace gloo-mesh \
+  --kube-context ${CLUSTER2} \
+  --version 2.6.7 \
+  -f -< ./test.js
+
+var chai = require('chai');
+var expect = chai.expect;
+const helpers = require('./tests/chai-exec');
+describe("Cluster registration", () => {
+  it("cluster1 is registered", () => {
+    podName = helpers.getOutputForCommand({ command: "kubectl -n gloo-mesh get pods -l app=gloo-mesh-mgmt-server -o jsonpath='{.items[0].metadata.name}' --context " + process.env.MGMT }).replaceAll("'", "");
+    command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh debug -q -i " + podName + " --image=curlimages/curl -- curl -s http://localhost:9091/metrics"
}).replaceAll("'", "");
+    expect(command).to.contain("cluster1");
+  });
+  it("cluster2 is registered", () => {
+    podName = helpers.getOutputForCommand({ command: "kubectl -n gloo-mesh get pods -l app=gloo-mesh-mgmt-server -o jsonpath='{.items[0].metadata.name}' --context " + process.env.MGMT }).replaceAll("'", "");
+    command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh debug -q -i " + podName + " --image=curlimages/curl -- curl -s http://localhost:9091/metrics" }).replaceAll("'", "");
+    expect(command).to.contain("cluster2");
+  });
+});
+EOF
+echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-and-register-gloo-mesh/tests/cluster-registration.test.js.liquid"
+timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+curl -L https://istio.io/downloadIstio | sh -
+
+if [ -d "istio-"*/ ]; then
+  cd istio-*/
+  export PATH=$PWD/bin:$PATH
+  cd ..
+fi
+cat <<'EOF' > ./test.js
+const chaiExec = require("@jsdevtools/chai-exec");
+var chai = require('chai');
+var expect = chai.expect;
+chai.use(chaiExec);
+
+afterEach(function (done) {
+  if (this.currentTest.currentRetry() > 0) {
+    process.stdout.write(".");
+    setTimeout(done, 1000);
+  } else {
+    done();
+  }
+});
+describe("istio_version is at least 1.23.0", () => {
+  it("version should be at least 1.23.0", () => {
+    // Compare the string istio_version to the number 1.23.0
+    // example 1.23.0-patch0 is valid, but 1.22.6 is not
+    let version = "1.23.1";
+    let versionParts = version.split('-')[0].split('.');
+    let major = parseInt(versionParts[0]);
+    let minor = parseInt(versionParts[1]);
+    let patch = parseInt(versionParts[2]);
+    let minMajor = 1;
+    let minMinor = 23;
+    let minPatch = 0;
+    expect(major).to.be.at.least(minMajor);
+    if (major === minMajor) {
+      expect(minor).to.be.at.least(minMinor);
+      if (minor === minMinor) {
+        expect(patch).to.be.at.least(minPatch);
+      }
+    }
+  });
+});
+EOF
+echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-istio-helm/tests/istio-version.test.js.liquid"
+timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+kubectl --context ${CLUSTER1} create ns istio-gateways
+
+kubectl apply --context ${CLUSTER1} -f - </base \
+--namespace istio-system \
+--kube-context=${CLUSTER1} \
+--version 1.23.1-solo \
+--create-namespace \
+-f - </istiod \
+--namespace istio-system \
+--kube-context=${CLUSTER1} \
+--version 1.23.1-solo \
+--create-namespace \
+-f - <
+  proxy:
+    clusterDomain: cluster.local
+  tag: 1.23.1-solo
+  multiCluster:
+    clusterName: cluster1
+profile: ambient
+revision: 1-23
+istio_cni:
+  enabled: true
+meshConfig:
+  accessLogFile: /dev/stdout
+  defaultConfig:
+    proxyMetadata:
+      ISTIO_META_DNS_AUTO_ALLOCATE: "true"
+      ISTIO_META_DNS_CAPTURE: "true"
+  trustDomain: cluster1
+pilot:
+  enabled: true
+  env:
+    PILOT_ENABLE_IP_AUTOALLOCATE: "true"
+    PILOT_ENABLE_K8S_SELECT_WORKLOAD_ENTRIES: "false"
+    PILOT_SKIP_VALIDATE_TRUST_DOMAIN: "true"
+EOF
+
+helm upgrade --install istio-cni oci://us-docker.pkg.dev/gloo-mesh/istio-helm-/cni \
+--namespace kube-system \
+--kube-context=${CLUSTER1} \
+--version 1.23.1-solo \
+--create-namespace \
+-f - <
+  proxy: 1.23.1-solo
+profile: ambient
+revision: 1-23
+cni:
+  ambient:
+    dnsCapture: true
+  excludeNamespaces:
+    - istio-system
+    - kube-system
+EOF
+
+helm upgrade --install ztunnel oci://us-docker.pkg.dev/gloo-mesh/istio-helm-/ztunnel \
+--namespace istio-system \
+--kube-context=${CLUSTER1} \
+--version 1.23.1-solo \
+--create-namespace \
+-f - <
+istioNamespace: istio-system
+multiCluster:
+  clusterName: cluster1
+namespace: istio-system
+profile: ambient
+proxy:
+  clusterDomain: cluster.local
+tag: 1.23.1-solo
+terminationGracePeriodSeconds: 29
+variant: distroless
+EOF
+
+helm upgrade --install istio-ingressgateway-1-23 oci://us-docker.pkg.dev/gloo-mesh/istio-helm-/gateway \
+--namespace istio-gateways \
+--kube-context=${CLUSTER1} \
+--version 1.23.1-solo \
+--create-namespace \
+-f - </gateway \
+--namespace istio-gateways \
+--kube-context=${CLUSTER1} \
+--version 1.23.1-solo \
+--create-namespace \
+-f - < /dev/null || \
+  { kubectl kustomize "github.com/kubernetes-sigs/gateway-api/config/crd?ref=v1.1.0" | kubectl --context ${CLUSTER1} apply -f -; }
+kubectl --context ${CLUSTER2} get crd gateways.gateway.networking.k8s.io &> /dev/null || \
+  { kubectl kustomize "github.com/kubernetes-sigs/gateway-api/config/crd?ref=v1.1.0" | kubectl --context ${CLUSTER2} apply -f -; }
+helm upgrade --install istio-base oci://us-docker.pkg.dev/gloo-mesh/istio-helm-/base \
+--namespace istio-system \
+--kube-context=${CLUSTER2} \
+--version 1.23.1-solo \
+--create-namespace \
+-f - </istiod \
+--namespace istio-system \
+--kube-context=${CLUSTER2} \
+--version 1.23.1-solo \
+--create-namespace \
+-f - <
+  proxy:
+    clusterDomain: cluster.local
+  tag: 1.23.1-solo
+  multiCluster:
+    clusterName: cluster2
+profile: ambient
+revision: 1-23
+istio_cni:
+  enabled: true
+meshConfig:
+  accessLogFile: /dev/stdout
+  defaultConfig:
+    proxyMetadata:
+      ISTIO_META_DNS_AUTO_ALLOCATE: "true"
+      ISTIO_META_DNS_CAPTURE: "true"
+  trustDomain: cluster2
+pilot:
+  enabled: true
+  env:
+    PILOT_ENABLE_IP_AUTOALLOCATE: "true"
+    PILOT_ENABLE_K8S_SELECT_WORKLOAD_ENTRIES: "false"
+    PILOT_SKIP_VALIDATE_TRUST_DOMAIN: "true"
+EOF
+
+helm upgrade --install istio-cni oci://us-docker.pkg.dev/gloo-mesh/istio-helm-/cni \
+--namespace kube-system \
+--kube-context=${CLUSTER2} \
+--version 1.23.1-solo \
+--create-namespace \
+-f - <
+  proxy: 1.23.1-solo
+profile: ambient
+revision: 1-23
+cni:
+  ambient:
+    dnsCapture: true
+  excludeNamespaces:
+    - istio-system
+    - kube-system
+EOF
+
+helm upgrade --install ztunnel oci://us-docker.pkg.dev/gloo-mesh/istio-helm-/ztunnel \
+--namespace istio-system \
+--kube-context=${CLUSTER2} \
+--version 1.23.1-solo \
+--create-namespace \
+-f - <
+istioNamespace: istio-system
+multiCluster:
+  clusterName: cluster2
+namespace: istio-system
+profile: ambient
+proxy:
+  clusterDomain: cluster.local
+tag: 1.23.1-solo
+terminationGracePeriodSeconds: 29
+variant: distroless
+EOF
+
+helm upgrade --install istio-ingressgateway-1-23 oci://us-docker.pkg.dev/gloo-mesh/istio-helm-/gateway \
+--namespace istio-gateways \
+--kube-context=${CLUSTER2} \
+--version 1.23.1-solo \
+--create-namespace \
+-f - </gateway \
+--namespace istio-gateways \
+--kube-context=${CLUSTER2} \
+--version 1.23.1-solo \
+--create-namespace \
+-f - < /dev/null || \
+  { kubectl kustomize "github.com/kubernetes-sigs/gateway-api/config/crd?ref=v1.1.0" | kubectl --context ${CLUSTER1} apply -f -; }
+kubectl --context ${CLUSTER2} get crd gateways.gateway.networking.k8s.io &> /dev/null || \
+  { kubectl kustomize "github.com/kubernetes-sigs/gateway-api/config/crd?ref=v1.1.0" | kubectl --context ${CLUSTER2} apply -f -; }
+cat <<'EOF' > ./test.js
+
+const
helpers = require('./tests/chai-exec');
+
+const chaiExec = require("@jsdevtools/chai-exec");
+const helpersHttp = require('./tests/chai-http');
+const chai = require("chai");
+const expect = chai.expect;
+
+afterEach(function (done) {
+  if (this.currentTest.currentRetry() > 0) {
+    process.stdout.write(".");
+    setTimeout(done, 1000);
+  } else {
+    done();
+  }
+});
+
+describe("Checking Istio installation", function() {
+  it('istiod pods are ready in cluster ' + process.env.CLUSTER1, () => helpers.checkDeploymentsWithLabels({ context: process.env.CLUSTER1, namespace: "istio-system", labels: "app=istiod", instances: 1 }));
+  it('gateway pods are ready in cluster ' + process.env.CLUSTER1, () => helpers.checkDeploymentsWithLabels({ context: process.env.CLUSTER1, namespace: "istio-gateways", labels: "app=istio-ingressgateway", instances: 2 }));
+  it('istiod pods are ready in cluster ' + process.env.CLUSTER2, () => helpers.checkDeploymentsWithLabels({ context: process.env.CLUSTER2, namespace: "istio-system", labels: "app=istiod", instances: 1 }));
+  it('gateway pods are ready in cluster ' + process.env.CLUSTER2, () => helpers.checkDeploymentsWithLabels({ context: process.env.CLUSTER2, namespace: "istio-gateways", labels: "app=istio-ingressgateway", instances: 2 }));
+  it("Gateways have an ip attached in cluster " + process.env.CLUSTER1, () => {
+    let cli = chaiExec("kubectl --context " + process.env.CLUSTER1 + " -n istio-gateways get svc -l app=istio-ingressgateway -o jsonpath='{.items}'");
+    cli.stderr.should.be.empty;
+    let deployments = JSON.parse(cli.stdout.slice(1,-1));
+    expect(deployments).to.have.lengthOf(2);
+    deployments.forEach((deployment) => {
+      expect(deployment.status.loadBalancer).to.have.property("ingress");
+    });
+  });
+  it("Gateways have an ip attached in cluster " + process.env.CLUSTER2, () => {
+    let cli = chaiExec("kubectl --context " + process.env.CLUSTER2 + " -n istio-gateways get svc -l app=istio-ingressgateway -o jsonpath='{.items}'");
cli.stderr.should.be.empty;
+    let deployments = JSON.parse(cli.stdout.slice(1,-1));
+    expect(deployments).to.have.lengthOf(2);
+    deployments.forEach((deployment) => {
+      expect(deployment.status.loadBalancer).to.have.property("ingress");
+    });
+  });
+});
+
+EOF
+echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-istio-helm/tests/istio-ready.test.js.liquid"
+timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+timeout 2m bash -c "until [[ \$(kubectl --context ${CLUSTER1} -n istio-gateways get svc -l istio=ingressgateway -o json | jq '.items[0].status.loadBalancer | length') -gt 0 ]]; do
+  sleep 1
+done"
+export HOST_GW_CLUSTER1="$(kubectl --context ${CLUSTER1} -n istio-gateways get svc -l istio=ingressgateway -o jsonpath='{.items[0].status.loadBalancer.ingress[0].*}')"
+export HOST_GW_CLUSTER2="$(kubectl --context ${CLUSTER2} -n istio-gateways get svc -l istio=ingressgateway -o jsonpath='{.items[0].status.loadBalancer.ingress[0].*}')"
+cat <<'EOF' > ./test.js
+const dns = require('dns');
+const chaiHttp = require("chai-http");
+const chai = require("chai");
+const expect = chai.expect;
+chai.use(chaiHttp);
+const { waitOnFailedTest } = require('./tests/utils');
+
+afterEach(function(done) { waitOnFailedTest(done, this.currentTest.currentRetry())});
+
+describe("Address '" + process.env.HOST_GW_CLUSTER1 + "' can be resolved in DNS", () => {
+  it(process.env.HOST_GW_CLUSTER1 + ' can be resolved', (done) => {
+    return dns.lookup(process.env.HOST_GW_CLUSTER1, (err, address, family) => {
+      expect(address).to.be.an.ip;
+      done();
+    });
+  });
+});
+EOF
+echo "executing test ./default/tests/can-resolve.test.js.liquid"
+timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+cat <<'EOF' > ./test.js
+const dns = require('dns');
+const chaiHttp = require("chai-http");
+const
chai = require("chai");
+const expect = chai.expect;
+chai.use(chaiHttp);
+const { waitOnFailedTest } = require('./tests/utils');
+
+afterEach(function(done) { waitOnFailedTest(done, this.currentTest.currentRetry())});
+
+describe("Address '" + process.env.HOST_GW_CLUSTER2 + "' can be resolved in DNS", () => {
+  it(process.env.HOST_GW_CLUSTER2 + ' can be resolved', (done) => {
+    return dns.lookup(process.env.HOST_GW_CLUSTER2, (err, address, family) => {
+      expect(address).to.be.an.ip;
+      done();
+    });
+  });
+});
+EOF
+echo "executing test ./default/tests/can-resolve.test.js.liquid"
+timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+kubectl --context ${CLUSTER1} create ns bookinfo-frontends
+kubectl --context ${CLUSTER1} create ns bookinfo-backends
+kubectl --context ${CLUSTER1} label namespace bookinfo-frontends istio.io/dataplane-mode=ambient
+kubectl --context ${CLUSTER1} label namespace bookinfo-backends istio.io/dataplane-mode=ambient
+kubectl --context ${CLUSTER1} label namespace bookinfo-frontends istio-injection=disabled
+kubectl --context ${CLUSTER1} label namespace bookinfo-backends istio-injection=disabled
+kubectl --context ${CLUSTER1} label namespace bookinfo-frontends istio.io/rev=1-23 --overwrite
+kubectl --context ${CLUSTER1} label namespace bookinfo-backends istio.io/rev=1-23 --overwrite
+
+# Deploy the frontend bookinfo service in the bookinfo-frontends namespace
+kubectl --context ${CLUSTER1} -n bookinfo-frontends apply -f data/steps/deploy-bookinfo/productpage-v1.yaml
+
+# Deploy the backend bookinfo services in the bookinfo-backends namespace for all versions less than v3
+kubectl --context ${CLUSTER1} -n bookinfo-backends apply \
+  -f data/steps/deploy-bookinfo/details-v1.yaml \
+  -f data/steps/deploy-bookinfo/ratings-v1.yaml \
+  -f data/steps/deploy-bookinfo/reviews-v1-v2.yaml
+
+# Update the reviews service to display where it is coming from
+kubectl
--context ${CLUSTER1} -n bookinfo-backends set env deploy/reviews-v1 CLUSTER_NAME=${CLUSTER1}
+kubectl --context ${CLUSTER1} -n bookinfo-backends set env deploy/reviews-v2 CLUSTER_NAME=${CLUSTER1}
+echo -n Waiting for bookinfo pods to be ready...
+timeout -v 5m bash -c "
+until [[ \$(kubectl --context ${CLUSTER1} -n bookinfo-frontends get deploy -o json | jq '[.items[].status.readyReplicas] | add') -eq 1 && \\
+  \$(kubectl --context ${CLUSTER1} -n bookinfo-backends get deploy -o json | jq '[.items[].status.readyReplicas] | add') -eq 4 ]] 2>/dev/null
+do
+  sleep 1
+  echo -n .
+done"
+echo
+kubectl --context ${CLUSTER2} create ns bookinfo-frontends
+kubectl --context ${CLUSTER2} create ns bookinfo-backends
+kubectl --context ${CLUSTER2} label namespace bookinfo-frontends istio.io/dataplane-mode=ambient
+kubectl --context ${CLUSTER2} label namespace bookinfo-backends istio.io/dataplane-mode=ambient
+kubectl --context ${CLUSTER2} label namespace bookinfo-frontends istio-injection=disabled
+kubectl --context ${CLUSTER2} label namespace bookinfo-backends istio-injection=disabled
+kubectl --context ${CLUSTER2} label namespace bookinfo-frontends istio.io/rev=1-23 --overwrite
+kubectl --context ${CLUSTER2} label namespace bookinfo-backends istio.io/rev=1-23 --overwrite
+
+# Deploy the frontend bookinfo service in the bookinfo-frontends namespace
+kubectl --context ${CLUSTER2} -n bookinfo-frontends apply -f data/steps/deploy-bookinfo/productpage-v1.yaml
+# Deploy the backend bookinfo services in the bookinfo-backends namespace for all versions
+kubectl --context ${CLUSTER2} -n bookinfo-backends apply \
+  -f data/steps/deploy-bookinfo/details-v1.yaml \
+  -f data/steps/deploy-bookinfo/ratings-v1.yaml \
+  -f data/steps/deploy-bookinfo/reviews-v1-v2.yaml \
+  -f data/steps/deploy-bookinfo/reviews-v3.yaml
+# Update the reviews service to display where it is coming from
+kubectl --context ${CLUSTER2} -n bookinfo-backends set env deploy/reviews-v1 CLUSTER_NAME=${CLUSTER2}
+kubectl
--context ${CLUSTER2} -n bookinfo-backends set env deploy/reviews-v2 CLUSTER_NAME=${CLUSTER2}
+kubectl --context ${CLUSTER2} -n bookinfo-backends set env deploy/reviews-v3 CLUSTER_NAME=${CLUSTER2}
+
+echo -n Waiting for bookinfo pods to be ready...
+timeout -v 5m bash -c "
+until [[ \$(kubectl --context ${CLUSTER2} -n bookinfo-frontends get deploy -o json | jq '[.items[].status.readyReplicas] | add') -eq 1 && \\
+  \$(kubectl --context ${CLUSTER2} -n bookinfo-backends get deploy -o json | jq '[.items[].status.readyReplicas] | add') -eq 5 ]] 2>/dev/null
+do
+  sleep 1
+  echo -n .
+done"
+echo
+kubectl --context ${CLUSTER2} -n bookinfo-frontends get pods && kubectl --context ${CLUSTER2} -n bookinfo-backends get pods
+cat <<'EOF' > ./test.js
+const helpers = require('./tests/chai-exec');
+
+describe("Bookinfo app", () => {
+  let cluster = process.env.CLUSTER1
+  let deployments = ["productpage-v1"];
+  deployments.forEach(deploy => {
+    it(deploy + ' pods are ready in ' + cluster, () => helpers.checkDeployment({ context: cluster, namespace: "bookinfo-frontends", k8sObj: deploy }));
+  });
+  deployments = ["ratings-v1", "details-v1", "reviews-v1", "reviews-v2"];
+  deployments.forEach(deploy => {
+    it(deploy + ' pods are ready in ' + cluster, () => helpers.checkDeployment({ context: cluster, namespace: "bookinfo-backends", k8sObj: deploy }));
+  });
+  cluster = process.env.CLUSTER2
+  deployments = ["productpage-v1"];
+  deployments.forEach(deploy => {
+    it(deploy + ' pods are ready in ' + cluster, () => helpers.checkDeployment({ context: cluster, namespace: "bookinfo-frontends", k8sObj: deploy }));
+  });
+  deployments = ["ratings-v1", "details-v1", "reviews-v1", "reviews-v2", "reviews-v3"];
+  deployments.forEach(deploy => {
+    it(deploy + ' pods are ready in ' + cluster, () => helpers.checkDeployment({ context: cluster, namespace: "bookinfo-backends", k8sObj: deploy }));
+  });
+});
+EOF
+echo "executing test
dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/deploy-bookinfo/tests/check-bookinfo.test.js.liquid"
+timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+kubectl --context ${CLUSTER1} create ns httpbin
+kubectl --context ${CLUSTER1} label namespace httpbin istio.io/dataplane-mode=ambient
+kubectl --context ${CLUSTER1} label namespace httpbin istio.io/rev=1-23
+kubectl apply --context ${CLUSTER1} -f - </dev/null
+do
+  sleep 1
+  echo -n .
+done"
+echo
+kubectl apply --context ${CLUSTER1} -f - < ./test.js
+const helpers = require('./tests/chai-exec');
+
+describe("httpbin app", () => {
+  let cluster = process.env.CLUSTER1
+
+  let deployments = ["not-in-mesh", "in-mesh"];
+
+  deployments.forEach(deploy => {
+    it(deploy + ' pods are ready in ' + cluster, () => helpers.checkDeployment({ context: cluster, namespace: "httpbin", k8sObj: deploy }));
+  });
+});
+EOF
+echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/httpbin/deploy-httpbin/tests/check-httpbin.test.js.liquid"
+timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+kubectl --context ${CLUSTER1} create ns clients
+
+kubectl apply --context ${CLUSTER1} -f - </dev/null
+do
+  sleep 1
+  echo -n .
+done"
+echo
+kubectl apply --context ${CLUSTER1} -f - < ./test.js
+const helpers = require('./tests/chai-exec');
+
+describe("client apps", () => {
+  let cluster = process.env.CLUSTER1
+
+  let deployments = ["not-in-mesh", "in-mesh-with-sidecar", "in-ambient"];
+
+  deployments.forEach(deploy => {
+    it(deploy + ' pods are ready in ' + cluster, () => helpers.checkDeployment({ context: cluster, namespace: "clients", k8sObj: deploy }));
+  });
+});
+EOF
+echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/clients/deploy-clients/tests/check-clients.test.js.liquid"
+timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+kubectl apply --context ${CLUSTER1} -f - < ./test.js
+const helpers = require('./tests/chai-http');
+
+describe("productpage is available (HTTP)", () => {
+  it('/productpage is available in cluster1', () => helpers.checkURL({ host: `http://cluster1-bookinfo.example.com`, path: '/productpage', retCode: 200 }));
+})
+EOF
+echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/gateway-expose-istio/tests/productpage-available.test.js.liquid"
+timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
+  -keyout tls.key -out tls.crt -subj "/CN=*"
+kubectl --context ${CLUSTER1} -n istio-gateways create secret generic tls-secret \
+--from-file=tls.key=tls.key \
+--from-file=tls.crt=tls.crt
+
+kubectl --context ${CLUSTER2} -n istio-gateways create secret generic tls-secret \
+--from-file=tls.key=tls.key \
+--from-file=tls.crt=tls.crt
+kubectl apply --context ${CLUSTER1} -f - < ./test.js
+const helpers = require('./tests/chai-http');
+
+describe("productpage is available (HTTPS)", () => {
+  it('/productpage is available in cluster1', () => helpers.checkURL({ host:
`https://cluster1-bookinfo.example.com`, path: '/productpage', retCode: 200 }));
+})
+EOF
+echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/gateway-expose-istio/tests/productpage-available-secure.test.js.liquid"
+timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+cat <<'EOF' > ./test.js
+var chai = require('chai');
+var expect = chai.expect;
+const helpers = require('./tests/chai-exec');
+
+describe("Otel metrics", () => {
+  it("cluster1 is sending metrics to telemetryGateway", () => {
+    podName = helpers.getOutputForCommand({ command: "kubectl -n gloo-mesh get pods -l app.kubernetes.io/name=prometheus -o jsonpath='{.items[0].metadata.name}' --context " + process.env.MGMT }).replaceAll("'", "");
+    command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh debug -q -i " + podName + " --image=curlimages/curl -- curl -s http://localhost:9090/api/v1/query?query=istio_requests_total" }).replaceAll("'", "");
+    expect(command).to.contain("cluster\":\"cluster1");
+  });
+});
+
+
+EOF
+echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/gateway-expose-istio/tests/otel-metrics.test.js.liquid"
+timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=150 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+cat <<'EOF' > ./test.js
+const helpers = require('./tests/chai-http');
+const puppeteer = require('puppeteer');
+const chai = require('chai');
+const expect = chai.expect;
+const GraphPage = require('./tests/pages/gloo-ui/graph-page');
+const { recognizeTextFromScreenshot } = require('./tests/utils/image-ocr-processor');
+const { enhanceBrowser } = require('./tests/utils/enhance-browser');
+
+afterEach(function (done) {
+  if (this.currentTest.currentRetry() > 0) {
+    process.stdout.write(".");
+    setTimeout(done, 4000);
+  } else {
+    done();
+  }
+});
+
+describe("graph page", function () {
+  // UI tests often require a longer timeout.
+  // So here we force it to a minimum of 30 seconds.
+  const currentTimeout = this.timeout();
+  this.timeout(Math.max(currentTimeout, 30000));
+
+  let browser;
+  let page;
+  let graphPage;
+
+  beforeEach(async function () {
+    browser = await puppeteer.launch({
+      headless: "new",
+      slowMo: 40,
+      ignoreHTTPSErrors: true,
+      args: ['--no-sandbox', '--disable-setuid-sandbox'],
+    });
+    browser = enhanceBrowser(browser, this.currentTest.title);
+    page = await browser.newPage();
+    graphPage = new GraphPage(page);
+    await Promise.all(Array.from({ length: 20 }, () =>
+      helpers.checkURL({ host: `https://cluster1-bookinfo.example.com`, path: '/productpage', retCode: 200 })));
+  });
+
+  afterEach(async function () {
+    await browser.close();
+  });
+
+  it("should show ingress gateway and product page", async function () {
+    await graphPage.navigateTo(`http://${process.env.ENDPOINT_GLOO_MESH_UI}/graph`);
+
+    // Select the clusters and namespaces so that the graph shows
+    await graphPage.selectClusters(['cluster1', 'cluster2']);
+    await graphPage.selectNamespaces(['istio-gateways', 'bookinfo-backends', 'bookinfo-frontends']);
+    // Disabling Cilium nodes due to this issue: https://github.com/solo-io/gloo-mesh-enterprise/issues/18623
+    await graphPage.toggleLayoutSettings();
+    await graphPage.disableCiliumNodes();
+    await graphPage.toggleLayoutSettings();
+
+    // Capture a screenshot of the canvas and run text recognition
+    await graphPage.fullscreenGraph();
+    await graphPage.centerGraph();
+    const screenshotPath = 'ui-test-data/canvas.png';
+    await graphPage.captureCanvasScreenshot(screenshotPath);
+
+    const recognizedTexts = await recognizeTextFromScreenshot(
+      screenshotPath,
+      ["istio-ingressgateway", "productpage-v1", "details-v1", "ratings-v1", "reviews-v1", "reviews-v2"]);
+
+    const flattenedRecognizedText = recognizedTexts.join(",").replace(/\n/g, '');
+    console.log("Flattened recognized
text:", flattenedRecognizedText);
+
+    // Validate recognized texts
+    expect(flattenedRecognizedText).to.include("istio-ingressgateway");
+    expect(flattenedRecognizedText).to.include("productpage-v1");
+    expect(flattenedRecognizedText).to.include("details-v1");
+    expect(flattenedRecognizedText).to.include("ratings-v1");
+    expect(flattenedRecognizedText).to.include("reviews-v1");
+    expect(flattenedRecognizedText).to.include("reviews-v2");
+  });
+});
+
+EOF
+echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/gateway-expose-istio/tests/graph-shows-traffic.test.js.liquid"
+timeout --signal=INT 7m mocha ./test.js --timeout 120000 --retries=3 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+kubectl --context ${CLUSTER1} apply -f - < ./test.js
+const helpers = require('./tests/chai-http');
+
+afterEach(function (done) {
+  if (this.currentTest.currentRetry() > 0) {
+    process.stdout.write(".");
+    setTimeout(done, 4000);
+  } else {
+    done();
+  }
+});
+
+describe("Productpage is available (HTTPS)", () => {
+  it('/productpage is available in cluster1', () => helpers.checkURL({ host: `https://cluster1-bookinfo.example.com`, path: '/productpage', retCode: 200 }));
+
+  it('should reject traffic to bookinfo-backends details', () => {
+    return helpers.checkBody({
+      host: `https://cluster1-bookinfo.example.com`,
+      path: '/productpage',
+      retCode: 200,
+      body: 'Error fetching product details',
+      match: true
+    })
+  });
+
+  it('should reject traffic to bookinfo-backends reviews', () => {
+    return helpers.checkBody({
+      host: `https://cluster1-bookinfo.example.com`,
+      path: '/productpage',
+      retCode: 200,
+      body: 'Error fetching product reviews',
+      match: true
+    })
+  });
+})
+EOF
+echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/ambient/authorization-policies/tests/bookinfo-backend-services-unavailable.liquid"
+timeout --signal=INT 3m mocha ./test.js --timeout 60000 --retries=60 --bail || { DEBUG_MODE=true
mocha ./test.js --timeout 120000; exit 1; }
+kubectl --context ${CLUSTER1} apply -f - < ./test.js
+const helpers = require('./tests/chai-http');
+
+afterEach(function (done) {
+  if (this.currentTest.currentRetry() > 0) {
+    process.stdout.write(".");
+    setTimeout(done, 4000);
+  } else {
+    done();
+  }
+});
+
+describe("Productpage is available (HTTPS)", () => {
+  it('/productpage is available in cluster1', () => helpers.checkURL({ host: `https://cluster1-bookinfo.example.com`, path: '/productpage', retCode: 200 }));
+
+  it('should admit traffic to bookinfo-backends details', () => {
+    return helpers.checkBody({
+      host: `https://cluster1-bookinfo.example.com`,
+      path: '/productpage',
+      retCode: 200,
+      body: 'Book Details',
+      match: true
+    })
+  });
+
+  it('should admit traffic to bookinfo-backends reviews', () => {
+    return helpers.checkBody({
+      host: `https://cluster1-bookinfo.example.com`,
+      path: '/productpage',
+      retCode: 200,
+      body: 'Book Reviews',
+      match: true
+    })
+  });
+})
+EOF
+echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/ambient/authorization-policies/tests/bookinfo-backend-services-available.liquid"
+timeout --signal=INT 3m mocha ./test.js --timeout 60000 --retries=60 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+kubectl --context ${CLUSTER1} apply -f - < ./test.js
+const helpers = require('./tests/chai-http');
+
+afterEach(function (done) {
+  if (this.currentTest.currentRetry() > 0) {
+    process.stdout.write(".");
+    setTimeout(done, 4000);
+  } else {
+    done();
+  }
+});
+
+describe("Productpage is available (HTTPS)", () => {
+  it('/productpage is available in cluster1', () => helpers.checkURL({ host: `https://cluster1-bookinfo.example.com`, path: '/productpage', retCode: 200 }));
+
+  it('should reject traffic to bookinfo-backends details', () => {
+    return helpers.checkBody({
+      host: `https://cluster1-bookinfo.example.com`,
+      path: '/productpage',
+      retCode: 200,
+      body: 'Error fetching product details',
+      match: true
+    })
+  });
+
+  it('should reject traffic to bookinfo-backends reviews', () => {
+    return helpers.checkBody({
+      host: `https://cluster1-bookinfo.example.com`,
+      path: '/productpage',
+      retCode: 200,
+      body: 'Error fetching product reviews',
+      match: true
+    })
+  });
+})
+EOF
+echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/ambient/authorization-policies/tests/bookinfo-backend-services-unavailable.liquid"
+timeout --signal=INT 3m mocha ./test.js --timeout 60000 --retries=60 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+kubectl --context ${CLUSTER1} apply -f - < ./test.js
+const helpers = require('./tests/chai-http');
+
+afterEach(function (done) {
+  if (this.currentTest.currentRetry() > 0) {
+    process.stdout.write(".");
+    setTimeout(done, 4000);
+  } else {
+    done();
+  }
+});
+
+describe("Productpage is available (HTTPS)", () => {
+  it('/productpage is available in cluster1', () => helpers.checkURL({ host: `https://cluster1-bookinfo.example.com`, path: '/productpage', retCode: 200 }));
+
+  it('should admit traffic to bookinfo-backends details', () => {
+    return helpers.checkBody({
+      host: `https://cluster1-bookinfo.example.com`,
+      path: '/productpage',
+      retCode: 200,
+      body: 'Book Details',
+      match: true
+    })
+  });
+
+  it('should admit traffic to bookinfo-backends reviews', () => {
+    return helpers.checkBody({
+      host: `https://cluster1-bookinfo.example.com`,
+      path: '/productpage',
+      retCode: 200,
+      body: 'Book Reviews',
+      match: true
+    })
+  });
+})
+EOF
+echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/ambient/authorization-policies/tests/bookinfo-backend-services-available.liquid"
+timeout --signal=INT 3m mocha ./test.js --timeout 60000 --retries=60 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+kubectl --context ${CLUSTER1} -n bookinfo-backends delete authorizationpolicy policy
+for i in {1..20}; do curl -k
"http://cluster1-bookinfo.example.com/productpage" -I; done +kubectl --context ${CLUSTER1} debug -n istio-system "$pod" -it --image=curlimages/curl -- curl http://localhost:15020/metrics | grep istio_request_ +cat <<'EOF' > ./test.js +const chaiExec = require("@jsdevtools/chai-exec"); +var chai = require('chai'); +var expect = chai.expect; +chai.use(chaiExec); + +afterEach(function (done) { + if (this.currentTest.currentRetry() > 0) { + process.stdout.write("."); + setTimeout(done, 1000); + } else { + done(); + } +}); + +describe("L4 metrics available", function() { + it("ztunnel contains L4 and l7 metrics", () => { + let node = chaiExec(`kubectl --context ${process.env.CLUSTER1} -n bookinfo-frontends get pods -l app=productpage -o jsonpath='{.items[0].spec.nodeName}'`).stdout.replaceAll("'", ""); + let pods = JSON.parse(chaiExec(`kubectl --context ${process.env.CLUSTER1} -n istio-system get pods -l app=ztunnel -o json`).stdout).items; + let pod = ""; + pods.forEach(item => { + if(item.spec.nodeName == node) { + pod = item.metadata.name; + } + }); + let cli = chaiExec(`kubectl --context ${process.env.CLUSTER1} -n istio-system debug ${pod} -it --image=curlimages/curl -- curl http://localhost:15020/metrics`); + expect(cli).to.exit.with.code(0); + expect(cli).output.to.contain("istio_tcp_sent_bytes_total"); + expect(cli).output.to.contain("istio_requests_total"); + expect(cli).output.to.contain("istio_request_duration_milliseconds"); + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/ambient/l7-observability/tests/l4-l7-metrics-available.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context "${CLUSTER1}" -n istio-system logs ds/ztunnel +cat <<'EOF' > ./test.js +const helpersHttp = require('./tests/chai-http'); +const InsightsPage = require('./tests/pages/insights-page'); +const constants = 
require('./tests/pages/constants'); +const puppeteer = require('puppeteer'); +var chai = require('chai'); +var expect = chai.expect; +const { enhanceBrowser } = require('./tests/utils/enhance-browser'); + +afterEach(function (done) { + if (this.currentTest.currentRetry() > 0) { + process.stdout.write("."); + setTimeout(done, 4000); + } else { + done(); + } +}); + +describe("Insights UI", function() { + // UI tests often require a longer timeout. + // So here we force it to a minimum of 30 seconds. + const currentTimeout = this.timeout(); + this.timeout(Math.max(currentTimeout, 30000)); + + let browser; + let insightsPage; + + // Use Mocha's 'before' hook to set up Puppeteer + beforeEach(async function() { + browser = await puppeteer.launch({ + headless: "new", + slowMo: 40, + ignoreHTTPSErrors: true, + args: ['--no-sandbox', '--disable-setuid-sandbox'], + }); + browser = enhanceBrowser(browser, this.currentTest.title); + let page = await browser.newPage(); + insightsPage = new InsightsPage(page); + }); + + // Use Mocha's 'after' hook to close Puppeteer + afterEach(async function() { + await browser.close(); + }); + + it("should displays BP0001 warning with text 'Globally scoped routing'", async () => { + await insightsPage.navigateTo(`http://${process.env.ENDPOINT_GLOO_MESH_UI}/insights`); + await insightsPage.selectClusters(['cluster1', 'cluster2']); + await insightsPage.selectInsightTypes([constants.InsightType.BP]); + const data = await insightsPage.getTableDataRows() + expect(data.some(item => item.includes("Globally scoped routing"))).to.be.true; + }); + + it("should have quick resource state filters", async () => { + await insightsPage.navigateTo(`http://${process.env.ENDPOINT_GLOO_MESH_UI}/insights`); + const healthy = await insightsPage.getHealthyResourcesCount(); + const warning = await insightsPage.getWarningResourcesCount(); + const error = await insightsPage.getErrorResourcesCount(); + expect(healthy).to.be.greaterThan(0); + 
expect(warning).to.be.greaterThan(0); + expect(error).to.be.a('number'); + }); +}); + +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/insights-intro/tests/insight-ui-BP0001.test.js.liquid" +timeout --signal=INT 5m mocha ./test.js --timeout 120000 --retries=20 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +cat <<'EOF' > ./test.js +var chai = require('chai'); +var expect = chai.expect; +const helpers = require('./tests/chai-exec'); + +describe("Insight generation", () => { + it("Insight BP0002 has been triggered in the source (MGMT)", () => { + helpers.getOutputForCommand({ command: `kubectl --context ${process.env.MGMT} -n gloo-mesh patch svc gloo-mesh-mgmt-server -p '{"spec":{"ports": [{"port": 9094,"name":"http-insights"}]}}'` }); + helpers.getOutputForCommand({ command: "kubectl -n gloo-mesh run debug --image=nginx:1.25.3 --context " + process.env.MGMT }); + command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh exec debug -- curl -s http://gloo-mesh-mgmt-server.gloo-mesh:9094/metrics" }).replaceAll("'", ""); + const regex = /gloo_mesh_insights{.*BP0002.*} 1/; + const match = command.match(regex); + expect(match).to.not.be.null; + }); + + it("Insight BP0002 has been triggered in PROMETHEUS", () => { + helpers.getOutputForCommand({ command: `kubectl --context ${process.env.MGMT} -n gloo-mesh patch svc prometheus-server -p '{"spec":{"ports": [{"port": 9090,"name":"http-metrics"}]}}'` }); + command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh exec debug -- curl -s 'http://prometheus-server.gloo-mesh:9090/api/v1/query?query=gloo_mesh_insights'" }).replaceAll("'", ""); + let result = JSON.parse(command); + let active = false; + result.data.result.forEach(item => { + if(item.metric.code == "BP0002" && item.value[1] > 0) { + active = true + } + }); + expect(active).to.be.true; + }); +}); +EOF +echo 
"executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/insights-intro/tests/insight-metrics.test.js.liquid" +timeout --signal=INT 5m mocha ./test.js --timeout 120000 --retries=20 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl apply --context ${MGMT} -f - < ./test.js +const helpersHttp = require('./tests/chai-http'); +const InsightsPage = require('./tests/pages/insights-page'); +const constants = require('./tests/pages/constants'); +const puppeteer = require('puppeteer'); +const { enhanceBrowser } = require('./tests/utils/enhance-browser'); +var chai = require('chai'); +var expect = chai.expect; + +afterEach(function (done) { + if (this.currentTest.currentRetry() > 0) { + process.stdout.write("."); + setTimeout(done, 4000); + } else { + done(); + } +}); + +describe("Insights UI", function() { + // UI tests often require a longer timeout. + // So here we force it to a minimum of 30 seconds. + const currentTimeout = this.timeout(); + this.timeout(Math.max(currentTimeout, 30000)); + + let browser; + let insightsPage; + + // Use Mocha's 'before' hook to set up Puppeteer + beforeEach(async function() { + browser = await puppeteer.launch({ + headless: "new", + slowMo: 40, + ignoreHTTPSErrors: true, + args: ['--no-sandbox', '--disable-setuid-sandbox'], + }); + browser = enhanceBrowser(browser, this.currentTest.title); + let page = await browser.newPage(); + await page.setViewport({ width: 1500, height: 1000 }); + insightsPage = new InsightsPage(page); + }); + + // Use Mocha's 'after' hook to close Puppeteer + afterEach(async function() { + await browser.close(); + }); + + it("should not display BP0002 in the UI", async () => { + await insightsPage.navigateTo(`http://${process.env.ENDPOINT_GLOO_MESH_UI}/insights`); + await insightsPage.selectClusters(['cluster1', 'cluster2']); + await insightsPage.selectInsightTypes([constants.InsightType.BP]); + const data = await insightsPage.getTableDataRows() + 
expect(data.some(item => item.includes("is not namespaced"))).to.be.false; + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/insights-intro/tests/insight-not-ui-BP0002.test.js.liquid" +timeout --signal=INT 5m mocha ./test.js --timeout 120000 --retries=20 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl apply --context ${CLUSTER1} -f - < ./test.js +const helpersHttp = require('./tests/chai-http'); +const InsightsPage = require('./tests/pages/insights-page'); +const constants = require('./tests/pages/constants'); +const puppeteer = require('puppeteer'); +const { enhanceBrowser } = require('./tests/utils/enhance-browser'); +var chai = require('chai'); +var expect = chai.expect; + +afterEach(function (done) { + if (this.currentTest.currentRetry() > 0) { + process.stdout.write("."); + setTimeout(done, 4000); + } else { + done(); + } +}); + +describe("Insights UI", function() { + // UI tests often require a longer timeout. + // So here we force it to a minimum of 30 seconds. 
+ const currentTimeout = this.timeout(); + this.timeout(Math.max(currentTimeout, 30000)); + + let browser; + let insightsPage; + + // Use Mocha's 'before' hook to set up Puppeteer + beforeEach(async function() { + browser = await puppeteer.launch({ + headless: "new", + slowMo: 40, + ignoreHTTPSErrors: true, + args: ['--no-sandbox', '--disable-setuid-sandbox'], + }); + browser = enhanceBrowser(browser, this.currentTest.title); + let page = await browser.newPage(); + await page.setViewport({ width: 1500, height: 1000 }); + insightsPage = new InsightsPage(page); + }); + + // Use Mocha's 'after' hook to close Puppeteer + afterEach(async function() { + await browser.close(); + }); + + it("should not display BP0001 in the UI", async () => { + await insightsPage.navigateTo(`http://${process.env.ENDPOINT_GLOO_MESH_UI}/insights`); + await insightsPage.selectClusters(['cluster1', 'cluster2']); + await insightsPage.selectInsightTypes([constants.InsightType.BP]); + const data = await insightsPage.getTableDataRows() + expect(data.some(item => item.includes("is not namespaced"))).to.be.false; + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/insights-intro/tests/insight-not-ui-BP0001.test.js.liquid" +timeout --signal=INT 5m mocha ./test.js --timeout 120000 --retries=20 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl apply --context ${CLUSTER1} -f - < ./test.js +var chai = require('chai'); +var expect = chai.expect; +const helpers = require('./tests/chai-exec'); + +describe("Insight generation", () => { + it("Insight CFG0001 has been triggered in the source (MGMT)", () => { + helpers.getOutputForCommand({ command: `kubectl --context ${process.env.MGMT} -n gloo-mesh patch svc gloo-mesh-mgmt-server -p '{"spec":{"ports": [{"port": 9094,"name":"http-insights"}]}}'` }); + helpers.getOutputForCommand({ command: "kubectl -n gloo-mesh run debug --image=nginx: --context " + process.env.MGMT }); + command 
= helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh exec debug -- curl -s http://gloo-mesh-mgmt-server.gloo-mesh:9094/metrics" }).replaceAll("'", ""); + const regex = /gloo_mesh_insights{.*CFG0001.*} 1/; + const match = command.match(regex); + expect(match).to.not.be.null; + }); + + it("Insight CFG0001 has been triggered in PROMETHEUS", () => { + helpers.getOutputForCommand({ command: `kubectl --context ${process.env.MGMT} -n gloo-mesh patch svc prometheus-server -p '{"spec":{"ports": [{"port": 9090,"name":"http-metrics"}]}}'` }); + command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh exec debug -- curl -s 'http://prometheus-server.gloo-mesh:9090/api/v1/query?query=gloo_mesh_insights'" }).replaceAll("'", ""); + let result = JSON.parse(command); + let active = false; + result.data.result.forEach(item => { + if(item.metric.code == "CFG0001" && item.value[1] > 0) { + active = true + } + }); + expect(active).to.be.true; + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/insights-config/../insights-intro/tests/insight-metrics.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl apply --context ${CLUSTER1} -f - < ./test.js +var chai = require('chai'); +var expect = chai.expect; +const helpers = require('./tests/chai-exec'); + +describe("Insight generation", () => { + it("Insight CFG0001 has not been triggered in the source (MGMT)", () => { + helpers.getOutputForCommand({ command: `kubectl --context ${process.env.MGMT} -n gloo-mesh patch svc gloo-mesh-mgmt-server -p '{"spec":{"ports": [{"port": 9094,"name":"http-insights"}]}}'` }); + helpers.getOutputForCommand({ command: "kubectl -n gloo-mesh run debug --image=nginx: --context " + process.env.MGMT }); + command = helpers.getOutputForCommand({ command: "kubectl 
--context " + process.env.MGMT + " -n gloo-mesh exec debug -- curl -s http://gloo-mesh-mgmt-server.gloo-mesh:9094/metrics" }).replaceAll("'", ""); + const regex = /gloo_mesh_insights{.*CFG0001.*} 1/; + const match = command.match(regex); + expect(match).to.be.null; + }); + + it("Insight CFG0001 has not been triggered in PROMETHEUS", () => { + helpers.getOutputForCommand({ command: `kubectl --context ${process.env.MGMT} -n gloo-mesh patch svc prometheus-server -p '{"spec":{"ports": [{"port": 9090,"name":"http-metrics"}]}}'` }); + command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh exec debug -- curl -s 'http://prometheus-server.gloo-mesh:9090/api/v1/query?query=gloo_mesh_insights'" }).replaceAll("'", ""); + let result = JSON.parse(command); + let active = false; + result.data.result.forEach(item => { + if(item.metric.code == "CFG0001" && item.value[1] > 0) { + active = true + } + }); + expect(active).to.be.false; + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/insights-config/../insights-intro/tests/insight-metrics.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context ${CLUSTER1} -n bookinfo-backends delete virtualservice reviews +kubectl --context ${CLUSTER1} -n bookinfo-backends delete destinationrule reviews +kubectl apply --context ${CLUSTER1} -f - < ./test.js +var chai = require('chai'); +var expect = chai.expect; +const helpers = require('./tests/chai-exec'); + +describe("Insight generation", () => { + it("Insight SEC0008 has been triggered in the source (MGMT)", () => { + helpers.getOutputForCommand({ command: `kubectl --context ${process.env.MGMT} -n gloo-mesh patch svc gloo-mesh-mgmt-server -p '{"spec":{"ports": [{"port": 9094,"name":"http-insights"}]}}'` }); + helpers.getOutputForCommand({ command: "kubectl -n gloo-mesh run 
debug --image=nginx: --context " + process.env.MGMT }); + command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh exec debug -- curl -s http://gloo-mesh-mgmt-server.gloo-mesh:9094/metrics" }).replaceAll("'", ""); + const regex = /gloo_mesh_insights{.*SEC0008.*} 1/; + const match = command.match(regex); + expect(match).to.not.be.null; + }); + + it("Insight SEC0008 has been triggered in PROMETHEUS", () => { + helpers.getOutputForCommand({ command: `kubectl --context ${process.env.MGMT} -n gloo-mesh patch svc prometheus-server -p '{"spec":{"ports": [{"port": 9090,"name":"http-metrics"}]}}'` }); + command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh exec debug -- curl -s 'http://prometheus-server.gloo-mesh:9090/api/v1/query?query=gloo_mesh_insights'" }).replaceAll("'", ""); + let result = JSON.parse(command); + let active = false; + result.data.result.forEach(item => { + if(item.metric.code == "SEC0008" && item.value[1] > 0) { + active = true + } + }); + expect(active).to.be.true; + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/insights-security/../insights-intro/tests/insight-metrics.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl apply --context ${CLUSTER1} -f - < ./test.js +var chai = require('chai'); +var expect = chai.expect; +const helpers = require('./tests/chai-exec'); + +describe("Insight generation", () => { + it("Insight SEC0008 has not been triggered in the source (MGMT)", () => { + helpers.getOutputForCommand({ command: `kubectl --context ${process.env.MGMT} -n gloo-mesh patch svc gloo-mesh-mgmt-server -p '{"spec":{"ports": [{"port": 9094,"name":"http-insights"}]}}'` }); + helpers.getOutputForCommand({ command: "kubectl -n gloo-mesh run debug --image=nginx: --context " + 
process.env.MGMT }); + command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh exec debug -- curl -s http://gloo-mesh-mgmt-server.gloo-mesh:9094/metrics" }).replaceAll("'", ""); + const regex = /gloo_mesh_insights{.*SEC0008.*} 1/; + const match = command.match(regex); + expect(match).to.be.null; + }); + + it("Insight SEC0008 has not been triggered in PROMETHEUS", () => { + helpers.getOutputForCommand({ command: `kubectl --context ${process.env.MGMT} -n gloo-mesh patch svc prometheus-server -p '{"spec":{"ports": [{"port": 9090,"name":"http-metrics"}]}}'` }); + command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh exec debug -- curl -s 'http://prometheus-server.gloo-mesh:9090/api/v1/query?query=gloo_mesh_insights'" }).replaceAll("'", ""); + let result = JSON.parse(command); + let active = false; + result.data.result.forEach(item => { + if(item.metric.code == "SEC0008" && item.value[1] > 0) { + active = true + } + }); + expect(active).to.be.false; + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/insights-security/../insights-intro/tests/insight-metrics.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context ${CLUSTER1} -n bookinfo-backends delete authorizationpolicy reviews +kubectl --context ${CLUSTER1} -n istio-system delete peerauthentication default +helm upgrade --install istio-base oci://us-docker.pkg.dev/gloo-mesh/istio-helm-/base \ +--namespace istio-system \ +--kube-context=${CLUSTER1} \ +--version 1.23.0-patch1-solo \ +--create-namespace \ +-f - </istiod \ +--namespace istio-system \ +--kube-context=${CLUSTER1} \ +--version 1.23.0-patch1-solo \ +--create-namespace \ +-f - < + proxy: + clusterDomain: cluster.local + tag: 1.23.0-patch1-solo + multiCluster: + clusterName: cluster1 
+profile: ambient +revision: 1-23-0-patch1 +istio_cni: + enabled: true +meshConfig: + accessLogFile: /dev/stdout + defaultConfig: + proxyMetadata: + ISTIO_META_DNS_AUTO_ALLOCATE: "true" + ISTIO_META_DNS_CAPTURE: "true" + trustDomain: cluster1 +pilot: + enabled: true + env: + PILOT_ENABLE_IP_AUTOALLOCATE: "true" + PILOT_ENABLE_K8S_SELECT_WORKLOAD_ENTRIES: "false" + PILOT_SKIP_VALIDATE_TRUST_DOMAIN: "true" +EOF + +helm upgrade --install istio-cni oci://us-docker.pkg.dev/gloo-mesh/istio-helm-/cni \ +--namespace kube-system \ +--kube-context=${CLUSTER1} \ +--version 1.23.0-patch1-solo \ +--create-namespace \ +-f - < + proxy: 1.23.0-patch1-solo +profile: ambient +revision: 1-23-0-patch1 +cni: + ambient: + dnsCapture: true + excludeNamespaces: + - istio-system + - kube-system +EOF + +helm upgrade --install ztunnel oci://us-docker.pkg.dev/gloo-mesh/istio-helm-/ztunnel \ +--namespace istio-system \ +--kube-context=${CLUSTER1} \ +--version 1.23.0-patch1-solo \ +--create-namespace \ +-f - < +istioNamespace: istio-system +multiCluster: + clusterName: cluster1 +namespace: istio-system +profile: ambient +proxy: + clusterDomain: cluster.local +tag: 1.23.0-patch1-solo +terminationGracePeriodSeconds: 29 +variant: distroless +EOF + +helm upgrade --install istio-ingressgateway-1-23-0-patch1 oci://us-docker.pkg.dev/gloo-mesh/istio-helm-/gateway \ +--namespace istio-gateways \ +--kube-context=${CLUSTER1} \ +--version 1.23.0-patch1-solo \ +--create-namespace \ +-f - </gateway \ +--namespace istio-gateways \ +--kube-context=${CLUSTER1} \ +--version 1.23.0-patch1-solo \ +--create-namespace \ +-f - </base \ +--namespace istio-system \ +--kube-context=${CLUSTER2} \ +--version 1.23.0-patch1-solo \ +--create-namespace \ +-f - </istiod \ +--namespace istio-system \ +--kube-context=${CLUSTER2} \ +--version 1.23.0-patch1-solo \ +--create-namespace \ +-f - < + proxy: + clusterDomain: cluster.local + tag: 1.23.0-patch1-solo + multiCluster: + clusterName: cluster2 +profile: ambient +revision: 
1-23-0-patch1 +istio_cni: + enabled: true +meshConfig: + accessLogFile: /dev/stdout + defaultConfig: + proxyMetadata: + ISTIO_META_DNS_AUTO_ALLOCATE: "true" + ISTIO_META_DNS_CAPTURE: "true" + trustDomain: cluster2 +pilot: + enabled: true + env: + PILOT_ENABLE_IP_AUTOALLOCATE: "true" + PILOT_ENABLE_K8S_SELECT_WORKLOAD_ENTRIES: "false" + PILOT_SKIP_VALIDATE_TRUST_DOMAIN: "true" +EOF + +helm upgrade --install istio-cni oci://us-docker.pkg.dev/gloo-mesh/istio-helm-/cni \ +--namespace kube-system \ +--kube-context=${CLUSTER2} \ +--version 1.23.0-patch1-solo \ +--create-namespace \ +-f - < + proxy: 1.23.0-patch1-solo +profile: ambient +revision: 1-23-0-patch1 +cni: + ambient: + dnsCapture: true + excludeNamespaces: + - istio-system + - kube-system +EOF + +helm upgrade --install ztunnel oci://us-docker.pkg.dev/gloo-mesh/istio-helm-/ztunnel \ +--namespace istio-system \ +--kube-context=${CLUSTER2} \ +--version 1.23.0-patch1-solo \ +--create-namespace \ +-f - < +istioNamespace: istio-system +multiCluster: + clusterName: cluster2 +namespace: istio-system +profile: ambient +proxy: + clusterDomain: cluster.local +tag: 1.23.0-patch1-solo +terminationGracePeriodSeconds: 29 +variant: distroless +EOF + +helm upgrade --install istio-ingressgateway-1-23-0-patch1 oci://us-docker.pkg.dev/gloo-mesh/istio-helm-/gateway \ +--namespace istio-gateways \ +--kube-context=${CLUSTER2} \ +--version 1.23.0-patch1-solo \ +--create-namespace \ +-f - </gateway \ +--namespace istio-gateways \ +--kube-context=${CLUSTER2} \ +--version 1.23.0-patch1-solo \ +--create-namespace \ +-f - < ./test.js + +const helpers = require('./tests/chai-exec'); + +const chaiExec = require("@jsdevtools/chai-exec"); +const helpersHttp = require('./tests/chai-http'); +const chai = require("chai"); +const expect = chai.expect; + +afterEach(function (done) { + if (this.currentTest.currentRetry() > 0) { + process.stdout.write("."); + setTimeout(done, 1000); + } else { + done(); + } +}); + +describe("Checking Istio 
installation", function() { + it('istiod pods are ready in cluster ' + process.env.CLUSTER1, () => helpers.checkDeploymentsWithLabels({ context: process.env.CLUSTER1, namespace: "istio-system", labels: "app=istiod", instances: 2 })); + it('gateway pods are ready in cluster ' + process.env.CLUSTER1, () => helpers.checkDeploymentsWithLabels({ context: process.env.CLUSTER1, namespace: "istio-gateways", labels: "app=istio-ingressgateway", instances: 4 })); + it('istiod pods are ready in cluster ' + process.env.CLUSTER2, () => helpers.checkDeploymentsWithLabels({ context: process.env.CLUSTER2, namespace: "istio-system", labels: "app=istiod", instances: 2 })); + it('gateway pods are ready in cluster ' + process.env.CLUSTER2, () => helpers.checkDeploymentsWithLabels({ context: process.env.CLUSTER2, namespace: "istio-gateways", labels: "app=istio-ingressgateway", instances: 4 })); + it("Gateways have an ip attached in cluster " + process.env.CLUSTER1, () => { + let cli = chaiExec("kubectl --context " + process.env.CLUSTER1 + " -n istio-gateways get svc -l app=istio-ingressgateway -o jsonpath='{.items}'"); + cli.stderr.should.be.empty; + let deployments = JSON.parse(cli.stdout.slice(1,-1)); + expect(deployments).to.have.lengthOf(2); + deployments.forEach((deployment) => { + expect(deployment.status.loadBalancer).to.have.property("ingress"); + }); + }); + it("Gateways have an ip attached in cluster " + process.env.CLUSTER2, () => { + let cli = chaiExec("kubectl --context " + process.env.CLUSTER2 + " -n istio-gateways get svc -l app=istio-ingressgateway -o jsonpath='{.items}'"); + cli.stderr.should.be.empty; + let deployments = JSON.parse(cli.stdout.slice(1,-1)); + expect(deployments).to.have.lengthOf(2); + deployments.forEach((deployment) => { + expect(deployment.status.loadBalancer).to.have.property("ingress"); + }); + }); +}); + +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-istio-helm/tests/istio-ready.test.js.liquid" +timeout 
--signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +cat <<'EOF' > ./test.js +const dns = require('dns'); +const chaiHttp = require("chai-http"); +const chai = require("chai"); +const expect = chai.expect; +chai.use(chaiHttp); +const { waitOnFailedTest } = require('./tests/utils'); + +afterEach(function(done) { waitOnFailedTest(done, this.currentTest.currentRetry())}); + +describe("Address '" + process.env.HOST_GW_CLUSTER1 + "' can be resolved in DNS", () => { + it(process.env.HOST_GW_CLUSTER1 + ' can be resolved', (done) => { + return dns.lookup(process.env.HOST_GW_CLUSTER1, (err, address, family) => { + expect(address).to.be.an.ip; + done(); + }); + }); +}); +EOF +echo "executing test ./default/tests/can-resolve.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +cat <<'EOF' > ./test.js +const dns = require('dns'); +const chaiHttp = require("chai-http"); +const chai = require("chai"); +const expect = chai.expect; +chai.use(chaiHttp); +const { waitOnFailedTest } = require('./tests/utils'); + +afterEach(function(done) { waitOnFailedTest(done, this.currentTest.currentRetry())}); + +describe("Address '" + process.env.HOST_GW_CLUSTER2 + "' can be resolved in DNS", () => { + it(process.env.HOST_GW_CLUSTER2 + ' can be resolved', (done) => { + return dns.lookup(process.env.HOST_GW_CLUSTER2, (err, address, family) => { + expect(address).to.be.an.ip; + done(); + }); + }); +}); +EOF +echo "executing test ./default/tests/can-resolve.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context ${CLUSTER1} get ns -l istio.io/rev=1-23 -o json | jq -r '.items[].metadata.name' | while read ns; do + kubectl --context ${CLUSTER1} label ns ${ns} istio.io/rev=1-23-0-patch1 
--overwrite +done +kubectl --context ${CLUSTER2} get ns -l istio.io/rev=1-23 -o json | jq -r '.items[].metadata.name' | while read ns; do + kubectl --context ${CLUSTER2} label ns ${ns} istio.io/rev=1-23-0-patch1 --overwrite +done +kubectl --context ${CLUSTER1} -n httpbin patch deploy in-mesh --patch "{\"spec\": {\"template\": {\"metadata\": {\"labels\": {\"istio.io/rev\": \"1-23-0-patch1\" }}}}}" +kubectl --context ${CLUSTER1} -n clients patch deploy in-mesh-with-sidecar --patch "{\"spec\": {\"template\": {\"metadata\": {\"labels\": {\"istio.io/rev\": \"1-23-0-patch1\" }}}}}" +kubectl --context ${CLUSTER1} -n httpbin rollout status deploy in-mesh +curl -k "https:///productpage" -I +cat <<'EOF' > ./test.js +const helpers = require('./tests/chai-http'); + +describe("productpage is accessible", () => { + it('/productpage is available in cluster1', () => helpers.checkURL({ host: `https://cluster1-bookinfo.example.com`, path: '/productpage', retCode: 200 })); +}) + +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/helm-migrate-workloads-to-revision/../deploy-istio-helm/tests/productpage-available.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context ${CLUSTER1} -n istio-gateways patch svc istio-ingressgateway --type=json --patch '[{"op": "remove", "path": "/spec/selector/revision"}]' +kubectl --context ${CLUSTER1} -n istio-gateways patch svc istio-eastwestgateway --type=json --patch '[{"op": "remove", "path": "/spec/selector/revision"}]' +kubectl --context ${CLUSTER2} -n istio-gateways patch svc istio-ingressgateway --type=json --patch '[{"op": "remove", "path": "/spec/selector/revision"}]' +kubectl --context ${CLUSTER2} -n istio-gateways patch svc istio-eastwestgateway --type=json --patch '[{"op": "remove", "path": "/spec/selector/revision"}]' +curl -k "https:///productpage" -I +cat <<'EOF' > ./test.js +const helpers = 
require('./tests/chai-http'); + +describe("productpage is accessible", () => { + it('/productpage is available in cluster1', () => helpers.checkURL({ host: `https://cluster1-bookinfo.example.com`, path: '/productpage', retCode: 200 })); +}) + +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/helm-migrate-workloads-to-revision/../deploy-istio-helm/tests/productpage-available.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +cat <<'EOF' > ./test.js +const helpers = require('./tests/chai-http'); + +describe("productpage is accessible", () => { + it('/productpage is available in cluster1', () => helpers.checkURL({ host: `https://cluster1-bookinfo.example.com`, path: '/productpage', retCode: 200 })); +}) + +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/helm-migrate-workloads-to-revision/../deploy-istio-helm/tests/productpage-available.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +cat <<'EOF' > ./test.js +const chaiExec = require("@jsdevtools/chai-exec"); +const chai = require("chai"); +var expect = chai.expect; + +afterEach(function (done) { + if (this.currentTest.currentRetry() > 0) { + process.stdout.write("."); + setTimeout(done, 1000); + } else { + done(); + } +}); + +describe("istio in place upgrades", function() { + const cluster1 = process.env.CLUSTER1; + it("should upgrade waypoints", () => { + let cli = chaiExec(`sh -c "istioctl --context ${cluster1} ps | grep waypoint"`); + expect(cli.stdout).to.contain("1.23.0-patch1"); + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/helm-migrate-workloads-to-revision/tests/waypoint-upgraded.test.js.liquid" +timeout --signal=INT 1m mocha ./test.js --timeout 10000 --retries=60 --bail || { DEBUG_MODE=true mocha 
./test.js --timeout 120000; exit 1; }
+helm uninstall istio-ingressgateway-1-23 \
+--namespace istio-gateways \
+--kube-context=${CLUSTER1}
+
+helm uninstall istio-eastwestgateway-1-23 \
+--namespace istio-gateways \
+--kube-context=${CLUSTER1}
+
+helm uninstall istio-ingressgateway-1-23 \
+--namespace istio-gateways \
+--kube-context=${CLUSTER2}
+
+helm uninstall istio-eastwestgateway-1-23 \
+--namespace istio-gateways \
+--kube-context=${CLUSTER2}
+kubectl --context ${CLUSTER1} -n istio-system get pods
+kubectl --context ${CLUSTER2} -n istio-system get pods
+kubectl --context ${CLUSTER1} -n istio-gateways get pods
+kubectl --context ${CLUSTER2} -n istio-gateways get pods
+ATTEMPTS=1
+until [[ $(kubectl --context ${CLUSTER1} -n istio-gateways get pods -l "istio.io/rev=1-23" -o json | jq '.items | length') -eq 0 ]] || [ $ATTEMPTS -gt 120 ]; do
+  printf "."
+  ATTEMPTS=$((ATTEMPTS + 1))
+  sleep 1
+done
+[ $ATTEMPTS -le 120 ] || kubectl --context ${CLUSTER1} -n istio-gateways get pods -l "istio.io/rev=1-23"
+
+ATTEMPTS=1
+until [[ $(kubectl --context ${CLUSTER2} -n istio-gateways get pods -l "istio.io/rev=1-23" -o json | jq '.items | length') -eq 0 ]] || [ $ATTEMPTS -gt 60 ]; do
+  printf "."
+  ATTEMPTS=$((ATTEMPTS + 1))
+  sleep 1
+done
+[ $ATTEMPTS -le 60 ] || kubectl --context ${CLUSTER2} -n istio-gateways get pods -l "istio.io/rev=1-23"
+helm uninstall istiod-1-23 \
+--namespace istio-system \
+--kube-context=${CLUSTER1}
+
+helm uninstall istiod-1-23 \
+--namespace istio-system \
+--kube-context=${CLUSTER2}
+ATTEMPTS=1
+until [[ $(kubectl --context ${CLUSTER1} -n istio-system get pods -l "istio.io/rev=1-23" -o json | jq '.items | length') -eq 0 ]] || [ $ATTEMPTS -gt 120 ]; do
+  printf "."
+  ATTEMPTS=$((ATTEMPTS + 1))
+  sleep 1
+done
+[ $ATTEMPTS -le 120 ] || kubectl --context ${CLUSTER1} -n istio-system get pods -l "istio.io/rev=1-23"
+ATTEMPTS=1
+until [[ $(kubectl --context ${CLUSTER2} -n istio-system get pods -l "istio.io/rev=1-23" -o json | jq '.items | length') -eq 0 ]] || [ $ATTEMPTS -gt 60 ]; do
+  printf "."
+  ATTEMPTS=$((ATTEMPTS + 1))
+  sleep 1
+done
+[ $ATTEMPTS -le 60 ] || kubectl --context ${CLUSTER2} -n istio-system get pods -l "istio.io/rev=1-23"
+kubectl --context ${CLUSTER1} -n istio-system get pods && kubectl --context ${CLUSTER1} -n istio-gateways get pods
+cat <<'EOF' > ./test.js
+const chaiExec = require("@jsdevtools/chai-exec");
+var chai = require('chai');
+var expect = chai.expect;
+chai.use(chaiExec);
+
+afterEach(function (done) {
+  if (this.currentTest.currentRetry() > 0) {
+    process.stdout.write(".");
+    setTimeout(done, 1000);
+  } else {
+    done();
+  }
+});
+describe("Old Istio version should be uninstalled", () => {
+  it("Pods aren't running anymore in CLUSTER1, namespace istio-system", () => {
+    let cli = chaiExec('kubectl --context ' + process.env.CLUSTER1 + ' -n istio-system get pods -l "istio.io/rev=' + process.env.OLD_REVISION +'" -o json');
+    expect(cli).to.exit.with.code(0);
+    expect(JSON.parse(cli.stdout).items).to.have.lengthOf(0);
+  });
+  it("Pods aren't running anymore in CLUSTER1, namespace istio-gateways", () => {
+    let cli = chaiExec('kubectl --context ' + process.env.CLUSTER1 + ' -n istio-gateways get pods -l "istio.io/rev=' + process.env.OLD_REVISION +'" -o json');
+    expect(cli).to.exit.with.code(0);
+    expect(JSON.parse(cli.stdout).items).to.have.lengthOf(0);
+  });
+  it("Pods aren't running anymore in CLUSTER2, namespace istio-system", () => {
+    let cli = chaiExec('kubectl --context ' + process.env.CLUSTER2 + ' -n istio-system get pods -l "istio.io/rev=' + process.env.OLD_REVISION +'" -o json');
+    expect(cli).to.exit.with.code(0);
+    expect(JSON.parse(cli.stdout).items).to.have.lengthOf(0);
+  });
+  it("Pods
aren't running anymore in CLUSTER2, namespace istio-gateways", () => { + let cli = chaiExec('kubectl --context ' + process.env.CLUSTER2 + ' -n istio-gateways get pods -l "istio.io/rev=' + process.env.OLD_REVISION +'" -o json'); + expect(cli).to.exit.with.code(0); + expect(JSON.parse(cli.stdout).items).to.have.lengthOf(0); + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/helm-cleanup-revision/tests/previous-version-uninstalled.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context ${CLUSTER1} apply -f - < ./test.js +const chaiExec = require("@jsdevtools/chai-exec"); +var chai = require('chai'); +var expect = chai.expect; +chai.use(chaiExec); + +afterEach(function (done) { + if (this.currentTest.currentRetry() > 0) { + process.stdout.write("."); + setTimeout(done, 1000); + } else { + done(); + } +}); + +describe("egress traffic", function() { + const cluster = process.env.CLUSTER1 + + it(`virtual service should add customer header`, function() { + let command = `kubectl --context ${cluster} -n clients exec deploy/in-ambient -- curl -s httpbin.org/get`; + let cli = chaiExec(command); + expect(cli.output.toLowerCase()).to.contain('my-added-header'); + }); + + it(`destination rule should route to https`, function() { + let command = `kubectl --context ${cluster} -n clients exec deploy/in-ambient -- curl -s httpbin.org/get`; + let cli = chaiExec(command); + expect(cli.output.toLowerCase()).to.contain('https://httpbin.org/get'); + }); + + it(`other types of traffic (HTTP methods) should be rejected`, function() { + let command = `kubectl --context ${cluster} -n clients exec deploy/in-ambient -- curl -s -I httpbin.org/get`; + let cli = chaiExec(command); + expect(cli.output).to.contain('403 Forbidden'); + }); +}); + +EOF +echo "executing test 
dist/gloo-mesh-2-0-workshop/build/templates/steps/ambient/waypoint-egress/tests/validate-egress-traffic.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 20000 --retries=60 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context ${CLUSTER1} delete authorizationpolicy httpbin -n egress +kubectl --context ${CLUSTER1} apply -f - < ./test.js +const chaiExec = require("@jsdevtools/chai-exec"); +var chai = require('chai'); +var expect = chai.expect; +chai.use(chaiExec); + +afterEach(function (done) { + if (this.currentTest.currentRetry() > 0) { + process.stdout.write("."); + setTimeout(done, 1000); + } else { + done(); + } +}); + +describe("waypoint for service when ns is labeled", function() { + const cluster = process.env.CLUSTER1 + + it(`should redirect traffic for all services to the waypoint`, () => { + let command = `kubectl --context ${cluster} -n clients exec deploy/in-ambient -- curl -v "http://ratings.bookinfo-backends:9080/ratings/0"`; + let cli = chaiExec(command); + expect(cli).to.exit.with.code(0); + expect(cli).output.to.contain('istio-envoy'); + + command = `kubectl --context ${cluster} -n clients exec deploy/in-ambient -- curl -v "http://reviews.bookinfo-backends:9080/reviews/0"`; + cli = chaiExec(command); + expect(cli).to.exit.with.code(0); + expect(cli).output.to.contain('istio-envoy'); + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/ambient/waypoint-deployment-options/tests/validate-waypoint-for-service-ns.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 20000 --retries=10 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context ${CLUSTER1} apply -f - < ./test.js +const chaiExec = require("@jsdevtools/chai-exec"); +var chai = require('chai'); +var expect = chai.expect; +chai.use(chaiExec); + +afterEach(function (done) { + if (this.currentTest.currentRetry() > 0) { + process.stdout.write("."); + 
setTimeout(done, 1000); + } else { + done(); + } +}); + +describe("service labeling to use a waypoint takes precedence over namespace labeling", function() { + const cluster = process.env.CLUSTER1 + + it(`should redirect traffic of labeled service through the waypoint and enforce the policy`, () => { + let command = `kubectl --context ${cluster} -n clients exec deploy/in-ambient -- curl -v "http://ratings.bookinfo-backends:9080/ratings/0"`; + let cli = chaiExec(command); + expect(cli).to.exit.with.code(0); + expect(cli).output.to.contain('Forbidden'); + }); + + it(`should NOT redirect traffic of NON labeled services, which are redirected to the waypoint the namespace is configured for`, () => { + let command = `kubectl --context ${cluster} -n clients exec deploy/in-ambient -- curl -v "http://reviews.bookinfo-backends:9080/reviews/0"`; + let cli = chaiExec(command); + expect(cli).to.exit.with.code(0); + expect(cli).output.to.contain('istio-envoy'); + }); +}); + +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/ambient/waypoint-deployment-options/tests/validate-waypoint-for-specific-service.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 20000 --retries=10 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context ${CLUSTER1} apply -f - < ./test.js +const chaiExec = require("@jsdevtools/chai-exec"); +var chai = require('chai'); +var expect = chai.expect; +chai.use(chaiExec); + +afterEach(function (done) { + if (this.currentTest.currentRetry() > 0) { + process.stdout.write("."); + setTimeout(done, 1000); + } else { + done(); + } +}); + +describe("waypoint for workloads when pod is labeled", function() { + const cluster = process.env.CLUSTER1 + + it(`should redirect traffic to waypoint`, () => { + let commandGetIP = `kubectl --context ${cluster} -n bookinfo-backends get pod -l app=ratings -o jsonpath='{.items[0].status.podIP}'`; + let cli = chaiExec(commandGetIP); + let podIP = 
cli.output.replace(/'/g, ''); + + let command = `kubectl --context ${cluster} -n clients exec deploy/in-ambient -- curl -v "http://${podIP}:9080/ratings/0"`; + cli = chaiExec(command); + + expect(cli).to.exit.with.code(0); + expect(cli).output.to.contain('istio-envoy'); + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/ambient/waypoint-deployment-options/tests/validate-waypoint-for-workload.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 20000 --retries=10 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context ${CLUSTER1} -n bookinfo-backends label pod -l app=ratings istio.io/use-waypoint- +kubectl --context ${CLUSTER1} -n bookinfo-backends label svc ratings istio.io/use-waypoint=ratings-waypoint +kubectl --context ${CLUSTER1} -n bookinfo-backends delete authorizationpolicy deny-traffic-from-clients-ns +kubectl --context ${CLUSTER1} -n bookinfo-backends delete gateway waypoint ratings-waypoint ratings-workload-waypoint diff --git a/gloo-mesh/core/2-6/ambient/scripts/configure-domain-rewrite.sh b/gloo-mesh/core/2-6/ambient/scripts/configure-domain-rewrite.sh index be6dbd6d8b..d6e684c9da 100755 --- a/gloo-mesh/core/2-6/ambient/scripts/configure-domain-rewrite.sh +++ b/gloo-mesh/core/2-6/ambient/scripts/configure-domain-rewrite.sh @@ -90,4 +90,4 @@ done # If the loop exits, it means the check failed consistently for 1 minute echo "DNS rewrite rule verification failed." 
-exit 1 +exit 1 \ No newline at end of file diff --git a/gloo-mesh/core/2-6/ambient/scripts/register-domain.sh b/gloo-mesh/core/2-6/ambient/scripts/register-domain.sh index f9084487e8..1cb84cd86a 100755 --- a/gloo-mesh/core/2-6/ambient/scripts/register-domain.sh +++ b/gloo-mesh/core/2-6/ambient/scripts/register-domain.sh @@ -14,7 +14,9 @@ hosts_file="/etc/hosts" # Function to check if the input is a valid IP address is_ip() { if [[ $1 =~ ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$ ]]; then - return 0 # 0 = true + return 0 # 0 = true - valid IPv4 address + elif [[ $1 =~ ^[0-9a-f]+[:]+[0-9a-f]*[:]*[0-9a-f]*[:]*[0-9a-f]*[:]*[0-9a-f]*[:]*[0-9a-f]*[:]*[0-9]*$ ]]; then + return 0 # 0 = true - valid IPv6 address else return 1 # 1 = false fi @@ -38,14 +40,15 @@ else fi # Check if the entry already exists -if grep -q "$hostname" "$hosts_file"; then +if grep -q "$hostname\$" "$hosts_file"; then # Update the existing entry with the new IP tempfile=$(mktemp) - sed "s/^.*$hostname/$new_ip $hostname/" "$hosts_file" > "$tempfile" + sed "s/^.*$hostname\$/$new_ip $hostname/" "$hosts_file" > "$tempfile" sudo cp "$tempfile" "$hosts_file" + rm "$tempfile" echo "Updated $hostname in $hosts_file with new IP: $new_ip" else # Add a new entry if it doesn't exist echo "$new_ip $hostname" | sudo tee -a "$hosts_file" > /dev/null echo "Added $hostname to $hosts_file with IP: $new_ip" -fi \ No newline at end of file +fi diff --git a/gloo-mesh/core/2-6/ambient/tests/chai-exec.js b/gloo-mesh/core/2-6/ambient/tests/chai-exec.js index 67ba62f095..020262437f 100644 --- a/gloo-mesh/core/2-6/ambient/tests/chai-exec.js +++ b/gloo-mesh/core/2-6/ambient/tests/chai-exec.js @@ -139,7 +139,11 @@ global = { }, k8sObjectIsPresent: ({ context, namespace, k8sType, k8sObj }) => { - let command = "kubectl --context " + context + " -n " + namespace + " get " + k8sType + " " + k8sObj + " -o name"; + // covers both namespace scoped and cluster scoped objects + let command = "kubectl --context " + context + " get " + k8sType + 
" " + k8sObj + " -o name"; + if (namespace) { + command = "kubectl --context " + context + " -n " + namespace + " get " + k8sType + " " + k8sObj + " -o name"; + } debugLog(`Executing command: ${command}`); let cli = chaiExec(command); @@ -176,7 +180,6 @@ global = { debugLog(`Command output (stdout): ${cli.stdout}`); return cli.stdout; }, - curlInPod: ({ curlCommand, podName, namespace }) => { debugLog(`Executing curl command: ${curlCommand} on pod: ${podName} in namespace: ${namespace}`); const cli = chaiExec(curlCommand); diff --git a/gloo-mesh/core/2-6/ambient/tests/chai-http.js b/gloo-mesh/core/2-6/ambient/tests/chai-http.js index 67f43db003..92bf579690 100644 --- a/gloo-mesh/core/2-6/ambient/tests/chai-http.js +++ b/gloo-mesh/core/2-6/ambient/tests/chai-http.js @@ -25,7 +25,30 @@ global = { .send() .then(async function (res) { debugLog(`Response status code: ${res.status}`); - expect(res).to.have.status(retCode); + expect(res).to.have.property('status', retCode); + }); + }, + + checkURLWithIP: ({ ip, host, protocol = "http", path = "", headers = [], certFile = '', keyFile = '', retCode }) => { + debugLog(`Checking URL with IP: ${ip}, Host: ${host}, Path: ${path} with expected return code: ${retCode}`); + + let cert = certFile ? fs.readFileSync(certFile) : ''; + let key = keyFile ? 
fs.readFileSync(keyFile) : ''; + + let url = `${protocol}://${ip}`; + + // Use chai-http to make a request to the IP address, but set the Host header + let request = chai.request(url).head(path).redirects(0).cert(cert).key(key).set('Host', host); + + debugLog(`Setting headers: ${JSON.stringify(headers)}`); + headers.forEach(header => request.set(header.key, header.value)); + + return request + .send() + .then(async function (res) { + debugLog(`Response status code: ${res.status}`); + debugLog(`Response ${JSON.stringify(res)}`); + expect(res).to.have.property('status', retCode); }); }, @@ -124,7 +147,7 @@ global = { .send() .then(async function (res) { debugLog(`Response status code: ${res.status}`); - expect(res).to.have.status(retCode); + expect(res).to.have.property('status', retCode); }); } }; diff --git a/gloo-mesh/core/2-6/ambient/tests/proxies-changes.test.js.liquid b/gloo-mesh/core/2-6/ambient/tests/proxies-changes.test.js.liquid new file mode 100644 index 0000000000..1934ea13b6 --- /dev/null +++ b/gloo-mesh/core/2-6/ambient/tests/proxies-changes.test.js.liquid @@ -0,0 +1,58 @@ +{%- assign version_1_18_or_after = "1.18.0" | minimumGlooGatewayVersion %} +const { execSync } = require('child_process'); +const { expect } = require('chai'); +const { diff } = require('jest-diff'); + +function delay(ms) { + return new Promise(resolve => setTimeout(resolve, ms)); +} + +describe('Gloo snapshot stability test', function() { + let contextName = process.env.{{ context | default: "CLUSTER1" }}; + let delaySeconds = {{ delay | default: 5 }}; + + let firstSnapshot; + + it('should retrieve initial snapshot', function() { + const output = execSync( + `kubectl --context ${contextName} -n gloo-system exec deploy/gloo -- wget -O - localhost:{% if version_1_18_or_after %}9095{% else %}9091{% endif %}/snapshots/proxies -q`, + { encoding: 'utf8' } + ); + + try { + firstSnapshot = JSON.parse(output); + } catch (err) { + throw new Error('Failed to parse JSON output from initial 
snapshot: ' + err.message); + } + expect(firstSnapshot).to.be.an('object'); + }); + + it('should not change after the given delay', async function() { + await delay(delaySeconds * 1000); + + let secondSnapshot; + try { + const output2 = execSync( + `kubectl --context ${contextName} -n gloo-system exec deploy/gloo -- wget -O - localhost:{% if version_1_18_or_after %}9095{% else %}9091{% endif %}/snapshots/proxies -q`, + { encoding: 'utf8' } + ); + secondSnapshot = JSON.parse(output2); + } catch (err) { + throw new Error('Failed to retrieve or parse the second snapshot: ' + err.message); + } + + const firstJson = JSON.stringify(firstSnapshot, null, 2); + const secondJson = JSON.stringify(secondSnapshot, null, 2); + + // Show only 2 lines of context around each change + const diffOutput = diff(firstJson, secondJson, { contextLines: 2, expand: false }); + + if (! diffOutput.includes("Compared values have no visual difference.")) { + console.error('Differences found between snapshots:\n' + diffOutput); + throw new Error('Snapshots differ after the delay.'); + } else { + console.log('No differences found. The snapshots are stable.'); + } + }); +}); + diff --git a/gloo-mesh/core/2-6/default/README.md b/gloo-mesh/core/2-6/default/README.md index a512e1baa5..1348479a64 100644 --- a/gloo-mesh/core/2-6/default/README.md +++ b/gloo-mesh/core/2-6/default/README.md @@ -9,13 +9,13 @@ source ./scripts/assert.sh -#
Gloo Mesh Core (2.6.6)
+# Gloo Mesh Core (2.6.7)
## Table of Contents * [Introduction](#introduction) -* [Lab 1 - Deploy KinD clusters](#lab-1---deploy-kind-clusters-) +* [Lab 1 - Deploy KinD Cluster(s)](#lab-1---deploy-kind-clusters-) * [Lab 2 - Deploy and register Gloo Mesh](#lab-2---deploy-and-register-gloo-mesh-) * [Lab 3 - Deploy Istio using Gloo Mesh Lifecycle Manager](#lab-3---deploy-istio-using-gloo-mesh-lifecycle-manager-) * [Lab 4 - Deploy the Bookinfo demo app](#lab-4---deploy-the-bookinfo-demo-app-) @@ -68,7 +68,7 @@ You can find more information about Gloo Mesh Core in the official documentation -## Lab 1 - Deploy KinD clusters +## Lab 1 - Deploy KinD Cluster(s) Clone this repository and go to the directory where this `README.md` file is. @@ -81,14 +81,13 @@ export CLUSTER1=cluster1 export CLUSTER2=cluster2 ``` -Run the following commands to deploy three Kubernetes clusters using [Kind](https://kind.sigs.k8s.io/): +Deploy the KinD clusters: ```bash -./scripts/deploy-aws.sh 1 mgmt -./scripts/deploy-aws.sh 2 cluster1 us-west us-west-1 -./scripts/deploy-aws.sh 3 cluster2 us-west us-west-2 +bash ./data/steps/deploy-kind-clusters/deploy-mgmt.sh +bash ./data/steps/deploy-kind-clusters/deploy-cluster1.sh +bash ./data/steps/deploy-kind-clusters/deploy-cluster2.sh ``` - Then run the following commands to wait for all the Pods to be ready: ```bash @@ -99,27 +98,8 @@ Then run the following commands to wait for all the Pods to be ready: **Note:** If you run the `check.sh` script immediately after the `deploy.sh` script, you may see a jsonpath error. If that happens, simply wait a few seconds and try again. 
-Once the `check.sh` script completes, when you execute the `kubectl get pods -A` command, you should see the following: - -``` -NAMESPACE NAME READY STATUS RESTARTS AGE -kube-system calico-kube-controllers-59d85c5c84-sbk4k 1/1 Running 0 4h26m -kube-system calico-node-przxs 1/1 Running 0 4h26m -kube-system coredns-6955765f44-ln8f5 1/1 Running 0 4h26m -kube-system coredns-6955765f44-s7xxx 1/1 Running 0 4h26m -kube-system etcd-cluster1-control-plane 1/1 Running 0 4h27m -kube-system kube-apiserver-cluster1-control-plane 1/1 Running 0 4h27m -kube-system kube-controller-manager-cluster1-control-plane1/1 Running 0 4h27m -kube-system kube-proxy-ksvzw 1/1 Running 0 4h26m -kube-system kube-scheduler-cluster1-control-plane 1/1 Running 0 4h27m -local-path-storage local-path-provisioner-58f6947c7-lfmdx 1/1 Running 0 4h26m -metallb-system controller-5c9894b5cd-cn9x2 1/1 Running 0 4h26m -metallb-system speaker-d7jkp 1/1 Running 0 4h26m -``` - -**Note:** The CNI pods might be different, depending on which CNI you have deployed. - -You can see that your currently connected to this cluster by executing the `kubectl config get-contexts` command: +Once the `check.sh` script completes, execute the `kubectl get pods -A` command, and verify that all pods are in a running state. 
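The replacement text above tells readers to "verify that all pods are in a running state" by eye; that check can also be made scriptable. A minimal sketch (a hypothetical helper, not part of the workshop's `check.sh`) that flags pods whose STATUS column is neither `Running` nor `Completed`; here it is fed sample `kubectl get pods -A --no-headers` output so the logic works without a live cluster:

```bash
# Sketch: list pods whose STATUS column (field 4) is neither Running nor Completed.
# "sample" stands in for live `kubectl get pods -A --no-headers` output; in practice
# you would pipe kubectl straight into awk.
sample='kube-system   coredns-6955765f44-ln8f5   1/1   Running            0   4h26m
kube-system   broken-pod                 0/1   CrashLoopBackOff   3   4h26m'
unhealthy=$(echo "$sample" | awk '$4 != "Running" && $4 != "Completed" {print $2}')
echo "unhealthy pods: ${unhealthy:-none}"
```

An empty result means every pod is healthy; anything printed is worth a `kubectl describe`.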
+ You can see that you're currently connected to this cluster by executing the `kubectl config get-contexts` command: ``` CURRENT NAME CLUSTER AUTHINFO NAMESPACE @@ -138,7 +118,8 @@ cat <<'EOF' > ./test.js const helpers = require('./tests/chai-exec'); describe("Clusters are healthy", () => { - const clusters = [process.env.MGMT, process.env.CLUSTER1, process.env.CLUSTER2]; + const clusters = ["mgmt", "cluster1", "cluster2"]; + clusters.forEach(cluster => { it(`Cluster ${cluster} is healthy`, () => helpers.k8sObjectIsPresent({ context: cluster, namespace: "default", k8sType: "service", k8sObj: "kubernetes" })); }); @@ -150,6 +131,7 @@ timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || + ## Lab 2 - Deploy and register Gloo Mesh [VIDEO LINK](https://youtu.be/djfFiepK4GY "Video Link") @@ -157,7 +139,7 @@ timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || Before we get started, let's install the `meshctl` CLI: ```bash -export GLOO_MESH_VERSION=v2.6.6 +export GLOO_MESH_VERSION=v2.6.7 curl -sL https://run.solo.io/meshctl/install | sh - export PATH=$HOME/.gloo-mesh/bin:$PATH ``` @@ -190,6 +172,7 @@ EOF echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-and-register-gloo-mesh/tests/environment-variables.test.js.liquid" timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } --> + Run the following commands to deploy the Gloo Mesh management plane: ```bash @@ -200,13 +183,13 @@ helm upgrade --install gloo-platform-crds gloo-platform-crds \ --namespace gloo-mesh \ --kube-context ${MGMT} \ --set featureGates.insightsConfiguration=true \ - --version 2.6.6 + --version 2.6.7 helm upgrade --install gloo-platform gloo-platform \ --repo https://storage.googleapis.com/gloo-platform/helm-charts \ --namespace gloo-mesh \ --kube-context ${MGMT} \ - --version 2.6.6 \ + --version 2.6.7 \ -f -< [VIDEO
LINK](https://youtu.be/f76-KOEjqHs "Video Link") diff --git a/gloo-mesh/core/2-6/default/data/steps/deploy-kind-clusters/deploy-cluster1.sh b/gloo-mesh/core/2-6/default/data/steps/deploy-kind-clusters/deploy-cluster1.sh new file mode 100644 index 0000000000..31b0806b9b --- /dev/null +++ b/gloo-mesh/core/2-6/default/data/steps/deploy-kind-clusters/deploy-cluster1.sh @@ -0,0 +1,289 @@ +#!/usr/bin/env bash +set -o errexit + +number="2" +name="cluster1" +region="" +zone="" +twodigits=$(printf "%02d\n" $number) + +kindest_node=${KINDEST_NODE} + +if [ -z "$kindest_node" ]; then + export k8s_version="1.28.0" + + [[ ${k8s_version::1} != 'v' ]] && export k8s_version=v${k8s_version} + kindest_node_ver=$(curl --silent "https://registry.hub.docker.com/v2/repositories/kindest/node/tags?page_size=100" \ + | jq -r '.results | .[] | select(.name==env.k8s_version) | .name+"@"+.digest') + + if [ -z "$kindest_node_ver" ]; then + echo "Incorrect Kubernetes version provided: ${k8s_version}." + exit 1 + fi + kindest_node=kindest/node:${kindest_node_ver} +fi +echo "Using KinD image: ${kindest_node}" + +if [ -z "$3" ]; then + case $name in + cluster1) + region=us-west-1 + ;; + cluster2) + region=us-west-2 + ;; + *) + region=us-east-1 + ;; + esac +fi + +if [ -z "$4" ]; then + case $name in + cluster1) + zone=us-west-1a + ;; + cluster2) + zone=us-west-2a + ;; + *) + zone=us-east-1a + ;; + esac +fi + +if hostname -I 2>/dev/null; then + myip=$(hostname -I | awk '{ print $1 }') +else + myip=$(ipconfig getifaddr en0) +fi + +# Function to determine the next available cluster number +get_next_cluster_number() { + if ! 
kind get clusters 2>&1 | grep "^kind" > /dev/null; then + echo 1 + else + highest_num=$(kind get clusters | grep "^kind" | tail -1 | cut -c 5-) + echo $((highest_num + 1)) + fi +} + +if [ -f /.dockerenv ]; then +myip=$HOST_IP +container=$(docker inspect $(docker ps -q) | jq -r ".[] | select(.Config.Hostname == \"$HOSTNAME\") | .Name" | cut -d/ -f2) +docker network connect "kind" $container || true +number=$(get_next_cluster_number) +twodigits=$(printf "%02d\n" $number) +fi + +reg_name='kind-registry' +reg_port='5000' +docker start "${reg_name}" 2>/dev/null || \ +docker run -d --restart=always -p "0.0.0.0:${reg_port}:5000" --name "${reg_name}" registry:2 + +cache_port='5000' +cat > registries < ${HOME}/.${cache_name}-config.yml </dev/null || \ +docker run -d --restart=always ${DEPLOY_EXTRA_PARAMS} -v ${HOME}/.${cache_name}-config.yml:/etc/docker/registry/config.yml --name "${cache_name}" registry:2 +done +mkdir -p /tmp/oidc + +cat <<'EOF' >/tmp/oidc/sa-signer-pkcs8.pub +-----BEGIN PUBLIC KEY----- +MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA53YiBcrn7+ZK0Vb4odeA +1riYdvEb8To4H6/HtF+OKzuCIXFQ+bRy7yMrDGITYpfYPrTZOgfdeTLZqOiAj+cL +395nvxdly83SUrdh7ItfOPRluuuiPHnFn111wpyjBw5nut4Kx+M5MksNfA1hU0Zw +zIM9OviX8iEF8xHWUtz4BAMDG8N6+zpLo0pAzaei5hKuLZ9dZOzHBC8VOW82cQMm +5X5uOKsCHMtNSjqYUNB1DxN6xxM+odGWT/6xthPGk6YCxmO28YHPFZfiS2eAIpD8 +2p/16KQKU6TkZSrldkYxiHIPhu+5f9faZJG7dB9pLN1SfdTBio4PK5Mz9muLUCv9 +ywIDAQAB +-----END PUBLIC KEY----- +EOF + +cat <<'EOF' >/tmp/oidc/sa-signer.key +-----BEGIN RSA PRIVATE KEY----- +MIIEpAIBAAKCAQEA53YiBcrn7+ZK0Vb4odeA1riYdvEb8To4H6/HtF+OKzuCIXFQ ++bRy7yMrDGITYpfYPrTZOgfdeTLZqOiAj+cL395nvxdly83SUrdh7ItfOPRluuui +PHnFn111wpyjBw5nut4Kx+M5MksNfA1hU0ZwzIM9OviX8iEF8xHWUtz4BAMDG8N6 ++zpLo0pAzaei5hKuLZ9dZOzHBC8VOW82cQMm5X5uOKsCHMtNSjqYUNB1DxN6xxM+ +odGWT/6xthPGk6YCxmO28YHPFZfiS2eAIpD82p/16KQKU6TkZSrldkYxiHIPhu+5 +f9faZJG7dB9pLN1SfdTBio4PK5Mz9muLUCv9ywIDAQABAoIBAB8tro+RMYUDRHjG +el9ypAxIeWEsQVNRQFYkW4ZUiNYSAgl3Ni0svX6xAg989peFVL+9pLVIcfDthJxY 
+FVlNCjBxyQ/YmwHFC9vQkARJEd6eLUXsj8INtS0ubbp1VxCQRDDL0C/0z7OSoJJh +SwboqjEiTJExA2a+RArmEDTBRzdi3t+kT8G23JcqOivrITt17K6bQYyJXw7/vUdc +r/R+hfd5TqVq92VddzDT7RNJAxsbPPXjGnESlq1GALBDs+uBGYsP0fiEJb2nicSv +z9fBnBeERhut1gcE0C0iLRQZb+3r8TitBtxrZv+0BHgXrkKtXDwWTqGEKOwC4dBn +7nxkH2ECgYEA6+/DOTABGYOWOQftFkJMjcugzDrjoGpuXuVOTb65T+3FHAzU93zy +3bt3wQxrlugluyy9Sc/PL3ck2LgUsPHZ+s7zsdGvvGALBD6bOSSKATz9JgjwifO8 +PgqUz1kXRwez2CtKLOOCFFtcIzEdWIzsa1ubNqLzgN7rD+XBkUc2uEcCgYEA+yTy +72EDMQVoIZOygytHsDNdy0iS2RsBbdurT27wkYuFpFUVWdbNSL+8haE+wJHseHcw +BD4WIMpU+hnS4p4OO8+6V7PiXOS5E/se91EJigZAoixgDUiC8ihojWgK9PYEavUo +hULWbayO59SxYWeUI4Ze0GP8Jw8vdB86ib4ulF0CgYEAgyzRuLjk05+iZODwQyDn +WSquov3W0rh51s7cw0LX2wWSQm8r9NGGYhs5kJ5sLwGxAKj2MNSWF4jBdrCZ6Gr+ +y4BGY0X209/+IAUC3jlfdSLIiF4OBlT6AvB1HfclhvtUVUp0OhLfnpvQ1UwYScRI +KcRLvovIoIzP2g3emfwjAz8CgYEAxUHhOhm1mwRHJNBQTuxok0HVMrze8n1eov39 +0RcvBvJSVp+pdHXdqX1HwqHCmxhCZuAeq8ZkNP8WvZYY6HwCbAIdt5MHgbT4lXQR +f2l8F5gPnhFCpExG5ZLNg/urV3oAQE4stHap21zEpdyOMhZb6Yc5424U+EzaFdgN +b3EcPtUCgYAkKvUlSnBbgiJz1iaN6fuTqH0efavuFGMhjNmG7GtpNXdgyl1OWIuc +Yu+tZtHXtKYf3B99GwPrFzw/7yfDwae5YeWmi2/pFTH96wv3brJBqkAWY8G5Rsmd +qF50p34vIFqUBniNRwSArx8t2dq/CuAMgLAtSjh70Q6ZAnCF85PD8Q== +-----END RSA PRIVATE KEY----- +EOF + +echo Contents of kind${number}.yaml +cat << EOF | tee kind${number}.yaml +kind: Cluster +apiVersion: kind.x-k8s.io/v1alpha4 +nodes: +- role: control-plane + image: ${kindest_node} + extraPortMappings: + - containerPort: 6443 + hostPort: 70${twodigits} + extraMounts: + - containerPath: /etc/kubernetes/oidc + hostPath: /tmp/oidc + labels: + ingress-ready: true + topology.kubernetes.io/region: ${region} + topology.kubernetes.io/zone: ${zone} +networking: + serviceSubnet: "10.$(echo $twodigits | sed 's/^0*//').0.0/16" + podSubnet: "10.1${twodigits}.0.0/16" +kubeadmConfigPatches: +- | + kind: ClusterConfiguration + apiServer: + extraArgs: + service-account-key-file: /etc/kubernetes/pki/sa.pub + service-account-key-file: /etc/kubernetes/oidc/sa-signer-pkcs8.pub + 
service-account-signing-key-file: /etc/kubernetes/oidc/sa-signer.key + service-account-issuer: https://solo-workshop-oidc.s3.us-east-1.amazonaws.com + api-audiences: sts.amazonaws.com + extraVolumes: + - name: oidc + hostPath: /etc/kubernetes/oidc + mountPath: /etc/kubernetes/oidc + readOnly: true + metadata: + name: config +containerdConfigPatches: +- |- + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:${reg_port}"] + endpoint = ["http://${reg_name}:${reg_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"] + endpoint = ["http://docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-docker.pkg.dev"] + endpoint = ["http://us-docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-central1-docker.pkg.dev"] + endpoint = ["http://us-central1-docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."quay.io"] + endpoint = ["http://quay:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"] + endpoint = ["http://gcr:${cache_port}"] +EOF +echo ----------------------------------------------------- + +kind create cluster --name kind${number} --config kind${number}.yaml +ipkind=$(docker inspect kind${number}-control-plane | jq -r '.[0].NetworkSettings.Networks[].IPAddress') +networkkind=$(echo ${ipkind} | awk -F. 
'{ print $1"."$2 }') +kubectl config set-cluster kind-kind${number} --server=https://${myip}:70${twodigits} --insecure-skip-tls-verify=true + +# Preload images +cat << EOF >> images.txt +quay.io/metallb/controller:v0.13.12 +quay.io/metallb/speaker:v0.13.12 +EOF +cat images.txt | while read image; do + docker pull $image || true + kind load docker-image $image --name kind${number} || true +done + +docker network connect "kind" "${reg_name}" || true +docker network connect "kind" docker || true +docker network connect "kind" us-docker || true +docker network connect "kind" us-central1-docker || true +docker network connect "kind" quay || true +docker network connect "kind" gcr || true + +for i in 1 2 3 4 5; do kubectl --context=kind-kind${number} apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml && break || sleep 15; done +kubectl --context=kind-kind${number} create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)" +kubectl --context=kind-kind${number} -n metallb-system rollout status deploy controller || true + +cat << EOF | tee metallb${number}.yaml +apiVersion: metallb.io/v1beta1 +kind: IPAddressPool +metadata: + name: first-pool + namespace: metallb-system +spec: + addresses: + - ${networkkind}.1${twodigits}.1-${networkkind}.1${twodigits}.254 +--- +apiVersion: metallb.io/v1beta1 +kind: L2Advertisement +metadata: + name: empty + namespace: metallb-system +EOF + +printf "Create IPAddressPool in kind-kind${number}\n" +for i in {1..10}; do +kubectl --context=kind-kind${number} apply -f metallb${number}.yaml && break +sleep 2 +done + +# connect the registry to the cluster network if not already connected +printf "Renaming context kind-kind${number} to ${name}\n" +for i in {1..100}; do + (kubectl config get-contexts -oname | grep ${name}) && break + kubectl config rename-context kind-kind${number} ${name} && break + printf " $i"/100 + sleep 2 + [ $i -lt 100 ] || exit 
1
+done
+
+# Document the local registry
+# https://github.com/kubernetes/enhancements/tree/master/keps/sig-cluster-lifecycle/generic/1755-communicating-a-local-registry
+cat </dev/null; then
+  myip=$(hostname -I | awk '{ print $1 }')
+else
+  myip=$(ipconfig getifaddr en0)
+fi
+
+# Function to determine the next available cluster number
+get_next_cluster_number() {
+  if ! kind get clusters 2>&1 | grep "^kind" > /dev/null; then
+    echo 1
+  else
+    highest_num=$(kind get clusters | grep "^kind" | tail -1 | cut -c 5-)
+    echo $((highest_num + 1))
+  fi
+}
+
+if [ -f /.dockerenv ]; then
+myip=$HOST_IP
+container=$(docker inspect $(docker ps -q) | jq -r ".[] | select(.Config.Hostname == \"$HOSTNAME\") | .Name" | cut -d/ -f2)
+docker network connect "kind" $container || true
+number=$(get_next_cluster_number)
+twodigits=$(printf "%02d\n" $number)
+fi
+
+reg_name='kind-registry'
+reg_port='5000'
+docker start "${reg_name}" 2>/dev/null || \
+docker run -d --restart=always -p "0.0.0.0:${reg_port}:5000" --name "${reg_name}" registry:2
+
+cache_port='5000'
+cat > registries < ${HOME}/.${cache_name}-config.yml </dev/null || \
+docker run -d --restart=always ${DEPLOY_EXTRA_PARAMS} -v ${HOME}/.${cache_name}-config.yml:/etc/docker/registry/config.yml --name "${cache_name}" registry:2
+done
+mkdir -p /tmp/oidc
+
+cat <<'EOF' >/tmp/oidc/sa-signer-pkcs8.pub
+-----BEGIN PUBLIC KEY-----
+MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA53YiBcrn7+ZK0Vb4odeA
+1riYdvEb8To4H6/HtF+OKzuCIXFQ+bRy7yMrDGITYpfYPrTZOgfdeTLZqOiAj+cL
+395nvxdly83SUrdh7ItfOPRluuuiPHnFn111wpyjBw5nut4Kx+M5MksNfA1hU0Zw
+zIM9OviX8iEF8xHWUtz4BAMDG8N6+zpLo0pAzaei5hKuLZ9dZOzHBC8VOW82cQMm
+5X5uOKsCHMtNSjqYUNB1DxN6xxM+odGWT/6xthPGk6YCxmO28YHPFZfiS2eAIpD8
+2p/16KQKU6TkZSrldkYxiHIPhu+5f9faZJG7dB9pLN1SfdTBio4PK5Mz9muLUCv9
+ywIDAQAB
+-----END PUBLIC KEY-----
+EOF
+
+cat <<'EOF' >/tmp/oidc/sa-signer.key
+-----BEGIN RSA PRIVATE KEY-----
+MIIEpAIBAAKCAQEA53YiBcrn7+ZK0Vb4odeA1riYdvEb8To4H6/HtF+OKzuCIXFQ
++bRy7yMrDGITYpfYPrTZOgfdeTLZqOiAj+cL395nvxdly83SUrdh7ItfOPRluuui
+PHnFn111wpyjBw5nut4Kx+M5MksNfA1hU0ZwzIM9OviX8iEF8xHWUtz4BAMDG8N6
++zpLo0pAzaei5hKuLZ9dZOzHBC8VOW82cQMm5X5uOKsCHMtNSjqYUNB1DxN6xxM+
+odGWT/6xthPGk6YCxmO28YHPFZfiS2eAIpD82p/16KQKU6TkZSrldkYxiHIPhu+5
+f9faZJG7dB9pLN1SfdTBio4PK5Mz9muLUCv9ywIDAQABAoIBAB8tro+RMYUDRHjG
+el9ypAxIeWEsQVNRQFYkW4ZUiNYSAgl3Ni0svX6xAg989peFVL+9pLVIcfDthJxY
+FVlNCjBxyQ/YmwHFC9vQkARJEd6eLUXsj8INtS0ubbp1VxCQRDDL0C/0z7OSoJJh
+SwboqjEiTJExA2a+RArmEDTBRzdi3t+kT8G23JcqOivrITt17K6bQYyJXw7/vUdc
+r/R+hfd5TqVq92VddzDT7RNJAxsbPPXjGnESlq1GALBDs+uBGYsP0fiEJb2nicSv
+z9fBnBeERhut1gcE0C0iLRQZb+3r8TitBtxrZv+0BHgXrkKtXDwWTqGEKOwC4dBn
+7nxkH2ECgYEA6+/DOTABGYOWOQftFkJMjcugzDrjoGpuXuVOTb65T+3FHAzU93zy
+3bt3wQxrlugluyy9Sc/PL3ck2LgUsPHZ+s7zsdGvvGALBD6bOSSKATz9JgjwifO8
+PgqUz1kXRwez2CtKLOOCFFtcIzEdWIzsa1ubNqLzgN7rD+XBkUc2uEcCgYEA+yTy
+72EDMQVoIZOygytHsDNdy0iS2RsBbdurT27wkYuFpFUVWdbNSL+8haE+wJHseHcw
+BD4WIMpU+hnS4p4OO8+6V7PiXOS5E/se91EJigZAoixgDUiC8ihojWgK9PYEavUo
+hULWbayO59SxYWeUI4Ze0GP8Jw8vdB86ib4ulF0CgYEAgyzRuLjk05+iZODwQyDn
+WSquov3W0rh51s7cw0LX2wWSQm8r9NGGYhs5kJ5sLwGxAKj2MNSWF4jBdrCZ6Gr+
+y4BGY0X209/+IAUC3jlfdSLIiF4OBlT6AvB1HfclhvtUVUp0OhLfnpvQ1UwYScRI
+KcRLvovIoIzP2g3emfwjAz8CgYEAxUHhOhm1mwRHJNBQTuxok0HVMrze8n1eov39
+0RcvBvJSVp+pdHXdqX1HwqHCmxhCZuAeq8ZkNP8WvZYY6HwCbAIdt5MHgbT4lXQR
+f2l8F5gPnhFCpExG5ZLNg/urV3oAQE4stHap21zEpdyOMhZb6Yc5424U+EzaFdgN
+b3EcPtUCgYAkKvUlSnBbgiJz1iaN6fuTqH0efavuFGMhjNmG7GtpNXdgyl1OWIuc
+Yu+tZtHXtKYf3B99GwPrFzw/7yfDwae5YeWmi2/pFTH96wv3brJBqkAWY8G5Rsmd
+qF50p34vIFqUBniNRwSArx8t2dq/CuAMgLAtSjh70Q6ZAnCF85PD8Q==
+-----END RSA PRIVATE KEY-----
+EOF
+
+echo Contents of kind${number}.yaml
+cat << EOF | tee kind${number}.yaml
+kind: Cluster
+apiVersion: kind.x-k8s.io/v1alpha4
+nodes:
+- role: control-plane
+  image: ${kindest_node}
+  extraPortMappings:
+  - containerPort: 6443
+    hostPort: 70${twodigits}
+  extraMounts:
+  - containerPath: /etc/kubernetes/oidc
+    hostPath: /tmp/oidc
+  labels:
+    ingress-ready: true
+    topology.kubernetes.io/region: ${region}
+    topology.kubernetes.io/zone: ${zone}
+networking:
+  serviceSubnet: "10.$(echo $twodigits | sed 's/^0*//').0.0/16"
+  podSubnet: "10.1${twodigits}.0.0/16"
+kubeadmConfigPatches:
+- |
+  kind: ClusterConfiguration
+  apiServer:
+    extraArgs:
+      service-account-key-file: /etc/kubernetes/pki/sa.pub
+      service-account-key-file: /etc/kubernetes/oidc/sa-signer-pkcs8.pub
+      service-account-signing-key-file: /etc/kubernetes/oidc/sa-signer.key
+      service-account-issuer: https://solo-workshop-oidc.s3.us-east-1.amazonaws.com
+      api-audiences: sts.amazonaws.com
+    extraVolumes:
+    - name: oidc
+      hostPath: /etc/kubernetes/oidc
+      mountPath: /etc/kubernetes/oidc
+      readOnly: true
+  metadata:
+    name: config
+containerdConfigPatches:
+- |-
+  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:${reg_port}"]
+    endpoint = ["http://${reg_name}:${reg_port}"]
+  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
+    endpoint = ["http://docker:${cache_port}"]
+  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-docker.pkg.dev"]
+    endpoint = ["http://us-docker:${cache_port}"]
+  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-central1-docker.pkg.dev"]
+    endpoint = ["http://us-central1-docker:${cache_port}"]
+  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."quay.io"]
+    endpoint = ["http://quay:${cache_port}"]
+  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"]
+    endpoint = ["http://gcr:${cache_port}"]
+EOF
+echo -----------------------------------------------------
+
+kind create cluster --name kind${number} --config kind${number}.yaml
+ipkind=$(docker inspect kind${number}-control-plane | jq -r '.[0].NetworkSettings.Networks[].IPAddress')
+networkkind=$(echo ${ipkind} | awk -F. '{ print $1"."$2 }')
+kubectl config set-cluster kind-kind${number} --server=https://${myip}:70${twodigits} --insecure-skip-tls-verify=true
+
+# Preload images
+cat << EOF >> images.txt
+quay.io/metallb/controller:v0.13.12
+quay.io/metallb/speaker:v0.13.12
+EOF
+cat images.txt | while read image; do
+  docker pull $image || true
+  kind load docker-image $image --name kind${number} || true
+done
+
+docker network connect "kind" "${reg_name}" || true
+docker network connect "kind" docker || true
+docker network connect "kind" us-docker || true
+docker network connect "kind" us-central1-docker || true
+docker network connect "kind" quay || true
+docker network connect "kind" gcr || true
+
+for i in 1 2 3 4 5; do kubectl --context=kind-kind${number} apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml && break || sleep 15; done
+kubectl --context=kind-kind${number} create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
+kubectl --context=kind-kind${number} -n metallb-system rollout status deploy controller || true
+
+cat << EOF | tee metallb${number}.yaml
+apiVersion: metallb.io/v1beta1
+kind: IPAddressPool
+metadata:
+  name: first-pool
+  namespace: metallb-system
+spec:
+  addresses:
+  - ${networkkind}.1${twodigits}.1-${networkkind}.1${twodigits}.254
+---
+apiVersion: metallb.io/v1beta1
+kind: L2Advertisement
+metadata:
+  name: empty
+  namespace: metallb-system
+EOF
+
+printf "Create IPAddressPool in kind-kind${number}\n"
+for i in {1..10}; do
+kubectl --context=kind-kind${number} apply -f metallb${number}.yaml && break
+sleep 2
+done
+
+# connect the registry to the cluster network if not already connected
+printf "Renaming context kind-kind${number} to ${name}\n"
+for i in {1..100}; do
+  (kubectl config get-contexts -oname | grep ${name}) && break
+  kubectl config rename-context kind-kind${number} ${name} && break
+  printf " $i"/100
+  sleep 2
+  [ $i -lt 100 ] || exit 1
+done
+
+# Document the local registry
+# https://github.com/kubernetes/enhancements/tree/master/keps/sig-cluster-lifecycle/generic/1755-communicating-a-local-registry
+cat </dev/null; then
+  myip=$(hostname -I | awk '{ print $1 }')
+else
+  myip=$(ipconfig getifaddr en0)
+fi
+
+# Function to determine the next available cluster number
+get_next_cluster_number() {
+  if ! kind get clusters 2>&1 | grep "^kind" > /dev/null; then
+    echo 1
+  else
+    highest_num=$(kind get clusters | grep "^kind" | tail -1 | cut -c 5-)
+    echo $((highest_num + 1))
+  fi
+}
+
+if [ -f /.dockerenv ]; then
+myip=$HOST_IP
+container=$(docker inspect $(docker ps -q) | jq -r ".[] | select(.Config.Hostname == \"$HOSTNAME\") | .Name" | cut -d/ -f2)
+docker network connect "kind" $container || true
+number=$(get_next_cluster_number)
+twodigits=$(printf "%02d\n" $number)
+fi
+
+reg_name='kind-registry'
+reg_port='5000'
+docker start "${reg_name}" 2>/dev/null || \
+docker run -d --restart=always -p "0.0.0.0:${reg_port}:5000" --name "${reg_name}" registry:2
+
+cache_port='5000'
+cat > registries < ${HOME}/.${cache_name}-config.yml </dev/null || \
+docker run -d --restart=always ${DEPLOY_EXTRA_PARAMS} -v ${HOME}/.${cache_name}-config.yml:/etc/docker/registry/config.yml --name "${cache_name}" registry:2
+done
+mkdir -p /tmp/oidc
+
+cat <<'EOF' >/tmp/oidc/sa-signer-pkcs8.pub
+-----BEGIN PUBLIC KEY-----
+MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA53YiBcrn7+ZK0Vb4odeA
+1riYdvEb8To4H6/HtF+OKzuCIXFQ+bRy7yMrDGITYpfYPrTZOgfdeTLZqOiAj+cL
+395nvxdly83SUrdh7ItfOPRluuuiPHnFn111wpyjBw5nut4Kx+M5MksNfA1hU0Zw
+zIM9OviX8iEF8xHWUtz4BAMDG8N6+zpLo0pAzaei5hKuLZ9dZOzHBC8VOW82cQMm
+5X5uOKsCHMtNSjqYUNB1DxN6xxM+odGWT/6xthPGk6YCxmO28YHPFZfiS2eAIpD8
+2p/16KQKU6TkZSrldkYxiHIPhu+5f9faZJG7dB9pLN1SfdTBio4PK5Mz9muLUCv9
+ywIDAQAB
+-----END PUBLIC KEY-----
+EOF
+
+cat <<'EOF' >/tmp/oidc/sa-signer.key
+-----BEGIN RSA PRIVATE KEY-----
+MIIEpAIBAAKCAQEA53YiBcrn7+ZK0Vb4odeA1riYdvEb8To4H6/HtF+OKzuCIXFQ
++bRy7yMrDGITYpfYPrTZOgfdeTLZqOiAj+cL395nvxdly83SUrdh7ItfOPRluuui
+PHnFn111wpyjBw5nut4Kx+M5MksNfA1hU0ZwzIM9OviX8iEF8xHWUtz4BAMDG8N6
++zpLo0pAzaei5hKuLZ9dZOzHBC8VOW82cQMm5X5uOKsCHMtNSjqYUNB1DxN6xxM+
+odGWT/6xthPGk6YCxmO28YHPFZfiS2eAIpD82p/16KQKU6TkZSrldkYxiHIPhu+5
+f9faZJG7dB9pLN1SfdTBio4PK5Mz9muLUCv9ywIDAQABAoIBAB8tro+RMYUDRHjG
+el9ypAxIeWEsQVNRQFYkW4ZUiNYSAgl3Ni0svX6xAg989peFVL+9pLVIcfDthJxY
+FVlNCjBxyQ/YmwHFC9vQkARJEd6eLUXsj8INtS0ubbp1VxCQRDDL0C/0z7OSoJJh
+SwboqjEiTJExA2a+RArmEDTBRzdi3t+kT8G23JcqOivrITt17K6bQYyJXw7/vUdc
+r/R+hfd5TqVq92VddzDT7RNJAxsbPPXjGnESlq1GALBDs+uBGYsP0fiEJb2nicSv
+z9fBnBeERhut1gcE0C0iLRQZb+3r8TitBtxrZv+0BHgXrkKtXDwWTqGEKOwC4dBn
+7nxkH2ECgYEA6+/DOTABGYOWOQftFkJMjcugzDrjoGpuXuVOTb65T+3FHAzU93zy
+3bt3wQxrlugluyy9Sc/PL3ck2LgUsPHZ+s7zsdGvvGALBD6bOSSKATz9JgjwifO8
+PgqUz1kXRwez2CtKLOOCFFtcIzEdWIzsa1ubNqLzgN7rD+XBkUc2uEcCgYEA+yTy
+72EDMQVoIZOygytHsDNdy0iS2RsBbdurT27wkYuFpFUVWdbNSL+8haE+wJHseHcw
+BD4WIMpU+hnS4p4OO8+6V7PiXOS5E/se91EJigZAoixgDUiC8ihojWgK9PYEavUo
+hULWbayO59SxYWeUI4Ze0GP8Jw8vdB86ib4ulF0CgYEAgyzRuLjk05+iZODwQyDn
+WSquov3W0rh51s7cw0LX2wWSQm8r9NGGYhs5kJ5sLwGxAKj2MNSWF4jBdrCZ6Gr+
+y4BGY0X209/+IAUC3jlfdSLIiF4OBlT6AvB1HfclhvtUVUp0OhLfnpvQ1UwYScRI
+KcRLvovIoIzP2g3emfwjAz8CgYEAxUHhOhm1mwRHJNBQTuxok0HVMrze8n1eov39
+0RcvBvJSVp+pdHXdqX1HwqHCmxhCZuAeq8ZkNP8WvZYY6HwCbAIdt5MHgbT4lXQR
+f2l8F5gPnhFCpExG5ZLNg/urV3oAQE4stHap21zEpdyOMhZb6Yc5424U+EzaFdgN
+b3EcPtUCgYAkKvUlSnBbgiJz1iaN6fuTqH0efavuFGMhjNmG7GtpNXdgyl1OWIuc
+Yu+tZtHXtKYf3B99GwPrFzw/7yfDwae5YeWmi2/pFTH96wv3brJBqkAWY8G5Rsmd
+qF50p34vIFqUBniNRwSArx8t2dq/CuAMgLAtSjh70Q6ZAnCF85PD8Q==
+-----END RSA PRIVATE KEY-----
+EOF
+
+echo Contents of kind${number}.yaml
+cat << EOF | tee kind${number}.yaml
+kind: Cluster
+apiVersion: kind.x-k8s.io/v1alpha4
+nodes:
+- role: control-plane
+  image: ${kindest_node}
+  extraPortMappings:
+  - containerPort: 6443
+    hostPort: 70${twodigits}
+  extraMounts:
+  - containerPath: /etc/kubernetes/oidc
+    hostPath: /tmp/oidc
+  labels:
+    ingress-ready: true
+    topology.kubernetes.io/region: ${region}
+    topology.kubernetes.io/zone: ${zone}
+networking:
+  serviceSubnet: "10.$(echo $twodigits | sed 's/^0*//').0.0/16"
+  podSubnet: "10.1${twodigits}.0.0/16"
+kubeadmConfigPatches:
+- |
+  kind: ClusterConfiguration
+  apiServer:
+    extraArgs:
+      service-account-key-file: /etc/kubernetes/pki/sa.pub
+      service-account-key-file: /etc/kubernetes/oidc/sa-signer-pkcs8.pub
+      service-account-signing-key-file: /etc/kubernetes/oidc/sa-signer.key
+      service-account-issuer: https://solo-workshop-oidc.s3.us-east-1.amazonaws.com
+      api-audiences: sts.amazonaws.com
+    extraVolumes:
+    - name: oidc
+      hostPath: /etc/kubernetes/oidc
+      mountPath: /etc/kubernetes/oidc
+      readOnly: true
+  metadata:
+    name: config
+containerdConfigPatches:
+- |-
+  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:${reg_port}"]
+    endpoint = ["http://${reg_name}:${reg_port}"]
+  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
+    endpoint = ["http://docker:${cache_port}"]
+  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-docker.pkg.dev"]
+    endpoint = ["http://us-docker:${cache_port}"]
+  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-central1-docker.pkg.dev"]
+    endpoint = ["http://us-central1-docker:${cache_port}"]
+  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."quay.io"]
+    endpoint = ["http://quay:${cache_port}"]
+  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"]
+    endpoint = ["http://gcr:${cache_port}"]
+EOF
+echo -----------------------------------------------------
+
+kind create cluster --name kind${number} --config kind${number}.yaml
+ipkind=$(docker inspect kind${number}-control-plane | jq -r '.[0].NetworkSettings.Networks[].IPAddress')
+networkkind=$(echo ${ipkind} | awk -F. '{ print $1"."$2 }')
+kubectl config set-cluster kind-kind${number} --server=https://${myip}:70${twodigits} --insecure-skip-tls-verify=true
+
+# Preload images
+cat << EOF >> images.txt
+quay.io/metallb/controller:v0.13.12
+quay.io/metallb/speaker:v0.13.12
+EOF
+cat images.txt | while read image; do
+  docker pull $image || true
+  kind load docker-image $image --name kind${number} || true
+done
+
+docker network connect "kind" "${reg_name}" || true
+docker network connect "kind" docker || true
+docker network connect "kind" us-docker || true
+docker network connect "kind" us-central1-docker || true
+docker network connect "kind" quay || true
+docker network connect "kind" gcr || true
+
+for i in 1 2 3 4 5; do kubectl --context=kind-kind${number} apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml && break || sleep 15; done
+kubectl --context=kind-kind${number} create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
+kubectl --context=kind-kind${number} -n metallb-system rollout status deploy controller || true
+
+cat << EOF | tee metallb${number}.yaml
+apiVersion: metallb.io/v1beta1
+kind: IPAddressPool
+metadata:
+  name: first-pool
+  namespace: metallb-system
+spec:
+  addresses:
+  - ${networkkind}.1${twodigits}.1-${networkkind}.1${twodigits}.254
+---
+apiVersion: metallb.io/v1beta1
+kind: L2Advertisement
+metadata:
+  name: empty
+  namespace: metallb-system
+EOF
+
+printf "Create IPAddressPool in kind-kind${number}\n"
+for i in {1..10}; do
+kubectl --context=kind-kind${number} apply -f metallb${number}.yaml && break
+sleep 2
+done
+
+# connect the registry to the cluster network if not already connected
+printf "Renaming context kind-kind${number} to ${name}\n"
+for i in {1..100}; do
+  (kubectl config get-contexts -oname | grep ${name}) && break
+  kubectl config rename-context kind-kind${number} ${name} && break
+  printf " $i"/100
+  sleep 2
+  [ $i -lt 100 ] || exit 1
+done
+
+# Document the local registry
+# https://github.com/kubernetes/enhancements/tree/master/keps/sig-cluster-lifecycle/generic/1755-communicating-a-local-registry
+cat </dev/null || true
+source ./scripts/assert.sh
+export MGMT=mgmt
+export CLUSTER1=cluster1
+export CLUSTER2=cluster2
+bash ./data/steps/deploy-kind-clusters/deploy-mgmt.sh
+bash ./data/steps/deploy-kind-clusters/deploy-cluster1.sh
+bash ./data/steps/deploy-kind-clusters/deploy-cluster2.sh
+./scripts/check.sh mgmt
+./scripts/check.sh cluster1
+./scripts/check.sh cluster2
+kubectl config use-context ${MGMT}
+cat <<'EOF' > ./test.js
+const helpers = require('./tests/chai-exec');
+
+describe("Clusters are healthy", () => {
+  const clusters = ["mgmt", "cluster1", "cluster2"];
+
+  clusters.forEach(cluster => {
+    it(`Cluster ${cluster} is healthy`, () => helpers.k8sObjectIsPresent({ context: cluster, namespace: "default", k8sType: "service", k8sObj: "kubernetes" }));
+  });
+});
+EOF
+echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-kind-clusters/tests/cluster-healthy.test.js.liquid"
+timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+export GLOO_MESH_VERSION=v2.6.7
+curl -sL https://run.solo.io/meshctl/install | sh -
+export PATH=$HOME/.gloo-mesh/bin:$PATH
+cat <<'EOF' > ./test.js
+var chai = require('chai');
+var expect = chai.expect;
+
+describe("Required environment variables should contain value", () => {
+  afterEach(function(done){
+    if(this.currentTest.currentRetry() > 0){
+      process.stdout.write(".");
+      setTimeout(done, 1000);
+    } else {
+      done();
+    }
+  });
+
+  it("Context environment variables should not be empty", () => {
+    expect(process.env.MGMT).not.to.be.empty
+    expect(process.env.CLUSTER1).not.to.be.empty
+    expect(process.env.CLUSTER2).not.to.be.empty
+  });
+
+  it("Gloo Mesh licence environment variables should not be empty", () => {
+    expect(process.env.GLOO_MESH_LICENSE_KEY).not.to.be.empty
+  });
+});
+EOF
+echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-and-register-gloo-mesh/tests/environment-variables.test.js.liquid"
+timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+kubectl --context ${MGMT} create ns gloo-mesh
+
+helm upgrade --install gloo-platform-crds gloo-platform-crds \
+  --repo https://storage.googleapis.com/gloo-platform/helm-charts \
+  --namespace gloo-mesh \
+  --kube-context ${MGMT} \
+  --set featureGates.insightsConfiguration=true \
+  --version 2.6.7
+
+helm upgrade --install gloo-platform gloo-platform \
+  --repo https://storage.googleapis.com/gloo-platform/helm-charts \
+  --namespace gloo-mesh \
+  --kube-context ${MGMT} \
+  --version 2.6.7 \
+  -f -< ./test.js
+
+const helpers = require('./tests/chai-exec');
+
+describe("MGMT server is healthy", () => {
+  let cluster = process.env.MGMT;
+  let deployments = ["gloo-mesh-mgmt-server","gloo-mesh-redis","gloo-telemetry-gateway","prometheus-server"];
+  deployments.forEach(deploy => {
+    it(deploy + ' pods are ready in ' + cluster, () => helpers.checkDeployment({ context: cluster, namespace: "gloo-mesh", k8sObj: deploy }));
+  });
+});
+EOF
+echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-and-register-gloo-mesh/tests/check-deployment.test.js.liquid"
+timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+cat <<'EOF' > ./test.js
+const chaiExec = require("@jsdevtools/chai-exec");
+var chai = require('chai');
+var expect = chai.expect;
+chai.use(chaiExec);
+
+afterEach(function (done) {
+  if (this.currentTest.currentRetry() > 0) {
+    process.stdout.write(".");
+    setTimeout(done, 1000);
+  } else {
+    done();
+  }
+});
+EOF
+echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-and-register-gloo-mesh/tests/get-gloo-mesh-mgmt-server-ip.test.js.liquid"
+timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+export ENDPOINT_GLOO_MESH=$(kubectl --context ${MGMT} -n gloo-mesh get svc gloo-mesh-mgmt-server -o jsonpath='{.status.loadBalancer.ingress[0].*}'):9900
+export HOST_GLOO_MESH=$(echo ${ENDPOINT_GLOO_MESH%:*})
+export ENDPOINT_TELEMETRY_GATEWAY=$(kubectl --context ${MGMT} -n gloo-mesh get svc gloo-telemetry-gateway -o jsonpath='{.status.loadBalancer.ingress[0].*}'):4317
+export ENDPOINT_GLOO_MESH_UI=$(kubectl --context ${MGMT} -n gloo-mesh get svc gloo-mesh-ui -o jsonpath='{.status.loadBalancer.ingress[0].*}'):8090
+cat <<'EOF' > ./test.js
+const dns = require('dns');
+const chaiHttp = require("chai-http");
+const chai = require("chai");
+const expect = chai.expect;
+chai.use(chaiHttp);
+const { waitOnFailedTest } = require('./tests/utils');
+
+afterEach(function(done) { waitOnFailedTest(done, this.currentTest.currentRetry())});
+
+describe("Address '" + process.env.HOST_GLOO_MESH + "' can be resolved in DNS", () => {
+  it(process.env.HOST_GLOO_MESH + ' can be resolved', (done) => {
+    return dns.lookup(process.env.HOST_GLOO_MESH, (err, address, family) => {
+      expect(address).to.be.an.ip;
+      done();
+    });
+  });
+});
+EOF
+echo "executing test ./gloo-mesh-2-0/tests/can-resolve.test.js.liquid"
+timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+kubectl apply --context ${MGMT} -f - < ca.crt
+kubectl create secret generic relay-root-tls-secret -n gloo-mesh --context ${CLUSTER1} --from-file ca.crt=ca.crt
+rm ca.crt
+
+kubectl get secret relay-identity-token-secret -n gloo-mesh --context ${MGMT} -o jsonpath='{.data.token}' | base64 -d > token
+kubectl create secret generic relay-identity-token-secret -n gloo-mesh --context ${CLUSTER1} --from-file token=token
+rm token
+
+helm upgrade --install gloo-platform-crds gloo-platform-crds \
+  --repo https://storage.googleapis.com/gloo-platform/helm-charts \
+  --namespace gloo-mesh \
+  --kube-context ${CLUSTER1} \
+  --version 2.6.7
+
+helm upgrade --install gloo-platform gloo-platform \
+  --repo https://storage.googleapis.com/gloo-platform/helm-charts \
+  --namespace gloo-mesh \
+  --kube-context ${CLUSTER1} \
+  --version 2.6.7 \
+  -f -< ca.crt
+kubectl create secret generic relay-root-tls-secret -n gloo-mesh --context ${CLUSTER2} --from-file ca.crt=ca.crt
+rm ca.crt
+
+kubectl get secret relay-identity-token-secret -n gloo-mesh --context ${MGMT} -o jsonpath='{.data.token}' | base64 -d > token
+kubectl create secret generic relay-identity-token-secret -n gloo-mesh --context ${CLUSTER2} --from-file token=token
+rm token
+
+helm upgrade --install gloo-platform-crds gloo-platform-crds \
+  --repo https://storage.googleapis.com/gloo-platform/helm-charts \
+  --namespace gloo-mesh \
+  --kube-context ${CLUSTER2} \
+  --version 2.6.7
+
+helm upgrade --install gloo-platform gloo-platform \
+  --repo https://storage.googleapis.com/gloo-platform/helm-charts \
+  --namespace gloo-mesh \
+  --kube-context ${CLUSTER2} \
+  --version 2.6.7 \
+  -f -< ./test.js
+var chai = require('chai');
+var expect = chai.expect;
+const helpers = require('./tests/chai-exec');
+describe("Cluster registration", () => {
+  it("cluster1 is registered", () => {
+    podName = helpers.getOutputForCommand({ command: "kubectl -n gloo-mesh get pods -l app=gloo-mesh-mgmt-server -o jsonpath='{.items[0].metadata.name}' --context " + process.env.MGMT }).replaceAll("'", "");
+    command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh debug -q -i " + podName + " --image=curlimages/curl -- curl -s http://localhost:9091/metrics" }).replaceAll("'", "");
+    expect(command).to.contain("cluster1");
+  });
+  it("cluster2 is registered", () => {
+    podName = helpers.getOutputForCommand({ command: "kubectl -n gloo-mesh get pods -l app=gloo-mesh-mgmt-server -o jsonpath='{.items[0].metadata.name}' --context " + process.env.MGMT }).replaceAll("'", "");
+    command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh debug -q -i " + podName + " --image=curlimages/curl -- curl -s http://localhost:9091/metrics" }).replaceAll("'", "");
+    expect(command).to.contain("cluster2");
+  });
+});
+EOF
+echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-and-register-gloo-mesh/tests/cluster-registration.test.js.liquid"
+timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+curl -L https://istio.io/downloadIstio | sh -
+
+if [ -d "istio-"*/ ]; then
+  cd istio-*/
+  export PATH=$PWD/bin:$PATH
+  cd ..
+fi
+cat <<'EOF' > ./test.js
+const chaiExec = require("@jsdevtools/chai-exec");
+var chai = require('chai');
+var expect = chai.expect;
+chai.use(chaiExec);
+
+afterEach(function (done) {
+  if (this.currentTest.currentRetry() > 0) {
+    process.stdout.write(".");
+    setTimeout(done, 1000);
+  } else {
+    done();
+  }
+});
+EOF
+echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/istio-lifecycle-manager-install/tests/istio-version.test.js.liquid"
+timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+kubectl --context ${CLUSTER1} create ns istio-gateways
+
+kubectl apply --context ${CLUSTER1} -f - < ./test.js
+
+const helpers = require('./tests/chai-exec');
+
+const chaiExec = require("@jsdevtools/chai-exec");
+const helpersHttp = require('./tests/chai-http');
+const chai = require("chai");
+const expect = chai.expect;
+
+afterEach(function (done) {
+  if (this.currentTest.currentRetry() > 0) {
+    process.stdout.write(".");
+    setTimeout(done, 1000);
+  } else {
+    done();
+  }
+});
+
+describe("Checking Istio installation", function() {
+  it('istiod pods are ready in cluster ' + process.env.CLUSTER1, () => helpers.checkDeploymentsWithLabels({ context: process.env.CLUSTER1, namespace: "istio-system", labels: "app=istiod", instances: 1 }));
+  it('gateway pods are ready in cluster ' + process.env.CLUSTER1, () => helpers.checkDeploymentsWithLabels({ context: process.env.CLUSTER1, namespace: "istio-gateways", labels: "app=istio-ingressgateway", instances: 2 }));
+  it('istiod pods are ready in cluster ' + process.env.CLUSTER2, () => helpers.checkDeploymentsWithLabels({ context: process.env.CLUSTER2, namespace: "istio-system", labels: "app=istiod", instances: 1 }));
+  it('gateway pods are ready in cluster ' + process.env.CLUSTER2, () => helpers.checkDeploymentsWithLabels({ context: process.env.CLUSTER2, namespace: "istio-gateways", labels: "app=istio-ingressgateway", instances: 2 }));
+  it("Gateways have an ip attached in cluster " + process.env.CLUSTER1, () => {
+    let cli = chaiExec("kubectl --context " + process.env.CLUSTER1 + " -n istio-gateways get svc -l app=istio-ingressgateway -o jsonpath='{.items}'");
+    cli.stderr.should.be.empty;
+    let deployments = JSON.parse(cli.stdout.slice(1,-1));
+    expect(deployments).to.have.lengthOf(2);
+    deployments.forEach((deployment) => {
+      expect(deployment.status.loadBalancer).to.have.property("ingress");
+    });
+  });
+  it("Gateways have an ip attached in cluster " + process.env.CLUSTER2, () => {
+    let cli = chaiExec("kubectl --context " + process.env.CLUSTER2 + " -n istio-gateways get svc -l app=istio-ingressgateway -o jsonpath='{.items}'");
+    cli.stderr.should.be.empty;
+    let deployments = JSON.parse(cli.stdout.slice(1,-1));
+    expect(deployments).to.have.lengthOf(2);
+    deployments.forEach((deployment) => {
+      expect(deployment.status.loadBalancer).to.have.property("ingress");
+    });
+  });
+});
+
+EOF
+echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/istio-lifecycle-manager-install/tests/istio-ready.test.js.liquid"
+timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+timeout 2m bash -c "until [[ \$(kubectl --context ${CLUSTER1} -n istio-gateways get svc -l istio=ingressgateway -o json | jq '.items[0].status.loadBalancer | length') -gt 0 ]]; do
+  sleep 1
+done"
+export HOST_GW_CLUSTER1="$(kubectl --context ${CLUSTER1} -n istio-gateways get svc -l istio=ingressgateway -o jsonpath='{.items[0].status.loadBalancer.ingress[0].*}')"
+export HOST_GW_CLUSTER2="$(kubectl --context ${CLUSTER2} -n istio-gateways get svc -l istio=ingressgateway -o jsonpath='{.items[0].status.loadBalancer.ingress[0].*}')"
+cat <<'EOF' > ./test.js
+const dns = require('dns');
+const chaiHttp = require("chai-http");
+const chai = require("chai");
+const expect = chai.expect;
+chai.use(chaiHttp);
+const { waitOnFailedTest } = require('./tests/utils');
+
+afterEach(function(done) { waitOnFailedTest(done, this.currentTest.currentRetry())});
+
+describe("Address '" + process.env.HOST_GW_CLUSTER1 + "' can be resolved in DNS", () => {
+  it(process.env.HOST_GW_CLUSTER1 + ' can be resolved', (done) => {
+    return dns.lookup(process.env.HOST_GW_CLUSTER1, (err, address, family) => {
+      expect(address).to.be.an.ip;
+      done();
+    });
+  });
+});
+EOF
+echo "executing test ./gloo-mesh-2-0/tests/can-resolve.test.js.liquid"
+timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+cat <<'EOF' > ./test.js
+const dns = require('dns');
+const chaiHttp = require("chai-http");
+const chai = require("chai");
+const expect = chai.expect;
+chai.use(chaiHttp);
+const { waitOnFailedTest } = require('./tests/utils');
+
+afterEach(function(done) { waitOnFailedTest(done, this.currentTest.currentRetry())});
+
+describe("Address '" + process.env.HOST_GW_CLUSTER2 + "' can be resolved in DNS", () => {
+  it(process.env.HOST_GW_CLUSTER2 + ' can be resolved', (done) => {
+    return dns.lookup(process.env.HOST_GW_CLUSTER2, (err, address, family) => {
+      expect(address).to.be.an.ip;
+      done();
+    });
+  });
+});
+EOF
+echo "executing test ./gloo-mesh-2-0/tests/can-resolve.test.js.liquid"
+timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+kubectl --context ${CLUSTER1} create ns bookinfo-frontends
+kubectl --context ${CLUSTER1} create ns bookinfo-backends
+kubectl --context ${CLUSTER1} label namespace bookinfo-frontends istio.io/rev=1-23 --overwrite
+kubectl --context ${CLUSTER1} label namespace bookinfo-backends istio.io/rev=1-23 --overwrite
+
+# Deploy the frontend bookinfo service in the bookinfo-frontends namespace
+kubectl --context ${CLUSTER1} -n bookinfo-frontends apply -f data/steps/deploy-bookinfo/productpage-v1.yaml
+
+# Deploy the backend bookinfo services in the bookinfo-backends namespace for all versions less than v3
+kubectl --context ${CLUSTER1} -n bookinfo-backends apply \
+  -f data/steps/deploy-bookinfo/details-v1.yaml \
+  -f data/steps/deploy-bookinfo/ratings-v1.yaml \
+  -f data/steps/deploy-bookinfo/reviews-v1-v2.yaml
+
+# Update the reviews service to display where it is coming from
+kubectl --context ${CLUSTER1} -n bookinfo-backends set env deploy/reviews-v1 CLUSTER_NAME=${CLUSTER1}
+kubectl --context ${CLUSTER1} -n bookinfo-backends set env deploy/reviews-v2 CLUSTER_NAME=${CLUSTER1}
+echo -n Waiting for bookinfo pods to be ready...
+timeout -v 5m bash -c "
+until [[ \$(kubectl --context ${CLUSTER1} -n bookinfo-frontends get deploy -o json | jq '[.items[].status.readyReplicas] | add') -eq 1 && \\
+  \$(kubectl --context ${CLUSTER1} -n bookinfo-backends get deploy -o json | jq '[.items[].status.readyReplicas] | add') -eq 4 ]] 2>/dev/null
+do
+  sleep 1
+  echo -n .
+done"
+echo
+kubectl --context ${CLUSTER2} create ns bookinfo-frontends
+kubectl --context ${CLUSTER2} create ns bookinfo-backends
+kubectl --context ${CLUSTER2} label namespace bookinfo-frontends istio.io/rev=1-23 --overwrite
+kubectl --context ${CLUSTER2} label namespace bookinfo-backends istio.io/rev=1-23 --overwrite
+
+# Deploy the frontend bookinfo service in the bookinfo-frontends namespace
+kubectl --context ${CLUSTER2} -n bookinfo-frontends apply -f data/steps/deploy-bookinfo/productpage-v1.yaml
+# Deploy the backend bookinfo services in the bookinfo-backends namespace for all versions
+kubectl --context ${CLUSTER2} -n bookinfo-backends apply \
+  -f data/steps/deploy-bookinfo/details-v1.yaml \
+  -f data/steps/deploy-bookinfo/ratings-v1.yaml \
+  -f data/steps/deploy-bookinfo/reviews-v1-v2.yaml \
+  -f data/steps/deploy-bookinfo/reviews-v3.yaml
+# Update the reviews service to display where it is coming from
+kubectl --context ${CLUSTER2} -n bookinfo-backends set env deploy/reviews-v1 CLUSTER_NAME=${CLUSTER2}
+kubectl --context ${CLUSTER2} -n bookinfo-backends set env deploy/reviews-v2 CLUSTER_NAME=${CLUSTER2}
+kubectl --context ${CLUSTER2} -n bookinfo-backends set env deploy/reviews-v3 CLUSTER_NAME=${CLUSTER2}
+
+echo -n Waiting for bookinfo pods to be ready...
+timeout -v 5m bash -c "
+until [[ \$(kubectl --context ${CLUSTER2} -n bookinfo-frontends get deploy -o json | jq '[.items[].status.readyReplicas] | add') -eq 1 && \\
+  \$(kubectl --context ${CLUSTER2} -n bookinfo-backends get deploy -o json | jq '[.items[].status.readyReplicas] | add') -eq 5 ]] 2>/dev/null
+do
+  sleep 1
+  echo -n .
+done"
+echo
+kubectl --context ${CLUSTER2} -n bookinfo-frontends get pods && kubectl --context ${CLUSTER2} -n bookinfo-backends get pods
+cat <<'EOF' > ./test.js
+const helpers = require('./tests/chai-exec');
+
+describe("Bookinfo app", () => {
+  let cluster = process.env.CLUSTER1
+  let deployments = ["productpage-v1"];
+  deployments.forEach(deploy => {
+    it(deploy + ' pods are ready in ' + cluster, () => helpers.checkDeployment({ context: cluster, namespace: "bookinfo-frontends", k8sObj: deploy }));
+  });
+  deployments = ["ratings-v1", "details-v1", "reviews-v1", "reviews-v2"];
+  deployments.forEach(deploy => {
+    it(deploy + ' pods are ready in ' + cluster, () => helpers.checkDeployment({ context: cluster, namespace: "bookinfo-backends", k8sObj: deploy }));
+  });
+  cluster = process.env.CLUSTER2
+  deployments = ["productpage-v1"];
+  deployments.forEach(deploy => {
+    it(deploy + ' pods are ready in ' + cluster, () => helpers.checkDeployment({ context: cluster, namespace: "bookinfo-frontends", k8sObj: deploy }));
+  });
+  deployments = ["ratings-v1", "details-v1", "reviews-v1", "reviews-v2", "reviews-v3"];
+  deployments.forEach(deploy => {
+    it(deploy + ' pods are ready in ' + cluster, () => helpers.checkDeployment({ context: cluster, namespace: "bookinfo-backends", k8sObj: deploy }));
+  });
+});
+EOF
+echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/deploy-bookinfo/tests/check-bookinfo.test.js.liquid"
+timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+kubectl --context ${CLUSTER1} create ns httpbin
+kubectl apply --context ${CLUSTER1} -f - </dev/null
+do
+  sleep 1
+  echo -n .
+done"
+echo
+kubectl --context ${CLUSTER1} -n httpbin get pods
+cat <<'EOF' > ./test.js
+const helpers = require('./tests/chai-exec');
+
+describe("httpbin app", () => {
+  let cluster = process.env.CLUSTER1
+
+  let deployments = ["not-in-mesh", "in-mesh"];
+
+  deployments.forEach(deploy => {
+    it(deploy + ' pods are ready in ' + cluster, () => helpers.checkDeployment({ context: cluster, namespace: "httpbin", k8sObj: deploy }));
+  });
+});
+EOF
+echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/httpbin/deploy-httpbin/tests/check-httpbin.test.js.liquid"
+timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+kubectl apply --context ${CLUSTER1} -f - < ./test.js
+const helpers = require('./tests/chai-http');
+
+describe("productpage is available (HTTP)", () => {
+  it('/productpage is available in cluster1', () => helpers.checkURL({ host: `http://cluster1-bookinfo.example.com`, path: '/productpage', retCode: 200 }));
+})
+EOF
+echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/gateway-expose-istio/tests/productpage-available.test.js.liquid"
+timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
+  -keyout tls.key -out tls.crt -subj "/CN=*"
+kubectl --context ${CLUSTER1} -n istio-gateways create secret generic tls-secret \
+--from-file=tls.key=tls.key \
+--from-file=tls.crt=tls.crt
+
+kubectl --context ${CLUSTER2} -n istio-gateways create secret generic tls-secret \
+--from-file=tls.key=tls.key \
+--from-file=tls.crt=tls.crt
+kubectl apply --context ${CLUSTER1} -f - < ./test.js
+const helpers = require('./tests/chai-http');
+
+describe("productpage is available (HTTPS)", () => {
+  it('/productpage is available in cluster1', () => helpers.checkURL({ host: `https://cluster1-bookinfo.example.com`, path: '/productpage', retCode: 200 }));
+})
+EOF
+echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/gateway-expose-istio/tests/productpage-available-secure.test.js.liquid"
+timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+cat <<'EOF' > ./test.js
+var chai = require('chai');
+var expect = chai.expect;
+const helpers = require('./tests/chai-exec');
+
+describe("Otel metrics", () => {
+  it("cluster1 is sending metrics to telemetryGateway", () => {
+    podName = helpers.getOutputForCommand({ command: "kubectl -n gloo-mesh get pods -l app.kubernetes.io/name=prometheus -o jsonpath='{.items[0].metadata.name}' --context " + process.env.MGMT }).replaceAll("'", "");
+    command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh debug -q -i " + podName + " --image=curlimages/curl -- curl -s http://localhost:9090/api/v1/query?query=istio_requests_total" }).replaceAll("'", "");
+    expect(command).to.contain("cluster\":\"cluster1");
+  });
+});
+
+
+EOF
+echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/gateway-expose-istio/tests/otel-metrics.test.js.liquid"
+timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=150 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+cat <<'EOF' > ./test.js
+const helpers = require('./tests/chai-http');
+const puppeteer = require('puppeteer');
+const chai = require('chai');
+const expect = chai.expect;
+const GraphPage = require('./tests/pages/gloo-ui/graph-page');
+const { recognizeTextFromScreenshot } = require('./tests/utils/image-ocr-processor');
+const { enhanceBrowser } = require('./tests/utils/enhance-browser');
+
+afterEach(function (done) {
+  if (this.currentTest.currentRetry() > 0) {
+    process.stdout.write(".");
+    setTimeout(done, 4000);
+  } else {
+    done();
+  }
+}); + +describe("graph page", function () { + // UI tests often require a longer timeout. + // So here we force it to a minimum of 30 seconds. + const currentTimeout = this.timeout(); + this.timeout(Math.max(currentTimeout, 30000)); + + let browser; + let page; + let graphPage; + + beforeEach(async function () { + browser = await puppeteer.launch({ + headless: "new", + slowMo: 40, + ignoreHTTPSErrors: true, + args: ['--no-sandbox', '--disable-setuid-sandbox'], + }); + browser = enhanceBrowser(browser, this.currentTest.title); + page = await browser.newPage(); + graphPage = new GraphPage(page); + await Promise.all(Array.from({ length: 20 }, () => + helpers.checkURL({ host: `https://cluster1-bookinfo.example.com`, path: '/productpage', retCode: 200 }))); + }); + + afterEach(async function () { + await browser.close(); + }); + + it("should show ingress gateway and product page", async function () { + await graphPage.navigateTo(`http://${process.env.ENDPOINT_GLOO_MESH_UI}/graph`); + + // Select the clusters and namespaces so that the graph shows + await graphPage.selectClusters(['cluster1', 'cluster2']); + await graphPage.selectNamespaces(['istio-gateways', 'bookinfo-backends', 'bookinfo-frontends']); + // Disabling Cilium nodes due to this issue: https://github.com/solo-io/gloo-mesh-enterprise/issues/18623 + await graphPage.toggleLayoutSettings(); + await graphPage.disableCiliumNodes(); + await graphPage.toggleLayoutSettings(); + + // Capture a screenshot of the canvas and run text recognition + await graphPage.fullscreenGraph(); + await graphPage.centerGraph(); + const screenshotPath = 'ui-test-data/canvas.png'; + await graphPage.captureCanvasScreenshot(screenshotPath); + + const recognizedTexts = await recognizeTextFromScreenshot( + screenshotPath, + ["istio-ingressgateway", "productpage-v1", "details-v1", "ratings-v1", "reviews-v1", "reviews-v2"]); + + const flattenedRecognizedText = recognizedTexts.join(",").replace(/\n/g, ''); + console.log("Flattened recognized 
text:", flattenedRecognizedText); + + // Validate recognized texts + expect(flattenedRecognizedText).to.include("istio-ingressgateway"); + expect(flattenedRecognizedText).to.include("productpage-v1"); + expect(flattenedRecognizedText).to.include("details-v1"); + expect(flattenedRecognizedText).to.include("ratings-v1"); + expect(flattenedRecognizedText).to.include("reviews-v1"); + expect(flattenedRecognizedText).to.include("reviews-v2"); + }); +}); + +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/gateway-expose-istio/tests/graph-shows-traffic.test.js.liquid" +timeout --signal=INT 7m mocha ./test.js --timeout 120000 --retries=3 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +cat <<'EOF' > ./test.js +const helpersHttp = require('./tests/chai-http'); +const InsightsPage = require('./tests/pages/insights-page'); +const constants = require('./tests/pages/constants'); +const puppeteer = require('puppeteer'); +var chai = require('chai'); +var expect = chai.expect; +const { enhanceBrowser } = require('./tests/utils/enhance-browser'); + +afterEach(function (done) { + if (this.currentTest.currentRetry() > 0) { + process.stdout.write("."); + setTimeout(done, 4000); + } else { + done(); + } +}); + +describe("Insights UI", function() { + // UI tests often require a longer timeout. + // So here we force it to a minimum of 30 seconds. 
+  const currentTimeout = this.timeout();
+  this.timeout(Math.max(currentTimeout, 30000));
+
+  let browser;
+  let insightsPage;
+
+  // Use Mocha's 'beforeEach' hook to set up Puppeteer
+  beforeEach(async function() {
+    browser = await puppeteer.launch({
+      headless: "new",
+      slowMo: 40,
+      ignoreHTTPSErrors: true,
+      args: ['--no-sandbox', '--disable-setuid-sandbox'],
+    });
+    browser = enhanceBrowser(browser, this.currentTest.title);
+    let page = await browser.newPage();
+    insightsPage = new InsightsPage(page);
+  });
+
+  // Use Mocha's 'afterEach' hook to close Puppeteer
+  afterEach(async function() {
+    await browser.close();
+  });
+
+  it("should display BP0001 warning with text 'Globally scoped routing'", async () => {
+    await insightsPage.navigateTo(`http://${process.env.ENDPOINT_GLOO_MESH_UI}/insights`);
+    await insightsPage.selectClusters(['cluster1', 'cluster2']);
+    await insightsPage.selectInsightTypes([constants.InsightType.BP]);
+    const data = await insightsPage.getTableDataRows()
+    expect(data.some(item => item.includes("Globally scoped routing"))).to.be.true;
+  });
+
+  it("should have quick resource state filters", async () => {
+    await insightsPage.navigateTo(`http://${process.env.ENDPOINT_GLOO_MESH_UI}/insights`);
+    const healthy = await insightsPage.getHealthyResourcesCount();
+    const warning = await insightsPage.getWarningResourcesCount();
+    const error = await insightsPage.getErrorResourcesCount();
+    expect(healthy).to.be.greaterThan(0);
+    expect(warning).to.be.greaterThan(0);
+    expect(error).to.be.a('number');
+  });
+});
+
+EOF
+echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/insights-intro/tests/insight-ui-BP0001.test.js.liquid"
+timeout --signal=INT 5m mocha ./test.js --timeout 120000 --retries=20 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+cat <<'EOF' > ./test.js
+var chai = require('chai');
+var expect = chai.expect;
+const helpers = require('./tests/chai-exec');
+
+describe("Insight 
generation", () => { + it("Insight BP0002 has been triggered in the source (MGMT)", () => { + helpers.getOutputForCommand({ command: `kubectl --context ${process.env.MGMT} -n gloo-mesh patch svc gloo-mesh-mgmt-server -p '{"spec":{"ports": [{"port": 9094,"name":"http-insights"}]}}'` }); + helpers.getOutputForCommand({ command: "kubectl -n gloo-mesh run debug --image=nginx:1.25.3 --context " + process.env.MGMT }); + command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh exec debug -- curl -s http://gloo-mesh-mgmt-server.gloo-mesh:9094/metrics" }).replaceAll("'", ""); + const regex = /gloo_mesh_insights{.*BP0002.*} 1/; + const match = command.match(regex); + expect(match).to.not.be.null; + }); + + it("Insight BP0002 has been triggered in PROMETHEUS", () => { + helpers.getOutputForCommand({ command: `kubectl --context ${process.env.MGMT} -n gloo-mesh patch svc prometheus-server -p '{"spec":{"ports": [{"port": 9090,"name":"http-metrics"}]}}'` }); + command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh exec debug -- curl -s 'http://prometheus-server.gloo-mesh:9090/api/v1/query?query=gloo_mesh_insights'" }).replaceAll("'", ""); + let result = JSON.parse(command); + let active = false; + result.data.result.forEach(item => { + if(item.metric.code == "BP0002" && item.value[1] > 0) { + active = true + } + }); + expect(active).to.be.true; + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/insights-intro/tests/insight-metrics.test.js.liquid" +timeout --signal=INT 5m mocha ./test.js --timeout 120000 --retries=20 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl apply --context ${MGMT} -f - < ./test.js +const helpersHttp = require('./tests/chai-http'); +const InsightsPage = require('./tests/pages/insights-page'); +const constants = require('./tests/pages/constants'); +const puppeteer = 
require('puppeteer');
+const { enhanceBrowser } = require('./tests/utils/enhance-browser');
+var chai = require('chai');
+var expect = chai.expect;
+
+afterEach(function (done) {
+  if (this.currentTest.currentRetry() > 0) {
+    process.stdout.write(".");
+    setTimeout(done, 4000);
+  } else {
+    done();
+  }
+});
+
+describe("Insights UI", function() {
+  // UI tests often require a longer timeout.
+  // So here we force it to a minimum of 30 seconds.
+  const currentTimeout = this.timeout();
+  this.timeout(Math.max(currentTimeout, 30000));
+
+  let browser;
+  let insightsPage;
+
+  // Use Mocha's 'beforeEach' hook to set up Puppeteer
+  beforeEach(async function() {
+    browser = await puppeteer.launch({
+      headless: "new",
+      slowMo: 40,
+      ignoreHTTPSErrors: true,
+      args: ['--no-sandbox', '--disable-setuid-sandbox'],
+    });
+    browser = enhanceBrowser(browser, this.currentTest.title);
+    let page = await browser.newPage();
+    await page.setViewport({ width: 1500, height: 1000 });
+    insightsPage = new InsightsPage(page);
+  });
+
+  // Use Mocha's 'afterEach' hook to close Puppeteer
+  afterEach(async function() {
+    await browser.close();
+  });
+
+  it("should not display BP0002 in the UI", async () => {
+    await insightsPage.navigateTo(`http://${process.env.ENDPOINT_GLOO_MESH_UI}/insights`);
+    await insightsPage.selectClusters(['cluster1', 'cluster2']);
+    await insightsPage.selectInsightTypes([constants.InsightType.BP]);
+    const data = await insightsPage.getTableDataRows()
+    expect(data.some(item => item.includes("is not namespaced"))).to.be.false;
+  });
+});
+EOF
+echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/insights-intro/tests/insight-not-ui-BP0002.test.js.liquid"
+timeout --signal=INT 5m mocha ./test.js --timeout 120000 --retries=20 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+kubectl apply --context ${CLUSTER1} -f - < ./test.js
+const helpersHttp = require('./tests/chai-http');
+const InsightsPage = 
require('./tests/pages/insights-page');
+const constants = require('./tests/pages/constants');
+const puppeteer = require('puppeteer');
+const { enhanceBrowser } = require('./tests/utils/enhance-browser');
+var chai = require('chai');
+var expect = chai.expect;
+
+afterEach(function (done) {
+  if (this.currentTest.currentRetry() > 0) {
+    process.stdout.write(".");
+    setTimeout(done, 4000);
+  } else {
+    done();
+  }
+});
+
+describe("Insights UI", function() {
+  // UI tests often require a longer timeout.
+  // So here we force it to a minimum of 30 seconds.
+  const currentTimeout = this.timeout();
+  this.timeout(Math.max(currentTimeout, 30000));
+
+  let browser;
+  let insightsPage;
+
+  // Use Mocha's 'beforeEach' hook to set up Puppeteer
+  beforeEach(async function() {
+    browser = await puppeteer.launch({
+      headless: "new",
+      slowMo: 40,
+      ignoreHTTPSErrors: true,
+      args: ['--no-sandbox', '--disable-setuid-sandbox'],
+    });
+    browser = enhanceBrowser(browser, this.currentTest.title);
+    let page = await browser.newPage();
+    await page.setViewport({ width: 1500, height: 1000 });
+    insightsPage = new InsightsPage(page);
+  });
+
+  // Use Mocha's 'afterEach' hook to close Puppeteer
+  afterEach(async function() {
+    await browser.close();
+  });
+
+  it("should not display BP0001 in the UI", async () => {
+    await insightsPage.navigateTo(`http://${process.env.ENDPOINT_GLOO_MESH_UI}/insights`);
+    await insightsPage.selectClusters(['cluster1', 'cluster2']);
+    await insightsPage.selectInsightTypes([constants.InsightType.BP]);
+    const data = await insightsPage.getTableDataRows()
+    expect(data.some(item => item.includes("Globally scoped routing"))).to.be.false;
+  });
+});
+EOF
+echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/insights-intro/tests/insight-not-ui-BP0001.test.js.liquid"
+timeout --signal=INT 5m mocha ./test.js --timeout 120000 --retries=20 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+kubectl apply --context ${CLUSTER1} 
-f - < ./test.js +var chai = require('chai'); +var expect = chai.expect; +const helpers = require('./tests/chai-exec'); + +describe("Insight generation", () => { + it("Insight CFG0001 has been triggered in the source (MGMT)", () => { + helpers.getOutputForCommand({ command: `kubectl --context ${process.env.MGMT} -n gloo-mesh patch svc gloo-mesh-mgmt-server -p '{"spec":{"ports": [{"port": 9094,"name":"http-insights"}]}}'` }); + helpers.getOutputForCommand({ command: "kubectl -n gloo-mesh run debug --image=nginx: --context " + process.env.MGMT }); + command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh exec debug -- curl -s http://gloo-mesh-mgmt-server.gloo-mesh:9094/metrics" }).replaceAll("'", ""); + const regex = /gloo_mesh_insights{.*CFG0001.*} 1/; + const match = command.match(regex); + expect(match).to.not.be.null; + }); + + it("Insight CFG0001 has been triggered in PROMETHEUS", () => { + helpers.getOutputForCommand({ command: `kubectl --context ${process.env.MGMT} -n gloo-mesh patch svc prometheus-server -p '{"spec":{"ports": [{"port": 9090,"name":"http-metrics"}]}}'` }); + command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh exec debug -- curl -s 'http://prometheus-server.gloo-mesh:9090/api/v1/query?query=gloo_mesh_insights'" }).replaceAll("'", ""); + let result = JSON.parse(command); + let active = false; + result.data.result.forEach(item => { + if(item.metric.code == "CFG0001" && item.value[1] > 0) { + active = true + } + }); + expect(active).to.be.true; + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/insights-config/../insights-intro/tests/insight-metrics.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl apply --context ${CLUSTER1} -f - < ./test.js +var chai = require('chai'); +var 
expect = chai.expect; +const helpers = require('./tests/chai-exec'); + +describe("Insight generation", () => { + it("Insight CFG0001 has not been triggered in the source (MGMT)", () => { + helpers.getOutputForCommand({ command: `kubectl --context ${process.env.MGMT} -n gloo-mesh patch svc gloo-mesh-mgmt-server -p '{"spec":{"ports": [{"port": 9094,"name":"http-insights"}]}}'` }); + helpers.getOutputForCommand({ command: "kubectl -n gloo-mesh run debug --image=nginx: --context " + process.env.MGMT }); + command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh exec debug -- curl -s http://gloo-mesh-mgmt-server.gloo-mesh:9094/metrics" }).replaceAll("'", ""); + const regex = /gloo_mesh_insights{.*CFG0001.*} 1/; + const match = command.match(regex); + expect(match).to.be.null; + }); + + it("Insight CFG0001 has not been triggered in PROMETHEUS", () => { + helpers.getOutputForCommand({ command: `kubectl --context ${process.env.MGMT} -n gloo-mesh patch svc prometheus-server -p '{"spec":{"ports": [{"port": 9090,"name":"http-metrics"}]}}'` }); + command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh exec debug -- curl -s 'http://prometheus-server.gloo-mesh:9090/api/v1/query?query=gloo_mesh_insights'" }).replaceAll("'", ""); + let result = JSON.parse(command); + let active = false; + result.data.result.forEach(item => { + if(item.metric.code == "CFG0001" && item.value[1] > 0) { + active = true + } + }); + expect(active).to.be.false; + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/insights-config/../insights-intro/tests/insight-metrics.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context ${CLUSTER1} -n bookinfo-backends delete virtualservice reviews +kubectl --context ${CLUSTER1} -n bookinfo-backends 
delete destinationrule reviews +kubectl apply --context ${CLUSTER1} -f - < ./test.js +var chai = require('chai'); +var expect = chai.expect; +const helpers = require('./tests/chai-exec'); + +describe("Insight generation", () => { + it("Insight SEC0008 has been triggered in the source (MGMT)", () => { + helpers.getOutputForCommand({ command: `kubectl --context ${process.env.MGMT} -n gloo-mesh patch svc gloo-mesh-mgmt-server -p '{"spec":{"ports": [{"port": 9094,"name":"http-insights"}]}}'` }); + helpers.getOutputForCommand({ command: "kubectl -n gloo-mesh run debug --image=nginx: --context " + process.env.MGMT }); + command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh exec debug -- curl -s http://gloo-mesh-mgmt-server.gloo-mesh:9094/metrics" }).replaceAll("'", ""); + const regex = /gloo_mesh_insights{.*SEC0008.*} 1/; + const match = command.match(regex); + expect(match).to.not.be.null; + }); + + it("Insight SEC0008 has been triggered in PROMETHEUS", () => { + helpers.getOutputForCommand({ command: `kubectl --context ${process.env.MGMT} -n gloo-mesh patch svc prometheus-server -p '{"spec":{"ports": [{"port": 9090,"name":"http-metrics"}]}}'` }); + command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh exec debug -- curl -s 'http://prometheus-server.gloo-mesh:9090/api/v1/query?query=gloo_mesh_insights'" }).replaceAll("'", ""); + let result = JSON.parse(command); + let active = false; + result.data.result.forEach(item => { + if(item.metric.code == "SEC0008" && item.value[1] > 0) { + active = true + } + }); + expect(active).to.be.true; + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/insights-security/../insights-intro/tests/insight-metrics.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl apply 
--context ${CLUSTER1} -f - < ./test.js +var chai = require('chai'); +var expect = chai.expect; +const helpers = require('./tests/chai-exec'); + +describe("Insight generation", () => { + it("Insight SEC0008 has not been triggered in the source (MGMT)", () => { + helpers.getOutputForCommand({ command: `kubectl --context ${process.env.MGMT} -n gloo-mesh patch svc gloo-mesh-mgmt-server -p '{"spec":{"ports": [{"port": 9094,"name":"http-insights"}]}}'` }); + helpers.getOutputForCommand({ command: "kubectl -n gloo-mesh run debug --image=nginx: --context " + process.env.MGMT }); + command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh exec debug -- curl -s http://gloo-mesh-mgmt-server.gloo-mesh:9094/metrics" }).replaceAll("'", ""); + const regex = /gloo_mesh_insights{.*SEC0008.*} 1/; + const match = command.match(regex); + expect(match).to.be.null; + }); + + it("Insight SEC0008 has not been triggered in PROMETHEUS", () => { + helpers.getOutputForCommand({ command: `kubectl --context ${process.env.MGMT} -n gloo-mesh patch svc prometheus-server -p '{"spec":{"ports": [{"port": 9090,"name":"http-metrics"}]}}'` }); + command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh exec debug -- curl -s 'http://prometheus-server.gloo-mesh:9090/api/v1/query?query=gloo_mesh_insights'" }).replaceAll("'", ""); + let result = JSON.parse(command); + let active = false; + result.data.result.forEach(item => { + if(item.metric.code == "SEC0008" && item.value[1] > 0) { + active = true + } + }); + expect(active).to.be.false; + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/insights-security/../insights-intro/tests/insight-metrics.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context ${CLUSTER1} -n bookinfo-backends delete 
authorizationpolicy reviews +kubectl --context ${CLUSTER1} -n istio-system delete peerauthentication default diff --git a/gloo-mesh/core/2-6/default/scripts/configure-domain-rewrite.sh b/gloo-mesh/core/2-6/default/scripts/configure-domain-rewrite.sh index be6dbd6d8b..d6e684c9da 100755 --- a/gloo-mesh/core/2-6/default/scripts/configure-domain-rewrite.sh +++ b/gloo-mesh/core/2-6/default/scripts/configure-domain-rewrite.sh @@ -90,4 +90,4 @@ done # If the loop exits, it means the check failed consistently for 1 minute echo "DNS rewrite rule verification failed." -exit 1 +exit 1 \ No newline at end of file diff --git a/gloo-mesh/core/2-6/default/scripts/register-domain.sh b/gloo-mesh/core/2-6/default/scripts/register-domain.sh index f9084487e8..1cb84cd86a 100755 --- a/gloo-mesh/core/2-6/default/scripts/register-domain.sh +++ b/gloo-mesh/core/2-6/default/scripts/register-domain.sh @@ -14,7 +14,9 @@ hosts_file="/etc/hosts" # Function to check if the input is a valid IP address is_ip() { if [[ $1 =~ ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$ ]]; then - return 0 # 0 = true + return 0 # 0 = true - valid IPv4 address + elif [[ $1 =~ ^[0-9a-f]+[:]+[0-9a-f]*[:]*[0-9a-f]*[:]*[0-9a-f]*[:]*[0-9a-f]*[:]*[0-9a-f]*[:]*[0-9]*$ ]]; then + return 0 # 0 = true - valid IPv6 address else return 1 # 1 = false fi @@ -38,14 +40,15 @@ else fi # Check if the entry already exists -if grep -q "$hostname" "$hosts_file"; then +if grep -q "$hostname\$" "$hosts_file"; then # Update the existing entry with the new IP tempfile=$(mktemp) - sed "s/^.*$hostname/$new_ip $hostname/" "$hosts_file" > "$tempfile" + sed "s/^.*$hostname\$/$new_ip $hostname/" "$hosts_file" > "$tempfile" sudo cp "$tempfile" "$hosts_file" + rm "$tempfile" echo "Updated $hostname in $hosts_file with new IP: $new_ip" else # Add a new entry if it doesn't exist echo "$new_ip $hostname" | sudo tee -a "$hosts_file" > /dev/null echo "Added $hostname to $hosts_file with IP: $new_ip" -fi \ No newline at end of file +fi diff --git 
a/gloo-mesh/core/2-6/default/tests/chai-exec.js b/gloo-mesh/core/2-6/default/tests/chai-exec.js index 67ba62f095..020262437f 100644 --- a/gloo-mesh/core/2-6/default/tests/chai-exec.js +++ b/gloo-mesh/core/2-6/default/tests/chai-exec.js @@ -139,7 +139,11 @@ global = { }, k8sObjectIsPresent: ({ context, namespace, k8sType, k8sObj }) => { - let command = "kubectl --context " + context + " -n " + namespace + " get " + k8sType + " " + k8sObj + " -o name"; + // covers both namespace scoped and cluster scoped objects + let command = "kubectl --context " + context + " get " + k8sType + " " + k8sObj + " -o name"; + if (namespace) { + command = "kubectl --context " + context + " -n " + namespace + " get " + k8sType + " " + k8sObj + " -o name"; + } debugLog(`Executing command: ${command}`); let cli = chaiExec(command); @@ -176,7 +180,6 @@ global = { debugLog(`Command output (stdout): ${cli.stdout}`); return cli.stdout; }, - curlInPod: ({ curlCommand, podName, namespace }) => { debugLog(`Executing curl command: ${curlCommand} on pod: ${podName} in namespace: ${namespace}`); const cli = chaiExec(curlCommand); diff --git a/gloo-mesh/core/2-6/default/tests/chai-http.js b/gloo-mesh/core/2-6/default/tests/chai-http.js index 67f43db003..92bf579690 100644 --- a/gloo-mesh/core/2-6/default/tests/chai-http.js +++ b/gloo-mesh/core/2-6/default/tests/chai-http.js @@ -25,7 +25,30 @@ global = { .send() .then(async function (res) { debugLog(`Response status code: ${res.status}`); - expect(res).to.have.status(retCode); + expect(res).to.have.property('status', retCode); + }); + }, + + checkURLWithIP: ({ ip, host, protocol = "http", path = "", headers = [], certFile = '', keyFile = '', retCode }) => { + debugLog(`Checking URL with IP: ${ip}, Host: ${host}, Path: ${path} with expected return code: ${retCode}`); + + let cert = certFile ? fs.readFileSync(certFile) : ''; + let key = keyFile ? 
fs.readFileSync(keyFile) : ''; + + let url = `${protocol}://${ip}`; + + // Use chai-http to make a request to the IP address, but set the Host header + let request = chai.request(url).head(path).redirects(0).cert(cert).key(key).set('Host', host); + + debugLog(`Setting headers: ${JSON.stringify(headers)}`); + headers.forEach(header => request.set(header.key, header.value)); + + return request + .send() + .then(async function (res) { + debugLog(`Response status code: ${res.status}`); + debugLog(`Response ${JSON.stringify(res)}`); + expect(res).to.have.property('status', retCode); }); }, @@ -124,7 +147,7 @@ global = { .send() .then(async function (res) { debugLog(`Response status code: ${res.status}`); - expect(res).to.have.status(retCode); + expect(res).to.have.property('status', retCode); }); } }; diff --git a/gloo-mesh/core/2-6/default/tests/proxies-changes.test.js.liquid b/gloo-mesh/core/2-6/default/tests/proxies-changes.test.js.liquid new file mode 100644 index 0000000000..1934ea13b6 --- /dev/null +++ b/gloo-mesh/core/2-6/default/tests/proxies-changes.test.js.liquid @@ -0,0 +1,58 @@ +{%- assign version_1_18_or_after = "1.18.0" | minimumGlooGatewayVersion %} +const { execSync } = require('child_process'); +const { expect } = require('chai'); +const { diff } = require('jest-diff'); + +function delay(ms) { + return new Promise(resolve => setTimeout(resolve, ms)); +} + +describe('Gloo snapshot stability test', function() { + let contextName = process.env.{{ context | default: "CLUSTER1" }}; + let delaySeconds = {{ delay | default: 5 }}; + + let firstSnapshot; + + it('should retrieve initial snapshot', function() { + const output = execSync( + `kubectl --context ${contextName} -n gloo-system exec deploy/gloo -- wget -O - localhost:{% if version_1_18_or_after %}9095{% else %}9091{% endif %}/snapshots/proxies -q`, + { encoding: 'utf8' } + ); + + try { + firstSnapshot = JSON.parse(output); + } catch (err) { + throw new Error('Failed to parse JSON output from initial 
snapshot: ' + err.message); + } + expect(firstSnapshot).to.be.an('object'); + }); + + it('should not change after the given delay', async function() { + await delay(delaySeconds * 1000); + + let secondSnapshot; + try { + const output2 = execSync( + `kubectl --context ${contextName} -n gloo-system exec deploy/gloo -- wget -O - localhost:{% if version_1_18_or_after %}9095{% else %}9091{% endif %}/snapshots/proxies -q`, + { encoding: 'utf8' } + ); + secondSnapshot = JSON.parse(output2); + } catch (err) { + throw new Error('Failed to retrieve or parse the second snapshot: ' + err.message); + } + + const firstJson = JSON.stringify(firstSnapshot, null, 2); + const secondJson = JSON.stringify(secondSnapshot, null, 2); + + // Show only 2 lines of context around each change + const diffOutput = diff(firstJson, secondJson, { contextLines: 2, expand: false }); + + if (! diffOutput.includes("Compared values have no visual difference.")) { + console.error('Differences found between snapshots:\n' + diffOutput); + throw new Error('Snapshots differ after the delay.'); + } else { + console.log('No differences found. 
The snapshots are stable.'); + } + }); +}); + diff --git a/gloo-mesh/core/2-7/ambient-interoperability/README.md b/gloo-mesh/core/2-7/ambient-interoperability/README.md index 88b4c82af9..6db56da1c3 100644 --- a/gloo-mesh/core/2-7/ambient-interoperability/README.md +++ b/gloo-mesh/core/2-7/ambient-interoperability/README.md @@ -15,7 +15,7 @@ source ./scripts/assert.sh ## Table of Contents * [Introduction](#introduction) -* [Lab 1 - Deploy a KinD cluster](#lab-1---deploy-a-kind-cluster-) +* [Lab 1 - Deploy KinD Cluster(s)](#lab-1---deploy-kind-cluster(s)-) * [Lab 2 - Deploy and register Gloo Mesh](#lab-2---deploy-and-register-gloo-mesh-) * [Lab 3 - Deploy Istio using Helm](#lab-3---deploy-istio-using-helm-) * [Lab 4 - Deploy the Bookinfo demo app](#lab-4---deploy-the-bookinfo-demo-app-) @@ -72,7 +72,7 @@ You can find more information about Gloo Mesh Core in the official documentation -## Lab 1 - Deploy a KinD cluster +## Lab 1 - Deploy KinD Cluster(s) Clone this repository and go to the directory where this `README.md` file is. @@ -84,12 +84,11 @@ export MGMT=cluster1 export CLUSTER1=cluster1 ``` -Run the following commands to deploy a Kubernetes cluster using [Kind](https://kind.sigs.k8s.io/): +Deploy the KinD clusters: ```bash -./scripts/deploy-multi-with-calico.sh 1 cluster1 us-west us-west-1 +bash ./data/steps/deploy-kind-clusters/deploy-cluster1.sh ``` - Then run the following commands to wait for all the Pods to be ready: ```bash @@ -98,38 +97,20 @@ Then run the following commands to wait for all the Pods to be ready: **Note:** If you run the `check.sh` script immediately after the `deploy.sh` script, you may see a jsonpath error. If that happens, simply wait a few seconds and try again. 
-Once the `check.sh` script completes, when you execute the `kubectl get pods -A` command, you should see the following: - -``` -NAMESPACE NAME READY STATUS RESTARTS AGE -kube-system calico-kube-controllers-59d85c5c84-sbk4k 1/1 Running 0 4h26m -kube-system calico-node-przxs 1/1 Running 0 4h26m -kube-system coredns-6955765f44-ln8f5 1/1 Running 0 4h26m -kube-system coredns-6955765f44-s7xxx 1/1 Running 0 4h26m -kube-system etcd-cluster1-control-plane 1/1 Running 0 4h27m -kube-system kube-apiserver-cluster1-control-plane 1/1 Running 0 4h27m -kube-system kube-controller-manager-cluster1-control-plane1/1 Running 0 4h27m -kube-system kube-proxy-ksvzw 1/1 Running 0 4h26m -kube-system kube-scheduler-cluster1-control-plane 1/1 Running 0 4h27m -local-path-storage local-path-provisioner-58f6947c7-lfmdx 1/1 Running 0 4h26m -metallb-system controller-5c9894b5cd-cn9x2 1/1 Running 0 4h26m -metallb-system speaker-d7jkp 1/1 Running 0 4h26m -``` - -**Note:** The CNI pods might be different, depending on which CNI you have deployed. - +Once the `check.sh` script completes, execute the `kubectl get pods -A` command, and verify that all pods are in a running state. 
@@ -175,6 +156,7 @@ EOF echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-and-register-gloo-mesh/tests/environment-variables.test.js.liquid" timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } --> + Run the following commands to deploy the Gloo Mesh management plane: ```bash @@ -236,6 +218,10 @@ EOF kubectl --context ${MGMT} -n gloo-mesh rollout status deploy/gloo-mesh-mgmt-server ``` +Set the endpoint for the Gloo Mesh UI: +```bash +export ENDPOINT_GLOO_MESH_UI=$(kubectl --context ${MGMT} -n gloo-mesh get svc gloo-mesh-ui -o jsonpath='{.status.loadBalancer.ingress[0].*}'):8090 +``` + Run the following commands to deploy the Gloo Mesh management plane: ```bash @@ -502,7 +487,8 @@ timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || -## Lab 3 - Deploy Istio using Helm + +## Lab 3 - Deploy Istio v1.24.1-patch1-distroless It is convenient to have the `istioctl` command line tool installed on your local machine. If you don't have it installed, you can install it by following the instructions below. 
@@ -544,7 +530,7 @@ describe("istio_version is at least 1.23.0", () => { it("version should be at least 1.23.0", () => { // Compare the string istio_version to the number 1.23.0 // example 1.23.0-patch0 is valid, but 1.22.6 is not - let version = "1.23.1"; + let version = "1.24.1-patch1-distroless"; let versionParts = version.split('-')[0].split('.'); let major = parseInt(versionParts[0]); let minor = parseInt(versionParts[1]); @@ -593,6 +579,7 @@ spec: selector: app: istio-ingressgateway istio: ingressgateway + revision: 1-23 type: LoadBalancer EOF @@ -650,6 +637,7 @@ spec: selector: app: istio-ingressgateway istio: eastwestgateway + revision: 1-23 type: LoadBalancer EOF kubectl --context ${CLUSTER2} create ns istio-gateways @@ -676,6 +664,7 @@ spec: selector: app: istio-ingressgateway istio: ingressgateway + revision: 1-23 type: LoadBalancer EOF @@ -733,6 +722,7 @@ spec: selector: app: istio-ingressgateway istio: eastwestgateway + revision: 1-23 type: LoadBalancer EOF ``` @@ -744,27 +734,29 @@ Let's deploy Istio using Helm in cluster1. 
We'll install the base Istio componen helm upgrade --install istio-base oci://us-docker.pkg.dev/gloo-mesh/istio-helm-/base \ --namespace istio-system \ --kube-context=${CLUSTER1} \ ---version 1.23.1-solo \ +--version 1.24.1-patch1-solo-distroless \ --create-namespace \ -f - </istiod \ --namespace istio-system \ --kube-context=${CLUSTER1} \ ---version 1.23.1-solo \ +--version 1.24.1-patch1-solo-distroless \ --create-namespace \ -f - < proxy: clusterDomain: cluster.local - tag: 1.23.1-solo + tag: 1.24.1-patch1-solo-distroless multiCluster: clusterName: cluster1 profile: ambient +revision: 1-23 istio_cni: enabled: true meshConfig: @@ -785,13 +777,14 @@ EOF helm upgrade --install istio-cni oci://us-docker.pkg.dev/gloo-mesh/istio-helm-/cni \ --namespace kube-system \ --kube-context=${CLUSTER1} \ ---version 1.23.1-solo \ +--version 1.24.1-patch1-solo-distroless \ --create-namespace \ -f - < - proxy: 1.23.1-solo + proxy: 1.24.1-patch1-solo-distroless profile: ambient +revision: 1-23 cni: ambient: dnsCapture: true @@ -803,11 +796,12 @@ EOF helm upgrade --install ztunnel oci://us-docker.pkg.dev/gloo-mesh/istio-helm-/ztunnel \ --namespace istio-system \ --kube-context=${CLUSTER1} \ ---version 1.23.1-solo \ +--version 1.24.1-patch1-solo-distroless \ --create-namespace \ -f - < @@ -818,7 +812,7 @@ namespace: istio-system profile: ambient proxy: clusterDomain: cluster.local -tag: 1.23.1-solo +tag: 1.24.1-patch1-solo-distroless terminationGracePeriodSeconds: 29 variant: distroless EOF @@ -826,16 +820,18 @@ EOF helm upgrade --install istio-ingressgateway-1-23 oci://us-docker.pkg.dev/gloo-mesh/istio-helm-/gateway \ --namespace istio-gateways \ --kube-context=${CLUSTER1} \ ---version 1.23.1-solo \ +--version 1.24.1-patch1-solo-distroless \ --create-namespace \ -f - </gateway \ --namespace istio-gateways \ --kube-context=${CLUSTER1} \ ---version 1.23.1-solo \ +--version 1.24.1-patch1-solo-distroless \ --create-namespace \ -f - </base \ --namespace istio-system \ 
--kube-context=${CLUSTER2} \ ---version 1.23.1-solo \ +--version 1.24.1-patch1-solo-distroless \ --create-namespace \ -f - </istiod \ --namespace istio-system \ --kube-context=${CLUSTER2} \ ---version 1.23.1-solo \ +--version 1.24.1-patch1-solo-distroless \ --create-namespace \ -f - < proxy: clusterDomain: cluster.local - tag: 1.23.1-solo + tag: 1.24.1-patch1-solo-distroless multiCluster: clusterName: cluster2 profile: ambient +revision: 1-23 istio_cni: enabled: true meshConfig: @@ -917,13 +917,14 @@ EOF helm upgrade --install istio-cni oci://us-docker.pkg.dev/gloo-mesh/istio-helm-/cni \ --namespace kube-system \ --kube-context=${CLUSTER2} \ ---version 1.23.1-solo \ +--version 1.24.1-patch1-solo-distroless \ --create-namespace \ -f - < - proxy: 1.23.1-solo + proxy: 1.24.1-patch1-solo-distroless profile: ambient +revision: 1-23 cni: ambient: dnsCapture: true @@ -935,11 +936,12 @@ EOF helm upgrade --install ztunnel oci://us-docker.pkg.dev/gloo-mesh/istio-helm-/ztunnel \ --namespace istio-system \ --kube-context=${CLUSTER2} \ ---version 1.23.1-solo \ +--version 1.24.1-patch1-solo-distroless \ --create-namespace \ -f - < @@ -950,7 +952,7 @@ namespace: istio-system profile: ambient proxy: clusterDomain: cluster.local -tag: 1.23.1-solo +tag: 1.24.1-patch1-solo-distroless terminationGracePeriodSeconds: 29 variant: distroless EOF @@ -958,16 +960,18 @@ EOF helm upgrade --install istio-ingressgateway-1-23 oci://us-docker.pkg.dev/gloo-mesh/istio-helm-/gateway \ --namespace istio-gateways \ --kube-context=${CLUSTER2} \ ---version 1.23.1-solo \ +--version 1.24.1-patch1-solo-distroless \ --create-namespace \ -f - </gateway \ --namespace istio-gateways \ --kube-context=${CLUSTER2} \ ---version 1.23.1-solo \ +--version 1.24.1-patch1-solo-distroless \ --create-namespace \ -f - < [VIDEO LINK](https://youtu.be/nzYcrjalY5A "Video Link") @@ -1128,6 +1132,8 @@ kubectl --context ${CLUSTER1} label namespace bookinfo-frontends istio.io/datapl kubectl --context ${CLUSTER1} label namespace 
bookinfo-backends istio.io/dataplane-mode=ambient kubectl --context ${CLUSTER1} label namespace bookinfo-frontends istio-injection=disabled kubectl --context ${CLUSTER1} label namespace bookinfo-backends istio-injection=disabled +kubectl --context ${CLUSTER1} label namespace bookinfo-frontends istio.io/rev=1-23 --overwrite +kubectl --context ${CLUSTER1} label namespace bookinfo-backends istio.io/rev=1-23 --overwrite # Deploy the frontend bookinfo service in the bookinfo-frontends namespace kubectl --context ${CLUSTER1} -n bookinfo-frontends apply -f data/steps/deploy-bookinfo/productpage-v1.yaml @@ -1174,6 +1180,8 @@ kubectl --context ${CLUSTER2} label namespace bookinfo-frontends istio.io/datapl kubectl --context ${CLUSTER2} label namespace bookinfo-backends istio.io/dataplane-mode=ambient kubectl --context ${CLUSTER2} label namespace bookinfo-frontends istio-injection=disabled kubectl --context ${CLUSTER2} label namespace bookinfo-backends istio-injection=disabled +kubectl --context ${CLUSTER2} label namespace bookinfo-frontends istio.io/rev=1-23 --overwrite +kubectl --context ${CLUSTER2} label namespace bookinfo-backends istio.io/rev=1-23 --overwrite # Deploy the frontend bookinfo service in the bookinfo-frontends namespace kubectl --context ${CLUSTER2} -n bookinfo-frontends apply -f data/steps/deploy-bookinfo/productpage-v1.yaml @@ -1254,6 +1262,7 @@ Run the following commands to deploy the httpbin app on `cluster1`. 
The deployme ```bash kubectl --context ${CLUSTER1} create ns httpbin kubectl --context ${CLUSTER1} label namespace httpbin istio.io/dataplane-mode=ambient +kubectl --context ${CLUSTER1} label namespace httpbin istio.io/rev=1-23 kubectl apply --context ${CLUSTER1} -f - < +## Lab 13 - Upgrade Istio to v1.23.0-patch1 @@ -3031,9 +3040,10 @@ helm upgrade --install istio-base oci://us-docker.pkg.dev/gloo-mesh/istio-helm-< -f - </istiod \ +helm upgrade --install istiod-1-23-0-patch1 oci://us-docker.pkg.dev/gloo-mesh/istio-helm-/istiod \ --namespace istio-system \ --kube-context=${CLUSTER1} \ --version 1.23.0-patch1-solo \ @@ -3047,6 +3057,7 @@ global: multiCluster: clusterName: cluster1 profile: ambient +revision: 1-23-0-patch1 istio_cni: enabled: true meshConfig: @@ -3074,6 +3085,7 @@ global: hub: us-docker.pkg.dev/gloo-mesh/istio- proxy: 1.23.0-patch1-solo profile: ambient +revision: 1-23-0-patch1 cni: ambient: dnsCapture: true @@ -3090,6 +3102,7 @@ helm upgrade --install ztunnel oci://us-docker.pkg.dev/gloo-mesh/istio-helm- @@ -3105,7 +3118,7 @@ terminationGracePeriodSeconds: 29 variant: distroless EOF -helm upgrade --install istio-ingressgateway-1-23 oci://us-docker.pkg.dev/gloo-mesh/istio-helm-/gateway \ +helm upgrade --install istio-ingressgateway-1-23-0-patch1 oci://us-docker.pkg.dev/gloo-mesh/istio-helm-/gateway \ --namespace istio-gateways \ --kube-context=${CLUSTER1} \ --version 1.23.0-patch1-solo \ @@ -3114,15 +3127,17 @@ helm upgrade --install istio-ingressgateway-1-23 oci://us-docker.pkg.dev/gloo-me autoscaling: enabled: false profile: ambient +revision: 1-23-0-patch1 imagePullPolicy: IfNotPresent labels: app: istio-ingressgateway istio: ingressgateway + revision: 1-23-0-patch1 service: type: None EOF -helm upgrade --install istio-eastwestgateway-1-23 oci://us-docker.pkg.dev/gloo-mesh/istio-helm-/gateway \ +helm upgrade --install istio-eastwestgateway-1-23-0-patch1 oci://us-docker.pkg.dev/gloo-mesh/istio-helm-/gateway \ --namespace istio-gateways \ 
--kube-context=${CLUSTER1} \ --version 1.23.0-patch1-solo \ @@ -3131,6 +3146,7 @@ helm upgrade --install istio-eastwestgateway-1-23 oci://us-docker.pkg.dev/gloo-m autoscaling: enabled: false profile: ambient +revision: 1-23-0-patch1 imagePullPolicy: IfNotPresent env: ISTIO_META_REQUESTED_NETWORK_VIEW: cluster1 @@ -3138,6 +3154,7 @@ env: labels: app: istio-ingressgateway istio: eastwestgateway + revision: 1-23-0-patch1 topology.istio.io/network: cluster1 service: type: None @@ -3156,9 +3173,10 @@ helm upgrade --install istio-base oci://us-docker.pkg.dev/gloo-mesh/istio-helm-< -f - </istiod \ +helm upgrade --install istiod-1-23-0-patch1 oci://us-docker.pkg.dev/gloo-mesh/istio-helm-/istiod \ --namespace istio-system \ --kube-context=${CLUSTER2} \ --version 1.23.0-patch1-solo \ @@ -3172,6 +3190,7 @@ global: multiCluster: clusterName: cluster2 profile: ambient +revision: 1-23-0-patch1 istio_cni: enabled: true meshConfig: @@ -3199,6 +3218,7 @@ global: hub: us-docker.pkg.dev/gloo-mesh/istio- proxy: 1.23.0-patch1-solo profile: ambient +revision: 1-23-0-patch1 cni: ambient: dnsCapture: true @@ -3215,6 +3235,7 @@ helm upgrade --install ztunnel oci://us-docker.pkg.dev/gloo-mesh/istio-helm- @@ -3230,7 +3251,7 @@ terminationGracePeriodSeconds: 29 variant: distroless EOF -helm upgrade --install istio-ingressgateway-1-23 oci://us-docker.pkg.dev/gloo-mesh/istio-helm-/gateway \ +helm upgrade --install istio-ingressgateway-1-23-0-patch1 oci://us-docker.pkg.dev/gloo-mesh/istio-helm-/gateway \ --namespace istio-gateways \ --kube-context=${CLUSTER2} \ --version 1.23.0-patch1-solo \ @@ -3239,15 +3260,17 @@ helm upgrade --install istio-ingressgateway-1-23 oci://us-docker.pkg.dev/gloo-me autoscaling: enabled: false profile: ambient +revision: 1-23-0-patch1 imagePullPolicy: IfNotPresent labels: app: istio-ingressgateway istio: ingressgateway + revision: 1-23-0-patch1 service: type: None EOF -helm upgrade --install istio-eastwestgateway-1-23 
oci://us-docker.pkg.dev/gloo-mesh/istio-helm-/gateway \ +helm upgrade --install istio-eastwestgateway-1-23-0-patch1 oci://us-docker.pkg.dev/gloo-mesh/istio-helm-/gateway \ --namespace istio-gateways \ --kube-context=${CLUSTER2} \ --version 1.23.0-patch1-solo \ @@ -3256,6 +3279,7 @@ helm upgrade --install istio-eastwestgateway-1-23 oci://us-docker.pkg.dev/gloo-m autoscaling: enabled: false profile: ambient +revision: 1-23-0-patch1 imagePullPolicy: IfNotPresent env: ISTIO_META_REQUESTED_NETWORK_VIEW: cluster2 @@ -3263,6 +3287,7 @@ env: labels: app: istio-ingressgateway istio: eastwestgateway + revision: 1-23-0-patch1 topology.istio.io/network: cluster2 service: type: None @@ -3289,10 +3314,10 @@ afterEach(function (done) { }); describe("Checking Istio installation", function() { - it('istiod pods are ready in cluster ' + process.env.CLUSTER1, () => helpers.checkDeploymentsWithLabels({ context: process.env.CLUSTER1, namespace: "istio-system", labels: "app=istiod", instances: 1 })); - it('gateway pods are ready in cluster ' + process.env.CLUSTER1, () => helpers.checkDeploymentsWithLabels({ context: process.env.CLUSTER1, namespace: "istio-gateways", labels: "app=istio-ingressgateway", instances: 2 })); - it('istiod pods are ready in cluster ' + process.env.CLUSTER2, () => helpers.checkDeploymentsWithLabels({ context: process.env.CLUSTER2, namespace: "istio-system", labels: "app=istiod", instances: 1 })); - it('gateway pods are ready in cluster ' + process.env.CLUSTER2, () => helpers.checkDeploymentsWithLabels({ context: process.env.CLUSTER2, namespace: "istio-gateways", labels: "app=istio-ingressgateway", instances: 2 })); + it('istiod pods are ready in cluster ' + process.env.CLUSTER1, () => helpers.checkDeploymentsWithLabels({ context: process.env.CLUSTER1, namespace: "istio-system", labels: "app=istiod", instances: 2 })); + it('gateway pods are ready in cluster ' + process.env.CLUSTER1, () => helpers.checkDeploymentsWithLabels({ context: process.env.CLUSTER1, 
namespace: "istio-gateways", labels: "app=istio-ingressgateway", instances: 4 })); + it('istiod pods are ready in cluster ' + process.env.CLUSTER2, () => helpers.checkDeploymentsWithLabels({ context: process.env.CLUSTER2, namespace: "istio-system", labels: "app=istiod", instances: 2 })); + it('gateway pods are ready in cluster ' + process.env.CLUSTER2, () => helpers.checkDeploymentsWithLabels({ context: process.env.CLUSTER2, namespace: "istio-gateways", labels: "app=istio-ingressgateway", instances: 4 })); it("Gateways have an ip attached in cluster " + process.env.CLUSTER1, () => { let cli = chaiExec("kubectl --context " + process.env.CLUSTER1 + " -n istio-gateways get svc -l app=istio-ingressgateway -o jsonpath='{.items}'"); cli.stderr.should.be.empty; @@ -3367,6 +3392,113 @@ timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || + +## Lab 14 - Migrate workloads to a new Istio revision + +Now, let's label all namespaces to use the new revision and rollout all deployments so that their proxies connect to the new revision: + +```bash +kubectl --context ${CLUSTER1} get ns -l istio.io/rev=1-23 -o json | jq -r '.items[].metadata.name' | while read ns; do + kubectl --context ${CLUSTER1} label ns ${ns} istio.io/rev=1-23-0-patch1 --overwrite +done +kubectl --context ${CLUSTER2} get ns -l istio.io/rev=1-23 -o json | jq -r '.items[].metadata.name' | while read ns; do + kubectl --context ${CLUSTER2} label ns ${ns} istio.io/rev=1-23-0-patch1 --overwrite +done +kubectl --context ${CLUSTER1} -n httpbin patch deploy in-mesh --patch "{\"spec\": {\"template\": {\"metadata\": {\"labels\": {\"istio.io/rev\": \"1-23-0-patch1\" }}}}}" +kubectl --context ${CLUSTER1} -n clients patch deploy in-mesh-with-sidecar --patch "{\"spec\": {\"template\": {\"metadata\": {\"labels\": {\"istio.io/rev\": \"1-23-0-patch1\" }}}}}" +``` + + +Test that you can still access the `productpage` service through the Istio Ingress Gateway corresponding to the old revision using the 
command below:
+
+```bash
+curl -k "https:///productpage" -I
+```
+
+You should get a response similar to the following one:
+
+```
+HTTP/2 200
+server: istio-envoy
+date: Wed, 24 Aug 2022 14:58:22 GMT
+content-type: application/json
+content-length: 670
+access-control-allow-origin: *
+access-control-allow-credentials: true
+x-envoy-upstream-service-time: 7
+```
+
+
+
+All good, so we can now configure the Istio gateway service(s) to use both revisions:
+
+```bash
+kubectl --context ${CLUSTER1} -n istio-gateways patch svc istio-ingressgateway --type=json --patch '[{"op": "remove", "path": "/spec/selector/revision"}]'
+kubectl --context ${CLUSTER1} -n istio-gateways patch svc istio-eastwestgateway --type=json --patch '[{"op": "remove", "path": "/spec/selector/revision"}]'
+kubectl --context ${CLUSTER2} -n istio-gateways patch svc istio-ingressgateway --type=json --patch '[{"op": "remove", "path": "/spec/selector/revision"}]'
+kubectl --context ${CLUSTER2} -n istio-gateways patch svc istio-eastwestgateway --type=json --patch '[{"op": "remove", "path": "/spec/selector/revision"}]'
+```
+
+We don't switch the selector directly from the old revision to the new one, so that no requests are dropped during the switchover.
+
+Test that you can still access the `productpage` service:
+
+```bash
+curl -k "https:///productpage" -I
+```
+
+You should get a response similar to the following one:
+
+```
+HTTP/2 200
+server: istio-envoy
+date: Wed, 24 Aug 2022 14:58:22 GMT
+content-type: application/json
+content-length: 670
+access-control-allow-origin: *
+access-control-allow-credentials: true
+```
+
+
+
+
+
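To see which control-plane revision each proxy is actually connected to at this point, `istioctl proxy-status` is useful (a sketch; the column layout differs slightly between istioctl versions):

```shell
# The ISTIOD/VERSION columns show which istiod instance each proxy is attached to
istioctl --context ${CLUSTER1} proxy-status
istioctl --context ${CLUSTER2} proxy-status
```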
Waypoints are upgraded automatically. The waypoints are upgraded by Istiod's Gateway Controller, so if you check the status you will see that they are on the newest "1.23.0-patch1" version:
@@ -3403,46 +3535,150 @@ describe("istio in place upgrades", function() { }); }); EOF
-echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-istio-helm/tests/waypoint-upgraded.test.js.liquid"
+echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/helm-migrate-workloads-to-revision/tests/waypoint-upgraded.test.js.liquid"
timeout --signal=INT 1m mocha ./test.js --timeout 10000 --retries=60 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } -->
-Test that you can still access the `productpage` service through the Istio Ingress Gateway corresponding to the old revision using the command below:
-```shell
-curl -k "https:///productpage" -I
+
+
+## Lab 15 - Helm Cleanup Istio Revision
+
+Everything is working well with the new version, so we can uninstall the previous version.
+
+Let's start with the gateways:
+
+```bash
+helm uninstall istio-ingressgateway-1-23 \
+--namespace istio-gateways \
+--kube-context=${CLUSTER1}
+
+helm uninstall istio-eastwestgateway-1-23 \
+--namespace istio-gateways \
+--kube-context=${CLUSTER1}
+
+helm uninstall istio-ingressgateway-1-23 \
+--namespace istio-gateways \
+--kube-context=${CLUSTER2}
+
+helm uninstall istio-eastwestgateway-1-23 \
+--namespace istio-gateways \
+--kube-context=${CLUSTER2}
```
+
+
-```http,nocopy
-HTTP/2 200
-server: istio-envoy
-date: Wed, 24 Aug 2022 14:58:22 GMT
-content-type: application/json
-content-length: 670
-access-control-allow-origin: *
-access-control-allow-credentials: true
-x-envoy-upstream-service-time: 7
+And then the control plane:
+
+```bash
+helm uninstall istiod-1-23 \
+--namespace istio-system \
+--kube-context=${CLUSTER1}
+
+helm uninstall istiod-1-23 \
+--namespace istio-system \
+--kube-context=${CLUSTER2}
```
+
+Run the following command:
+
+```bash
+kubectl --context ${CLUSTER1} -n istio-system get pods && kubectl --context ${CLUSTER1} -n istio-gateways get pods
+```
+
+You should get the following output:
+
+```
+NAME READY STATUS RESTARTS AGE
+istiod-1-23-0-patch1-796fffbdf5-n6xc9 1/1 Running 0 25m
+NAME READY STATUS RESTARTS AGE
+istio-eastwestgateway-1-23-0-patch1-546446c77b-zg5hd 1/1 Running 0 25m
+istio-ingressgateway-1-23-0-patch1-784f69b4bb-lcfk9 1/1 Running 0 25m
+```
+
+This confirms that only the new version is running.
-
-## Lab 14 - Ambient Egress Traffic with Waypoint
+## Lab 16 - Ambient Egress Traffic with Waypoint
In this lab, we'll explore how to control and secure outbound traffic from your Ambient Mesh using Waypoints. We'll start by restricting all outgoing traffic from a specific namespace, then set up a shared Waypoint to manage egress traffic centrally. This approach allows for consistent policy enforcement across multiple services and namespaces.
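For a feel of what such a shared Waypoint looks like, it is just a Kubernetes `Gateway` using the `istio-waypoint` class (a hypothetical sketch — the name, namespace, and label value here are illustrative, not necessarily the ones this lab uses):

```shell
kubectl --context ${CLUSTER1} apply -f - <<EOF
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: egress-waypoint     # hypothetical name
  namespace: egress         # hypothetical namespace
  labels:
    istio.io/waypoint-for: all   # serve both service- and workload-addressed traffic
spec:
  gatewayClassName: istio-waypoint
  listeners:
  - name: mesh
    port: 15008
    protocol: HBONE
EOF
```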
@@ -3735,7 +3971,7 @@ kubectl --context ${CLUSTER1} delete authorizationpolicy httpbin -n egress -## Lab 15 - Waypoint Deployment Options +## Lab 17 - Waypoint Deployment Options This lab explores different ways to deploy Waypoints in Istio's Ambient Mesh. We'll learn about deploying Waypoints for services and for workloads. diff --git a/gloo-mesh/core/2-7/ambient/data/steps/deploy-kind-clusters/deploy-cluster1.sh b/gloo-mesh/core/2-7/ambient/data/steps/deploy-kind-clusters/deploy-cluster1.sh new file mode 100644 index 0000000000..3fda068282 --- /dev/null +++ b/gloo-mesh/core/2-7/ambient/data/steps/deploy-kind-clusters/deploy-cluster1.sh @@ -0,0 +1,292 @@ +#!/usr/bin/env bash +set -o errexit + +number="2" +name="cluster1" +region="" +zone="" +twodigits=$(printf "%02d\n" $number) + +kindest_node=${KINDEST_NODE} + +if [ -z "$kindest_node" ]; then + export k8s_version="1.28.0" + + [[ ${k8s_version::1} != 'v' ]] && export k8s_version=v${k8s_version} + kindest_node_ver=$(curl --silent "https://registry.hub.docker.com/v2/repositories/kindest/node/tags?page_size=100" \ + | jq -r '.results | .[] | select(.name==env.k8s_version) | .name+"@"+.digest') + + if [ -z "$kindest_node_ver" ]; then + echo "Incorrect Kubernetes version provided: ${k8s_version}." + exit 1 + fi + kindest_node=kindest/node:${kindest_node_ver} +fi +echo "Using KinD image: ${kindest_node}" + +if [ -z "$3" ]; then + case $name in + cluster1) + region=us-west-1 + ;; + cluster2) + region=us-west-2 + ;; + *) + region=us-east-1 + ;; + esac +fi + +if [ -z "$4" ]; then + case $name in + cluster1) + zone=us-west-1a + ;; + cluster2) + zone=us-west-2a + ;; + *) + zone=us-east-1a + ;; + esac +fi + +if hostname -I 2>/dev/null; then + myip=$(hostname -I | awk '{ print $1 }') +else + myip=$(ipconfig getifaddr en0) +fi + +# Function to determine the next available cluster number +get_next_cluster_number() { + if ! 
kind get clusters 2>&1 | grep "^kind" > /dev/null; then + echo 1 + else + highest_num=$(kind get clusters | grep "^kind" | tail -1 | cut -c 5-) + echo $((highest_num + 1)) + fi +} + +if [ -f /.dockerenv ]; then +myip=$HOST_IP +container=$(docker inspect $(docker ps -q) | jq -r ".[] | select(.Config.Hostname == \"$HOSTNAME\") | .Name" | cut -d/ -f2) +docker network connect "kind" $container || true +number=$(get_next_cluster_number) +twodigits=$(printf "%02d\n" $number) +fi + +reg_name='kind-registry' +reg_port='5000' +docker start "${reg_name}" 2>/dev/null || \ +docker run -d --restart=always -p "0.0.0.0:${reg_port}:5000" --name "${reg_name}" registry:2 + +cache_port='5000' +cat > registries < ${HOME}/.${cache_name}-config.yml </dev/null || \ +docker run -d --restart=always ${DEPLOY_EXTRA_PARAMS} -v ${HOME}/.${cache_name}-config.yml:/etc/docker/registry/config.yml --name "${cache_name}" registry:2 +done +mkdir -p /tmp/oidc + +cat <<'EOF' >/tmp/oidc/sa-signer-pkcs8.pub +-----BEGIN PUBLIC KEY----- +MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA53YiBcrn7+ZK0Vb4odeA +1riYdvEb8To4H6/HtF+OKzuCIXFQ+bRy7yMrDGITYpfYPrTZOgfdeTLZqOiAj+cL +395nvxdly83SUrdh7ItfOPRluuuiPHnFn111wpyjBw5nut4Kx+M5MksNfA1hU0Zw +zIM9OviX8iEF8xHWUtz4BAMDG8N6+zpLo0pAzaei5hKuLZ9dZOzHBC8VOW82cQMm +5X5uOKsCHMtNSjqYUNB1DxN6xxM+odGWT/6xthPGk6YCxmO28YHPFZfiS2eAIpD8 +2p/16KQKU6TkZSrldkYxiHIPhu+5f9faZJG7dB9pLN1SfdTBio4PK5Mz9muLUCv9 +ywIDAQAB +-----END PUBLIC KEY----- +EOF + +cat <<'EOF' >/tmp/oidc/sa-signer.key +-----BEGIN RSA PRIVATE KEY----- +MIIEpAIBAAKCAQEA53YiBcrn7+ZK0Vb4odeA1riYdvEb8To4H6/HtF+OKzuCIXFQ ++bRy7yMrDGITYpfYPrTZOgfdeTLZqOiAj+cL395nvxdly83SUrdh7ItfOPRluuui +PHnFn111wpyjBw5nut4Kx+M5MksNfA1hU0ZwzIM9OviX8iEF8xHWUtz4BAMDG8N6 ++zpLo0pAzaei5hKuLZ9dZOzHBC8VOW82cQMm5X5uOKsCHMtNSjqYUNB1DxN6xxM+ +odGWT/6xthPGk6YCxmO28YHPFZfiS2eAIpD82p/16KQKU6TkZSrldkYxiHIPhu+5 +f9faZJG7dB9pLN1SfdTBio4PK5Mz9muLUCv9ywIDAQABAoIBAB8tro+RMYUDRHjG +el9ypAxIeWEsQVNRQFYkW4ZUiNYSAgl3Ni0svX6xAg989peFVL+9pLVIcfDthJxY 
+FVlNCjBxyQ/YmwHFC9vQkARJEd6eLUXsj8INtS0ubbp1VxCQRDDL0C/0z7OSoJJh +SwboqjEiTJExA2a+RArmEDTBRzdi3t+kT8G23JcqOivrITt17K6bQYyJXw7/vUdc +r/R+hfd5TqVq92VddzDT7RNJAxsbPPXjGnESlq1GALBDs+uBGYsP0fiEJb2nicSv +z9fBnBeERhut1gcE0C0iLRQZb+3r8TitBtxrZv+0BHgXrkKtXDwWTqGEKOwC4dBn +7nxkH2ECgYEA6+/DOTABGYOWOQftFkJMjcugzDrjoGpuXuVOTb65T+3FHAzU93zy +3bt3wQxrlugluyy9Sc/PL3ck2LgUsPHZ+s7zsdGvvGALBD6bOSSKATz9JgjwifO8 +PgqUz1kXRwez2CtKLOOCFFtcIzEdWIzsa1ubNqLzgN7rD+XBkUc2uEcCgYEA+yTy +72EDMQVoIZOygytHsDNdy0iS2RsBbdurT27wkYuFpFUVWdbNSL+8haE+wJHseHcw +BD4WIMpU+hnS4p4OO8+6V7PiXOS5E/se91EJigZAoixgDUiC8ihojWgK9PYEavUo +hULWbayO59SxYWeUI4Ze0GP8Jw8vdB86ib4ulF0CgYEAgyzRuLjk05+iZODwQyDn +WSquov3W0rh51s7cw0LX2wWSQm8r9NGGYhs5kJ5sLwGxAKj2MNSWF4jBdrCZ6Gr+ +y4BGY0X209/+IAUC3jlfdSLIiF4OBlT6AvB1HfclhvtUVUp0OhLfnpvQ1UwYScRI +KcRLvovIoIzP2g3emfwjAz8CgYEAxUHhOhm1mwRHJNBQTuxok0HVMrze8n1eov39 +0RcvBvJSVp+pdHXdqX1HwqHCmxhCZuAeq8ZkNP8WvZYY6HwCbAIdt5MHgbT4lXQR +f2l8F5gPnhFCpExG5ZLNg/urV3oAQE4stHap21zEpdyOMhZb6Yc5424U+EzaFdgN +b3EcPtUCgYAkKvUlSnBbgiJz1iaN6fuTqH0efavuFGMhjNmG7GtpNXdgyl1OWIuc +Yu+tZtHXtKYf3B99GwPrFzw/7yfDwae5YeWmi2/pFTH96wv3brJBqkAWY8G5Rsmd +qF50p34vIFqUBniNRwSArx8t2dq/CuAMgLAtSjh70Q6ZAnCF85PD8Q== +-----END RSA PRIVATE KEY----- +EOF + +echo Contents of kind${number}.yaml +cat << EOF | tee kind${number}.yaml +kind: Cluster +apiVersion: kind.x-k8s.io/v1alpha4 +nodes: +- role: control-plane + image: ${kindest_node} + extraPortMappings: + - containerPort: 6443 + hostPort: 70${twodigits} + extraMounts: + - containerPath: /etc/kubernetes/oidc + hostPath: /tmp/oidc + labels: + ingress-ready: true + topology.kubernetes.io/region: ${region} + topology.kubernetes.io/zone: ${zone} +networking: + disableDefaultCNI: true + serviceSubnet: "10.$(echo $twodigits | sed 's/^0*//').0.0/16" + podSubnet: "10.1${twodigits}.0.0/16" +kubeadmConfigPatches: +- | + kind: ClusterConfiguration + apiServer: + extraArgs: + service-account-key-file: /etc/kubernetes/pki/sa.pub + service-account-key-file: 
/etc/kubernetes/oidc/sa-signer-pkcs8.pub + service-account-signing-key-file: /etc/kubernetes/oidc/sa-signer.key + service-account-issuer: https://solo-workshop-oidc.s3.us-east-1.amazonaws.com + api-audiences: sts.amazonaws.com + extraVolumes: + - name: oidc + hostPath: /etc/kubernetes/oidc + mountPath: /etc/kubernetes/oidc + readOnly: true + metadata: + name: config +containerdConfigPatches: +- |- + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:${reg_port}"] + endpoint = ["http://${reg_name}:${reg_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"] + endpoint = ["http://docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-docker.pkg.dev"] + endpoint = ["http://us-docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-central1-docker.pkg.dev"] + endpoint = ["http://us-central1-docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."quay.io"] + endpoint = ["http://quay:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"] + endpoint = ["http://gcr:${cache_port}"] +EOF +echo ----------------------------------------------------- + +kind create cluster --name kind${number} --config kind${number}.yaml +ipkind=$(docker inspect kind${number}-control-plane | jq -r '.[0].NetworkSettings.Networks[].IPAddress') +networkkind=$(echo ${ipkind} | awk -F. 
'{ print $1"."$2 }') +kubectl config set-cluster kind-kind${number} --server=https://${myip}:70${twodigits} --insecure-skip-tls-verify=true + +# Preload images +cat << EOF >> images.txt +quay.io/metallb/controller:v0.13.12 +quay.io/metallb/speaker:v0.13.12 +EOF +cat images.txt | while read image; do + docker pull $image || true + kind load docker-image $image --name kind${number} || true +done + +docker network connect "kind" "${reg_name}" || true +docker network connect "kind" docker || true +docker network connect "kind" us-docker || true +docker network connect "kind" us-central1-docker || true +docker network connect "kind" quay || true +docker network connect "kind" gcr || true +# Calico for ipv4 +curl -sL https://raw.githubusercontent.com/projectcalico/calico/v3.28.1/manifests/calico.yaml | sed 's/250m/50m/g' | kubectl --context kind-kind${number} apply -f - + +for i in 1 2 3 4 5; do kubectl --context=kind-kind${number} apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml && break || sleep 15; done +kubectl --context=kind-kind${number} create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)" +kubectl --context=kind-kind${number} -n metallb-system rollout status deploy controller || true + +cat << EOF | tee metallb${number}.yaml +apiVersion: metallb.io/v1beta1 +kind: IPAddressPool +metadata: + name: first-pool + namespace: metallb-system +spec: + addresses: + - ${networkkind}.1${twodigits}.1-${networkkind}.1${twodigits}.254 +--- +apiVersion: metallb.io/v1beta1 +kind: L2Advertisement +metadata: + name: empty + namespace: metallb-system +EOF + +printf "Create IPAddressPool in kind-kind${number}\n" +for i in {1..10}; do +kubectl --context=kind-kind${number} apply -f metallb${number}.yaml && break +sleep 2 +done + +# connect the registry to the cluster network if not already connected +printf "Renaming context kind-kind${number} to ${name}\n" +for i in {1..100}; do 
+ (kubectl config get-contexts -oname | grep ${name}) && break + kubectl config rename-context kind-kind${number} ${name} && break + printf " $i"/100 + sleep 2 + [ $i -lt 100 ] || exit 1 +done + +# Document the local registry +# https://github.com/kubernetes/enhancements/tree/master/keps/sig-cluster-lifecycle/generic/1755-communicating-a-local-registry +cat </dev/null; then + myip=$(hostname -I | awk '{ print $1 }') +else + myip=$(ipconfig getifaddr en0) +fi + +# Function to determine the next available cluster number +get_next_cluster_number() { + if ! kind get clusters 2>&1 | grep "^kind" > /dev/null; then + echo 1 + else + highest_num=$(kind get clusters | grep "^kind" | tail -1 | cut -c 5-) + echo $((highest_num + 1)) + fi +} + +if [ -f /.dockerenv ]; then +myip=$HOST_IP +container=$(docker inspect $(docker ps -q) | jq -r ".[] | select(.Config.Hostname == \"$HOSTNAME\") | .Name" | cut -d/ -f2) +docker network connect "kind" $container || true +number=$(get_next_cluster_number) +twodigits=$(printf "%02d\n" $number) +fi + +reg_name='kind-registry' +reg_port='5000' +docker start "${reg_name}" 2>/dev/null || \ +docker run -d --restart=always -p "0.0.0.0:${reg_port}:5000" --name "${reg_name}" registry:2 + +cache_port='5000' +cat > registries < ${HOME}/.${cache_name}-config.yml </dev/null || \ +docker run -d --restart=always ${DEPLOY_EXTRA_PARAMS} -v ${HOME}/.${cache_name}-config.yml:/etc/docker/registry/config.yml --name "${cache_name}" registry:2 +done +mkdir -p /tmp/oidc + +cat <<'EOF' >/tmp/oidc/sa-signer-pkcs8.pub +-----BEGIN PUBLIC KEY----- +MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA53YiBcrn7+ZK0Vb4odeA +1riYdvEb8To4H6/HtF+OKzuCIXFQ+bRy7yMrDGITYpfYPrTZOgfdeTLZqOiAj+cL +395nvxdly83SUrdh7ItfOPRluuuiPHnFn111wpyjBw5nut4Kx+M5MksNfA1hU0Zw +zIM9OviX8iEF8xHWUtz4BAMDG8N6+zpLo0pAzaei5hKuLZ9dZOzHBC8VOW82cQMm +5X5uOKsCHMtNSjqYUNB1DxN6xxM+odGWT/6xthPGk6YCxmO28YHPFZfiS2eAIpD8 +2p/16KQKU6TkZSrldkYxiHIPhu+5f9faZJG7dB9pLN1SfdTBio4PK5Mz9muLUCv9 +ywIDAQAB +-----END PUBLIC 
KEY----- +EOF + +cat <<'EOF' >/tmp/oidc/sa-signer.key +-----BEGIN RSA PRIVATE KEY----- +MIIEpAIBAAKCAQEA53YiBcrn7+ZK0Vb4odeA1riYdvEb8To4H6/HtF+OKzuCIXFQ ++bRy7yMrDGITYpfYPrTZOgfdeTLZqOiAj+cL395nvxdly83SUrdh7ItfOPRluuui +PHnFn111wpyjBw5nut4Kx+M5MksNfA1hU0ZwzIM9OviX8iEF8xHWUtz4BAMDG8N6 ++zpLo0pAzaei5hKuLZ9dZOzHBC8VOW82cQMm5X5uOKsCHMtNSjqYUNB1DxN6xxM+ +odGWT/6xthPGk6YCxmO28YHPFZfiS2eAIpD82p/16KQKU6TkZSrldkYxiHIPhu+5 +f9faZJG7dB9pLN1SfdTBio4PK5Mz9muLUCv9ywIDAQABAoIBAB8tro+RMYUDRHjG +el9ypAxIeWEsQVNRQFYkW4ZUiNYSAgl3Ni0svX6xAg989peFVL+9pLVIcfDthJxY +FVlNCjBxyQ/YmwHFC9vQkARJEd6eLUXsj8INtS0ubbp1VxCQRDDL0C/0z7OSoJJh +SwboqjEiTJExA2a+RArmEDTBRzdi3t+kT8G23JcqOivrITt17K6bQYyJXw7/vUdc +r/R+hfd5TqVq92VddzDT7RNJAxsbPPXjGnESlq1GALBDs+uBGYsP0fiEJb2nicSv +z9fBnBeERhut1gcE0C0iLRQZb+3r8TitBtxrZv+0BHgXrkKtXDwWTqGEKOwC4dBn +7nxkH2ECgYEA6+/DOTABGYOWOQftFkJMjcugzDrjoGpuXuVOTb65T+3FHAzU93zy +3bt3wQxrlugluyy9Sc/PL3ck2LgUsPHZ+s7zsdGvvGALBD6bOSSKATz9JgjwifO8 +PgqUz1kXRwez2CtKLOOCFFtcIzEdWIzsa1ubNqLzgN7rD+XBkUc2uEcCgYEA+yTy +72EDMQVoIZOygytHsDNdy0iS2RsBbdurT27wkYuFpFUVWdbNSL+8haE+wJHseHcw +BD4WIMpU+hnS4p4OO8+6V7PiXOS5E/se91EJigZAoixgDUiC8ihojWgK9PYEavUo +hULWbayO59SxYWeUI4Ze0GP8Jw8vdB86ib4ulF0CgYEAgyzRuLjk05+iZODwQyDn +WSquov3W0rh51s7cw0LX2wWSQm8r9NGGYhs5kJ5sLwGxAKj2MNSWF4jBdrCZ6Gr+ +y4BGY0X209/+IAUC3jlfdSLIiF4OBlT6AvB1HfclhvtUVUp0OhLfnpvQ1UwYScRI +KcRLvovIoIzP2g3emfwjAz8CgYEAxUHhOhm1mwRHJNBQTuxok0HVMrze8n1eov39 +0RcvBvJSVp+pdHXdqX1HwqHCmxhCZuAeq8ZkNP8WvZYY6HwCbAIdt5MHgbT4lXQR +f2l8F5gPnhFCpExG5ZLNg/urV3oAQE4stHap21zEpdyOMhZb6Yc5424U+EzaFdgN +b3EcPtUCgYAkKvUlSnBbgiJz1iaN6fuTqH0efavuFGMhjNmG7GtpNXdgyl1OWIuc +Yu+tZtHXtKYf3B99GwPrFzw/7yfDwae5YeWmi2/pFTH96wv3brJBqkAWY8G5Rsmd +qF50p34vIFqUBniNRwSArx8t2dq/CuAMgLAtSjh70Q6ZAnCF85PD8Q== +-----END RSA PRIVATE KEY----- +EOF + +echo Contents of kind${number}.yaml +cat << EOF | tee kind${number}.yaml +kind: Cluster +apiVersion: kind.x-k8s.io/v1alpha4 +nodes: +- role: control-plane + image: ${kindest_node} + extraPortMappings: + - containerPort: 6443 + 
hostPort: 70${twodigits} + extraMounts: + - containerPath: /etc/kubernetes/oidc + hostPath: /tmp/oidc + labels: + ingress-ready: true + topology.kubernetes.io/region: ${region} + topology.kubernetes.io/zone: ${zone} +networking: + disableDefaultCNI: true + serviceSubnet: "10.$(echo $twodigits | sed 's/^0*//').0.0/16" + podSubnet: "10.1${twodigits}.0.0/16" +kubeadmConfigPatches: +- | + kind: ClusterConfiguration + apiServer: + extraArgs: + service-account-key-file: /etc/kubernetes/pki/sa.pub + service-account-key-file: /etc/kubernetes/oidc/sa-signer-pkcs8.pub + service-account-signing-key-file: /etc/kubernetes/oidc/sa-signer.key + service-account-issuer: https://solo-workshop-oidc.s3.us-east-1.amazonaws.com + api-audiences: sts.amazonaws.com + extraVolumes: + - name: oidc + hostPath: /etc/kubernetes/oidc + mountPath: /etc/kubernetes/oidc + readOnly: true + metadata: + name: config +containerdConfigPatches: +- |- + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:${reg_port}"] + endpoint = ["http://${reg_name}:${reg_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"] + endpoint = ["http://docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-docker.pkg.dev"] + endpoint = ["http://us-docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-central1-docker.pkg.dev"] + endpoint = ["http://us-central1-docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."quay.io"] + endpoint = ["http://quay:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"] + endpoint = ["http://gcr:${cache_port}"] +EOF +echo ----------------------------------------------------- + +kind create cluster --name kind${number} --config kind${number}.yaml +ipkind=$(docker inspect kind${number}-control-plane | jq -r '.[0].NetworkSettings.Networks[].IPAddress') +networkkind=$(echo ${ipkind} | awk -F. 
'{ print $1"."$2 }') +kubectl config set-cluster kind-kind${number} --server=https://${myip}:70${twodigits} --insecure-skip-tls-verify=true + +# Preload images +cat << EOF >> images.txt +quay.io/metallb/controller:v0.13.12 +quay.io/metallb/speaker:v0.13.12 +EOF +cat images.txt | while read image; do + docker pull $image || true + kind load docker-image $image --name kind${number} || true +done + +docker network connect "kind" "${reg_name}" || true +docker network connect "kind" docker || true +docker network connect "kind" us-docker || true +docker network connect "kind" us-central1-docker || true +docker network connect "kind" quay || true +docker network connect "kind" gcr || true +# Calico for ipv4 +curl -sL https://raw.githubusercontent.com/projectcalico/calico/v3.28.1/manifests/calico.yaml | sed 's/250m/50m/g' | kubectl --context kind-kind${number} apply -f - + +for i in 1 2 3 4 5; do kubectl --context=kind-kind${number} apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml && break || sleep 15; done +kubectl --context=kind-kind${number} create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)" +kubectl --context=kind-kind${number} -n metallb-system rollout status deploy controller || true + +cat << EOF | tee metallb${number}.yaml +apiVersion: metallb.io/v1beta1 +kind: IPAddressPool +metadata: + name: first-pool + namespace: metallb-system +spec: + addresses: + - ${networkkind}.1${twodigits}.1-${networkkind}.1${twodigits}.254 +--- +apiVersion: metallb.io/v1beta1 +kind: L2Advertisement +metadata: + name: empty + namespace: metallb-system +EOF + +printf "Create IPAddressPool in kind-kind${number}\n" +for i in {1..10}; do +kubectl --context=kind-kind${number} apply -f metallb${number}.yaml && break +sleep 2 +done + +# connect the registry to the cluster network if not already connected +printf "Renaming context kind-kind${number} to ${name}\n" +for i in {1..100}; do 
+ (kubectl config get-contexts -oname | grep ${name}) && break + kubectl config rename-context kind-kind${number} ${name} && break + printf " $i"/100 + sleep 2 + [ $i -lt 100 ] || exit 1 +done + +# Document the local registry +# https://github.com/kubernetes/enhancements/tree/master/keps/sig-cluster-lifecycle/generic/1755-communicating-a-local-registry +cat </dev/null || true +source ./scripts/assert.sh +export MGMT=mgmt +export CLUSTER1=cluster1 +export CLUSTER2=cluster2 +bash ./data/steps/deploy-kind-clusters/deploy-mgmt.sh +bash ./data/steps/deploy-kind-clusters/deploy-cluster1.sh +bash ./data/steps/deploy-kind-clusters/deploy-cluster2.sh +./scripts/check.sh mgmt +./scripts/check.sh cluster1 +./scripts/check.sh cluster2 +kubectl config use-context ${MGMT} +cat <<'EOF' > ./test.js +const helpers = require('./tests/chai-exec'); + +describe("Clusters are healthy", () => { + const clusters = ["mgmt", "cluster1", "cluster2"]; + + clusters.forEach(cluster => { + it(`Cluster ${cluster} is healthy`, () => helpers.k8sObjectIsPresent({ context: cluster, namespace: "default", k8sType: "service", k8sObj: "kubernetes" })); + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-kind-clusters/tests/cluster-healthy.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +export GLOO_MESH_VERSION=v2.7.0-beta1 +curl -sL https://run.solo.io/meshctl/install | sh - +export PATH=$HOME/.gloo-mesh/bin:$PATH +cat <<'EOF' > ./test.js +var chai = require('chai'); +var expect = chai.expect; + +describe("Required environment variables should contain value", () => { + afterEach(function(done){ + if(this.currentTest.currentRetry() > 0){ + process.stdout.write("."); + setTimeout(done, 1000); + } else { + done(); + } + }); + + it("Context environment variables should not be empty", () => { + expect(process.env.MGMT).not.to.be.empty + 
expect(process.env.CLUSTER1).not.to.be.empty + expect(process.env.CLUSTER2).not.to.be.empty + }); + + it("Gloo Mesh licence environment variables should not be empty", () => { + expect(process.env.GLOO_MESH_LICENSE_KEY).not.to.be.empty + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-and-register-gloo-mesh/tests/environment-variables.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context ${MGMT} create ns gloo-mesh + +helm upgrade --install gloo-platform-crds gloo-platform-crds \ + --repo https://storage.googleapis.com/gloo-platform/helm-charts \ + --namespace gloo-mesh \ + --kube-context ${MGMT} \ + --set featureGates.insightsConfiguration=true \ + --version 2.7.0-beta1 + +helm upgrade --install gloo-platform gloo-platform \ + --repo https://storage.googleapis.com/gloo-platform/helm-charts \ + --namespace gloo-mesh \ + --kube-context ${MGMT} \ + --version 2.7.0-beta1 \ + -f -< ./test.js + +const helpers = require('./tests/chai-exec'); + +describe("MGMT server is healthy", () => { + let cluster = process.env.MGMT; + let deployments = ["gloo-mesh-mgmt-server","gloo-mesh-redis","gloo-telemetry-gateway","prometheus-server"]; + deployments.forEach(deploy => { + it(deploy + ' pods are ready in ' + cluster, () => helpers.checkDeployment({ context: cluster, namespace: "gloo-mesh", k8sObj: deploy })); + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-and-register-gloo-mesh/tests/check-deployment.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +cat <<'EOF' > ./test.js +const chaiExec = require("@jsdevtools/chai-exec"); +var chai = require('chai'); +var expect = chai.expect; +chai.use(chaiExec); + +afterEach(function (done) { + if 
(this.currentTest.currentRetry() > 0) { + process.stdout.write("."); + setTimeout(done, 1000); + } else { + done(); + } +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-and-register-gloo-mesh/tests/get-gloo-mesh-mgmt-server-ip.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +export ENDPOINT_GLOO_MESH=$(kubectl --context ${MGMT} -n gloo-mesh get svc gloo-mesh-mgmt-server -o jsonpath='{.status.loadBalancer.ingress[0].*}'):9900 +export HOST_GLOO_MESH=$(echo ${ENDPOINT_GLOO_MESH%:*}) +export ENDPOINT_TELEMETRY_GATEWAY=$(kubectl --context ${MGMT} -n gloo-mesh get svc gloo-telemetry-gateway -o jsonpath='{.status.loadBalancer.ingress[0].*}'):4317 +export ENDPOINT_GLOO_MESH_UI=$(kubectl --context ${MGMT} -n gloo-mesh get svc gloo-mesh-ui -o jsonpath='{.status.loadBalancer.ingress[0].*}'):8090 +cat <<'EOF' > ./test.js +const dns = require('dns'); +const chaiHttp = require("chai-http"); +const chai = require("chai"); +const expect = chai.expect; +chai.use(chaiHttp); +const { waitOnFailedTest } = require('./tests/utils'); + +afterEach(function(done) { waitOnFailedTest(done, this.currentTest.currentRetry())}); + +describe("Address '" + process.env.HOST_GLOO_MESH + "' can be resolved in DNS", () => { + it(process.env.HOST_GLOO_MESH + ' can be resolved', (done) => { + return dns.lookup(process.env.HOST_GLOO_MESH, (err, address, family) => { + expect(address).to.be.an.ip; + done(); + }); + }); +}); +EOF +echo "executing test ./gloo-mesh-2-0/tests/can-resolve.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl apply --context ${MGMT} -f - < ca.crt +kubectl create secret generic relay-root-tls-secret -n gloo-mesh --context ${CLUSTER1} --from-file ca.crt=ca.crt +rm ca.crt + +kubectl get secret relay-identity-token-secret 
-n gloo-mesh --context ${MGMT} -o jsonpath='{.data.token}' | base64 -d > token +kubectl create secret generic relay-identity-token-secret -n gloo-mesh --context ${CLUSTER1} --from-file token=token +rm token + +helm upgrade --install gloo-platform-crds gloo-platform-crds \ + --repo https://storage.googleapis.com/gloo-platform/helm-charts \ + --namespace gloo-mesh \ + --kube-context ${CLUSTER1} \ + --version 2.7.0-beta1 + +helm upgrade --install gloo-platform gloo-platform \ + --repo https://storage.googleapis.com/gloo-platform/helm-charts \ + --namespace gloo-mesh \ + --kube-context ${CLUSTER1} \ + --version 2.7.0-beta1 \ + -f -< ca.crt +kubectl create secret generic relay-root-tls-secret -n gloo-mesh --context ${CLUSTER2} --from-file ca.crt=ca.crt +rm ca.crt + +kubectl get secret relay-identity-token-secret -n gloo-mesh --context ${MGMT} -o jsonpath='{.data.token}' | base64 -d > token +kubectl create secret generic relay-identity-token-secret -n gloo-mesh --context ${CLUSTER2} --from-file token=token +rm token + +helm upgrade --install gloo-platform-crds gloo-platform-crds \ + --repo https://storage.googleapis.com/gloo-platform/helm-charts \ + --namespace gloo-mesh \ + --kube-context ${CLUSTER2} \ + --version 2.7.0-beta1 + +helm upgrade --install gloo-platform gloo-platform \ + --repo https://storage.googleapis.com/gloo-platform/helm-charts \ + --namespace gloo-mesh \ + --kube-context ${CLUSTER2} \ + --version 2.7.0-beta1 \ + -f -< ./test.js +var chai = require('chai'); +var expect = chai.expect; +const helpers = require('./tests/chai-exec'); +describe("Cluster registration", () => { + it("cluster1 is registered", () => { + podName = helpers.getOutputForCommand({ command: "kubectl -n gloo-mesh get pods -l app=gloo-mesh-mgmt-server -o jsonpath='{.items[0].metadata.name}' --context " + process.env.MGMT }).replaceAll("'", ""); + command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh debug -q -i " + podName + " 
--image=curlimages/curl -- curl -s http://localhost:9091/metrics" }).replaceAll("'", ""); + expect(command).to.contain("cluster1"); + }); + it("cluster2 is registered", () => { + podName = helpers.getOutputForCommand({ command: "kubectl -n gloo-mesh get pods -l app=gloo-mesh-mgmt-server -o jsonpath='{.items[0].metadata.name}' --context " + process.env.MGMT }).replaceAll("'", ""); + command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh debug -q -i " + podName + " --image=curlimages/curl -- curl -s http://localhost:9091/metrics" }).replaceAll("'", ""); + expect(command).to.contain("cluster2"); + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-and-register-gloo-mesh/tests/cluster-registration.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +curl -L https://istio.io/downloadIstio | sh - + +if [ -d "istio-"*/ ]; then + cd istio-*/ + export PATH=$PWD/bin:$PATH + cd .. 
+fi +cat <<'EOF' > ./test.js +const chaiExec = require("@jsdevtools/chai-exec"); +var chai = require('chai'); +var expect = chai.expect; +chai.use(chaiExec); + +afterEach(function (done) { + if (this.currentTest.currentRetry() > 0) { + process.stdout.write("."); + setTimeout(done, 1000); + } else { + done(); + } +}); +describe("istio_version is at least 1.23.0", () => { + it("version should be at least 1.23.0", () => { + // Compare the string istio_version to the number 1.23.0 + // example 1.23.0-patch0 is valid, but 1.22.6 is not + let version = "1.24.1-patch1-distroless"; + let versionParts = version.split('-')[0].split('.'); + let major = parseInt(versionParts[0]); + let minor = parseInt(versionParts[1]); + let patch = parseInt(versionParts[2]); + let minMajor = 1; + let minMinor = 23; + let minPatch = 0; + expect(major).to.be.at.least(minMajor); + if (major === minMajor) { + expect(minor).to.be.at.least(minMinor); + if (minor === minMinor) { + expect(patch).to.be.at.least(minPatch); + } + } + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-istio-helm/tests/istio-version.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context ${CLUSTER1} create ns istio-gateways + +kubectl apply --context ${CLUSTER1} -f - </base \ +--namespace istio-system \ +--kube-context=${CLUSTER1} \ +--version 1.24.1-patch1-solo-distroless \ +--create-namespace \ +-f - </istiod \ +--namespace istio-system \ +--kube-context=${CLUSTER1} \ +--version 1.24.1-patch1-solo-distroless \ +--create-namespace \ +-f - < + proxy: + clusterDomain: cluster.local + tag: 1.24.1-patch1-solo-distroless + multiCluster: + clusterName: cluster1 +profile: ambient +revision: 1-23 +istio_cni: + enabled: true +meshConfig: + accessLogFile: /dev/stdout + defaultConfig: + proxyMetadata: + ISTIO_META_DNS_AUTO_ALLOCATE: "true" + ISTIO_META_DNS_CAPTURE: 
"true" + trustDomain: cluster1 +pilot: + enabled: true + env: + PILOT_ENABLE_IP_AUTOALLOCATE: "true" + PILOT_ENABLE_K8S_SELECT_WORKLOAD_ENTRIES: "false" + PILOT_SKIP_VALIDATE_TRUST_DOMAIN: "true" +EOF + +helm upgrade --install istio-cni oci://us-docker.pkg.dev/gloo-mesh/istio-helm-/cni \ +--namespace kube-system \ +--kube-context=${CLUSTER1} \ +--version 1.24.1-patch1-solo-distroless \ +--create-namespace \ +-f - < + proxy: 1.24.1-patch1-solo-distroless +profile: ambient +revision: 1-23 +cni: + ambient: + dnsCapture: true + excludeNamespaces: + - istio-system + - kube-system +EOF + +helm upgrade --install ztunnel oci://us-docker.pkg.dev/gloo-mesh/istio-helm-/ztunnel \ +--namespace istio-system \ +--kube-context=${CLUSTER1} \ +--version 1.24.1-patch1-solo-distroless \ +--create-namespace \ +-f - < +istioNamespace: istio-system +multiCluster: + clusterName: cluster1 +namespace: istio-system +profile: ambient +proxy: + clusterDomain: cluster.local +tag: 1.24.1-patch1-solo-distroless +terminationGracePeriodSeconds: 29 +variant: distroless +EOF + +helm upgrade --install istio-ingressgateway-1-23 oci://us-docker.pkg.dev/gloo-mesh/istio-helm-/gateway \ +--namespace istio-gateways \ +--kube-context=${CLUSTER1} \ +--version 1.24.1-patch1-solo-distroless \ +--create-namespace \ +-f - </gateway \ +--namespace istio-gateways \ +--kube-context=${CLUSTER1} \ +--version 1.24.1-patch1-solo-distroless \ +--create-namespace \ +-f - < /dev/null || \ + { kubectl kustomize "github.com/kubernetes-sigs/gateway-api/config/crd?ref=v1.1.0" | kubectl --context ${CLUSTER1} apply -f -; } +kubectl --context ${CLUSTER2} get crd gateways.gateway.networking.k8s.io &> /dev/null || \ + { kubectl kustomize "github.com/kubernetes-sigs/gateway-api/config/crd?ref=v1.1.0" | kubectl --context ${CLUSTER2} apply -f -; } +helm upgrade --install istio-base oci://us-docker.pkg.dev/gloo-mesh/istio-helm-/base \ +--namespace istio-system \ +--kube-context=${CLUSTER2} \ +--version 1.24.1-patch1-solo-distroless \ 
+--create-namespace \ +-f - </istiod \ +--namespace istio-system \ +--kube-context=${CLUSTER2} \ +--version 1.24.1-patch1-solo-distroless \ +--create-namespace \ +-f - < + proxy: + clusterDomain: cluster.local + tag: 1.24.1-patch1-solo-distroless + multiCluster: + clusterName: cluster2 +profile: ambient +revision: 1-23 +istio_cni: + enabled: true +meshConfig: + accessLogFile: /dev/stdout + defaultConfig: + proxyMetadata: + ISTIO_META_DNS_AUTO_ALLOCATE: "true" + ISTIO_META_DNS_CAPTURE: "true" + trustDomain: cluster2 +pilot: + enabled: true + env: + PILOT_ENABLE_IP_AUTOALLOCATE: "true" + PILOT_ENABLE_K8S_SELECT_WORKLOAD_ENTRIES: "false" + PILOT_SKIP_VALIDATE_TRUST_DOMAIN: "true" +EOF + +helm upgrade --install istio-cni oci://us-docker.pkg.dev/gloo-mesh/istio-helm-/cni \ +--namespace kube-system \ +--kube-context=${CLUSTER2} \ +--version 1.24.1-patch1-solo-distroless \ +--create-namespace \ +-f - < + proxy: 1.24.1-patch1-solo-distroless +profile: ambient +revision: 1-23 +cni: + ambient: + dnsCapture: true + excludeNamespaces: + - istio-system + - kube-system +EOF + +helm upgrade --install ztunnel oci://us-docker.pkg.dev/gloo-mesh/istio-helm-/ztunnel \ +--namespace istio-system \ +--kube-context=${CLUSTER2} \ +--version 1.24.1-patch1-solo-distroless \ +--create-namespace \ +-f - < +istioNamespace: istio-system +multiCluster: + clusterName: cluster2 +namespace: istio-system +profile: ambient +proxy: + clusterDomain: cluster.local +tag: 1.24.1-patch1-solo-distroless +terminationGracePeriodSeconds: 29 +variant: distroless +EOF + +helm upgrade --install istio-ingressgateway-1-23 oci://us-docker.pkg.dev/gloo-mesh/istio-helm-/gateway \ +--namespace istio-gateways \ +--kube-context=${CLUSTER2} \ +--version 1.24.1-patch1-solo-distroless \ +--create-namespace \ +-f - </gateway \ +--namespace istio-gateways \ +--kube-context=${CLUSTER2} \ +--version 1.24.1-patch1-solo-distroless \ +--create-namespace \ +-f - < /dev/null || \ + { kubectl kustomize 
"github.com/kubernetes-sigs/gateway-api/config/crd?ref=v1.1.0" | kubectl --context ${CLUSTER1} apply -f -; } +kubectl --context ${CLUSTER2} get crd gateways.gateway.networking.k8s.io &> /dev/null || \ + { kubectl kustomize "github.com/kubernetes-sigs/gateway-api/config/crd?ref=v1.1.0" | kubectl --context ${CLUSTER2} apply -f -; } +cat <<'EOF' > ./test.js + +const helpers = require('./tests/chai-exec'); + +const chaiExec = require("@jsdevtools/chai-exec"); +const helpersHttp = require('./tests/chai-http'); +const chai = require("chai"); +const expect = chai.expect; + +afterEach(function (done) { + if (this.currentTest.currentRetry() > 0) { + process.stdout.write("."); + setTimeout(done, 1000); + } else { + done(); + } +}); + +describe("Checking Istio installation", function() { + it('istiod pods are ready in cluster ' + process.env.CLUSTER1, () => helpers.checkDeploymentsWithLabels({ context: process.env.CLUSTER1, namespace: "istio-system", labels: "app=istiod", instances: 1 })); + it('gateway pods are ready in cluster ' + process.env.CLUSTER1, () => helpers.checkDeploymentsWithLabels({ context: process.env.CLUSTER1, namespace: "istio-gateways", labels: "app=istio-ingressgateway", instances: 2 })); + it('istiod pods are ready in cluster ' + process.env.CLUSTER2, () => helpers.checkDeploymentsWithLabels({ context: process.env.CLUSTER2, namespace: "istio-system", labels: "app=istiod", instances: 1 })); + it('gateway pods are ready in cluster ' + process.env.CLUSTER2, () => helpers.checkDeploymentsWithLabels({ context: process.env.CLUSTER2, namespace: "istio-gateways", labels: "app=istio-ingressgateway", instances: 2 })); + it("Gateways have an ip attached in cluster " + process.env.CLUSTER1, () => { + let cli = chaiExec("kubectl --context " + process.env.CLUSTER1 + " -n istio-gateways get svc -l app=istio-ingressgateway -o jsonpath='{.items}'"); + cli.stderr.should.be.empty; + let deployments = JSON.parse(cli.stdout.slice(1,-1)); + 
expect(deployments).to.have.lengthOf(2); + deployments.forEach((deployment) => { + expect(deployment.status.loadBalancer).to.have.property("ingress"); + }); + }); + it("Gateways have an ip attached in cluster " + process.env.CLUSTER2, () => { + let cli = chaiExec("kubectl --context " + process.env.CLUSTER2 + " -n istio-gateways get svc -l app=istio-ingressgateway -o jsonpath='{.items}'"); + cli.stderr.should.be.empty; + let deployments = JSON.parse(cli.stdout.slice(1,-1)); + expect(deployments).to.have.lengthOf(2); + deployments.forEach((deployment) => { + expect(deployment.status.loadBalancer).to.have.property("ingress"); + }); + }); +}); + +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-istio-helm/tests/istio-ready.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +timeout 2m bash -c "until [[ \$(kubectl --context ${CLUSTER1} -n istio-gateways get svc -l istio=ingressgateway -o json | jq '.items[0].status.loadBalancer | length') -gt 0 ]]; do + sleep 1 +done" +export HOST_GW_CLUSTER1="$(kubectl --context ${CLUSTER1} -n istio-gateways get svc -l istio=ingressgateway -o jsonpath='{.items[0].status.loadBalancer.ingress[0].*}')" +export HOST_GW_CLUSTER2="$(kubectl --context ${CLUSTER2} -n istio-gateways get svc -l istio=ingressgateway -o jsonpath='{.items[0].status.loadBalancer.ingress[0].*}')" +cat <<'EOF' > ./test.js +const dns = require('dns'); +const chaiHttp = require("chai-http"); +const chai = require("chai"); +const expect = chai.expect; +chai.use(chaiHttp); +const { waitOnFailedTest } = require('./tests/utils'); + +afterEach(function(done) { waitOnFailedTest(done, this.currentTest.currentRetry())}); + +describe("Address '" + process.env.HOST_GW_CLUSTER1 + "' can be resolved in DNS", () => { + it(process.env.HOST_GW_CLUSTER1 + ' can be resolved', (done) => { + return dns.lookup(process.env.HOST_GW_CLUSTER1, (err, 
address, family) => { + expect(address).to.be.an.ip; + done(); + }); + }); +}); +EOF +echo "executing test ./default/tests/can-resolve.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +cat <<'EOF' > ./test.js +const dns = require('dns'); +const chaiHttp = require("chai-http"); +const chai = require("chai"); +const expect = chai.expect; +chai.use(chaiHttp); +const { waitOnFailedTest } = require('./tests/utils'); + +afterEach(function(done) { waitOnFailedTest(done, this.currentTest.currentRetry())}); + +describe("Address '" + process.env.HOST_GW_CLUSTER2 + "' can be resolved in DNS", () => { + it(process.env.HOST_GW_CLUSTER2 + ' can be resolved', (done) => { + return dns.lookup(process.env.HOST_GW_CLUSTER2, (err, address, family) => { + expect(address).to.be.an.ip; + done(); + }); + }); +}); +EOF +echo "executing test ./default/tests/can-resolve.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context ${CLUSTER1} create ns bookinfo-frontends +kubectl --context ${CLUSTER1} create ns bookinfo-backends +kubectl --context ${CLUSTER1} label namespace bookinfo-frontends istio.io/dataplane-mode=ambient +kubectl --context ${CLUSTER1} label namespace bookinfo-backends istio.io/dataplane-mode=ambient +kubectl --context ${CLUSTER1} label namespace bookinfo-frontends istio-injection=disabled +kubectl --context ${CLUSTER1} label namespace bookinfo-backends istio-injection=disabled +kubectl --context ${CLUSTER1} label namespace bookinfo-frontends istio.io/rev=1-23 --overwrite +kubectl --context ${CLUSTER1} label namespace bookinfo-backends istio.io/rev=1-23 --overwrite + +# Deploy the frontend bookinfo service in the bookinfo-frontends namespace +kubectl --context ${CLUSTER1} -n bookinfo-frontends apply -f data/steps/deploy-bookinfo/productpage-v1.yaml + 
+# Deploy the backend bookinfo services in the bookinfo-backends namespace for all versions less than v3 +kubectl --context ${CLUSTER1} -n bookinfo-backends apply \ + -f data/steps/deploy-bookinfo/details-v1.yaml \ + -f data/steps/deploy-bookinfo/ratings-v1.yaml \ + -f data/steps/deploy-bookinfo/reviews-v1-v2.yaml + +# Update the reviews service to display where it is coming from +kubectl --context ${CLUSTER1} -n bookinfo-backends set env deploy/reviews-v1 CLUSTER_NAME=${CLUSTER1} +kubectl --context ${CLUSTER1} -n bookinfo-backends set env deploy/reviews-v2 CLUSTER_NAME=${CLUSTER1} +echo -n Waiting for bookinfo pods to be ready... +timeout -v 5m bash -c " +until [[ \$(kubectl --context ${CLUSTER1} -n bookinfo-frontends get deploy -o json | jq '[.items[].status.readyReplicas] | add') -eq 1 && \\ + \$(kubectl --context ${CLUSTER1} -n bookinfo-backends get deploy -o json | jq '[.items[].status.readyReplicas] | add') -eq 4 ]] 2>/dev/null +do + sleep 1 + echo -n . +done" +echo +kubectl --context ${CLUSTER2} create ns bookinfo-frontends +kubectl --context ${CLUSTER2} create ns bookinfo-backends +kubectl --context ${CLUSTER2} label namespace bookinfo-frontends istio.io/dataplane-mode=ambient +kubectl --context ${CLUSTER2} label namespace bookinfo-backends istio.io/dataplane-mode=ambient +kubectl --context ${CLUSTER2} label namespace bookinfo-frontends istio-injection=disabled +kubectl --context ${CLUSTER2} label namespace bookinfo-backends istio-injection=disabled +kubectl --context ${CLUSTER2} label namespace bookinfo-frontends istio.io/rev=1-23 --overwrite +kubectl --context ${CLUSTER2} label namespace bookinfo-backends istio.io/rev=1-23 --overwrite + +# Deploy the frontend bookinfo service in the bookinfo-frontends namespace +kubectl --context ${CLUSTER2} -n bookinfo-frontends apply -f data/steps/deploy-bookinfo/productpage-v1.yaml +# Deploy the backend bookinfo services in the bookinfo-backends namespace for all versions +kubectl --context ${CLUSTER2} -n 
bookinfo-backends apply \ + -f data/steps/deploy-bookinfo/details-v1.yaml \ + -f data/steps/deploy-bookinfo/ratings-v1.yaml \ + -f data/steps/deploy-bookinfo/reviews-v1-v2.yaml \ + -f data/steps/deploy-bookinfo/reviews-v3.yaml +# Update the reviews service to display where it is coming from +kubectl --context ${CLUSTER2} -n bookinfo-backends set env deploy/reviews-v1 CLUSTER_NAME=${CLUSTER2} +kubectl --context ${CLUSTER2} -n bookinfo-backends set env deploy/reviews-v2 CLUSTER_NAME=${CLUSTER2} +kubectl --context ${CLUSTER2} -n bookinfo-backends set env deploy/reviews-v3 CLUSTER_NAME=${CLUSTER2} + +echo -n Waiting for bookinfo pods to be ready... +timeout -v 5m bash -c " +until [[ \$(kubectl --context ${CLUSTER2} -n bookinfo-frontends get deploy -o json | jq '[.items[].status.readyReplicas] | add') -eq 1 && \\ + \$(kubectl --context ${CLUSTER2} -n bookinfo-backends get deploy -o json | jq '[.items[].status.readyReplicas] | add') -eq 5 ]] 2>/dev/null +do + sleep 1 + echo -n . +done" +echo +kubectl --context ${CLUSTER2} -n bookinfo-frontends get pods && kubectl --context ${CLUSTER2} -n bookinfo-backends get pods +cat <<'EOF' > ./test.js +const helpers = require('./tests/chai-exec'); + +describe("Bookinfo app", () => { + let cluster = process.env.CLUSTER1 + let deployments = ["productpage-v1"]; + deployments.forEach(deploy => { + it(deploy + ' pods are ready in ' + cluster, () => helpers.checkDeployment({ context: cluster, namespace: "bookinfo-frontends", k8sObj: deploy })); + }); + deployments = ["ratings-v1", "details-v1", "reviews-v1", "reviews-v2"]; + deployments.forEach(deploy => { + it(deploy + ' pods are ready in ' + cluster, () => helpers.checkDeployment({ context: cluster, namespace: "bookinfo-backends", k8sObj: deploy })); + }); + cluster = process.env.CLUSTER2 + deployments = ["productpage-v1"]; + deployments.forEach(deploy => { + it(deploy + ' pods are ready in ' + cluster, () => helpers.checkDeployment({ context: cluster, namespace: "bookinfo-frontends", 
k8sObj: deploy })); + }); + deployments = ["ratings-v1", "details-v1", "reviews-v1", "reviews-v2", "reviews-v3"]; + deployments.forEach(deploy => { + it(deploy + ' pods are ready in ' + cluster, () => helpers.checkDeployment({ context: cluster, namespace: "bookinfo-backends", k8sObj: deploy })); + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/deploy-bookinfo/tests/check-bookinfo.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context ${CLUSTER1} create ns httpbin +kubectl --context ${CLUSTER1} label namespace httpbin istio.io/dataplane-mode=ambient +kubectl --context ${CLUSTER1} label namespace httpbin istio.io/rev=1-23 +kubectl apply --context ${CLUSTER1} -f - </dev/null +do + sleep 1 + echo -n . +done" +echo +kubectl apply --context ${CLUSTER1} -f - < ./test.js +const helpers = require('./tests/chai-exec'); + +describe("httpbin app", () => { + let cluster = process.env.CLUSTER1 + + let deployments = ["not-in-mesh", "in-mesh"]; + + deployments.forEach(deploy => { + it(deploy + ' pods are ready in ' + cluster, () => helpers.checkDeployment({ context: cluster, namespace: "httpbin", k8sObj: deploy })); + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/httpbin/deploy-httpbin/tests/check-httpbin.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context ${CLUSTER1} create ns clients + +kubectl apply --context ${CLUSTER1} -f - </dev/null +do + sleep 1 + echo -n . 
+done" +echo +kubectl apply --context ${CLUSTER1} -f - < ./test.js +const helpers = require('./tests/chai-exec'); + +describe("client apps", () => { + let cluster = process.env.CLUSTER1 + + let deployments = ["not-in-mesh", "in-mesh-with-sidecar", "in-ambient"]; + + deployments.forEach(deploy => { + it(deploy + ' pods are ready in ' + cluster, () => helpers.checkDeployment({ context: cluster, namespace: "clients", k8sObj: deploy })); + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/clients/deploy-clients/tests/check-clients.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl apply --context ${CLUSTER1} -f - < ./test.js +const helpers = require('./tests/chai-http'); + +describe("productpage is available (HTTP)", () => { + it('/productpage is available in cluster1', () => helpers.checkURL({ host: `http://cluster1-bookinfo.example.com`, path: '/productpage', retCode: 200 })); +}) +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/gateway-expose-istio/tests/productpage-available.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +openssl req -x509 -nodes -days 365 -newkey rsa:2048 \ + -keyout tls.key -out tls.crt -subj "/CN=*" +kubectl --context ${CLUSTER1} -n istio-gateways create secret generic tls-secret \ +--from-file=tls.key=tls.key \ +--from-file=tls.crt=tls.crt + +kubectl --context ${CLUSTER2} -n istio-gateways create secret generic tls-secret \ +--from-file=tls.key=tls.key \ +--from-file=tls.crt=tls.crt +kubectl apply --context ${CLUSTER1} -f - < ./test.js +const helpers = require('./tests/chai-http'); + +describe("productpage is available (HTTPS)", () => { + it('/productpage is available in cluster1', () => helpers.checkURL({ host: 
`https://cluster1-bookinfo.example.com`, path: '/productpage', retCode: 200 })); +}) +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/gateway-expose-istio/tests/productpage-available-secure.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +cat <<'EOF' > ./test.js +var chai = require('chai'); +var expect = chai.expect; +const helpers = require('./tests/chai-exec'); + +describe("Otel metrics", () => { + it("cluster1 is sending metrics to telemetryGateway", () => { + podName = helpers.getOutputForCommand({ command: "kubectl -n gloo-mesh get pods -l app.kubernetes.io/name=prometheus -o jsonpath='{.items[0].metadata.name}' --context " + process.env.MGMT }).replaceAll("'", ""); + command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh debug -q -i " + podName + " --image=curlimages/curl -- curl -s http://localhost:9090/api/v1/query?query=istio_requests_total" }).replaceAll("'", ""); + expect(command).to.contain("cluster\":\"cluster1"); + }); +}); + + +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/gateway-expose-istio/tests/otel-metrics.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=150 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +cat <<'EOF' > ./test.js +const helpers = require('./tests/chai-http'); +const puppeteer = require('puppeteer'); +const chai = require('chai'); +const expect = chai.expect; +const GraphPage = require('./tests/pages/gloo-ui/graph-page'); +const { recognizeTextFromScreenshot } = require('./tests/utils/image-ocr-processor'); +const { enhanceBrowser } = require('./tests/utils/enhance-browser'); + +afterEach(function (done) { + if (this.currentTest.currentRetry() > 0) { + process.stdout.write("."); + setTimeout(done, 4000); + } else { + done(); + } 
+}); + +describe("graph page", function () { + // UI tests often require a longer timeout. + // So here we force it to a minimum of 30 seconds. + const currentTimeout = this.timeout(); + this.timeout(Math.max(currentTimeout, 30000)); + + let browser; + let page; + let graphPage; + + beforeEach(async function () { + browser = await puppeteer.launch({ + headless: "new", + slowMo: 40, + ignoreHTTPSErrors: true, + args: ['--no-sandbox', '--disable-setuid-sandbox'], + }); + browser = enhanceBrowser(browser, this.currentTest.title); + page = await browser.newPage(); + graphPage = new GraphPage(page); + await Promise.all(Array.from({ length: 20 }, () => + helpers.checkURL({ host: `https://cluster1-bookinfo.example.com`, path: '/productpage', retCode: 200 }))); + }); + + afterEach(async function () { + await browser.close(); + }); + + it("should show ingress gateway and product page", async function () { + await graphPage.navigateTo(`http://${process.env.ENDPOINT_GLOO_MESH_UI}/graph`); + + // Select the clusters and namespaces so that the graph shows + await graphPage.selectClusters(['cluster1', 'cluster2']); + await graphPage.selectNamespaces(['istio-gateways', 'bookinfo-backends', 'bookinfo-frontends']); + // Disabling Cilium nodes due to this issue: https://github.com/solo-io/gloo-mesh-enterprise/issues/18623 + await graphPage.toggleLayoutSettings(); + await graphPage.disableCiliumNodes(); + await graphPage.toggleLayoutSettings(); + + // Capture a screenshot of the canvas and run text recognition + await graphPage.fullscreenGraph(); + await graphPage.centerGraph(); + const screenshotPath = 'ui-test-data/canvas.png'; + await graphPage.captureCanvasScreenshot(screenshotPath); + + const recognizedTexts = await recognizeTextFromScreenshot( + screenshotPath, + ["istio-ingressgateway", "productpage-v1", "details-v1", "ratings-v1", "reviews-v1", "reviews-v2"]); + + const flattenedRecognizedText = recognizedTexts.join(",").replace(/\n/g, ''); + console.log("Flattened recognized 
text:", flattenedRecognizedText); + + // Validate recognized texts + expect(flattenedRecognizedText).to.include("istio-ingressgateway"); + expect(flattenedRecognizedText).to.include("productpage-v1"); + expect(flattenedRecognizedText).to.include("details-v1"); + expect(flattenedRecognizedText).to.include("ratings-v1"); + expect(flattenedRecognizedText).to.include("reviews-v1"); + expect(flattenedRecognizedText).to.include("reviews-v2"); + }); +}); + +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/gateway-expose-istio/tests/graph-shows-traffic.test.js.liquid" +timeout --signal=INT 7m mocha ./test.js --timeout 120000 --retries=3 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context ${CLUSTER1} apply -f - < ./test.js +const helpers = require('./tests/chai-http'); + +afterEach(function (done) { + if (this.currentTest.currentRetry() > 0) { + process.stdout.write("."); + setTimeout(done, 4000); + } else { + done(); + } +}); + +describe("Productpage is available (HTTPS)", () => { + it('/productpage is available in cluster1', () => helpers.checkURL({ host: `https://cluster1-bookinfo.example.com`, path: '/productpage', retCode: 200 })); + + it('should reject traffic to bookinfo-backends details', () => { + return helpers.checkBody({ + host: `https://cluster1-bookinfo.example.com`, + path: '/productpage', + retCode: 200, + body: 'Error fetching product details', + match: true + }) + }); + + it('should reject traffic to bookinfo-backends reviews', () => { + return helpers.checkBody({ + host: `https://cluster1-bookinfo.example.com`, + path: '/productpage', + retCode: 200, + body: 'Error fetching product reviews', + match: true + }) + }); +}) +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/ambient/authorization-policies/tests/bookinfo-backend-services-unavailable.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 60000 --retries=60 --bail || { DEBUG_MODE=true 
mocha ./test.js --timeout 120000; exit 1; } +kubectl --context ${CLUSTER1} apply -f - < ./test.js +const helpers = require('./tests/chai-http'); + +afterEach(function (done) { + if (this.currentTest.currentRetry() > 0) { + process.stdout.write("."); + setTimeout(done, 4000); + } else { + done(); + } +}); + +describe("Productpage is available (HTTPS)", () => { + it('/productpage is available in cluster1', () => helpers.checkURL({ host: `https://cluster1-bookinfo.example.com`, path: '/productpage', retCode: 200 })); + + it('should admit traffic to bookinfo-backends details', () => { + return helpers.checkBody({ + host: `https://cluster1-bookinfo.example.com`, + path: '/productpage', + retCode: 200, + body: 'Book Details', + match: true + }) + }); + + it('should admit traffic to bookinfo-backends reviews', () => { + return helpers.checkBody({ + host: `https://cluster1-bookinfo.example.com`, + path: '/productpage', + retCode: 200, + body: 'Book Reviews', + match: true + }) + }); +}) +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/ambient/authorization-policies/tests/bookinfo-backend-services-available.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 60000 --retries=60 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context ${CLUSTER1} apply -f - < ./test.js +const helpers = require('./tests/chai-http'); + +afterEach(function (done) { + if (this.currentTest.currentRetry() > 0) { + process.stdout.write("."); + setTimeout(done, 4000); + } else { + done(); + } +}); + +describe("Productpage is available (HTTPS)", () => { + it('/productpage is available in cluster1', () => helpers.checkURL({ host: `https://cluster1-bookinfo.example.com`, path: '/productpage', retCode: 200 })); + + it('should reject traffic to bookinfo-backends details', () => { + return helpers.checkBody({ + host: `https://cluster1-bookinfo.example.com`, + path: '/productpage', + retCode: 200, + body: 'Error fetching product details', 
+ match: true + }) + }); + + it('should reject traffic to bookinfo-backends reviews', () => { + return helpers.checkBody({ + host: `https://cluster1-bookinfo.example.com`, + path: '/productpage', + retCode: 200, + body: 'Error fetching product reviews', + match: true + }) + }); +}) +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/ambient/authorization-policies/tests/bookinfo-backend-services-unavailable.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 60000 --retries=60 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context ${CLUSTER1} apply -f - < ./test.js +const helpers = require('./tests/chai-http'); + +afterEach(function (done) { + if (this.currentTest.currentRetry() > 0) { + process.stdout.write("."); + setTimeout(done, 4000); + } else { + done(); + } +}); + +describe("Productpage is available (HTTPS)", () => { + it('/productpage is available in cluster1', () => helpers.checkURL({ host: `https://cluster1-bookinfo.example.com`, path: '/productpage', retCode: 200 })); + + it('should admit traffic to bookinfo-backends details', () => { + return helpers.checkBody({ + host: `https://cluster1-bookinfo.example.com`, + path: '/productpage', + retCode: 200, + body: 'Book Details', + match: true + }) + }); + + it('should admit traffic to bookinfo-backends reviews', () => { + return helpers.checkBody({ + host: `https://cluster1-bookinfo.example.com`, + path: '/productpage', + retCode: 200, + body: 'Book Reviews', + match: true + }) + }); +}) +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/ambient/authorization-policies/tests/bookinfo-backend-services-available.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 60000 --retries=60 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context ${CLUSTER1} -n bookinfo-backends delete authorizationpolicy policy +for i in {1..20}; do curl -k 
"http://cluster1-bookinfo.example.com/productpage" -I; done +kubectl --context ${CLUSTER1} debug -n istio-system "$pod" -it --image=curlimages/curl -- curl http://localhost:15020/metrics | grep istio_request_ +cat <<'EOF' > ./test.js +const chaiExec = require("@jsdevtools/chai-exec"); +var chai = require('chai'); +var expect = chai.expect; +chai.use(chaiExec); + +afterEach(function (done) { + if (this.currentTest.currentRetry() > 0) { + process.stdout.write("."); + setTimeout(done, 1000); + } else { + done(); + } +}); + +describe("L4 metrics available", function() { + it("ztunnel contains L4 and l7 metrics", () => { + let node = chaiExec(`kubectl --context ${process.env.CLUSTER1} -n bookinfo-frontends get pods -l app=productpage -o jsonpath='{.items[0].spec.nodeName}'`).stdout.replaceAll("'", ""); + let pods = JSON.parse(chaiExec(`kubectl --context ${process.env.CLUSTER1} -n istio-system get pods -l app=ztunnel -o json`).stdout).items; + let pod = ""; + pods.forEach(item => { + if(item.spec.nodeName == node) { + pod = item.metadata.name; + } + }); + let cli = chaiExec(`kubectl --context ${process.env.CLUSTER1} -n istio-system debug ${pod} -it --image=curlimages/curl -- curl http://localhost:15020/metrics`); + expect(cli).to.exit.with.code(0); + expect(cli).output.to.contain("istio_tcp_sent_bytes_total"); + expect(cli).output.to.contain("istio_requests_total"); + expect(cli).output.to.contain("istio_request_duration_milliseconds"); + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/ambient/l7-observability/tests/l4-l7-metrics-available.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context "${CLUSTER1}" -n istio-system logs ds/ztunnel +cat <<'EOF' > ./test.js +const helpersHttp = require('./tests/chai-http'); +const InsightsPage = require('./tests/pages/insights-page'); +const constants = 
require('./tests/pages/constants'); +const puppeteer = require('puppeteer'); +var chai = require('chai'); +var expect = chai.expect; +const { enhanceBrowser } = require('./tests/utils/enhance-browser'); + +afterEach(function (done) { + if (this.currentTest.currentRetry() > 0) { + process.stdout.write("."); + setTimeout(done, 4000); + } else { + done(); + } +}); + +describe("Insights UI", function() { + // UI tests often require a longer timeout. + // So here we force it to a minimum of 30 seconds. + const currentTimeout = this.timeout(); + this.timeout(Math.max(currentTimeout, 30000)); + + let browser; + let insightsPage; + + // Use Mocha's 'beforeEach' hook to set up Puppeteer + beforeEach(async function() { + browser = await puppeteer.launch({ + headless: "new", + slowMo: 40, + ignoreHTTPSErrors: true, + args: ['--no-sandbox', '--disable-setuid-sandbox'], + }); + browser = enhanceBrowser(browser, this.currentTest.title); + let page = await browser.newPage(); + insightsPage = new InsightsPage(page); + }); + + // Use Mocha's 'afterEach' hook to close Puppeteer + afterEach(async function() { + await browser.close(); + }); + + it("should display BP0001 warning with text 'Globally scoped routing'", async () => { + await insightsPage.navigateTo(`http://${process.env.ENDPOINT_GLOO_MESH_UI}/insights`); + await insightsPage.selectClusters(['cluster1', 'cluster2']); + await insightsPage.selectInsightTypes([constants.InsightType.BP]); + const data = await insightsPage.getTableDataRows() + expect(data.some(item => item.includes("Globally scoped routing"))).to.be.true; + }); + + it("should have quick resource state filters", async () => { + await insightsPage.navigateTo(`http://${process.env.ENDPOINT_GLOO_MESH_UI}/insights`); + const healthy = await insightsPage.getHealthyResourcesCount(); + const warning = await insightsPage.getWarningResourcesCount(); + const error = await insightsPage.getErrorResourcesCount(); + expect(healthy).to.be.greaterThan(0); +
expect(warning).to.be.greaterThan(0); + expect(error).to.be.a('number'); + }); +}); + +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/insights-intro/tests/insight-ui-BP0001.test.js.liquid" +timeout --signal=INT 5m mocha ./test.js --timeout 120000 --retries=20 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +cat <<'EOF' > ./test.js +var chai = require('chai'); +var expect = chai.expect; +const helpers = require('./tests/chai-exec'); + +describe("Insight generation", () => { + it("Insight BP0002 has been triggered in the source (MGMT)", () => { + helpers.getOutputForCommand({ command: `kubectl --context ${process.env.MGMT} -n gloo-mesh patch svc gloo-mesh-mgmt-server -p '{"spec":{"ports": [{"port": 9094,"name":"http-insights"}]}}'` }); + helpers.getOutputForCommand({ command: "kubectl -n gloo-mesh run debug --image=nginx:1.25.3 --context " + process.env.MGMT }); + command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh exec debug -- curl -s http://gloo-mesh-mgmt-server.gloo-mesh:9094/metrics" }).replaceAll("'", ""); + const regex = /gloo_mesh_insights{.*BP0002.*} 1/; + const match = command.match(regex); + expect(match).to.not.be.null; + }); + + it("Insight BP0002 has been triggered in PROMETHEUS", () => { + helpers.getOutputForCommand({ command: `kubectl --context ${process.env.MGMT} -n gloo-mesh patch svc prometheus-server -p '{"spec":{"ports": [{"port": 9090,"name":"http-metrics"}]}}'` }); + command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh exec debug -- curl -s 'http://prometheus-server.gloo-mesh:9090/api/v1/query?query=gloo_mesh_insights'" }).replaceAll("'", ""); + let result = JSON.parse(command); + let active = false; + result.data.result.forEach(item => { + if(item.metric.code == "BP0002" && item.value[1] > 0) { + active = true + } + }); + expect(active).to.be.true; + }); +}); +EOF +echo 
"executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/insights-intro/tests/insight-metrics.test.js.liquid" +timeout --signal=INT 5m mocha ./test.js --timeout 120000 --retries=20 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl apply --context ${MGMT} -f - < ./test.js +const helpersHttp = require('./tests/chai-http'); +const InsightsPage = require('./tests/pages/insights-page'); +const constants = require('./tests/pages/constants'); +const puppeteer = require('puppeteer'); +const { enhanceBrowser } = require('./tests/utils/enhance-browser'); +var chai = require('chai'); +var expect = chai.expect; + +afterEach(function (done) { + if (this.currentTest.currentRetry() > 0) { + process.stdout.write("."); + setTimeout(done, 4000); + } else { + done(); + } +}); + +describe("Insights UI", function() { + // UI tests often require a longer timeout. + // So here we force it to a minimum of 30 seconds. + const currentTimeout = this.timeout(); + this.timeout(Math.max(currentTimeout, 30000)); + + let browser; + let insightsPage; + + // Use Mocha's 'before' hook to set up Puppeteer + beforeEach(async function() { + browser = await puppeteer.launch({ + headless: "new", + slowMo: 40, + ignoreHTTPSErrors: true, + args: ['--no-sandbox', '--disable-setuid-sandbox'], + }); + browser = enhanceBrowser(browser, this.currentTest.title); + let page = await browser.newPage(); + await page.setViewport({ width: 1500, height: 1000 }); + insightsPage = new InsightsPage(page); + }); + + // Use Mocha's 'after' hook to close Puppeteer + afterEach(async function() { + await browser.close(); + }); + + it("should not display BP0002 in the UI", async () => { + await insightsPage.navigateTo(`http://${process.env.ENDPOINT_GLOO_MESH_UI}/insights`); + await insightsPage.selectClusters(['cluster1', 'cluster2']); + await insightsPage.selectInsightTypes([constants.InsightType.BP]); + const data = await insightsPage.getTableDataRows() + 
expect(data.some(item => item.includes("is not namespaced"))).to.be.false; + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/insights-intro/tests/insight-not-ui-BP0002.test.js.liquid" +timeout --signal=INT 5m mocha ./test.js --timeout 120000 --retries=20 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl apply --context ${CLUSTER1} -f - < ./test.js +const helpersHttp = require('./tests/chai-http'); +const InsightsPage = require('./tests/pages/insights-page'); +const constants = require('./tests/pages/constants'); +const puppeteer = require('puppeteer'); +const { enhanceBrowser } = require('./tests/utils/enhance-browser'); +var chai = require('chai'); +var expect = chai.expect; + +afterEach(function (done) { + if (this.currentTest.currentRetry() > 0) { + process.stdout.write("."); + setTimeout(done, 4000); + } else { + done(); + } +}); + +describe("Insights UI", function() { + // UI tests often require a longer timeout. + // So here we force it to a minimum of 30 seconds. 
+ const currentTimeout = this.timeout(); + this.timeout(Math.max(currentTimeout, 30000)); + + let browser; + let insightsPage; + + // Use Mocha's 'before' hook to set up Puppeteer + beforeEach(async function() { + browser = await puppeteer.launch({ + headless: "new", + slowMo: 40, + ignoreHTTPSErrors: true, + args: ['--no-sandbox', '--disable-setuid-sandbox'], + }); + browser = enhanceBrowser(browser, this.currentTest.title); + let page = await browser.newPage(); + await page.setViewport({ width: 1500, height: 1000 }); + insightsPage = new InsightsPage(page); + }); + + // Use Mocha's 'after' hook to close Puppeteer + afterEach(async function() { + await browser.close(); + }); + + it("should not display BP0001 in the UI", async () => { + await insightsPage.navigateTo(`http://${process.env.ENDPOINT_GLOO_MESH_UI}/insights`); + await insightsPage.selectClusters(['cluster1', 'cluster2']); + await insightsPage.selectInsightTypes([constants.InsightType.BP]); + const data = await insightsPage.getTableDataRows() + expect(data.some(item => item.includes("is not namespaced"))).to.be.false; + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/insights-intro/tests/insight-not-ui-BP0001.test.js.liquid" +timeout --signal=INT 5m mocha ./test.js --timeout 120000 --retries=20 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl apply --context ${CLUSTER1} -f - < ./test.js +var chai = require('chai'); +var expect = chai.expect; +const helpers = require('./tests/chai-exec'); + +describe("Insight generation", () => { + it("Insight CFG0001 has been triggered in the source (MGMT)", () => { + helpers.getOutputForCommand({ command: `kubectl --context ${process.env.MGMT} -n gloo-mesh patch svc gloo-mesh-mgmt-server -p '{"spec":{"ports": [{"port": 9094,"name":"http-insights"}]}}'` }); + helpers.getOutputForCommand({ command: "kubectl -n gloo-mesh run debug --image=nginx: --context " + process.env.MGMT }); + command 
= helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh exec debug -- curl -s http://gloo-mesh-mgmt-server.gloo-mesh:9094/metrics" }).replaceAll("'", ""); + const regex = /gloo_mesh_insights{.*CFG0001.*} 1/; + const match = command.match(regex); + expect(match).to.not.be.null; + }); + + it("Insight CFG0001 has been triggered in PROMETHEUS", () => { + helpers.getOutputForCommand({ command: `kubectl --context ${process.env.MGMT} -n gloo-mesh patch svc prometheus-server -p '{"spec":{"ports": [{"port": 9090,"name":"http-metrics"}]}}'` }); + command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh exec debug -- curl -s 'http://prometheus-server.gloo-mesh:9090/api/v1/query?query=gloo_mesh_insights'" }).replaceAll("'", ""); + let result = JSON.parse(command); + let active = false; + result.data.result.forEach(item => { + if(item.metric.code == "CFG0001" && item.value[1] > 0) { + active = true + } + }); + expect(active).to.be.true; + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/insights-config/../insights-intro/tests/insight-metrics.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl apply --context ${CLUSTER1} -f - < ./test.js +var chai = require('chai'); +var expect = chai.expect; +const helpers = require('./tests/chai-exec'); + +describe("Insight generation", () => { + it("Insight CFG0001 has not been triggered in the source (MGMT)", () => { + helpers.getOutputForCommand({ command: `kubectl --context ${process.env.MGMT} -n gloo-mesh patch svc gloo-mesh-mgmt-server -p '{"spec":{"ports": [{"port": 9094,"name":"http-insights"}]}}'` }); + helpers.getOutputForCommand({ command: "kubectl -n gloo-mesh run debug --image=nginx: --context " + process.env.MGMT }); + command = helpers.getOutputForCommand({ command: "kubectl 
--context " + process.env.MGMT + " -n gloo-mesh exec debug -- curl -s http://gloo-mesh-mgmt-server.gloo-mesh:9094/metrics" }).replaceAll("'", ""); + const regex = /gloo_mesh_insights{.*CFG0001.*} 1/; + const match = command.match(regex); + expect(match).to.be.null; + }); + + it("Insight CFG0001 has not been triggered in PROMETHEUS", () => { + helpers.getOutputForCommand({ command: `kubectl --context ${process.env.MGMT} -n gloo-mesh patch svc prometheus-server -p '{"spec":{"ports": [{"port": 9090,"name":"http-metrics"}]}}'` }); + command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh exec debug -- curl -s 'http://prometheus-server.gloo-mesh:9090/api/v1/query?query=gloo_mesh_insights'" }).replaceAll("'", ""); + let result = JSON.parse(command); + let active = false; + result.data.result.forEach(item => { + if(item.metric.code == "CFG0001" && item.value[1] > 0) { + active = true + } + }); + expect(active).to.be.false; + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/insights-config/../insights-intro/tests/insight-metrics.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context ${CLUSTER1} -n bookinfo-backends delete virtualservice reviews +kubectl --context ${CLUSTER1} -n bookinfo-backends delete destinationrule reviews +kubectl apply --context ${CLUSTER1} -f - < ./test.js +var chai = require('chai'); +var expect = chai.expect; +const helpers = require('./tests/chai-exec'); + +describe("Insight generation", () => { + it("Insight SEC0008 has been triggered in the source (MGMT)", () => { + helpers.getOutputForCommand({ command: `kubectl --context ${process.env.MGMT} -n gloo-mesh patch svc gloo-mesh-mgmt-server -p '{"spec":{"ports": [{"port": 9094,"name":"http-insights"}]}}'` }); + helpers.getOutputForCommand({ command: "kubectl -n gloo-mesh run 
debug --image=nginx: --context " + process.env.MGMT }); + command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh exec debug -- curl -s http://gloo-mesh-mgmt-server.gloo-mesh:9094/metrics" }).replaceAll("'", ""); + const regex = /gloo_mesh_insights{.*SEC0008.*} 1/; + const match = command.match(regex); + expect(match).to.not.be.null; + }); + + it("Insight SEC0008 has been triggered in PROMETHEUS", () => { + helpers.getOutputForCommand({ command: `kubectl --context ${process.env.MGMT} -n gloo-mesh patch svc prometheus-server -p '{"spec":{"ports": [{"port": 9090,"name":"http-metrics"}]}}'` }); + command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh exec debug -- curl -s 'http://prometheus-server.gloo-mesh:9090/api/v1/query?query=gloo_mesh_insights'" }).replaceAll("'", ""); + let result = JSON.parse(command); + let active = false; + result.data.result.forEach(item => { + if(item.metric.code == "SEC0008" && item.value[1] > 0) { + active = true + } + }); + expect(active).to.be.true; + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/insights-security/../insights-intro/tests/insight-metrics.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl apply --context ${CLUSTER1} -f - < ./test.js +var chai = require('chai'); +var expect = chai.expect; +const helpers = require('./tests/chai-exec'); + +describe("Insight generation", () => { + it("Insight SEC0008 has not been triggered in the source (MGMT)", () => { + helpers.getOutputForCommand({ command: `kubectl --context ${process.env.MGMT} -n gloo-mesh patch svc gloo-mesh-mgmt-server -p '{"spec":{"ports": [{"port": 9094,"name":"http-insights"}]}}'` }); + helpers.getOutputForCommand({ command: "kubectl -n gloo-mesh run debug --image=nginx: --context " + 
process.env.MGMT }); + command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh exec debug -- curl -s http://gloo-mesh-mgmt-server.gloo-mesh:9094/metrics" }).replaceAll("'", ""); + const regex = /gloo_mesh_insights{.*SEC0008.*} 1/; + const match = command.match(regex); + expect(match).to.be.null; + }); + + it("Insight SEC0008 has not been triggered in PROMETHEUS", () => { + helpers.getOutputForCommand({ command: `kubectl --context ${process.env.MGMT} -n gloo-mesh patch svc prometheus-server -p '{"spec":{"ports": [{"port": 9090,"name":"http-metrics"}]}}'` }); + command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh exec debug -- curl -s 'http://prometheus-server.gloo-mesh:9090/api/v1/query?query=gloo_mesh_insights'" }).replaceAll("'", ""); + let result = JSON.parse(command); + let active = false; + result.data.result.forEach(item => { + if(item.metric.code == "SEC0008" && item.value[1] > 0) { + active = true + } + }); + expect(active).to.be.false; + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/insights-security/../insights-intro/tests/insight-metrics.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context ${CLUSTER1} -n bookinfo-backends delete authorizationpolicy reviews +kubectl --context ${CLUSTER1} -n istio-system delete peerauthentication default +helm upgrade --install istio-base oci://us-docker.pkg.dev/gloo-mesh/istio-helm-/base \ +--namespace istio-system \ +--kube-context=${CLUSTER1} \ +--version 1.23.0-patch1-solo \ +--create-namespace \ +-f - </istiod \ +--namespace istio-system \ +--kube-context=${CLUSTER1} \ +--version 1.23.0-patch1-solo \ +--create-namespace \ +-f - < + proxy: + clusterDomain: cluster.local + tag: 1.23.0-patch1-solo + multiCluster: + clusterName: cluster1 
+profile: ambient +revision: 1-23-0-patch1 +istio_cni: + enabled: true +meshConfig: + accessLogFile: /dev/stdout + defaultConfig: + proxyMetadata: + ISTIO_META_DNS_AUTO_ALLOCATE: "true" + ISTIO_META_DNS_CAPTURE: "true" + trustDomain: cluster1 +pilot: + enabled: true + env: + PILOT_ENABLE_IP_AUTOALLOCATE: "true" + PILOT_ENABLE_K8S_SELECT_WORKLOAD_ENTRIES: "false" + PILOT_SKIP_VALIDATE_TRUST_DOMAIN: "true" +EOF + +helm upgrade --install istio-cni oci://us-docker.pkg.dev/gloo-mesh/istio-helm-/cni \ +--namespace kube-system \ +--kube-context=${CLUSTER1} \ +--version 1.23.0-patch1-solo \ +--create-namespace \ +-f - < + proxy: 1.23.0-patch1-solo +profile: ambient +revision: 1-23-0-patch1 +cni: + ambient: + dnsCapture: true + excludeNamespaces: + - istio-system + - kube-system +EOF + +helm upgrade --install ztunnel oci://us-docker.pkg.dev/gloo-mesh/istio-helm-/ztunnel \ +--namespace istio-system \ +--kube-context=${CLUSTER1} \ +--version 1.23.0-patch1-solo \ +--create-namespace \ +-f - < +istioNamespace: istio-system +multiCluster: + clusterName: cluster1 +namespace: istio-system +profile: ambient +proxy: + clusterDomain: cluster.local +tag: 1.23.0-patch1-solo +terminationGracePeriodSeconds: 29 +variant: distroless +EOF + +helm upgrade --install istio-ingressgateway-1-23-0-patch1 oci://us-docker.pkg.dev/gloo-mesh/istio-helm-/gateway \ +--namespace istio-gateways \ +--kube-context=${CLUSTER1} \ +--version 1.23.0-patch1-solo \ +--create-namespace \ +-f - </gateway \ +--namespace istio-gateways \ +--kube-context=${CLUSTER1} \ +--version 1.23.0-patch1-solo \ +--create-namespace \ +-f - </base \ +--namespace istio-system \ +--kube-context=${CLUSTER2} \ +--version 1.23.0-patch1-solo \ +--create-namespace \ +-f - </istiod \ +--namespace istio-system \ +--kube-context=${CLUSTER2} \ +--version 1.23.0-patch1-solo \ +--create-namespace \ +-f - < + proxy: + clusterDomain: cluster.local + tag: 1.23.0-patch1-solo + multiCluster: + clusterName: cluster2 +profile: ambient +revision: 
1-23-0-patch1 +istio_cni: + enabled: true +meshConfig: + accessLogFile: /dev/stdout + defaultConfig: + proxyMetadata: + ISTIO_META_DNS_AUTO_ALLOCATE: "true" + ISTIO_META_DNS_CAPTURE: "true" + trustDomain: cluster2 +pilot: + enabled: true + env: + PILOT_ENABLE_IP_AUTOALLOCATE: "true" + PILOT_ENABLE_K8S_SELECT_WORKLOAD_ENTRIES: "false" + PILOT_SKIP_VALIDATE_TRUST_DOMAIN: "true" +EOF + +helm upgrade --install istio-cni oci://us-docker.pkg.dev/gloo-mesh/istio-helm-/cni \ +--namespace kube-system \ +--kube-context=${CLUSTER2} \ +--version 1.23.0-patch1-solo \ +--create-namespace \ +-f - < + proxy: 1.23.0-patch1-solo +profile: ambient +revision: 1-23-0-patch1 +cni: + ambient: + dnsCapture: true + excludeNamespaces: + - istio-system + - kube-system +EOF + +helm upgrade --install ztunnel oci://us-docker.pkg.dev/gloo-mesh/istio-helm-/ztunnel \ +--namespace istio-system \ +--kube-context=${CLUSTER2} \ +--version 1.23.0-patch1-solo \ +--create-namespace \ +-f - < +istioNamespace: istio-system +multiCluster: + clusterName: cluster2 +namespace: istio-system +profile: ambient +proxy: + clusterDomain: cluster.local +tag: 1.23.0-patch1-solo +terminationGracePeriodSeconds: 29 +variant: distroless +EOF + +helm upgrade --install istio-ingressgateway-1-23-0-patch1 oci://us-docker.pkg.dev/gloo-mesh/istio-helm-/gateway \ +--namespace istio-gateways \ +--kube-context=${CLUSTER2} \ +--version 1.23.0-patch1-solo \ +--create-namespace \ +-f - </gateway \ +--namespace istio-gateways \ +--kube-context=${CLUSTER2} \ +--version 1.23.0-patch1-solo \ +--create-namespace \ +-f - < ./test.js + +const helpers = require('./tests/chai-exec'); + +const chaiExec = require("@jsdevtools/chai-exec"); +const helpersHttp = require('./tests/chai-http'); +const chai = require("chai"); +const expect = chai.expect; + +afterEach(function (done) { + if (this.currentTest.currentRetry() > 0) { + process.stdout.write("."); + setTimeout(done, 1000); + } else { + done(); + } +}); + +describe("Checking Istio 
installation", function() { + it('istiod pods are ready in cluster ' + process.env.CLUSTER1, () => helpers.checkDeploymentsWithLabels({ context: process.env.CLUSTER1, namespace: "istio-system", labels: "app=istiod", instances: 2 })); + it('gateway pods are ready in cluster ' + process.env.CLUSTER1, () => helpers.checkDeploymentsWithLabels({ context: process.env.CLUSTER1, namespace: "istio-gateways", labels: "app=istio-ingressgateway", instances: 4 })); + it('istiod pods are ready in cluster ' + process.env.CLUSTER2, () => helpers.checkDeploymentsWithLabels({ context: process.env.CLUSTER2, namespace: "istio-system", labels: "app=istiod", instances: 2 })); + it('gateway pods are ready in cluster ' + process.env.CLUSTER2, () => helpers.checkDeploymentsWithLabels({ context: process.env.CLUSTER2, namespace: "istio-gateways", labels: "app=istio-ingressgateway", instances: 4 })); + it("Gateways have an ip attached in cluster " + process.env.CLUSTER1, () => { + let cli = chaiExec("kubectl --context " + process.env.CLUSTER1 + " -n istio-gateways get svc -l app=istio-ingressgateway -o jsonpath='{.items}'"); + cli.stderr.should.be.empty; + let deployments = JSON.parse(cli.stdout.slice(1,-1)); + expect(deployments).to.have.lengthOf(2); + deployments.forEach((deployment) => { + expect(deployment.status.loadBalancer).to.have.property("ingress"); + }); + }); + it("Gateways have an ip attached in cluster " + process.env.CLUSTER2, () => { + let cli = chaiExec("kubectl --context " + process.env.CLUSTER2 + " -n istio-gateways get svc -l app=istio-ingressgateway -o jsonpath='{.items}'"); + cli.stderr.should.be.empty; + let deployments = JSON.parse(cli.stdout.slice(1,-1)); + expect(deployments).to.have.lengthOf(2); + deployments.forEach((deployment) => { + expect(deployment.status.loadBalancer).to.have.property("ingress"); + }); + }); +}); + +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-istio-helm/tests/istio-ready.test.js.liquid" +timeout 
--signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +cat <<'EOF' > ./test.js +const dns = require('dns'); +const chaiHttp = require("chai-http"); +const chai = require("chai"); +const expect = chai.expect; +chai.use(chaiHttp); +const { waitOnFailedTest } = require('./tests/utils'); + +afterEach(function(done) { waitOnFailedTest(done, this.currentTest.currentRetry())}); + +describe("Address '" + process.env.HOST_GW_CLUSTER1 + "' can be resolved in DNS", () => { + it(process.env.HOST_GW_CLUSTER1 + ' can be resolved', (done) => { + return dns.lookup(process.env.HOST_GW_CLUSTER1, (err, address, family) => { + expect(address).to.be.an.ip; + done(); + }); + }); +}); +EOF +echo "executing test ./default/tests/can-resolve.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +cat <<'EOF' > ./test.js +const dns = require('dns'); +const chaiHttp = require("chai-http"); +const chai = require("chai"); +const expect = chai.expect; +chai.use(chaiHttp); +const { waitOnFailedTest } = require('./tests/utils'); + +afterEach(function(done) { waitOnFailedTest(done, this.currentTest.currentRetry())}); + +describe("Address '" + process.env.HOST_GW_CLUSTER2 + "' can be resolved in DNS", () => { + it(process.env.HOST_GW_CLUSTER2 + ' can be resolved', (done) => { + return dns.lookup(process.env.HOST_GW_CLUSTER2, (err, address, family) => { + expect(address).to.be.an.ip; + done(); + }); + }); +}); +EOF +echo "executing test ./default/tests/can-resolve.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context ${CLUSTER1} get ns -l istio.io/rev=1-23 -o json | jq -r '.items[].metadata.name' | while read ns; do + kubectl --context ${CLUSTER1} label ns ${ns} istio.io/rev=1-23-0-patch1 
--overwrite +done +kubectl --context ${CLUSTER2} get ns -l istio.io/rev=1-23 -o json | jq -r '.items[].metadata.name' | while read ns; do + kubectl --context ${CLUSTER2} label ns ${ns} istio.io/rev=1-23-0-patch1 --overwrite +done +kubectl --context ${CLUSTER1} -n httpbin patch deploy in-mesh --patch "{\"spec\": {\"template\": {\"metadata\": {\"labels\": {\"istio.io/rev\": \"1-23-0-patch1\" }}}}}" +kubectl --context ${CLUSTER1} -n clients patch deploy in-mesh-with-sidecar --patch "{\"spec\": {\"template\": {\"metadata\": {\"labels\": {\"istio.io/rev\": \"1-23-0-patch1\" }}}}}" +kubectl --context ${CLUSTER1} -n httpbin rollout status deploy in-mesh +curl -k "https:///productpage" -I +cat <<'EOF' > ./test.js +const helpers = require('./tests/chai-http'); + +describe("productpage is accessible", () => { + it('/productpage is available in cluster1', () => helpers.checkURL({ host: `https://cluster1-bookinfo.example.com`, path: '/productpage', retCode: 200 })); +}) + +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/helm-migrate-workloads-to-revision/../deploy-istio-helm/tests/productpage-available.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context ${CLUSTER1} -n istio-gateways patch svc istio-ingressgateway --type=json --patch '[{"op": "remove", "path": "/spec/selector/revision"}]' +kubectl --context ${CLUSTER1} -n istio-gateways patch svc istio-eastwestgateway --type=json --patch '[{"op": "remove", "path": "/spec/selector/revision"}]' +kubectl --context ${CLUSTER2} -n istio-gateways patch svc istio-ingressgateway --type=json --patch '[{"op": "remove", "path": "/spec/selector/revision"}]' +kubectl --context ${CLUSTER2} -n istio-gateways patch svc istio-eastwestgateway --type=json --patch '[{"op": "remove", "path": "/spec/selector/revision"}]' +curl -k "https:///productpage" -I +cat <<'EOF' > ./test.js +const helpers = 
require('./tests/chai-http'); + +describe("productpage is accessible", () => { + it('/productpage is available in cluster1', () => helpers.checkURL({ host: `https://cluster1-bookinfo.example.com`, path: '/productpage', retCode: 200 })); +}) + +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/helm-migrate-workloads-to-revision/../deploy-istio-helm/tests/productpage-available.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +cat <<'EOF' > ./test.js +const helpers = require('./tests/chai-http'); + +describe("productpage is accessible", () => { + it('/productpage is available in cluster1', () => helpers.checkURL({ host: `https://cluster1-bookinfo.example.com`, path: '/productpage', retCode: 200 })); +}) + +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/helm-migrate-workloads-to-revision/../deploy-istio-helm/tests/productpage-available.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +cat <<'EOF' > ./test.js +const chaiExec = require("@jsdevtools/chai-exec"); +const chai = require("chai"); +var expect = chai.expect; + +afterEach(function (done) { + if (this.currentTest.currentRetry() > 0) { + process.stdout.write("."); + setTimeout(done, 1000); + } else { + done(); + } +}); + +describe("istio in place upgrades", function() { + const cluster1 = process.env.CLUSTER1; + it("should upgrade waypoints", () => { + let cli = chaiExec(`sh -c "istioctl --context ${cluster1} ps | grep waypoint"`); + expect(cli.stdout).to.contain("1.23.0-patch1"); + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/helm-migrate-workloads-to-revision/tests/waypoint-upgraded.test.js.liquid" +timeout --signal=INT 1m mocha ./test.js --timeout 10000 --retries=60 --bail || { DEBUG_MODE=true mocha 
./test.js --timeout 120000; exit 1; } +helm uninstall istio-ingressgateway-1-23 \ +--namespace istio-gateways \ +--kube-context=${CLUSTER1} + +helm uninstall istio-eastwestgateway-1-23 \ +--namespace istio-gateways \ +--kube-context=${CLUSTER1} + +helm uninstall istio-ingressgateway-1-23 \ +--namespace istio-gateways \ +--kube-context=${CLUSTER2} + +helm uninstall istio-eastwestgateway-1-23 \ +--namespace istio-gateways \ +--kube-context=${CLUSTER2} +kubectl --context ${CLUSTER1} -n istio-system get pods +kubectl --context ${CLUSTER2} -n istio-system get pods +kubectl --context ${CLUSTER1} -n istio-gateways get pods +kubectl --context ${CLUSTER2} -n istio-gateways get pods +ATTEMPTS=1 +until [[ $(kubectl --context ${CLUSTER1} -n istio-gateways get pods -l "istio.io/rev=1-23" -o json | jq '.items | length') -eq 0 ]] || [ $ATTEMPTS -gt 120 ]; do + printf "." + ATTEMPTS=$((ATTEMPTS + 1)) + sleep 1 +done +[ $ATTEMPTS -le 120 ] || kubectl --context ${CLUSTER1} -n istio-gateways get pods -l "istio.io/rev=1-23" + +ATTEMPTS=1 +until [[ $(kubectl --context ${CLUSTER2} -n istio-gateways get pods -l "istio.io/rev=1-23" -o json | jq '.items | length') -eq 0 ]] || [ $ATTEMPTS -gt 60 ]; do + printf "." + ATTEMPTS=$((ATTEMPTS + 1)) + sleep 1 +done +[ $ATTEMPTS -le 60 ] || kubectl --context ${CLUSTER2} -n istio-gateways get pods -l "istio.io/rev=1-23" +helm uninstall istiod-1-23 \ +--namespace istio-system \ +--kube-context=${CLUSTER1} + +helm uninstall istiod-1-23 \ +--namespace istio-system \ +--kube-context=${CLUSTER2} +ATTEMPTS=1 +until [[ $(kubectl --context ${CLUSTER1} -n istio-system get pods -l "istio.io/rev=1-23" -o json | jq '.items | length') -eq 0 ]] || [ $ATTEMPTS -gt 120 ]; do + printf "." 
+ ATTEMPTS=$((ATTEMPTS + 1)) + sleep 1 +done +[ $ATTEMPTS -le 120 ] || kubectl --context ${CLUSTER1} -n istio-system get pods -l "istio.io/rev=1-23" +ATTEMPTS=1 +until [[ $(kubectl --context ${CLUSTER2} -n istio-system get pods -l "istio.io/rev=1-23" -o json | jq '.items | length') -eq 0 ]] || [ $ATTEMPTS -gt 60 ]; do + printf "." + ATTEMPTS=$((ATTEMPTS + 1)) + sleep 1 +done +[ $ATTEMPTS -le 60 ] || kubectl --context ${CLUSTER2} -n istio-system get pods -l "istio.io/rev=1-23" +kubectl --context ${CLUSTER1} -n istio-system get pods && kubectl --context ${CLUSTER1} -n istio-gateways get pods +cat <<'EOF' > ./test.js +const chaiExec = require("@jsdevtools/chai-exec"); +var chai = require('chai'); +var expect = chai.expect; +chai.use(chaiExec); + +afterEach(function (done) { + if (this.currentTest.currentRetry() > 0) { + process.stdout.write("."); + setTimeout(done, 1000); + } else { + done(); + } +}); +describe("Old Istio version should be uninstalled", () => { + it("Pods aren't running anymore in CLUSTER1, namespace istio-system", () => { + let cli = chaiExec('kubectl --context ' + process.env.CLUSTER1 + ' -n istio-system get pods -l "istio.io/rev=' + process.env.OLD_REVISION +'" -o json'); + expect(cli).to.exit.with.code(0); + expect(JSON.parse(cli.stdout).items).to.have.lengthOf(0); + }); + it("Pods aren't running anymore in CLUSTER1, namespace istio-gateways", () => { + let cli = chaiExec('kubectl --context ' + process.env.CLUSTER1 + ' -n istio-gateways get pods -l "istio.io/rev=' + process.env.OLD_REVISION +'" -o json'); + expect(cli).to.exit.with.code(0); + expect(JSON.parse(cli.stdout).items).to.have.lengthOf(0); + }); + it("Pods aren't running anymore in CLUSTER2, namespace istio-system", () => { + let cli = chaiExec('kubectl --context ' + process.env.CLUSTER2 + ' -n istio-system get pods -l "istio.io/rev=' + process.env.OLD_REVISION +'" -o json'); + expect(cli).to.exit.with.code(0); + expect(JSON.parse(cli.stdout).items).to.have.lengthOf(0); + }); + it("Pods 
aren't running anymore in CLUSTER2, namespace istio-gateways", () => { + let cli = chaiExec('kubectl --context ' + process.env.CLUSTER2 + ' -n istio-gateways get pods -l "istio.io/rev=' + process.env.OLD_REVISION +'" -o json'); + expect(cli).to.exit.with.code(0); + expect(JSON.parse(cli.stdout).items).to.have.lengthOf(0); + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/helm-cleanup-revision/tests/previous-version-uninstalled.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context ${CLUSTER1} apply -f - < ./test.js +const chaiExec = require("@jsdevtools/chai-exec"); +var chai = require('chai'); +var expect = chai.expect; +chai.use(chaiExec); + +afterEach(function (done) { + if (this.currentTest.currentRetry() > 0) { + process.stdout.write("."); + setTimeout(done, 1000); + } else { + done(); + } +}); + +describe("egress traffic", function() { + const cluster = process.env.CLUSTER1 + + it(`virtual service should add customer header`, function() { + let command = `kubectl --context ${cluster} -n clients exec deploy/in-ambient -- curl -s httpbin.org/get`; + let cli = chaiExec(command); + expect(cli.output.toLowerCase()).to.contain('my-added-header'); + }); + + it(`destination rule should route to https`, function() { + let command = `kubectl --context ${cluster} -n clients exec deploy/in-ambient -- curl -s httpbin.org/get`; + let cli = chaiExec(command); + expect(cli.output.toLowerCase()).to.contain('https://httpbin.org/get'); + }); + + it(`other types of traffic (HTTP methods) should be rejected`, function() { + let command = `kubectl --context ${cluster} -n clients exec deploy/in-ambient -- curl -s -I httpbin.org/get`; + let cli = chaiExec(command); + expect(cli.output).to.contain('403 Forbidden'); + }); +}); + +EOF +echo "executing test 
dist/gloo-mesh-2-0-workshop/build/templates/steps/ambient/waypoint-egress/tests/validate-egress-traffic.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 20000 --retries=60 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context ${CLUSTER1} delete authorizationpolicy httpbin -n egress +kubectl --context ${CLUSTER1} apply -f - < ./test.js +const chaiExec = require("@jsdevtools/chai-exec"); +var chai = require('chai'); +var expect = chai.expect; +chai.use(chaiExec); + +afterEach(function (done) { + if (this.currentTest.currentRetry() > 0) { + process.stdout.write("."); + setTimeout(done, 1000); + } else { + done(); + } +}); + +describe("waypoint for service when ns is labeled", function() { + const cluster = process.env.CLUSTER1 + + it(`should redirect traffic for all services to the waypoint`, () => { + let command = `kubectl --context ${cluster} -n clients exec deploy/in-ambient -- curl -v "http://ratings.bookinfo-backends:9080/ratings/0"`; + let cli = chaiExec(command); + expect(cli).to.exit.with.code(0); + expect(cli).output.to.contain('istio-envoy'); + + command = `kubectl --context ${cluster} -n clients exec deploy/in-ambient -- curl -v "http://reviews.bookinfo-backends:9080/reviews/0"`; + cli = chaiExec(command); + expect(cli).to.exit.with.code(0); + expect(cli).output.to.contain('istio-envoy'); + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/ambient/waypoint-deployment-options/tests/validate-waypoint-for-service-ns.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 20000 --retries=10 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context ${CLUSTER1} apply -f - < ./test.js +const chaiExec = require("@jsdevtools/chai-exec"); +var chai = require('chai'); +var expect = chai.expect; +chai.use(chaiExec); + +afterEach(function (done) { + if (this.currentTest.currentRetry() > 0) { + process.stdout.write("."); + 
setTimeout(done, 1000); + } else { + done(); + } +}); + +describe("service labeling to use a waypoint takes precedence over namespace labeling", function() { + const cluster = process.env.CLUSTER1 + + it(`should redirect traffic of labeled service through the waypoint and enforce the policy`, () => { + let command = `kubectl --context ${cluster} -n clients exec deploy/in-ambient -- curl -v "http://ratings.bookinfo-backends:9080/ratings/0"`; + let cli = chaiExec(command); + expect(cli).to.exit.with.code(0); + expect(cli).output.to.contain('Forbidden'); + }); + + it(`should NOT redirect traffic of NON labeled services, which are redirected to the waypoint the namespace is configured for`, () => { + let command = `kubectl --context ${cluster} -n clients exec deploy/in-ambient -- curl -v "http://reviews.bookinfo-backends:9080/reviews/0"`; + let cli = chaiExec(command); + expect(cli).to.exit.with.code(0); + expect(cli).output.to.contain('istio-envoy'); + }); +}); + +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/ambient/waypoint-deployment-options/tests/validate-waypoint-for-specific-service.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 20000 --retries=10 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context ${CLUSTER1} apply -f - < ./test.js +const chaiExec = require("@jsdevtools/chai-exec"); +var chai = require('chai'); +var expect = chai.expect; +chai.use(chaiExec); + +afterEach(function (done) { + if (this.currentTest.currentRetry() > 0) { + process.stdout.write("."); + setTimeout(done, 1000); + } else { + done(); + } +}); + +describe("waypoint for workloads when pod is labeled", function() { + const cluster = process.env.CLUSTER1 + + it(`should redirect traffic to waypoint`, () => { + let commandGetIP = `kubectl --context ${cluster} -n bookinfo-backends get pod -l app=ratings -o jsonpath='{.items[0].status.podIP}'`; + let cli = chaiExec(commandGetIP); + let podIP = 
cli.output.replace(/'/g, ''); + + let command = `kubectl --context ${cluster} -n clients exec deploy/in-ambient -- curl -v "http://${podIP}:9080/ratings/0"`; + cli = chaiExec(command); + + expect(cli).to.exit.with.code(0); + expect(cli).output.to.contain('istio-envoy'); + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/ambient/waypoint-deployment-options/tests/validate-waypoint-for-workload.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 20000 --retries=10 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context ${CLUSTER1} -n bookinfo-backends label pod -l app=ratings istio.io/use-waypoint- +kubectl --context ${CLUSTER1} -n bookinfo-backends label svc ratings istio.io/use-waypoint=ratings-waypoint +kubectl --context ${CLUSTER1} -n bookinfo-backends delete authorizationpolicy deny-traffic-from-clients-ns +kubectl --context ${CLUSTER1} -n bookinfo-backends delete gateway waypoint ratings-waypoint ratings-workload-waypoint diff --git a/gloo-mesh/core/2-7/ambient/scripts/configure-domain-rewrite.sh b/gloo-mesh/core/2-7/ambient/scripts/configure-domain-rewrite.sh index be6dbd6d8b..d6e684c9da 100755 --- a/gloo-mesh/core/2-7/ambient/scripts/configure-domain-rewrite.sh +++ b/gloo-mesh/core/2-7/ambient/scripts/configure-domain-rewrite.sh @@ -90,4 +90,4 @@ done # If the loop exits, it means the check failed consistently for 1 minute echo "DNS rewrite rule verification failed." 
-exit 1 +exit 1 \ No newline at end of file diff --git a/gloo-mesh/core/2-7/ambient/scripts/register-domain.sh b/gloo-mesh/core/2-7/ambient/scripts/register-domain.sh index f9084487e8..1cb84cd86a 100755 --- a/gloo-mesh/core/2-7/ambient/scripts/register-domain.sh +++ b/gloo-mesh/core/2-7/ambient/scripts/register-domain.sh @@ -14,7 +14,9 @@ hosts_file="/etc/hosts" # Function to check if the input is a valid IP address is_ip() { if [[ $1 =~ ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$ ]]; then - return 0 # 0 = true + return 0 # 0 = true - valid IPv4 address + elif [[ $1 =~ ^[0-9a-f]+[:]+[0-9a-f]*[:]*[0-9a-f]*[:]*[0-9a-f]*[:]*[0-9a-f]*[:]*[0-9a-f]*[:]*[0-9]*$ ]]; then + return 0 # 0 = true - valid IPv6 address else return 1 # 1 = false fi @@ -38,14 +40,15 @@ else fi # Check if the entry already exists -if grep -q "$hostname" "$hosts_file"; then +if grep -q "$hostname\$" "$hosts_file"; then # Update the existing entry with the new IP tempfile=$(mktemp) - sed "s/^.*$hostname/$new_ip $hostname/" "$hosts_file" > "$tempfile" + sed "s/^.*$hostname\$/$new_ip $hostname/" "$hosts_file" > "$tempfile" sudo cp "$tempfile" "$hosts_file" + rm "$tempfile" echo "Updated $hostname in $hosts_file with new IP: $new_ip" else # Add a new entry if it doesn't exist echo "$new_ip $hostname" | sudo tee -a "$hosts_file" > /dev/null echo "Added $hostname to $hosts_file with IP: $new_ip" -fi \ No newline at end of file +fi diff --git a/gloo-mesh/core/2-7/ambient/tests/chai-exec.js b/gloo-mesh/core/2-7/ambient/tests/chai-exec.js index 67ba62f095..020262437f 100644 --- a/gloo-mesh/core/2-7/ambient/tests/chai-exec.js +++ b/gloo-mesh/core/2-7/ambient/tests/chai-exec.js @@ -139,7 +139,11 @@ global = { }, k8sObjectIsPresent: ({ context, namespace, k8sType, k8sObj }) => { - let command = "kubectl --context " + context + " -n " + namespace + " get " + k8sType + " " + k8sObj + " -o name"; + // covers both namespace scoped and cluster scoped objects + let command = "kubectl --context " + context + " get " + k8sType + 
" " + k8sObj + " -o name"; + if (namespace) { + command = "kubectl --context " + context + " -n " + namespace + " get " + k8sType + " " + k8sObj + " -o name"; + } debugLog(`Executing command: ${command}`); let cli = chaiExec(command); @@ -176,7 +180,6 @@ global = { debugLog(`Command output (stdout): ${cli.stdout}`); return cli.stdout; }, - curlInPod: ({ curlCommand, podName, namespace }) => { debugLog(`Executing curl command: ${curlCommand} on pod: ${podName} in namespace: ${namespace}`); const cli = chaiExec(curlCommand); diff --git a/gloo-mesh/core/2-7/ambient/tests/chai-http.js b/gloo-mesh/core/2-7/ambient/tests/chai-http.js index 67f43db003..92bf579690 100644 --- a/gloo-mesh/core/2-7/ambient/tests/chai-http.js +++ b/gloo-mesh/core/2-7/ambient/tests/chai-http.js @@ -25,7 +25,30 @@ global = { .send() .then(async function (res) { debugLog(`Response status code: ${res.status}`); - expect(res).to.have.status(retCode); + expect(res).to.have.property('status', retCode); + }); + }, + + checkURLWithIP: ({ ip, host, protocol = "http", path = "", headers = [], certFile = '', keyFile = '', retCode }) => { + debugLog(`Checking URL with IP: ${ip}, Host: ${host}, Path: ${path} with expected return code: ${retCode}`); + + let cert = certFile ? fs.readFileSync(certFile) : ''; + let key = keyFile ? 
fs.readFileSync(keyFile) : ''; + + let url = `${protocol}://${ip}`; + + // Use chai-http to make a request to the IP address, but set the Host header + let request = chai.request(url).head(path).redirects(0).cert(cert).key(key).set('Host', host); + + debugLog(`Setting headers: ${JSON.stringify(headers)}`); + headers.forEach(header => request.set(header.key, header.value)); + + return request + .send() + .then(async function (res) { + debugLog(`Response status code: ${res.status}`); + debugLog(`Response ${JSON.stringify(res)}`); + expect(res).to.have.property('status', retCode); }); }, @@ -124,7 +147,7 @@ global = { .send() .then(async function (res) { debugLog(`Response status code: ${res.status}`); - expect(res).to.have.status(retCode); + expect(res).to.have.property('status', retCode); }); } }; diff --git a/gloo-mesh/core/2-7/ambient/tests/proxies-changes.test.js.liquid b/gloo-mesh/core/2-7/ambient/tests/proxies-changes.test.js.liquid new file mode 100644 index 0000000000..1934ea13b6 --- /dev/null +++ b/gloo-mesh/core/2-7/ambient/tests/proxies-changes.test.js.liquid @@ -0,0 +1,58 @@ +{%- assign version_1_18_or_after = "1.18.0" | minimumGlooGatewayVersion %} +const { execSync } = require('child_process'); +const { expect } = require('chai'); +const { diff } = require('jest-diff'); + +function delay(ms) { + return new Promise(resolve => setTimeout(resolve, ms)); +} + +describe('Gloo snapshot stability test', function() { + let contextName = process.env.{{ context | default: "CLUSTER1" }}; + let delaySeconds = {{ delay | default: 5 }}; + + let firstSnapshot; + + it('should retrieve initial snapshot', function() { + const output = execSync( + `kubectl --context ${contextName} -n gloo-system exec deploy/gloo -- wget -O - localhost:{% if version_1_18_or_after %}9095{% else %}9091{% endif %}/snapshots/proxies -q`, + { encoding: 'utf8' } + ); + + try { + firstSnapshot = JSON.parse(output); + } catch (err) { + throw new Error('Failed to parse JSON output from initial 
snapshot: ' + err.message); + } + expect(firstSnapshot).to.be.an('object'); + }); + + it('should not change after the given delay', async function() { + await delay(delaySeconds * 1000); + + let secondSnapshot; + try { + const output2 = execSync( + `kubectl --context ${contextName} -n gloo-system exec deploy/gloo -- wget -O - localhost:{% if version_1_18_or_after %}9095{% else %}9091{% endif %}/snapshots/proxies -q`, + { encoding: 'utf8' } + ); + secondSnapshot = JSON.parse(output2); + } catch (err) { + throw new Error('Failed to retrieve or parse the second snapshot: ' + err.message); + } + + const firstJson = JSON.stringify(firstSnapshot, null, 2); + const secondJson = JSON.stringify(secondSnapshot, null, 2); + + // Show only 2 lines of context around each change + const diffOutput = diff(firstJson, secondJson, { contextLines: 2, expand: false }); + + if (! diffOutput.includes("Compared values have no visual difference.")) { + console.error('Differences found between snapshots:\n' + diffOutput); + throw new Error('Snapshots differ after the delay.'); + } else { + console.log('No differences found. 
The snapshots are stable.'); + } + }); +}); + diff --git a/gloo-mesh/core/2-7/default/README.md b/gloo-mesh/core/2-7/default/README.md index 08d53e4bdb..b461899083 100644 --- a/gloo-mesh/core/2-7/default/README.md +++ b/gloo-mesh/core/2-7/default/README.md @@ -15,7 +15,7 @@ source ./scripts/assert.sh ## Table of Contents * [Introduction](#introduction) -* [Lab 1 - Deploy KinD clusters](#lab-1---deploy-kind-clusters-) +* [Lab 1 - Deploy KinD Cluster(s)](#lab-1---deploy-kind-clusters-) * [Lab 2 - Deploy and register Gloo Mesh](#lab-2---deploy-and-register-gloo-mesh-) * [Lab 3 - Deploy Istio using Gloo Mesh Lifecycle Manager](#lab-3---deploy-istio-using-gloo-mesh-lifecycle-manager-) * [Lab 4 - Deploy the Bookinfo demo app](#lab-4---deploy-the-bookinfo-demo-app-) @@ -68,7 +68,7 @@ You can find more information about Gloo Mesh Core in the official documentation -## Lab 1 - Deploy KinD clusters +## Lab 1 - Deploy KinD Cluster(s) Clone this repository and go to the directory where this `README.md` file is. @@ -81,14 +81,13 @@ export CLUSTER1=cluster1 export CLUSTER2=cluster2 ``` -Run the following commands to deploy three Kubernetes clusters using [Kind](https://kind.sigs.k8s.io/): +Deploy the KinD clusters: ```bash -./scripts/deploy-aws.sh 1 mgmt -./scripts/deploy-aws.sh 2 cluster1 us-west us-west-1 -./scripts/deploy-aws.sh 3 cluster2 us-west us-west-2 +bash ./data/steps/deploy-kind-clusters/deploy-mgmt.sh +bash ./data/steps/deploy-kind-clusters/deploy-cluster1.sh +bash ./data/steps/deploy-kind-clusters/deploy-cluster2.sh ``` - Then run the following commands to wait for all the Pods to be ready: ```bash @@ -99,27 +98,8 @@ Then run the following commands to wait for all the Pods to be ready: **Note:** If you run the `check.sh` script immediately after the `deploy.sh` script, you may see a jsonpath error. If that happens, simply wait a few seconds and try again. 
-Once the `check.sh` script completes, when you execute the `kubectl get pods -A` command, you should see the following: - -``` -NAMESPACE NAME READY STATUS RESTARTS AGE -kube-system calico-kube-controllers-59d85c5c84-sbk4k 1/1 Running 0 4h26m -kube-system calico-node-przxs 1/1 Running 0 4h26m -kube-system coredns-6955765f44-ln8f5 1/1 Running 0 4h26m -kube-system coredns-6955765f44-s7xxx 1/1 Running 0 4h26m -kube-system etcd-cluster1-control-plane 1/1 Running 0 4h27m -kube-system kube-apiserver-cluster1-control-plane 1/1 Running 0 4h27m -kube-system kube-controller-manager-cluster1-control-plane1/1 Running 0 4h27m -kube-system kube-proxy-ksvzw 1/1 Running 0 4h26m -kube-system kube-scheduler-cluster1-control-plane 1/1 Running 0 4h27m -local-path-storage local-path-provisioner-58f6947c7-lfmdx 1/1 Running 0 4h26m -metallb-system controller-5c9894b5cd-cn9x2 1/1 Running 0 4h26m -metallb-system speaker-d7jkp 1/1 Running 0 4h26m -``` - -**Note:** The CNI pods might be different, depending on which CNI you have deployed. - -You can see that your currently connected to this cluster by executing the `kubectl config get-contexts` command: +Once the `check.sh` script completes, execute the `kubectl get pods -A` command, and verify that all pods are in a running state. 
+ You can see that you're currently connected to this cluster by executing the `kubectl config get-contexts` command: ``` CURRENT NAME CLUSTER AUTHINFO NAMESPACE @@ -138,7 +118,8 @@ cat <<'EOF' > ./test.js const helpers = require('./tests/chai-exec'); describe("Clusters are healthy", () => { - const clusters = [process.env.MGMT, process.env.CLUSTER1, process.env.CLUSTER2]; + const clusters = ["mgmt", "cluster1", "cluster2"]; + clusters.forEach(cluster => { it(`Cluster ${cluster} is healthy`, () => helpers.k8sObjectIsPresent({ context: cluster, namespace: "default", k8sType: "service", k8sObj: "kubernetes" })); }); @@ -150,6 +131,7 @@ timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || + ## Lab 2 - Deploy and register Gloo Mesh [VIDEO LINK](https://youtu.be/djfFiepK4GY "Video Link") @@ -190,6 +172,7 @@ EOF echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-and-register-gloo-mesh/tests/environment-variables.test.js.liquid" timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } --> + Run the following commands to deploy the Gloo Mesh management plane: ```bash @@ -490,6 +473,7 @@ timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || + ## Lab 3 - Deploy Istio using Gloo Mesh Lifecycle Manager [VIDEO LINK](https://youtu.be/f76-KOEjqHs "Video Link") @@ -724,7 +708,7 @@ spec: istioOperatorSpec: profile: minimal hub: us-docker.pkg.dev/gloo-mesh/istio-workshops - tag: 1.23.1-solo + tag: 1.24.1-patch1-solo-distroless namespace: istio-system values: global: @@ -766,7 +750,7 @@ spec: istioOperatorSpec: profile: empty hub: us-docker.pkg.dev/gloo-mesh/istio-workshops - tag: 1.23.1-solo + tag: 1.24.1-patch1-solo-distroless values: gateways: istio-ingressgateway: @@ -793,7 +777,7 @@ spec: istioOperatorSpec: profile: empty hub: us-docker.pkg.dev/gloo-mesh/istio-workshops - tag: 1.23.1-solo + tag: 
1.24.1-patch1-solo-distroless values: gateways: istio-ingressgateway: @@ -829,7 +813,7 @@ spec: istioOperatorSpec: profile: minimal hub: us-docker.pkg.dev/gloo-mesh/istio-workshops - tag: 1.23.1-solo + tag: 1.24.1-patch1-solo-distroless namespace: istio-system values: global: @@ -871,7 +855,7 @@ spec: istioOperatorSpec: profile: empty hub: us-docker.pkg.dev/gloo-mesh/istio-workshops - tag: 1.23.1-solo + tag: 1.24.1-patch1-solo-distroless values: gateways: istio-ingressgateway: @@ -898,7 +882,7 @@ spec: istioOperatorSpec: profile: empty hub: us-docker.pkg.dev/gloo-mesh/istio-workshops - tag: 1.23.1-solo + tag: 1.24.1-patch1-solo-distroless values: gateways: istio-ingressgateway: diff --git a/gloo-mesh/core/2-7/default/data/steps/deploy-kind-clusters/deploy-cluster1.sh b/gloo-mesh/core/2-7/default/data/steps/deploy-kind-clusters/deploy-cluster1.sh new file mode 100644 index 0000000000..31b0806b9b --- /dev/null +++ b/gloo-mesh/core/2-7/default/data/steps/deploy-kind-clusters/deploy-cluster1.sh @@ -0,0 +1,289 @@ +#!/usr/bin/env bash +set -o errexit + +number="2" +name="cluster1" +region="" +zone="" +twodigits=$(printf "%02d\n" $number) + +kindest_node=${KINDEST_NODE} + +if [ -z "$kindest_node" ]; then + export k8s_version="1.28.0" + + [[ ${k8s_version::1} != 'v' ]] && export k8s_version=v${k8s_version} + kindest_node_ver=$(curl --silent "https://registry.hub.docker.com/v2/repositories/kindest/node/tags?page_size=100" \ + | jq -r '.results | .[] | select(.name==env.k8s_version) | .name+"@"+.digest') + + if [ -z "$kindest_node_ver" ]; then + echo "Incorrect Kubernetes version provided: ${k8s_version}." 
+ exit 1 + fi + kindest_node=kindest/node:${kindest_node_ver} +fi +echo "Using KinD image: ${kindest_node}" + +if [ -z "$3" ]; then + case $name in + cluster1) + region=us-west-1 + ;; + cluster2) + region=us-west-2 + ;; + *) + region=us-east-1 + ;; + esac +fi + +if [ -z "$4" ]; then + case $name in + cluster1) + zone=us-west-1a + ;; + cluster2) + zone=us-west-2a + ;; + *) + zone=us-east-1a + ;; + esac +fi + +if hostname -I 2>/dev/null; then + myip=$(hostname -I | awk '{ print $1 }') +else + myip=$(ipconfig getifaddr en0) +fi + +# Function to determine the next available cluster number +get_next_cluster_number() { + if ! kind get clusters 2>&1 | grep "^kind" > /dev/null; then + echo 1 + else + highest_num=$(kind get clusters | grep "^kind" | tail -1 | cut -c 5-) + echo $((highest_num + 1)) + fi +} + +if [ -f /.dockerenv ]; then +myip=$HOST_IP +container=$(docker inspect $(docker ps -q) | jq -r ".[] | select(.Config.Hostname == \"$HOSTNAME\") | .Name" | cut -d/ -f2) +docker network connect "kind" $container || true +number=$(get_next_cluster_number) +twodigits=$(printf "%02d\n" $number) +fi + +reg_name='kind-registry' +reg_port='5000' +docker start "${reg_name}" 2>/dev/null || \ +docker run -d --restart=always -p "0.0.0.0:${reg_port}:5000" --name "${reg_name}" registry:2 + +cache_port='5000' +cat > registries < ${HOME}/.${cache_name}-config.yml </dev/null || \ +docker run -d --restart=always ${DEPLOY_EXTRA_PARAMS} -v ${HOME}/.${cache_name}-config.yml:/etc/docker/registry/config.yml --name "${cache_name}" registry:2 +done +mkdir -p /tmp/oidc + +cat <<'EOF' >/tmp/oidc/sa-signer-pkcs8.pub +-----BEGIN PUBLIC KEY----- +MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA53YiBcrn7+ZK0Vb4odeA +1riYdvEb8To4H6/HtF+OKzuCIXFQ+bRy7yMrDGITYpfYPrTZOgfdeTLZqOiAj+cL +395nvxdly83SUrdh7ItfOPRluuuiPHnFn111wpyjBw5nut4Kx+M5MksNfA1hU0Zw +zIM9OviX8iEF8xHWUtz4BAMDG8N6+zpLo0pAzaei5hKuLZ9dZOzHBC8VOW82cQMm +5X5uOKsCHMtNSjqYUNB1DxN6xxM+odGWT/6xthPGk6YCxmO28YHPFZfiS2eAIpD8 
+2p/16KQKU6TkZSrldkYxiHIPhu+5f9faZJG7dB9pLN1SfdTBio4PK5Mz9muLUCv9 +ywIDAQAB +-----END PUBLIC KEY----- +EOF + +cat <<'EOF' >/tmp/oidc/sa-signer.key +-----BEGIN RSA PRIVATE KEY----- +MIIEpAIBAAKCAQEA53YiBcrn7+ZK0Vb4odeA1riYdvEb8To4H6/HtF+OKzuCIXFQ ++bRy7yMrDGITYpfYPrTZOgfdeTLZqOiAj+cL395nvxdly83SUrdh7ItfOPRluuui +PHnFn111wpyjBw5nut4Kx+M5MksNfA1hU0ZwzIM9OviX8iEF8xHWUtz4BAMDG8N6 ++zpLo0pAzaei5hKuLZ9dZOzHBC8VOW82cQMm5X5uOKsCHMtNSjqYUNB1DxN6xxM+ +odGWT/6xthPGk6YCxmO28YHPFZfiS2eAIpD82p/16KQKU6TkZSrldkYxiHIPhu+5 +f9faZJG7dB9pLN1SfdTBio4PK5Mz9muLUCv9ywIDAQABAoIBAB8tro+RMYUDRHjG +el9ypAxIeWEsQVNRQFYkW4ZUiNYSAgl3Ni0svX6xAg989peFVL+9pLVIcfDthJxY +FVlNCjBxyQ/YmwHFC9vQkARJEd6eLUXsj8INtS0ubbp1VxCQRDDL0C/0z7OSoJJh +SwboqjEiTJExA2a+RArmEDTBRzdi3t+kT8G23JcqOivrITt17K6bQYyJXw7/vUdc +r/R+hfd5TqVq92VddzDT7RNJAxsbPPXjGnESlq1GALBDs+uBGYsP0fiEJb2nicSv +z9fBnBeERhut1gcE0C0iLRQZb+3r8TitBtxrZv+0BHgXrkKtXDwWTqGEKOwC4dBn +7nxkH2ECgYEA6+/DOTABGYOWOQftFkJMjcugzDrjoGpuXuVOTb65T+3FHAzU93zy +3bt3wQxrlugluyy9Sc/PL3ck2LgUsPHZ+s7zsdGvvGALBD6bOSSKATz9JgjwifO8 +PgqUz1kXRwez2CtKLOOCFFtcIzEdWIzsa1ubNqLzgN7rD+XBkUc2uEcCgYEA+yTy +72EDMQVoIZOygytHsDNdy0iS2RsBbdurT27wkYuFpFUVWdbNSL+8haE+wJHseHcw +BD4WIMpU+hnS4p4OO8+6V7PiXOS5E/se91EJigZAoixgDUiC8ihojWgK9PYEavUo +hULWbayO59SxYWeUI4Ze0GP8Jw8vdB86ib4ulF0CgYEAgyzRuLjk05+iZODwQyDn +WSquov3W0rh51s7cw0LX2wWSQm8r9NGGYhs5kJ5sLwGxAKj2MNSWF4jBdrCZ6Gr+ +y4BGY0X209/+IAUC3jlfdSLIiF4OBlT6AvB1HfclhvtUVUp0OhLfnpvQ1UwYScRI +KcRLvovIoIzP2g3emfwjAz8CgYEAxUHhOhm1mwRHJNBQTuxok0HVMrze8n1eov39 +0RcvBvJSVp+pdHXdqX1HwqHCmxhCZuAeq8ZkNP8WvZYY6HwCbAIdt5MHgbT4lXQR +f2l8F5gPnhFCpExG5ZLNg/urV3oAQE4stHap21zEpdyOMhZb6Yc5424U+EzaFdgN +b3EcPtUCgYAkKvUlSnBbgiJz1iaN6fuTqH0efavuFGMhjNmG7GtpNXdgyl1OWIuc +Yu+tZtHXtKYf3B99GwPrFzw/7yfDwae5YeWmi2/pFTH96wv3brJBqkAWY8G5Rsmd +qF50p34vIFqUBniNRwSArx8t2dq/CuAMgLAtSjh70Q6ZAnCF85PD8Q== +-----END RSA PRIVATE KEY----- +EOF + +echo Contents of kind${number}.yaml +cat << EOF | tee kind${number}.yaml +kind: Cluster +apiVersion: kind.x-k8s.io/v1alpha4 +nodes: +- 
role: control-plane + image: ${kindest_node} + extraPortMappings: + - containerPort: 6443 + hostPort: 70${twodigits} + extraMounts: + - containerPath: /etc/kubernetes/oidc + hostPath: /tmp/oidc + labels: + ingress-ready: true + topology.kubernetes.io/region: ${region} + topology.kubernetes.io/zone: ${zone} +networking: + serviceSubnet: "10.$(echo $twodigits | sed 's/^0*//').0.0/16" + podSubnet: "10.1${twodigits}.0.0/16" +kubeadmConfigPatches: +- | + kind: ClusterConfiguration + apiServer: + extraArgs: + service-account-key-file: /etc/kubernetes/pki/sa.pub + service-account-key-file: /etc/kubernetes/oidc/sa-signer-pkcs8.pub + service-account-signing-key-file: /etc/kubernetes/oidc/sa-signer.key + service-account-issuer: https://solo-workshop-oidc.s3.us-east-1.amazonaws.com + api-audiences: sts.amazonaws.com + extraVolumes: + - name: oidc + hostPath: /etc/kubernetes/oidc + mountPath: /etc/kubernetes/oidc + readOnly: true + metadata: + name: config +containerdConfigPatches: +- |- + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:${reg_port}"] + endpoint = ["http://${reg_name}:${reg_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"] + endpoint = ["http://docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-docker.pkg.dev"] + endpoint = ["http://us-docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-central1-docker.pkg.dev"] + endpoint = ["http://us-central1-docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."quay.io"] + endpoint = ["http://quay:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"] + endpoint = ["http://gcr:${cache_port}"] +EOF +echo ----------------------------------------------------- + +kind create cluster --name kind${number} --config kind${number}.yaml +ipkind=$(docker inspect kind${number}-control-plane | jq -r '.[0].NetworkSettings.Networks[].IPAddress') +networkkind=$(echo ${ipkind} | awk 
-F. '{ print $1"."$2 }') +kubectl config set-cluster kind-kind${number} --server=https://${myip}:70${twodigits} --insecure-skip-tls-verify=true + +# Preload images +cat << EOF >> images.txt +quay.io/metallb/controller:v0.13.12 +quay.io/metallb/speaker:v0.13.12 +EOF +cat images.txt | while read image; do + docker pull $image || true + kind load docker-image $image --name kind${number} || true +done + +docker network connect "kind" "${reg_name}" || true +docker network connect "kind" docker || true +docker network connect "kind" us-docker || true +docker network connect "kind" us-central1-docker || true +docker network connect "kind" quay || true +docker network connect "kind" gcr || true + +for i in 1 2 3 4 5; do kubectl --context=kind-kind${number} apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml && break || sleep 15; done +kubectl --context=kind-kind${number} create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)" +kubectl --context=kind-kind${number} -n metallb-system rollout status deploy controller || true + +cat << EOF | tee metallb${number}.yaml +apiVersion: metallb.io/v1beta1 +kind: IPAddressPool +metadata: + name: first-pool + namespace: metallb-system +spec: + addresses: + - ${networkkind}.1${twodigits}.1-${networkkind}.1${twodigits}.254 +--- +apiVersion: metallb.io/v1beta1 +kind: L2Advertisement +metadata: + name: empty + namespace: metallb-system +EOF + +printf "Create IPAddressPool in kind-kind${number}\n" +for i in {1..10}; do +kubectl --context=kind-kind${number} apply -f metallb${number}.yaml && break +sleep 2 +done + +# connect the registry to the cluster network if not already connected +printf "Renaming context kind-kind${number} to ${name}\n" +for i in {1..100}; do + (kubectl config get-contexts -oname | grep ${name}) && break + kubectl config rename-context kind-kind${number} ${name} && break + printf " $i"/100 + sleep 2 + [ $i -lt 100 ] || 
exit 1 +done + +# Document the local registry +# https://github.com/kubernetes/enhancements/tree/master/keps/sig-cluster-lifecycle/generic/1755-communicating-a-local-registry +cat </dev/null; then + myip=$(hostname -I | awk '{ print $1 }') +else + myip=$(ipconfig getifaddr en0) +fi + +# Function to determine the next available cluster number +get_next_cluster_number() { + if ! kind get clusters 2>&1 | grep "^kind" > /dev/null; then + echo 1 + else + highest_num=$(kind get clusters | grep "^kind" | tail -1 | cut -c 5-) + echo $((highest_num + 1)) + fi +} + +if [ -f /.dockerenv ]; then +myip=$HOST_IP +container=$(docker inspect $(docker ps -q) | jq -r ".[] | select(.Config.Hostname == \"$HOSTNAME\") | .Name" | cut -d/ -f2) +docker network connect "kind" $container || true +number=$(get_next_cluster_number) +twodigits=$(printf "%02d\n" $number) +fi + +reg_name='kind-registry' +reg_port='5000' +docker start "${reg_name}" 2>/dev/null || \ +docker run -d --restart=always -p "0.0.0.0:${reg_port}:5000" --name "${reg_name}" registry:2 + +cache_port='5000' +cat > registries < ${HOME}/.${cache_name}-config.yml </dev/null || \ +docker run -d --restart=always ${DEPLOY_EXTRA_PARAMS} -v ${HOME}/.${cache_name}-config.yml:/etc/docker/registry/config.yml --name "${cache_name}" registry:2 +done +mkdir -p /tmp/oidc + +cat <<'EOF' >/tmp/oidc/sa-signer-pkcs8.pub +-----BEGIN PUBLIC KEY----- +MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA53YiBcrn7+ZK0Vb4odeA +1riYdvEb8To4H6/HtF+OKzuCIXFQ+bRy7yMrDGITYpfYPrTZOgfdeTLZqOiAj+cL +395nvxdly83SUrdh7ItfOPRluuuiPHnFn111wpyjBw5nut4Kx+M5MksNfA1hU0Zw +zIM9OviX8iEF8xHWUtz4BAMDG8N6+zpLo0pAzaei5hKuLZ9dZOzHBC8VOW82cQMm +5X5uOKsCHMtNSjqYUNB1DxN6xxM+odGWT/6xthPGk6YCxmO28YHPFZfiS2eAIpD8 +2p/16KQKU6TkZSrldkYxiHIPhu+5f9faZJG7dB9pLN1SfdTBio4PK5Mz9muLUCv9 +ywIDAQAB +-----END PUBLIC KEY----- +EOF + +cat <<'EOF' >/tmp/oidc/sa-signer.key +-----BEGIN RSA PRIVATE KEY----- +MIIEpAIBAAKCAQEA53YiBcrn7+ZK0Vb4odeA1riYdvEb8To4H6/HtF+OKzuCIXFQ 
++bRy7yMrDGITYpfYPrTZOgfdeTLZqOiAj+cL395nvxdly83SUrdh7ItfOPRluuui +PHnFn111wpyjBw5nut4Kx+M5MksNfA1hU0ZwzIM9OviX8iEF8xHWUtz4BAMDG8N6 ++zpLo0pAzaei5hKuLZ9dZOzHBC8VOW82cQMm5X5uOKsCHMtNSjqYUNB1DxN6xxM+ +odGWT/6xthPGk6YCxmO28YHPFZfiS2eAIpD82p/16KQKU6TkZSrldkYxiHIPhu+5 +f9faZJG7dB9pLN1SfdTBio4PK5Mz9muLUCv9ywIDAQABAoIBAB8tro+RMYUDRHjG +el9ypAxIeWEsQVNRQFYkW4ZUiNYSAgl3Ni0svX6xAg989peFVL+9pLVIcfDthJxY +FVlNCjBxyQ/YmwHFC9vQkARJEd6eLUXsj8INtS0ubbp1VxCQRDDL0C/0z7OSoJJh +SwboqjEiTJExA2a+RArmEDTBRzdi3t+kT8G23JcqOivrITt17K6bQYyJXw7/vUdc +r/R+hfd5TqVq92VddzDT7RNJAxsbPPXjGnESlq1GALBDs+uBGYsP0fiEJb2nicSv +z9fBnBeERhut1gcE0C0iLRQZb+3r8TitBtxrZv+0BHgXrkKtXDwWTqGEKOwC4dBn +7nxkH2ECgYEA6+/DOTABGYOWOQftFkJMjcugzDrjoGpuXuVOTb65T+3FHAzU93zy +3bt3wQxrlugluyy9Sc/PL3ck2LgUsPHZ+s7zsdGvvGALBD6bOSSKATz9JgjwifO8 +PgqUz1kXRwez2CtKLOOCFFtcIzEdWIzsa1ubNqLzgN7rD+XBkUc2uEcCgYEA+yTy +72EDMQVoIZOygytHsDNdy0iS2RsBbdurT27wkYuFpFUVWdbNSL+8haE+wJHseHcw +BD4WIMpU+hnS4p4OO8+6V7PiXOS5E/se91EJigZAoixgDUiC8ihojWgK9PYEavUo +hULWbayO59SxYWeUI4Ze0GP8Jw8vdB86ib4ulF0CgYEAgyzRuLjk05+iZODwQyDn +WSquov3W0rh51s7cw0LX2wWSQm8r9NGGYhs5kJ5sLwGxAKj2MNSWF4jBdrCZ6Gr+ +y4BGY0X209/+IAUC3jlfdSLIiF4OBlT6AvB1HfclhvtUVUp0OhLfnpvQ1UwYScRI +KcRLvovIoIzP2g3emfwjAz8CgYEAxUHhOhm1mwRHJNBQTuxok0HVMrze8n1eov39 +0RcvBvJSVp+pdHXdqX1HwqHCmxhCZuAeq8ZkNP8WvZYY6HwCbAIdt5MHgbT4lXQR +f2l8F5gPnhFCpExG5ZLNg/urV3oAQE4stHap21zEpdyOMhZb6Yc5424U+EzaFdgN +b3EcPtUCgYAkKvUlSnBbgiJz1iaN6fuTqH0efavuFGMhjNmG7GtpNXdgyl1OWIuc +Yu+tZtHXtKYf3B99GwPrFzw/7yfDwae5YeWmi2/pFTH96wv3brJBqkAWY8G5Rsmd +qF50p34vIFqUBniNRwSArx8t2dq/CuAMgLAtSjh70Q6ZAnCF85PD8Q== +-----END RSA PRIVATE KEY----- +EOF + +echo Contents of kind${number}.yaml +cat << EOF | tee kind${number}.yaml +kind: Cluster +apiVersion: kind.x-k8s.io/v1alpha4 +nodes: +- role: control-plane + image: ${kindest_node} + extraPortMappings: + - containerPort: 6443 + hostPort: 70${twodigits} + extraMounts: + - containerPath: /etc/kubernetes/oidc + hostPath: /tmp/oidc + labels: + ingress-ready: true + 
topology.kubernetes.io/region: ${region} + topology.kubernetes.io/zone: ${zone} +networking: + serviceSubnet: "10.$(echo $twodigits | sed 's/^0*//').0.0/16" + podSubnet: "10.1${twodigits}.0.0/16" +kubeadmConfigPatches: +- | + kind: ClusterConfiguration + apiServer: + extraArgs: + service-account-key-file: /etc/kubernetes/pki/sa.pub + service-account-key-file: /etc/kubernetes/oidc/sa-signer-pkcs8.pub + service-account-signing-key-file: /etc/kubernetes/oidc/sa-signer.key + service-account-issuer: https://solo-workshop-oidc.s3.us-east-1.amazonaws.com + api-audiences: sts.amazonaws.com + extraVolumes: + - name: oidc + hostPath: /etc/kubernetes/oidc + mountPath: /etc/kubernetes/oidc + readOnly: true + metadata: + name: config +containerdConfigPatches: +- |- + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:${reg_port}"] + endpoint = ["http://${reg_name}:${reg_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"] + endpoint = ["http://docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-docker.pkg.dev"] + endpoint = ["http://us-docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-central1-docker.pkg.dev"] + endpoint = ["http://us-central1-docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."quay.io"] + endpoint = ["http://quay:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"] + endpoint = ["http://gcr:${cache_port}"] +EOF +echo ----------------------------------------------------- + +kind create cluster --name kind${number} --config kind${number}.yaml +ipkind=$(docker inspect kind${number}-control-plane | jq -r '.[0].NetworkSettings.Networks[].IPAddress') +networkkind=$(echo ${ipkind} | awk -F. 
'{ print $1"."$2 }') +kubectl config set-cluster kind-kind${number} --server=https://${myip}:70${twodigits} --insecure-skip-tls-verify=true + +# Preload images +cat << EOF >> images.txt +quay.io/metallb/controller:v0.13.12 +quay.io/metallb/speaker:v0.13.12 +EOF +cat images.txt | while read image; do + docker pull $image || true + kind load docker-image $image --name kind${number} || true +done + +docker network connect "kind" "${reg_name}" || true +docker network connect "kind" docker || true +docker network connect "kind" us-docker || true +docker network connect "kind" us-central1-docker || true +docker network connect "kind" quay || true +docker network connect "kind" gcr || true + +for i in 1 2 3 4 5; do kubectl --context=kind-kind${number} apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml && break || sleep 15; done +kubectl --context=kind-kind${number} create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)" +kubectl --context=kind-kind${number} -n metallb-system rollout status deploy controller || true + +cat << EOF | tee metallb${number}.yaml +apiVersion: metallb.io/v1beta1 +kind: IPAddressPool +metadata: + name: first-pool + namespace: metallb-system +spec: + addresses: + - ${networkkind}.1${twodigits}.1-${networkkind}.1${twodigits}.254 +--- +apiVersion: metallb.io/v1beta1 +kind: L2Advertisement +metadata: + name: empty + namespace: metallb-system +EOF + +printf "Create IPAddressPool in kind-kind${number}\n" +for i in {1..10}; do +kubectl --context=kind-kind${number} apply -f metallb${number}.yaml && break +sleep 2 +done + +# connect the registry to the cluster network if not already connected +printf "Renaming context kind-kind${number} to ${name}\n" +for i in {1..100}; do + (kubectl config get-contexts -oname | grep ${name}) && break + kubectl config rename-context kind-kind${number} ${name} && break + printf " $i"/100 + sleep 2 + [ $i -lt 100 ] || exit 
1 +done + +# Document the local registry +# https://github.com/kubernetes/enhancements/tree/master/keps/sig-cluster-lifecycle/generic/1755-communicating-a-local-registry +cat </dev/null; then + myip=$(hostname -I | awk '{ print $1 }') +else + myip=$(ipconfig getifaddr en0) +fi + +# Function to determine the next available cluster number +get_next_cluster_number() { + if ! kind get clusters 2>&1 | grep "^kind" > /dev/null; then + echo 1 + else + highest_num=$(kind get clusters | grep "^kind" | tail -1 | cut -c 5-) + echo $((highest_num + 1)) + fi +} + +if [ -f /.dockerenv ]; then +myip=$HOST_IP +container=$(docker inspect $(docker ps -q) | jq -r ".[] | select(.Config.Hostname == \"$HOSTNAME\") | .Name" | cut -d/ -f2) +docker network connect "kind" $container || true +number=$(get_next_cluster_number) +twodigits=$(printf "%02d\n" $number) +fi + +reg_name='kind-registry' +reg_port='5000' +docker start "${reg_name}" 2>/dev/null || \ +docker run -d --restart=always -p "0.0.0.0:${reg_port}:5000" --name "${reg_name}" registry:2 + +cache_port='5000' +cat > registries < ${HOME}/.${cache_name}-config.yml </dev/null || \ +docker run -d --restart=always ${DEPLOY_EXTRA_PARAMS} -v ${HOME}/.${cache_name}-config.yml:/etc/docker/registry/config.yml --name "${cache_name}" registry:2 +done +mkdir -p /tmp/oidc + +cat <<'EOF' >/tmp/oidc/sa-signer-pkcs8.pub +-----BEGIN PUBLIC KEY----- +MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA53YiBcrn7+ZK0Vb4odeA +1riYdvEb8To4H6/HtF+OKzuCIXFQ+bRy7yMrDGITYpfYPrTZOgfdeTLZqOiAj+cL +395nvxdly83SUrdh7ItfOPRluuuiPHnFn111wpyjBw5nut4Kx+M5MksNfA1hU0Zw +zIM9OviX8iEF8xHWUtz4BAMDG8N6+zpLo0pAzaei5hKuLZ9dZOzHBC8VOW82cQMm +5X5uOKsCHMtNSjqYUNB1DxN6xxM+odGWT/6xthPGk6YCxmO28YHPFZfiS2eAIpD8 +2p/16KQKU6TkZSrldkYxiHIPhu+5f9faZJG7dB9pLN1SfdTBio4PK5Mz9muLUCv9 +ywIDAQAB +-----END PUBLIC KEY----- +EOF + +cat <<'EOF' >/tmp/oidc/sa-signer.key +-----BEGIN RSA PRIVATE KEY----- +MIIEpAIBAAKCAQEA53YiBcrn7+ZK0Vb4odeA1riYdvEb8To4H6/HtF+OKzuCIXFQ 
++bRy7yMrDGITYpfYPrTZOgfdeTLZqOiAj+cL395nvxdly83SUrdh7ItfOPRluuui +PHnFn111wpyjBw5nut4Kx+M5MksNfA1hU0ZwzIM9OviX8iEF8xHWUtz4BAMDG8N6 ++zpLo0pAzaei5hKuLZ9dZOzHBC8VOW82cQMm5X5uOKsCHMtNSjqYUNB1DxN6xxM+ +odGWT/6xthPGk6YCxmO28YHPFZfiS2eAIpD82p/16KQKU6TkZSrldkYxiHIPhu+5 +f9faZJG7dB9pLN1SfdTBio4PK5Mz9muLUCv9ywIDAQABAoIBAB8tro+RMYUDRHjG +el9ypAxIeWEsQVNRQFYkW4ZUiNYSAgl3Ni0svX6xAg989peFVL+9pLVIcfDthJxY +FVlNCjBxyQ/YmwHFC9vQkARJEd6eLUXsj8INtS0ubbp1VxCQRDDL0C/0z7OSoJJh +SwboqjEiTJExA2a+RArmEDTBRzdi3t+kT8G23JcqOivrITt17K6bQYyJXw7/vUdc +r/R+hfd5TqVq92VddzDT7RNJAxsbPPXjGnESlq1GALBDs+uBGYsP0fiEJb2nicSv +z9fBnBeERhut1gcE0C0iLRQZb+3r8TitBtxrZv+0BHgXrkKtXDwWTqGEKOwC4dBn +7nxkH2ECgYEA6+/DOTABGYOWOQftFkJMjcugzDrjoGpuXuVOTb65T+3FHAzU93zy +3bt3wQxrlugluyy9Sc/PL3ck2LgUsPHZ+s7zsdGvvGALBD6bOSSKATz9JgjwifO8 +PgqUz1kXRwez2CtKLOOCFFtcIzEdWIzsa1ubNqLzgN7rD+XBkUc2uEcCgYEA+yTy +72EDMQVoIZOygytHsDNdy0iS2RsBbdurT27wkYuFpFUVWdbNSL+8haE+wJHseHcw +BD4WIMpU+hnS4p4OO8+6V7PiXOS5E/se91EJigZAoixgDUiC8ihojWgK9PYEavUo +hULWbayO59SxYWeUI4Ze0GP8Jw8vdB86ib4ulF0CgYEAgyzRuLjk05+iZODwQyDn +WSquov3W0rh51s7cw0LX2wWSQm8r9NGGYhs5kJ5sLwGxAKj2MNSWF4jBdrCZ6Gr+ +y4BGY0X209/+IAUC3jlfdSLIiF4OBlT6AvB1HfclhvtUVUp0OhLfnpvQ1UwYScRI +KcRLvovIoIzP2g3emfwjAz8CgYEAxUHhOhm1mwRHJNBQTuxok0HVMrze8n1eov39 +0RcvBvJSVp+pdHXdqX1HwqHCmxhCZuAeq8ZkNP8WvZYY6HwCbAIdt5MHgbT4lXQR +f2l8F5gPnhFCpExG5ZLNg/urV3oAQE4stHap21zEpdyOMhZb6Yc5424U+EzaFdgN +b3EcPtUCgYAkKvUlSnBbgiJz1iaN6fuTqH0efavuFGMhjNmG7GtpNXdgyl1OWIuc +Yu+tZtHXtKYf3B99GwPrFzw/7yfDwae5YeWmi2/pFTH96wv3brJBqkAWY8G5Rsmd +qF50p34vIFqUBniNRwSArx8t2dq/CuAMgLAtSjh70Q6ZAnCF85PD8Q== +-----END RSA PRIVATE KEY----- +EOF + +echo Contents of kind${number}.yaml +cat << EOF | tee kind${number}.yaml +kind: Cluster +apiVersion: kind.x-k8s.io/v1alpha4 +nodes: +- role: control-plane + image: ${kindest_node} + extraPortMappings: + - containerPort: 6443 + hostPort: 70${twodigits} + extraMounts: + - containerPath: /etc/kubernetes/oidc + hostPath: /tmp/oidc + labels: + ingress-ready: true + 
topology.kubernetes.io/region: ${region} + topology.kubernetes.io/zone: ${zone} +networking: + serviceSubnet: "10.$(echo $twodigits | sed 's/^0*//').0.0/16" + podSubnet: "10.1${twodigits}.0.0/16" +kubeadmConfigPatches: +- | + kind: ClusterConfiguration + apiServer: + extraArgs: + service-account-key-file: /etc/kubernetes/pki/sa.pub + service-account-key-file: /etc/kubernetes/oidc/sa-signer-pkcs8.pub + service-account-signing-key-file: /etc/kubernetes/oidc/sa-signer.key + service-account-issuer: https://solo-workshop-oidc.s3.us-east-1.amazonaws.com + api-audiences: sts.amazonaws.com + extraVolumes: + - name: oidc + hostPath: /etc/kubernetes/oidc + mountPath: /etc/kubernetes/oidc + readOnly: true + metadata: + name: config +containerdConfigPatches: +- |- + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:${reg_port}"] + endpoint = ["http://${reg_name}:${reg_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"] + endpoint = ["http://docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-docker.pkg.dev"] + endpoint = ["http://us-docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-central1-docker.pkg.dev"] + endpoint = ["http://us-central1-docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."quay.io"] + endpoint = ["http://quay:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"] + endpoint = ["http://gcr:${cache_port}"] +EOF +echo ----------------------------------------------------- + +kind create cluster --name kind${number} --config kind${number}.yaml +ipkind=$(docker inspect kind${number}-control-plane | jq -r '.[0].NetworkSettings.Networks[].IPAddress') +networkkind=$(echo ${ipkind} | awk -F. 
'{ print $1"."$2 }') +kubectl config set-cluster kind-kind${number} --server=https://${myip}:70${twodigits} --insecure-skip-tls-verify=true + +# Preload images +cat << EOF >> images.txt +quay.io/metallb/controller:v0.13.12 +quay.io/metallb/speaker:v0.13.12 +EOF +cat images.txt | while read image; do + docker pull $image || true + kind load docker-image $image --name kind${number} || true +done + +docker network connect "kind" "${reg_name}" || true +docker network connect "kind" docker || true +docker network connect "kind" us-docker || true +docker network connect "kind" us-central1-docker || true +docker network connect "kind" quay || true +docker network connect "kind" gcr || true + +for i in 1 2 3 4 5; do kubectl --context=kind-kind${number} apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml && break || sleep 15; done +kubectl --context=kind-kind${number} create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)" +kubectl --context=kind-kind${number} -n metallb-system rollout status deploy controller || true + +cat << EOF | tee metallb${number}.yaml +apiVersion: metallb.io/v1beta1 +kind: IPAddressPool +metadata: + name: first-pool + namespace: metallb-system +spec: + addresses: + - ${networkkind}.1${twodigits}.1-${networkkind}.1${twodigits}.254 +--- +apiVersion: metallb.io/v1beta1 +kind: L2Advertisement +metadata: + name: empty + namespace: metallb-system +EOF + +printf "Create IPAddressPool in kind-kind${number}\n" +for i in {1..10}; do +kubectl --context=kind-kind${number} apply -f metallb${number}.yaml && break +sleep 2 +done + +# connect the registry to the cluster network if not already connected +printf "Renaming context kind-kind${number} to ${name}\n" +for i in {1..100}; do + (kubectl config get-contexts -oname | grep ${name}) && break + kubectl config rename-context kind-kind${number} ${name} && break + printf " $i"/100 + sleep 2 + [ $i -lt 100 ] || exit 
1 +done + +# Document the local registry +# https://github.com/kubernetes/enhancements/tree/master/keps/sig-cluster-lifecycle/generic/1755-communicating-a-local-registry +cat </dev/null || true +source ./scripts/assert.sh +export MGMT=mgmt +export CLUSTER1=cluster1 +export CLUSTER2=cluster2 +bash ./data/steps/deploy-kind-clusters/deploy-mgmt.sh +bash ./data/steps/deploy-kind-clusters/deploy-cluster1.sh +bash ./data/steps/deploy-kind-clusters/deploy-cluster2.sh +./scripts/check.sh mgmt +./scripts/check.sh cluster1 +./scripts/check.sh cluster2 +kubectl config use-context ${MGMT} +cat <<'EOF' > ./test.js +const helpers = require('./tests/chai-exec'); + +describe("Clusters are healthy", () => { + const clusters = ["mgmt", "cluster1", "cluster2"]; + + clusters.forEach(cluster => { + it(`Cluster ${cluster} is healthy`, () => helpers.k8sObjectIsPresent({ context: cluster, namespace: "default", k8sType: "service", k8sObj: "kubernetes" })); + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-kind-clusters/tests/cluster-healthy.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +export GLOO_MESH_VERSION=v2.7.0-beta1 +curl -sL https://run.solo.io/meshctl/install | sh - +export PATH=$HOME/.gloo-mesh/bin:$PATH +cat <<'EOF' > ./test.js +var chai = require('chai'); +var expect = chai.expect; + +describe("Required environment variables should contain value", () => { + afterEach(function(done){ + if(this.currentTest.currentRetry() > 0){ + process.stdout.write("."); + setTimeout(done, 1000); + } else { + done(); + } + }); + + it("Context environment variables should not be empty", () => { + expect(process.env.MGMT).not.to.be.empty + expect(process.env.CLUSTER1).not.to.be.empty + expect(process.env.CLUSTER2).not.to.be.empty + }); + + it("Gloo Mesh licence environment variables should not be empty", () => { + 
expect(process.env.GLOO_MESH_LICENSE_KEY).not.to.be.empty + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-and-register-gloo-mesh/tests/environment-variables.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context ${MGMT} create ns gloo-mesh + +helm upgrade --install gloo-platform-crds gloo-platform-crds \ + --repo https://storage.googleapis.com/gloo-platform/helm-charts \ + --namespace gloo-mesh \ + --kube-context ${MGMT} \ + --set featureGates.insightsConfiguration=true \ + --version 2.7.0-beta1 + +helm upgrade --install gloo-platform gloo-platform \ + --repo https://storage.googleapis.com/gloo-platform/helm-charts \ + --namespace gloo-mesh \ + --kube-context ${MGMT} \ + --version 2.7.0-beta1 \ + -f -< ./test.js + +const helpers = require('./tests/chai-exec'); + +describe("MGMT server is healthy", () => { + let cluster = process.env.MGMT; + let deployments = ["gloo-mesh-mgmt-server","gloo-mesh-redis","gloo-telemetry-gateway","prometheus-server"]; + deployments.forEach(deploy => { + it(deploy + ' pods are ready in ' + cluster, () => helpers.checkDeployment({ context: cluster, namespace: "gloo-mesh", k8sObj: deploy })); + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-and-register-gloo-mesh/tests/check-deployment.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +cat <<'EOF' > ./test.js +const chaiExec = require("@jsdevtools/chai-exec"); +var chai = require('chai'); +var expect = chai.expect; +chai.use(chaiExec); + +afterEach(function (done) { + if (this.currentTest.currentRetry() > 0) { + process.stdout.write("."); + setTimeout(done, 1000); + } else { + done(); + } +}); +EOF +echo "executing test 
dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-and-register-gloo-mesh/tests/get-gloo-mesh-mgmt-server-ip.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +export ENDPOINT_GLOO_MESH=$(kubectl --context ${MGMT} -n gloo-mesh get svc gloo-mesh-mgmt-server -o jsonpath='{.status.loadBalancer.ingress[0].*}'):9900 +export HOST_GLOO_MESH=$(echo ${ENDPOINT_GLOO_MESH%:*}) +export ENDPOINT_TELEMETRY_GATEWAY=$(kubectl --context ${MGMT} -n gloo-mesh get svc gloo-telemetry-gateway -o jsonpath='{.status.loadBalancer.ingress[0].*}'):4317 +export ENDPOINT_GLOO_MESH_UI=$(kubectl --context ${MGMT} -n gloo-mesh get svc gloo-mesh-ui -o jsonpath='{.status.loadBalancer.ingress[0].*}'):8090 +cat <<'EOF' > ./test.js +const dns = require('dns'); +const chaiHttp = require("chai-http"); +const chai = require("chai"); +const expect = chai.expect; +chai.use(chaiHttp); +const { waitOnFailedTest } = require('./tests/utils'); + +afterEach(function(done) { waitOnFailedTest(done, this.currentTest.currentRetry())}); + +describe("Address '" + process.env.HOST_GLOO_MESH + "' can be resolved in DNS", () => { + it(process.env.HOST_GLOO_MESH + ' can be resolved', (done) => { + return dns.lookup(process.env.HOST_GLOO_MESH, (err, address, family) => { + expect(address).to.be.an.ip; + done(); + }); + }); +}); +EOF +echo "executing test ./gloo-mesh-2-0/tests/can-resolve.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl apply --context ${MGMT} -f - < ca.crt +kubectl create secret generic relay-root-tls-secret -n gloo-mesh --context ${CLUSTER1} --from-file ca.crt=ca.crt +rm ca.crt + +kubectl get secret relay-identity-token-secret -n gloo-mesh --context ${MGMT} -o jsonpath='{.data.token}' | base64 -d > token +kubectl create secret generic relay-identity-token-secret -n gloo-mesh 
--context ${CLUSTER1} --from-file token=token +rm token + +helm upgrade --install gloo-platform-crds gloo-platform-crds \ + --repo https://storage.googleapis.com/gloo-platform/helm-charts \ + --namespace gloo-mesh \ + --kube-context ${CLUSTER1} \ + --version 2.7.0-beta1 + +helm upgrade --install gloo-platform gloo-platform \ + --repo https://storage.googleapis.com/gloo-platform/helm-charts \ + --namespace gloo-mesh \ + --kube-context ${CLUSTER1} \ + --version 2.7.0-beta1 \ + -f -< ca.crt +kubectl create secret generic relay-root-tls-secret -n gloo-mesh --context ${CLUSTER2} --from-file ca.crt=ca.crt +rm ca.crt + +kubectl get secret relay-identity-token-secret -n gloo-mesh --context ${MGMT} -o jsonpath='{.data.token}' | base64 -d > token +kubectl create secret generic relay-identity-token-secret -n gloo-mesh --context ${CLUSTER2} --from-file token=token +rm token + +helm upgrade --install gloo-platform-crds gloo-platform-crds \ + --repo https://storage.googleapis.com/gloo-platform/helm-charts \ + --namespace gloo-mesh \ + --kube-context ${CLUSTER2} \ + --version 2.7.0-beta1 + +helm upgrade --install gloo-platform gloo-platform \ + --repo https://storage.googleapis.com/gloo-platform/helm-charts \ + --namespace gloo-mesh \ + --kube-context ${CLUSTER2} \ + --version 2.7.0-beta1 \ + -f -< ./test.js +var chai = require('chai'); +var expect = chai.expect; +const helpers = require('./tests/chai-exec'); +describe("Cluster registration", () => { + it("cluster1 is registered", () => { + podName = helpers.getOutputForCommand({ command: "kubectl -n gloo-mesh get pods -l app=gloo-mesh-mgmt-server -o jsonpath='{.items[0].metadata.name}' --context " + process.env.MGMT }).replaceAll("'", ""); + command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh debug -q -i " + podName + " --image=curlimages/curl -- curl -s http://localhost:9091/metrics" }).replaceAll("'", ""); + expect(command).to.contain("cluster1"); + }); + it("cluster2 is 
registered", () => { + podName = helpers.getOutputForCommand({ command: "kubectl -n gloo-mesh get pods -l app=gloo-mesh-mgmt-server -o jsonpath='{.items[0].metadata.name}' --context " + process.env.MGMT }).replaceAll("'", ""); + command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh debug -q -i " + podName + " --image=curlimages/curl -- curl -s http://localhost:9091/metrics" }).replaceAll("'", ""); + expect(command).to.contain("cluster2"); + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-and-register-gloo-mesh/tests/cluster-registration.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +curl -L https://istio.io/downloadIstio | sh - + +if [ -d "istio-"*/ ]; then + cd istio-*/ + export PATH=$PWD/bin:$PATH + cd .. +fi +cat <<'EOF' > ./test.js +const chaiExec = require("@jsdevtools/chai-exec"); +var chai = require('chai'); +var expect = chai.expect; +chai.use(chaiExec); + +afterEach(function (done) { + if (this.currentTest.currentRetry() > 0) { + process.stdout.write("."); + setTimeout(done, 1000); + } else { + done(); + } +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/istio-lifecycle-manager-install/tests/istio-version.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context ${CLUSTER1} create ns istio-gateways + +kubectl apply --context ${CLUSTER1} -f - < ./test.js + +const helpers = require('./tests/chai-exec'); + +const chaiExec = require("@jsdevtools/chai-exec"); +const helpersHttp = require('./tests/chai-http'); +const chai = require("chai"); +const expect = chai.expect; + +afterEach(function (done) { + if (this.currentTest.currentRetry() > 0) { + process.stdout.write("."); + setTimeout(done, 1000); + } else 
{ + done(); + } +}); + +describe("Checking Istio installation", function() { + it('istiod pods are ready in cluster ' + process.env.CLUSTER1, () => helpers.checkDeploymentsWithLabels({ context: process.env.CLUSTER1, namespace: "istio-system", labels: "app=istiod", instances: 1 })); + it('gateway pods are ready in cluster ' + process.env.CLUSTER1, () => helpers.checkDeploymentsWithLabels({ context: process.env.CLUSTER1, namespace: "istio-gateways", labels: "app=istio-ingressgateway", instances: 2 })); + it('istiod pods are ready in cluster ' + process.env.CLUSTER2, () => helpers.checkDeploymentsWithLabels({ context: process.env.CLUSTER2, namespace: "istio-system", labels: "app=istiod", instances: 1 })); + it('gateway pods are ready in cluster ' + process.env.CLUSTER2, () => helpers.checkDeploymentsWithLabels({ context: process.env.CLUSTER2, namespace: "istio-gateways", labels: "app=istio-ingressgateway", instances: 2 })); + it("Gateways have an ip attached in cluster " + process.env.CLUSTER1, () => { + let cli = chaiExec("kubectl --context " + process.env.CLUSTER1 + " -n istio-gateways get svc -l app=istio-ingressgateway -o jsonpath='{.items}'"); + cli.stderr.should.be.empty; + let deployments = JSON.parse(cli.stdout.slice(1,-1)); + expect(deployments).to.have.lengthOf(2); + deployments.forEach((deployment) => { + expect(deployment.status.loadBalancer).to.have.property("ingress"); + }); + }); + it("Gateways have an ip attached in cluster " + process.env.CLUSTER2, () => { + let cli = chaiExec("kubectl --context " + process.env.CLUSTER2 + " -n istio-gateways get svc -l app=istio-ingressgateway -o jsonpath='{.items}'"); + cli.stderr.should.be.empty; + let deployments = JSON.parse(cli.stdout.slice(1,-1)); + expect(deployments).to.have.lengthOf(2); + deployments.forEach((deployment) => { + expect(deployment.status.loadBalancer).to.have.property("ingress"); + }); + }); +}); + +EOF +echo "executing test 
dist/gloo-mesh-2-0-workshop/build/templates/steps/istio-lifecycle-manager-install/tests/istio-ready.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +timeout 2m bash -c "until [[ \$(kubectl --context ${CLUSTER1} -n istio-gateways get svc -l istio=ingressgateway -o json | jq '.items[0].status.loadBalancer | length') -gt 0 ]]; do + sleep 1 +done" +export HOST_GW_CLUSTER1="$(kubectl --context ${CLUSTER1} -n istio-gateways get svc -l istio=ingressgateway -o jsonpath='{.items[0].status.loadBalancer.ingress[0].*}')" +export HOST_GW_CLUSTER2="$(kubectl --context ${CLUSTER2} -n istio-gateways get svc -l istio=ingressgateway -o jsonpath='{.items[0].status.loadBalancer.ingress[0].*}')" +cat <<'EOF' > ./test.js +const dns = require('dns'); +const chaiHttp = require("chai-http"); +const chai = require("chai"); +const expect = chai.expect; +chai.use(chaiHttp); +const { waitOnFailedTest } = require('./tests/utils'); + +afterEach(function(done) { waitOnFailedTest(done, this.currentTest.currentRetry())}); + +describe("Address '" + process.env.HOST_GW_CLUSTER1 + "' can be resolved in DNS", () => { + it(process.env.HOST_GW_CLUSTER1 + ' can be resolved', (done) => { + return dns.lookup(process.env.HOST_GW_CLUSTER1, (err, address, family) => { + expect(address).to.be.an.ip; + done(); + }); + }); +}); +EOF +echo "executing test ./gloo-mesh-2-0/tests/can-resolve.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +cat <<'EOF' > ./test.js +const dns = require('dns'); +const chaiHttp = require("chai-http"); +const chai = require("chai"); +const expect = chai.expect; +chai.use(chaiHttp); +const { waitOnFailedTest } = require('./tests/utils'); + +afterEach(function(done) { waitOnFailedTest(done, this.currentTest.currentRetry())}); + +describe("Address '" + 
process.env.HOST_GW_CLUSTER2 + "' can be resolved in DNS", () => { + it(process.env.HOST_GW_CLUSTER2 + ' can be resolved', (done) => { + return dns.lookup(process.env.HOST_GW_CLUSTER2, (err, address, family) => { + expect(address).to.be.an.ip; + done(); + }); + }); +}); +EOF +echo "executing test ./gloo-mesh-2-0/tests/can-resolve.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context ${CLUSTER1} create ns bookinfo-frontends +kubectl --context ${CLUSTER1} create ns bookinfo-backends +kubectl --context ${CLUSTER1} label namespace bookinfo-frontends istio.io/rev=1-23 --overwrite +kubectl --context ${CLUSTER1} label namespace bookinfo-backends istio.io/rev=1-23 --overwrite + +# Deploy the frontend bookinfo service in the bookinfo-frontends namespace +kubectl --context ${CLUSTER1} -n bookinfo-frontends apply -f data/steps/deploy-bookinfo/productpage-v1.yaml + +# Deploy the backend bookinfo services in the bookinfo-backends namespace for all versions less than v3 +kubectl --context ${CLUSTER1} -n bookinfo-backends apply \ + -f data/steps/deploy-bookinfo/details-v1.yaml \ + -f data/steps/deploy-bookinfo/ratings-v1.yaml \ + -f data/steps/deploy-bookinfo/reviews-v1-v2.yaml + +# Update the reviews service to display where it is coming from +kubectl --context ${CLUSTER1} -n bookinfo-backends set env deploy/reviews-v1 CLUSTER_NAME=${CLUSTER1} +kubectl --context ${CLUSTER1} -n bookinfo-backends set env deploy/reviews-v2 CLUSTER_NAME=${CLUSTER1} +echo -n Waiting for bookinfo pods to be ready... +timeout -v 5m bash -c " +until [[ \$(kubectl --context ${CLUSTER1} -n bookinfo-frontends get deploy -o json | jq '[.items[].status.readyReplicas] | add') -eq 1 && \\ + \$(kubectl --context ${CLUSTER1} -n bookinfo-backends get deploy -o json | jq '[.items[].status.readyReplicas] | add') -eq 4 ]] 2>/dev/null +do + sleep 1 + echo -n . 
+done" +echo +kubectl --context ${CLUSTER2} create ns bookinfo-frontends +kubectl --context ${CLUSTER2} create ns bookinfo-backends +kubectl --context ${CLUSTER2} label namespace bookinfo-frontends istio.io/rev=1-23 --overwrite +kubectl --context ${CLUSTER2} label namespace bookinfo-backends istio.io/rev=1-23 --overwrite + +# Deploy the frontend bookinfo service in the bookinfo-frontends namespace +kubectl --context ${CLUSTER2} -n bookinfo-frontends apply -f data/steps/deploy-bookinfo/productpage-v1.yaml +# Deploy the backend bookinfo services in the bookinfo-backends namespace for all versions +kubectl --context ${CLUSTER2} -n bookinfo-backends apply \ + -f data/steps/deploy-bookinfo/details-v1.yaml \ + -f data/steps/deploy-bookinfo/ratings-v1.yaml \ + -f data/steps/deploy-bookinfo/reviews-v1-v2.yaml \ + -f data/steps/deploy-bookinfo/reviews-v3.yaml +# Update the reviews service to display where it is coming from +kubectl --context ${CLUSTER2} -n bookinfo-backends set env deploy/reviews-v1 CLUSTER_NAME=${CLUSTER2} +kubectl --context ${CLUSTER2} -n bookinfo-backends set env deploy/reviews-v2 CLUSTER_NAME=${CLUSTER2} +kubectl --context ${CLUSTER2} -n bookinfo-backends set env deploy/reviews-v3 CLUSTER_NAME=${CLUSTER2} + +echo -n Waiting for bookinfo pods to be ready... +timeout -v 5m bash -c " +until [[ \$(kubectl --context ${CLUSTER2} -n bookinfo-frontends get deploy -o json | jq '[.items[].status.readyReplicas] | add') -eq 1 && \\ + \$(kubectl --context ${CLUSTER2} -n bookinfo-backends get deploy -o json | jq '[.items[].status.readyReplicas] | add') -eq 5 ]] 2>/dev/null +do + sleep 1 + echo -n . 
+done" +echo +kubectl --context ${CLUSTER2} -n bookinfo-frontends get pods && kubectl --context ${CLUSTER2} -n bookinfo-backends get pods +cat <<'EOF' > ./test.js +const helpers = require('./tests/chai-exec'); + +describe("Bookinfo app", () => { + let cluster = process.env.CLUSTER1 + let deployments = ["productpage-v1"]; + deployments.forEach(deploy => { + it(deploy + ' pods are ready in ' + cluster, () => helpers.checkDeployment({ context: cluster, namespace: "bookinfo-frontends", k8sObj: deploy })); + }); + deployments = ["ratings-v1", "details-v1", "reviews-v1", "reviews-v2"]; + deployments.forEach(deploy => { + it(deploy + ' pods are ready in ' + cluster, () => helpers.checkDeployment({ context: cluster, namespace: "bookinfo-backends", k8sObj: deploy })); + }); + cluster = process.env.CLUSTER2 + deployments = ["productpage-v1"]; + deployments.forEach(deploy => { + it(deploy + ' pods are ready in ' + cluster, () => helpers.checkDeployment({ context: cluster, namespace: "bookinfo-frontends", k8sObj: deploy })); + }); + deployments = ["ratings-v1", "details-v1", "reviews-v1", "reviews-v2", "reviews-v3"]; + deployments.forEach(deploy => { + it(deploy + ' pods are ready in ' + cluster, () => helpers.checkDeployment({ context: cluster, namespace: "bookinfo-backends", k8sObj: deploy })); + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/deploy-bookinfo/tests/check-bookinfo.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context ${CLUSTER1} create ns httpbin +kubectl apply --context ${CLUSTER1} -f - </dev/null +do + sleep 1 + echo -n . 
+done" +echo +kubectl --context ${CLUSTER1} -n httpbin get pods +cat <<'EOF' > ./test.js +const helpers = require('./tests/chai-exec'); + +describe("httpbin app", () => { + let cluster = process.env.CLUSTER1 + + let deployments = ["not-in-mesh", "in-mesh"]; + + deployments.forEach(deploy => { + it(deploy + ' pods are ready in ' + cluster, () => helpers.checkDeployment({ context: cluster, namespace: "httpbin", k8sObj: deploy })); + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/httpbin/deploy-httpbin/tests/check-httpbin.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl apply --context ${CLUSTER1} -f - < ./test.js +const helpers = require('./tests/chai-http'); + +describe("productpage is available (HTTP)", () => { + it('/productpage is available in cluster1', () => helpers.checkURL({ host: `http://cluster1-bookinfo.example.com`, path: '/productpage', retCode: 200 })); +}) +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/gateway-expose-istio/tests/productpage-available.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +openssl req -x509 -nodes -days 365 -newkey rsa:2048 \ + -keyout tls.key -out tls.crt -subj "/CN=*" +kubectl --context ${CLUSTER1} -n istio-gateways create secret generic tls-secret \ +--from-file=tls.key=tls.key \ +--from-file=tls.crt=tls.crt + +kubectl --context ${CLUSTER2} -n istio-gateways create secret generic tls-secret \ +--from-file=tls.key=tls.key \ +--from-file=tls.crt=tls.crt +kubectl apply --context ${CLUSTER1} -f - < ./test.js +const helpers = require('./tests/chai-http'); + +describe("productpage is available (HTTPS)", () => { + it('/productpage is available in cluster1', () => helpers.checkURL({ host: 
`https://cluster1-bookinfo.example.com`, path: '/productpage', retCode: 200 })); +}) +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/gateway-expose-istio/tests/productpage-available-secure.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +cat <<'EOF' > ./test.js +var chai = require('chai'); +var expect = chai.expect; +const helpers = require('./tests/chai-exec'); + +describe("Otel metrics", () => { + it("cluster1 is sending metrics to telemetryGateway", () => { + podName = helpers.getOutputForCommand({ command: "kubectl -n gloo-mesh get pods -l app.kubernetes.io/name=prometheus -o jsonpath='{.items[0].metadata.name}' --context " + process.env.MGMT }).replaceAll("'", ""); + command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh debug -q -i " + podName + " --image=curlimages/curl -- curl -s http://localhost:9090/api/v1/query?query=istio_requests_total" }).replaceAll("'", ""); + expect(command).to.contain("cluster\":\"cluster1"); + }); +}); + + +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/gateway-expose-istio/tests/otel-metrics.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=150 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +cat <<'EOF' > ./test.js +const helpers = require('./tests/chai-http'); +const puppeteer = require('puppeteer'); +const chai = require('chai'); +const expect = chai.expect; +const GraphPage = require('./tests/pages/gloo-ui/graph-page'); +const { recognizeTextFromScreenshot } = require('./tests/utils/image-ocr-processor'); +const { enhanceBrowser } = require('./tests/utils/enhance-browser'); + +afterEach(function (done) { + if (this.currentTest.currentRetry() > 0) { + process.stdout.write("."); + setTimeout(done, 4000); + } else { + done(); + } 
+}); + +describe("graph page", function () { + // UI tests often require a longer timeout. + // So here we force it to a minimum of 30 seconds. + const currentTimeout = this.timeout(); + this.timeout(Math.max(currentTimeout, 30000)); + + let browser; + let page; + let graphPage; + + beforeEach(async function () { + browser = await puppeteer.launch({ + headless: "new", + slowMo: 40, + ignoreHTTPSErrors: true, + args: ['--no-sandbox', '--disable-setuid-sandbox'], + }); + browser = enhanceBrowser(browser, this.currentTest.title); + page = await browser.newPage(); + graphPage = new GraphPage(page); + await Promise.all(Array.from({ length: 20 }, () => + helpers.checkURL({ host: `https://cluster1-bookinfo.example.com`, path: '/productpage', retCode: 200 }))); + }); + + afterEach(async function () { + await browser.close(); + }); + + it("should show ingress gateway and product page", async function () { + await graphPage.navigateTo(`http://${process.env.ENDPOINT_GLOO_MESH_UI}/graph`); + + // Select the clusters and namespaces so that the graph shows + await graphPage.selectClusters(['cluster1', 'cluster2']); + await graphPage.selectNamespaces(['istio-gateways', 'bookinfo-backends', 'bookinfo-frontends']); + // Disabling Cilium nodes due to this issue: https://github.com/solo-io/gloo-mesh-enterprise/issues/18623 + await graphPage.toggleLayoutSettings(); + await graphPage.disableCiliumNodes(); + await graphPage.toggleLayoutSettings(); + + // Capture a screenshot of the canvas and run text recognition + await graphPage.fullscreenGraph(); + await graphPage.centerGraph(); + const screenshotPath = 'ui-test-data/canvas.png'; + await graphPage.captureCanvasScreenshot(screenshotPath); + + const recognizedTexts = await recognizeTextFromScreenshot( + screenshotPath, + ["istio-ingressgateway", "productpage-v1", "details-v1", "ratings-v1", "reviews-v1", "reviews-v2"]); + + const flattenedRecognizedText = recognizedTexts.join(",").replace(/\n/g, ''); + console.log("Flattened recognized 
text:", flattenedRecognizedText); + + // Validate recognized texts + expect(flattenedRecognizedText).to.include("istio-ingressgateway"); + expect(flattenedRecognizedText).to.include("productpage-v1"); + expect(flattenedRecognizedText).to.include("details-v1"); + expect(flattenedRecognizedText).to.include("ratings-v1"); + expect(flattenedRecognizedText).to.include("reviews-v1"); + expect(flattenedRecognizedText).to.include("reviews-v2"); + }); +}); + +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/gateway-expose-istio/tests/graph-shows-traffic.test.js.liquid" +timeout --signal=INT 7m mocha ./test.js --timeout 120000 --retries=3 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +cat <<'EOF' > ./test.js +const helpersHttp = require('./tests/chai-http'); +const InsightsPage = require('./tests/pages/insights-page'); +const constants = require('./tests/pages/constants'); +const puppeteer = require('puppeteer'); +var chai = require('chai'); +var expect = chai.expect; +const { enhanceBrowser } = require('./tests/utils/enhance-browser'); + +afterEach(function (done) { + if (this.currentTest.currentRetry() > 0) { + process.stdout.write("."); + setTimeout(done, 4000); + } else { + done(); + } +}); + +describe("Insights UI", function() { + // UI tests often require a longer timeout. + // So here we force it to a minimum of 30 seconds. 
+  const currentTimeout = this.timeout();
+  this.timeout(Math.max(currentTimeout, 30000));
+
+  let browser;
+  let insightsPage;
+
+  // Use Mocha's 'beforeEach' hook to set up Puppeteer
+  beforeEach(async function() {
+    browser = await puppeteer.launch({
+      headless: "new",
+      slowMo: 40,
+      ignoreHTTPSErrors: true,
+      args: ['--no-sandbox', '--disable-setuid-sandbox'],
+    });
+    browser = enhanceBrowser(browser, this.currentTest.title);
+    let page = await browser.newPage();
+    insightsPage = new InsightsPage(page);
+  });
+
+  // Use Mocha's 'afterEach' hook to close Puppeteer
+  afterEach(async function() {
+    await browser.close();
+  });
+
+  it("should display BP0001 warning with text 'Globally scoped routing'", async () => {
+    await insightsPage.navigateTo(`http://${process.env.ENDPOINT_GLOO_MESH_UI}/insights`);
+    await insightsPage.selectClusters(['cluster1', 'cluster2']);
+    await insightsPage.selectInsightTypes([constants.InsightType.BP]);
+    const data = await insightsPage.getTableDataRows()
+    expect(data.some(item => item.includes("Globally scoped routing"))).to.be.true;
+  });
+
+  it("should have quick resource state filters", async () => {
+    await insightsPage.navigateTo(`http://${process.env.ENDPOINT_GLOO_MESH_UI}/insights`);
+    const healthy = await insightsPage.getHealthyResourcesCount();
+    const warning = await insightsPage.getWarningResourcesCount();
+    const error = await insightsPage.getErrorResourcesCount();
+    expect(healthy).to.be.greaterThan(0);
+    expect(warning).to.be.greaterThan(0);
+    expect(error).to.be.a('number');
+  });
+});
+
+EOF
+echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/insights-intro/tests/insight-ui-BP0001.test.js.liquid"
+timeout --signal=INT 5m mocha ./test.js --timeout 120000 --retries=20 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+cat <<'EOF' > ./test.js
+var chai = require('chai');
+var expect = chai.expect;
+const helpers = require('./tests/chai-exec');
+
+describe("Insight 
generation", () => { + it("Insight BP0002 has been triggered in the source (MGMT)", () => { + helpers.getOutputForCommand({ command: `kubectl --context ${process.env.MGMT} -n gloo-mesh patch svc gloo-mesh-mgmt-server -p '{"spec":{"ports": [{"port": 9094,"name":"http-insights"}]}}'` }); + helpers.getOutputForCommand({ command: "kubectl -n gloo-mesh run debug --image=nginx:1.25.3 --context " + process.env.MGMT }); + command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh exec debug -- curl -s http://gloo-mesh-mgmt-server.gloo-mesh:9094/metrics" }).replaceAll("'", ""); + const regex = /gloo_mesh_insights{.*BP0002.*} 1/; + const match = command.match(regex); + expect(match).to.not.be.null; + }); + + it("Insight BP0002 has been triggered in PROMETHEUS", () => { + helpers.getOutputForCommand({ command: `kubectl --context ${process.env.MGMT} -n gloo-mesh patch svc prometheus-server -p '{"spec":{"ports": [{"port": 9090,"name":"http-metrics"}]}}'` }); + command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh exec debug -- curl -s 'http://prometheus-server.gloo-mesh:9090/api/v1/query?query=gloo_mesh_insights'" }).replaceAll("'", ""); + let result = JSON.parse(command); + let active = false; + result.data.result.forEach(item => { + if(item.metric.code == "BP0002" && item.value[1] > 0) { + active = true + } + }); + expect(active).to.be.true; + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/insights-intro/tests/insight-metrics.test.js.liquid" +timeout --signal=INT 5m mocha ./test.js --timeout 120000 --retries=20 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl apply --context ${MGMT} -f - < ./test.js +const helpersHttp = require('./tests/chai-http'); +const InsightsPage = require('./tests/pages/insights-page'); +const constants = require('./tests/pages/constants'); +const puppeteer = 
require('puppeteer'); +const { enhanceBrowser } = require('./tests/utils/enhance-browser'); +var chai = require('chai'); +var expect = chai.expect; + +afterEach(function (done) { + if (this.currentTest.currentRetry() > 0) { + process.stdout.write("."); + setTimeout(done, 4000); + } else { + done(); + } +}); + +describe("Insights UI", function() { + // UI tests often require a longer timeout. + // So here we force it to a minimum of 30 seconds. + const currentTimeout = this.timeout(); + this.timeout(Math.max(currentTimeout, 30000)); + + let browser; + let insightsPage; + + // Use Mocha's 'before' hook to set up Puppeteer + beforeEach(async function() { + browser = await puppeteer.launch({ + headless: "new", + slowMo: 40, + ignoreHTTPSErrors: true, + args: ['--no-sandbox', '--disable-setuid-sandbox'], + }); + browser = enhanceBrowser(browser, this.currentTest.title); + let page = await browser.newPage(); + await page.setViewport({ width: 1500, height: 1000 }); + insightsPage = new InsightsPage(page); + }); + + // Use Mocha's 'after' hook to close Puppeteer + afterEach(async function() { + await browser.close(); + }); + + it("should not display BP0002 in the UI", async () => { + await insightsPage.navigateTo(`http://${process.env.ENDPOINT_GLOO_MESH_UI}/insights`); + await insightsPage.selectClusters(['cluster1', 'cluster2']); + await insightsPage.selectInsightTypes([constants.InsightType.BP]); + const data = await insightsPage.getTableDataRows() + expect(data.some(item => item.includes("is not namespaced"))).to.be.false; + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/insights-intro/tests/insight-not-ui-BP0002.test.js.liquid" +timeout --signal=INT 5m mocha ./test.js --timeout 120000 --retries=20 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl apply --context ${CLUSTER1} -f - < ./test.js +const helpersHttp = require('./tests/chai-http'); +const InsightsPage = 
require('./tests/pages/insights-page'); +const constants = require('./tests/pages/constants'); +const puppeteer = require('puppeteer'); +const { enhanceBrowser } = require('./tests/utils/enhance-browser'); +var chai = require('chai'); +var expect = chai.expect; + +afterEach(function (done) { + if (this.currentTest.currentRetry() > 0) { + process.stdout.write("."); + setTimeout(done, 4000); + } else { + done(); + } +}); + +describe("Insights UI", function() { + // UI tests often require a longer timeout. + // So here we force it to a minimum of 30 seconds. + const currentTimeout = this.timeout(); + this.timeout(Math.max(currentTimeout, 30000)); + + let browser; + let insightsPage; + + // Use Mocha's 'before' hook to set up Puppeteer + beforeEach(async function() { + browser = await puppeteer.launch({ + headless: "new", + slowMo: 40, + ignoreHTTPSErrors: true, + args: ['--no-sandbox', '--disable-setuid-sandbox'], + }); + browser = enhanceBrowser(browser, this.currentTest.title); + let page = await browser.newPage(); + await page.setViewport({ width: 1500, height: 1000 }); + insightsPage = new InsightsPage(page); + }); + + // Use Mocha's 'after' hook to close Puppeteer + afterEach(async function() { + await browser.close(); + }); + + it("should not display BP0001 in the UI", async () => { + await insightsPage.navigateTo(`http://${process.env.ENDPOINT_GLOO_MESH_UI}/insights`); + await insightsPage.selectClusters(['cluster1', 'cluster2']); + await insightsPage.selectInsightTypes([constants.InsightType.BP]); + const data = await insightsPage.getTableDataRows() + expect(data.some(item => item.includes("is not namespaced"))).to.be.false; + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/insights-intro/tests/insight-not-ui-BP0001.test.js.liquid" +timeout --signal=INT 5m mocha ./test.js --timeout 120000 --retries=20 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl apply --context ${CLUSTER1} 
-f - < ./test.js +var chai = require('chai'); +var expect = chai.expect; +const helpers = require('./tests/chai-exec'); + +describe("Insight generation", () => { + it("Insight CFG0001 has been triggered in the source (MGMT)", () => { + helpers.getOutputForCommand({ command: `kubectl --context ${process.env.MGMT} -n gloo-mesh patch svc gloo-mesh-mgmt-server -p '{"spec":{"ports": [{"port": 9094,"name":"http-insights"}]}}'` }); + helpers.getOutputForCommand({ command: "kubectl -n gloo-mesh run debug --image=nginx: --context " + process.env.MGMT }); + command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh exec debug -- curl -s http://gloo-mesh-mgmt-server.gloo-mesh:9094/metrics" }).replaceAll("'", ""); + const regex = /gloo_mesh_insights{.*CFG0001.*} 1/; + const match = command.match(regex); + expect(match).to.not.be.null; + }); + + it("Insight CFG0001 has been triggered in PROMETHEUS", () => { + helpers.getOutputForCommand({ command: `kubectl --context ${process.env.MGMT} -n gloo-mesh patch svc prometheus-server -p '{"spec":{"ports": [{"port": 9090,"name":"http-metrics"}]}}'` }); + command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh exec debug -- curl -s 'http://prometheus-server.gloo-mesh:9090/api/v1/query?query=gloo_mesh_insights'" }).replaceAll("'", ""); + let result = JSON.parse(command); + let active = false; + result.data.result.forEach(item => { + if(item.metric.code == "CFG0001" && item.value[1] > 0) { + active = true + } + }); + expect(active).to.be.true; + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/insights-config/../insights-intro/tests/insight-metrics.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl apply --context ${CLUSTER1} -f - < ./test.js +var chai = require('chai'); +var 
expect = chai.expect; +const helpers = require('./tests/chai-exec'); + +describe("Insight generation", () => { + it("Insight CFG0001 has not been triggered in the source (MGMT)", () => { + helpers.getOutputForCommand({ command: `kubectl --context ${process.env.MGMT} -n gloo-mesh patch svc gloo-mesh-mgmt-server -p '{"spec":{"ports": [{"port": 9094,"name":"http-insights"}]}}'` }); + helpers.getOutputForCommand({ command: "kubectl -n gloo-mesh run debug --image=nginx: --context " + process.env.MGMT }); + command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh exec debug -- curl -s http://gloo-mesh-mgmt-server.gloo-mesh:9094/metrics" }).replaceAll("'", ""); + const regex = /gloo_mesh_insights{.*CFG0001.*} 1/; + const match = command.match(regex); + expect(match).to.be.null; + }); + + it("Insight CFG0001 has not been triggered in PROMETHEUS", () => { + helpers.getOutputForCommand({ command: `kubectl --context ${process.env.MGMT} -n gloo-mesh patch svc prometheus-server -p '{"spec":{"ports": [{"port": 9090,"name":"http-metrics"}]}}'` }); + command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh exec debug -- curl -s 'http://prometheus-server.gloo-mesh:9090/api/v1/query?query=gloo_mesh_insights'" }).replaceAll("'", ""); + let result = JSON.parse(command); + let active = false; + result.data.result.forEach(item => { + if(item.metric.code == "CFG0001" && item.value[1] > 0) { + active = true + } + }); + expect(active).to.be.false; + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/insights-config/../insights-intro/tests/insight-metrics.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context ${CLUSTER1} -n bookinfo-backends delete virtualservice reviews +kubectl --context ${CLUSTER1} -n bookinfo-backends 
delete destinationrule reviews +kubectl apply --context ${CLUSTER1} -f - < ./test.js +var chai = require('chai'); +var expect = chai.expect; +const helpers = require('./tests/chai-exec'); + +describe("Insight generation", () => { + it("Insight SEC0008 has been triggered in the source (MGMT)", () => { + helpers.getOutputForCommand({ command: `kubectl --context ${process.env.MGMT} -n gloo-mesh patch svc gloo-mesh-mgmt-server -p '{"spec":{"ports": [{"port": 9094,"name":"http-insights"}]}}'` }); + helpers.getOutputForCommand({ command: "kubectl -n gloo-mesh run debug --image=nginx: --context " + process.env.MGMT }); + command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh exec debug -- curl -s http://gloo-mesh-mgmt-server.gloo-mesh:9094/metrics" }).replaceAll("'", ""); + const regex = /gloo_mesh_insights{.*SEC0008.*} 1/; + const match = command.match(regex); + expect(match).to.not.be.null; + }); + + it("Insight SEC0008 has been triggered in PROMETHEUS", () => { + helpers.getOutputForCommand({ command: `kubectl --context ${process.env.MGMT} -n gloo-mesh patch svc prometheus-server -p '{"spec":{"ports": [{"port": 9090,"name":"http-metrics"}]}}'` }); + command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh exec debug -- curl -s 'http://prometheus-server.gloo-mesh:9090/api/v1/query?query=gloo_mesh_insights'" }).replaceAll("'", ""); + let result = JSON.parse(command); + let active = false; + result.data.result.forEach(item => { + if(item.metric.code == "SEC0008" && item.value[1] > 0) { + active = true + } + }); + expect(active).to.be.true; + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/insights-security/../insights-intro/tests/insight-metrics.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl apply 
--context ${CLUSTER1} -f - < ./test.js +var chai = require('chai'); +var expect = chai.expect; +const helpers = require('./tests/chai-exec'); + +describe("Insight generation", () => { + it("Insight SEC0008 has not been triggered in the source (MGMT)", () => { + helpers.getOutputForCommand({ command: `kubectl --context ${process.env.MGMT} -n gloo-mesh patch svc gloo-mesh-mgmt-server -p '{"spec":{"ports": [{"port": 9094,"name":"http-insights"}]}}'` }); + helpers.getOutputForCommand({ command: "kubectl -n gloo-mesh run debug --image=nginx: --context " + process.env.MGMT }); + command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh exec debug -- curl -s http://gloo-mesh-mgmt-server.gloo-mesh:9094/metrics" }).replaceAll("'", ""); + const regex = /gloo_mesh_insights{.*SEC0008.*} 1/; + const match = command.match(regex); + expect(match).to.be.null; + }); + + it("Insight SEC0008 has not been triggered in PROMETHEUS", () => { + helpers.getOutputForCommand({ command: `kubectl --context ${process.env.MGMT} -n gloo-mesh patch svc prometheus-server -p '{"spec":{"ports": [{"port": 9090,"name":"http-metrics"}]}}'` }); + command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh exec debug -- curl -s 'http://prometheus-server.gloo-mesh:9090/api/v1/query?query=gloo_mesh_insights'" }).replaceAll("'", ""); + let result = JSON.parse(command); + let active = false; + result.data.result.forEach(item => { + if(item.metric.code == "SEC0008" && item.value[1] > 0) { + active = true + } + }); + expect(active).to.be.false; + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/insights-security/../insights-intro/tests/insight-metrics.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context ${CLUSTER1} -n bookinfo-backends delete 
authorizationpolicy reviews +kubectl --context ${CLUSTER1} -n istio-system delete peerauthentication default diff --git a/gloo-mesh/core/2-7/default/scripts/configure-domain-rewrite.sh b/gloo-mesh/core/2-7/default/scripts/configure-domain-rewrite.sh index be6dbd6d8b..d6e684c9da 100755 --- a/gloo-mesh/core/2-7/default/scripts/configure-domain-rewrite.sh +++ b/gloo-mesh/core/2-7/default/scripts/configure-domain-rewrite.sh @@ -90,4 +90,4 @@ done # If the loop exits, it means the check failed consistently for 1 minute echo "DNS rewrite rule verification failed." -exit 1 +exit 1 \ No newline at end of file diff --git a/gloo-mesh/core/2-7/default/scripts/register-domain.sh b/gloo-mesh/core/2-7/default/scripts/register-domain.sh index f9084487e8..1cb84cd86a 100755 --- a/gloo-mesh/core/2-7/default/scripts/register-domain.sh +++ b/gloo-mesh/core/2-7/default/scripts/register-domain.sh @@ -14,7 +14,9 @@ hosts_file="/etc/hosts" # Function to check if the input is a valid IP address is_ip() { if [[ $1 =~ ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$ ]]; then - return 0 # 0 = true + return 0 # 0 = true - valid IPv4 address + elif [[ $1 =~ ^[0-9a-f]+[:]+[0-9a-f]*[:]*[0-9a-f]*[:]*[0-9a-f]*[:]*[0-9a-f]*[:]*[0-9a-f]*[:]*[0-9]*$ ]]; then + return 0 # 0 = true - valid IPv6 address else return 1 # 1 = false fi @@ -38,14 +40,15 @@ else fi # Check if the entry already exists -if grep -q "$hostname" "$hosts_file"; then +if grep -q "$hostname\$" "$hosts_file"; then # Update the existing entry with the new IP tempfile=$(mktemp) - sed "s/^.*$hostname/$new_ip $hostname/" "$hosts_file" > "$tempfile" + sed "s/^.*$hostname\$/$new_ip $hostname/" "$hosts_file" > "$tempfile" sudo cp "$tempfile" "$hosts_file" + rm "$tempfile" echo "Updated $hostname in $hosts_file with new IP: $new_ip" else # Add a new entry if it doesn't exist echo "$new_ip $hostname" | sudo tee -a "$hosts_file" > /dev/null echo "Added $hostname to $hosts_file with IP: $new_ip" -fi \ No newline at end of file +fi diff --git 
a/gloo-mesh/core/2-7/default/tests/chai-exec.js b/gloo-mesh/core/2-7/default/tests/chai-exec.js index 67ba62f095..020262437f 100644 --- a/gloo-mesh/core/2-7/default/tests/chai-exec.js +++ b/gloo-mesh/core/2-7/default/tests/chai-exec.js @@ -139,7 +139,11 @@ global = { }, k8sObjectIsPresent: ({ context, namespace, k8sType, k8sObj }) => { - let command = "kubectl --context " + context + " -n " + namespace + " get " + k8sType + " " + k8sObj + " -o name"; + // covers both namespace scoped and cluster scoped objects + let command = "kubectl --context " + context + " get " + k8sType + " " + k8sObj + " -o name"; + if (namespace) { + command = "kubectl --context " + context + " -n " + namespace + " get " + k8sType + " " + k8sObj + " -o name"; + } debugLog(`Executing command: ${command}`); let cli = chaiExec(command); @@ -176,7 +180,6 @@ global = { debugLog(`Command output (stdout): ${cli.stdout}`); return cli.stdout; }, - curlInPod: ({ curlCommand, podName, namespace }) => { debugLog(`Executing curl command: ${curlCommand} on pod: ${podName} in namespace: ${namespace}`); const cli = chaiExec(curlCommand); diff --git a/gloo-mesh/core/2-7/default/tests/chai-http.js b/gloo-mesh/core/2-7/default/tests/chai-http.js index 67f43db003..92bf579690 100644 --- a/gloo-mesh/core/2-7/default/tests/chai-http.js +++ b/gloo-mesh/core/2-7/default/tests/chai-http.js @@ -25,7 +25,30 @@ global = { .send() .then(async function (res) { debugLog(`Response status code: ${res.status}`); - expect(res).to.have.status(retCode); + expect(res).to.have.property('status', retCode); + }); + }, + + checkURLWithIP: ({ ip, host, protocol = "http", path = "", headers = [], certFile = '', keyFile = '', retCode }) => { + debugLog(`Checking URL with IP: ${ip}, Host: ${host}, Path: ${path} with expected return code: ${retCode}`); + + let cert = certFile ? fs.readFileSync(certFile) : ''; + let key = keyFile ? 
fs.readFileSync(keyFile) : ''; + + let url = `${protocol}://${ip}`; + + // Use chai-http to make a request to the IP address, but set the Host header + let request = chai.request(url).head(path).redirects(0).cert(cert).key(key).set('Host', host); + + debugLog(`Setting headers: ${JSON.stringify(headers)}`); + headers.forEach(header => request.set(header.key, header.value)); + + return request + .send() + .then(async function (res) { + debugLog(`Response status code: ${res.status}`); + debugLog(`Response ${JSON.stringify(res)}`); + expect(res).to.have.property('status', retCode); }); }, @@ -124,7 +147,7 @@ global = { .send() .then(async function (res) { debugLog(`Response status code: ${res.status}`); - expect(res).to.have.status(retCode); + expect(res).to.have.property('status', retCode); }); } }; diff --git a/gloo-mesh/core/2-7/default/tests/proxies-changes.test.js.liquid b/gloo-mesh/core/2-7/default/tests/proxies-changes.test.js.liquid new file mode 100644 index 0000000000..1934ea13b6 --- /dev/null +++ b/gloo-mesh/core/2-7/default/tests/proxies-changes.test.js.liquid @@ -0,0 +1,58 @@ +{%- assign version_1_18_or_after = "1.18.0" | minimumGlooGatewayVersion %} +const { execSync } = require('child_process'); +const { expect } = require('chai'); +const { diff } = require('jest-diff'); + +function delay(ms) { + return new Promise(resolve => setTimeout(resolve, ms)); +} + +describe('Gloo snapshot stability test', function() { + let contextName = process.env.{{ context | default: "CLUSTER1" }}; + let delaySeconds = {{ delay | default: 5 }}; + + let firstSnapshot; + + it('should retrieve initial snapshot', function() { + const output = execSync( + `kubectl --context ${contextName} -n gloo-system exec deploy/gloo -- wget -O - localhost:{% if version_1_18_or_after %}9095{% else %}9091{% endif %}/snapshots/proxies -q`, + { encoding: 'utf8' } + ); + + try { + firstSnapshot = JSON.parse(output); + } catch (err) { + throw new Error('Failed to parse JSON output from initial 
snapshot: ' + err.message); + } + expect(firstSnapshot).to.be.an('object'); + }); + + it('should not change after the given delay', async function() { + await delay(delaySeconds * 1000); + + let secondSnapshot; + try { + const output2 = execSync( + `kubectl --context ${contextName} -n gloo-system exec deploy/gloo -- wget -O - localhost:{% if version_1_18_or_after %}9095{% else %}9091{% endif %}/snapshots/proxies -q`, + { encoding: 'utf8' } + ); + secondSnapshot = JSON.parse(output2); + } catch (err) { + throw new Error('Failed to retrieve or parse the second snapshot: ' + err.message); + } + + const firstJson = JSON.stringify(firstSnapshot, null, 2); + const secondJson = JSON.stringify(secondSnapshot, null, 2); + + // Show only 2 lines of context around each change + const diffOutput = diff(firstJson, secondJson, { contextLines: 2, expand: false }); + + if (! diffOutput.includes("Compared values have no visual difference.")) { + console.error('Differences found between snapshots:\n' + diffOutput); + throw new Error('Snapshots differ after the delay.'); + } else { + console.log('No differences found. 
The snapshots are stable.'); + } + }); +}); + diff --git a/gloo-mesh/enterprise/2-5/airgap/default/README.md b/gloo-mesh/enterprise/2-5/airgap/default/README.md index 4af4f409cf..3d27c9c505 100644 --- a/gloo-mesh/enterprise/2-5/airgap/default/README.md +++ b/gloo-mesh/enterprise/2-5/airgap/default/README.md @@ -15,7 +15,7 @@ source ./scripts/assert.sh ## Table of Contents * [Introduction](#introduction) -* [Lab 1 - Deploy KinD clusters](#lab-1---deploy-kind-clusters-) +* [Lab 1 - Deploy KinD Cluster(s)](#lab-1---deploy-kind-cluster(s)-) * [Lab 2 - Prepare airgap environment](#lab-2---prepare-airgap-environment-) * [Lab 3 - Deploy and register Gloo Mesh](#lab-3---deploy-and-register-gloo-mesh-) * [Lab 4 - Deploy Istio using Gloo Mesh Lifecycle Manager](#lab-4---deploy-istio-using-gloo-mesh-lifecycle-manager-) @@ -69,7 +69,7 @@ You can find more information about Gloo Mesh Enterprise in the official documen -## Lab 1 - Deploy KinD clusters +## Lab 1 - Deploy KinD Cluster(s) Clone this repository and go to the directory where this `README.md` file is. @@ -82,14 +82,13 @@ export CLUSTER1=cluster1 export CLUSTER2=cluster2 ``` -Run the following commands to deploy three Kubernetes clusters using [Kind](https://kind.sigs.k8s.io/): +Deploy the KinD clusters: ```bash -./scripts/deploy-aws-with-calico.sh 1 mgmt -./scripts/deploy-aws-with-calico.sh 2 cluster1 us-west us-west-1 -./scripts/deploy-aws-with-calico.sh 3 cluster2 us-west us-west-2 +bash ./data/steps/deploy-kind-clusters/deploy-mgmt.sh +bash ./data/steps/deploy-kind-clusters/deploy-cluster1.sh +bash ./data/steps/deploy-kind-clusters/deploy-cluster2.sh ``` - Then run the following commands to wait for all the Pods to be ready: ```bash @@ -100,27 +99,8 @@ Then run the following commands to wait for all the Pods to be ready: **Note:** If you run the `check.sh` script immediately after the `deploy.sh` script, you may see a jsonpath error. If that happens, simply wait a few seconds and try again. 
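The proxies-changes test above fetches the Gloo proxies snapshot twice and fails when the serialized JSON differs. A dependency-free sketch of that comparison (the sample snapshots are illustrative; `jest-diff` in the real test only adds the pretty-printed context lines, and the string comparison assumes both snapshots come from the same server so key order is consistent):

```javascript
// Compare two snapshots by their pretty-printed JSON, the same criterion
// the stability test applies before rendering a diff.
function snapshotsDiffer(a, b) {
  return JSON.stringify(a, null, 2) !== JSON.stringify(b, null, 2);
}

// Illustrative snapshots: first/second are identical, third has churned.
const first = { proxies: [{ name: "gateway", version: 3 }] };
const second = { proxies: [{ name: "gateway", version: 3 }] };
const third = { proxies: [{ name: "gateway", version: 4 }] };

console.log(snapshotsDiffer(first, second)); // false: snapshot is stable
console.log(snapshotsDiffer(first, third));  // true: config still churning
```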
-Once the `check.sh` script completes, when you execute the `kubectl get pods -A` command, you should see the following: - -``` -NAMESPACE NAME READY STATUS RESTARTS AGE -kube-system calico-kube-controllers-59d85c5c84-sbk4k 1/1 Running 0 4h26m -kube-system calico-node-przxs 1/1 Running 0 4h26m -kube-system coredns-6955765f44-ln8f5 1/1 Running 0 4h26m -kube-system coredns-6955765f44-s7xxx 1/1 Running 0 4h26m -kube-system etcd-cluster1-control-plane 1/1 Running 0 4h27m -kube-system kube-apiserver-cluster1-control-plane 1/1 Running 0 4h27m -kube-system kube-controller-manager-cluster1-control-plane1/1 Running 0 4h27m -kube-system kube-proxy-ksvzw 1/1 Running 0 4h26m -kube-system kube-scheduler-cluster1-control-plane 1/1 Running 0 4h27m -local-path-storage local-path-provisioner-58f6947c7-lfmdx 1/1 Running 0 4h26m -metallb-system controller-5c9894b5cd-cn9x2 1/1 Running 0 4h26m -metallb-system speaker-d7jkp 1/1 Running 0 4h26m -``` - -**Note:** The CNI pods might be different, depending on which CNI you have deployed. - -You can see that your currently connected to this cluster by executing the `kubectl config get-contexts` command: +Once the `check.sh` script completes, execute the `kubectl get pods -A` command, and verify that all pods are in a running state. 
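The "all pods are in a running state" check can also be asserted programmatically from `kubectl get pods -A -o json`. A minimal sketch of the predicate the `check.sh` scripts effectively verify (the `allPodsReady` helper name and sample data are hypothetical; completed Jobs report `Succeeded` rather than `Running`, so both phases are accepted):

```javascript
// Given the parsed JSON of `kubectl get pods -A -o json`, return true when
// every pod has reached a healthy terminal or running phase.
function allPodsReady(podList) {
  return podList.items.every(
    p => p.status.phase === "Running" || p.status.phase === "Succeeded"
  );
}

// Illustrative pod list shaped like kubectl's JSON output.
const sample = {
  items: [
    { status: { phase: "Running" } },
    { status: { phase: "Succeeded" } }
  ]
};
console.log(allPodsReady(sample)); // true
```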
+ You can see that your currently connected to this cluster by executing the `kubectl config get-contexts` command: ``` CURRENT NAME CLUSTER AUTHINFO NAMESPACE @@ -139,7 +119,8 @@ cat <<'EOF' > ./test.js const helpers = require('./tests/chai-exec'); describe("Clusters are healthy", () => { - const clusters = [process.env.MGMT, process.env.CLUSTER1, process.env.CLUSTER2]; + const clusters = ["mgmt", "cluster1", "cluster2"]; + clusters.forEach(cluster => { it(`Cluster ${cluster} is healthy`, () => helpers.k8sObjectIsPresent({ context: cluster, namespace: "default", k8sType: "service", k8sObj: "kubernetes" })); }); @@ -151,6 +132,7 @@ timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || + ## Lab 2 - Prepare airgap environment Set the registry variable: @@ -220,6 +202,8 @@ cat images.txt | while read image; do docker tag $id ${registry}/$dst_dev docker push ${registry}/$dst_dev done + +export otel_collector_image=$(curl --silent -X GET http://${registry}/v2/_catalog | jq -er '.repositories[] | select ((.|contains("otel-collector")) and (.|startswith("gloo-mesh/")))') ``` @@ -264,6 +248,7 @@ EOF echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-and-register-gloo-mesh/tests/environment-variables.test.js.liquid" timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } --> + Run the following commands to deploy the Gloo Mesh management plane: ```bash @@ -310,7 +295,7 @@ redis: telemetryGateway: enabled: true image: - repository: ${registry}/gloo-mesh/gloo-otel-collector + repository: ${registry}/${otel_collector_image} service: type: LoadBalancer glooUi: @@ -327,7 +312,7 @@ glooUi: registry: ${registry}/gloo-mesh telemetryCollector: image: - repository: ${registry}/gloo-mesh/gloo-otel-collector + repository: ${registry}/${otel_collector_image} enabled: true config: exporters: @@ -467,7 +452,7 @@ glooAgent: registry: 
${registry}/gloo-mesh telemetryCollector: image: - repository: ${registry}/gloo-mesh/gloo-otel-collector + repository: ${registry}/${otel_collector_image} enabled: true config: exporters: @@ -524,7 +509,7 @@ glooAgent: registry: ${registry}/gloo-mesh telemetryCollector: image: - repository: ${registry}/gloo-mesh/gloo-otel-collector + repository: ${registry}/${otel_collector_image} enabled: true config: exporters: @@ -591,6 +576,7 @@ timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || + ## Lab 4 - Deploy Istio using Gloo Mesh Lifecycle Manager [VIDEO LINK](https://youtu.be/f76-KOEjqHs "Video Link") @@ -3559,6 +3545,8 @@ ATTEMPTS=0 while [ $ATTEMPTS -lt $MAX_ATTEMPTS ]; do kubectl --context ${CLUSTER1} -n gloo-mesh rollout restart deploy gloo-spire-server kubectl --context ${CLUSTER1} -n gloo-mesh rollout status deploy gloo-spire-server + sleep 30 + export JOIN_TOKEN=$(meshctl external-workload gen-token --kubecontext ${CLUSTER1} --trust-domain ${CLUSTER1} --ttl 3600 --ext-workload virtualmachines/${VM_APP} --plain=true | grep -ioE "${uuid_regex_partial}") timeout 1m docker exec vm1 meshctl ew onboard --install \ --attestor token \ diff --git a/gloo-mesh/enterprise/2-5/airgap/default/data/steps/deploy-kind-clusters/deploy-cluster1.sh b/gloo-mesh/enterprise/2-5/airgap/default/data/steps/deploy-kind-clusters/deploy-cluster1.sh new file mode 100644 index 0000000000..3fda068282 --- /dev/null +++ b/gloo-mesh/enterprise/2-5/airgap/default/data/steps/deploy-kind-clusters/deploy-cluster1.sh @@ -0,0 +1,292 @@ +#!/usr/bin/env bash +set -o errexit + +number="2" +name="cluster1" +region="" +zone="" +twodigits=$(printf "%02d\n" $number) + +kindest_node=${KINDEST_NODE} + +if [ -z "$kindest_node" ]; then + export k8s_version="1.28.0" + + [[ ${k8s_version::1} != 'v' ]] && export k8s_version=v${k8s_version} + kindest_node_ver=$(curl --silent "https://registry.hub.docker.com/v2/repositories/kindest/node/tags?page_size=100" \ + | jq -r '.results | .[] | 
select(.name==env.k8s_version) | .name+"@"+.digest') + + if [ -z "$kindest_node_ver" ]; then + echo "Incorrect Kubernetes version provided: ${k8s_version}." + exit 1 + fi + kindest_node=kindest/node:${kindest_node_ver} +fi +echo "Using KinD image: ${kindest_node}" + +if [ -z "$3" ]; then + case $name in + cluster1) + region=us-west-1 + ;; + cluster2) + region=us-west-2 + ;; + *) + region=us-east-1 + ;; + esac +fi + +if [ -z "$4" ]; then + case $name in + cluster1) + zone=us-west-1a + ;; + cluster2) + zone=us-west-2a + ;; + *) + zone=us-east-1a + ;; + esac +fi + +if hostname -I 2>/dev/null; then + myip=$(hostname -I | awk '{ print $1 }') +else + myip=$(ipconfig getifaddr en0) +fi + +# Function to determine the next available cluster number +get_next_cluster_number() { + if ! kind get clusters 2>&1 | grep "^kind" > /dev/null; then + echo 1 + else + highest_num=$(kind get clusters | grep "^kind" | tail -1 | cut -c 5-) + echo $((highest_num + 1)) + fi +} + +if [ -f /.dockerenv ]; then +myip=$HOST_IP +container=$(docker inspect $(docker ps -q) | jq -r ".[] | select(.Config.Hostname == \"$HOSTNAME\") | .Name" | cut -d/ -f2) +docker network connect "kind" $container || true +number=$(get_next_cluster_number) +twodigits=$(printf "%02d\n" $number) +fi + +reg_name='kind-registry' +reg_port='5000' +docker start "${reg_name}" 2>/dev/null || \ +docker run -d --restart=always -p "0.0.0.0:${reg_port}:5000" --name "${reg_name}" registry:2 + +cache_port='5000' +cat > registries < ${HOME}/.${cache_name}-config.yml </dev/null || \ +docker run -d --restart=always ${DEPLOY_EXTRA_PARAMS} -v ${HOME}/.${cache_name}-config.yml:/etc/docker/registry/config.yml --name "${cache_name}" registry:2 +done +mkdir -p /tmp/oidc + +cat <<'EOF' >/tmp/oidc/sa-signer-pkcs8.pub +-----BEGIN PUBLIC KEY----- +MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA53YiBcrn7+ZK0Vb4odeA +1riYdvEb8To4H6/HtF+OKzuCIXFQ+bRy7yMrDGITYpfYPrTZOgfdeTLZqOiAj+cL +395nvxdly83SUrdh7ItfOPRluuuiPHnFn111wpyjBw5nut4Kx+M5MksNfA1hU0Zw 
+zIM9OviX8iEF8xHWUtz4BAMDG8N6+zpLo0pAzaei5hKuLZ9dZOzHBC8VOW82cQMm +5X5uOKsCHMtNSjqYUNB1DxN6xxM+odGWT/6xthPGk6YCxmO28YHPFZfiS2eAIpD8 +2p/16KQKU6TkZSrldkYxiHIPhu+5f9faZJG7dB9pLN1SfdTBio4PK5Mz9muLUCv9 +ywIDAQAB +-----END PUBLIC KEY----- +EOF + +cat <<'EOF' >/tmp/oidc/sa-signer.key +-----BEGIN RSA PRIVATE KEY----- +MIIEpAIBAAKCAQEA53YiBcrn7+ZK0Vb4odeA1riYdvEb8To4H6/HtF+OKzuCIXFQ ++bRy7yMrDGITYpfYPrTZOgfdeTLZqOiAj+cL395nvxdly83SUrdh7ItfOPRluuui +PHnFn111wpyjBw5nut4Kx+M5MksNfA1hU0ZwzIM9OviX8iEF8xHWUtz4BAMDG8N6 ++zpLo0pAzaei5hKuLZ9dZOzHBC8VOW82cQMm5X5uOKsCHMtNSjqYUNB1DxN6xxM+ +odGWT/6xthPGk6YCxmO28YHPFZfiS2eAIpD82p/16KQKU6TkZSrldkYxiHIPhu+5 +f9faZJG7dB9pLN1SfdTBio4PK5Mz9muLUCv9ywIDAQABAoIBAB8tro+RMYUDRHjG +el9ypAxIeWEsQVNRQFYkW4ZUiNYSAgl3Ni0svX6xAg989peFVL+9pLVIcfDthJxY +FVlNCjBxyQ/YmwHFC9vQkARJEd6eLUXsj8INtS0ubbp1VxCQRDDL0C/0z7OSoJJh +SwboqjEiTJExA2a+RArmEDTBRzdi3t+kT8G23JcqOivrITt17K6bQYyJXw7/vUdc +r/R+hfd5TqVq92VddzDT7RNJAxsbPPXjGnESlq1GALBDs+uBGYsP0fiEJb2nicSv +z9fBnBeERhut1gcE0C0iLRQZb+3r8TitBtxrZv+0BHgXrkKtXDwWTqGEKOwC4dBn +7nxkH2ECgYEA6+/DOTABGYOWOQftFkJMjcugzDrjoGpuXuVOTb65T+3FHAzU93zy +3bt3wQxrlugluyy9Sc/PL3ck2LgUsPHZ+s7zsdGvvGALBD6bOSSKATz9JgjwifO8 +PgqUz1kXRwez2CtKLOOCFFtcIzEdWIzsa1ubNqLzgN7rD+XBkUc2uEcCgYEA+yTy +72EDMQVoIZOygytHsDNdy0iS2RsBbdurT27wkYuFpFUVWdbNSL+8haE+wJHseHcw +BD4WIMpU+hnS4p4OO8+6V7PiXOS5E/se91EJigZAoixgDUiC8ihojWgK9PYEavUo +hULWbayO59SxYWeUI4Ze0GP8Jw8vdB86ib4ulF0CgYEAgyzRuLjk05+iZODwQyDn +WSquov3W0rh51s7cw0LX2wWSQm8r9NGGYhs5kJ5sLwGxAKj2MNSWF4jBdrCZ6Gr+ +y4BGY0X209/+IAUC3jlfdSLIiF4OBlT6AvB1HfclhvtUVUp0OhLfnpvQ1UwYScRI +KcRLvovIoIzP2g3emfwjAz8CgYEAxUHhOhm1mwRHJNBQTuxok0HVMrze8n1eov39 +0RcvBvJSVp+pdHXdqX1HwqHCmxhCZuAeq8ZkNP8WvZYY6HwCbAIdt5MHgbT4lXQR +f2l8F5gPnhFCpExG5ZLNg/urV3oAQE4stHap21zEpdyOMhZb6Yc5424U+EzaFdgN +b3EcPtUCgYAkKvUlSnBbgiJz1iaN6fuTqH0efavuFGMhjNmG7GtpNXdgyl1OWIuc +Yu+tZtHXtKYf3B99GwPrFzw/7yfDwae5YeWmi2/pFTH96wv3brJBqkAWY8G5Rsmd +qF50p34vIFqUBniNRwSArx8t2dq/CuAMgLAtSjh70Q6ZAnCF85PD8Q== +-----END RSA PRIVATE KEY----- +EOF + +echo 
Contents of kind${number}.yaml +cat << EOF | tee kind${number}.yaml +kind: Cluster +apiVersion: kind.x-k8s.io/v1alpha4 +nodes: +- role: control-plane + image: ${kindest_node} + extraPortMappings: + - containerPort: 6443 + hostPort: 70${twodigits} + extraMounts: + - containerPath: /etc/kubernetes/oidc + hostPath: /tmp/oidc + labels: + ingress-ready: true + topology.kubernetes.io/region: ${region} + topology.kubernetes.io/zone: ${zone} +networking: + disableDefaultCNI: true + serviceSubnet: "10.$(echo $twodigits | sed 's/^0*//').0.0/16" + podSubnet: "10.1${twodigits}.0.0/16" +kubeadmConfigPatches: +- | + kind: ClusterConfiguration + apiServer: + extraArgs: + service-account-key-file: /etc/kubernetes/pki/sa.pub + service-account-key-file: /etc/kubernetes/oidc/sa-signer-pkcs8.pub + service-account-signing-key-file: /etc/kubernetes/oidc/sa-signer.key + service-account-issuer: https://solo-workshop-oidc.s3.us-east-1.amazonaws.com + api-audiences: sts.amazonaws.com + extraVolumes: + - name: oidc + hostPath: /etc/kubernetes/oidc + mountPath: /etc/kubernetes/oidc + readOnly: true + metadata: + name: config +containerdConfigPatches: +- |- + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:${reg_port}"] + endpoint = ["http://${reg_name}:${reg_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"] + endpoint = ["http://docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-docker.pkg.dev"] + endpoint = ["http://us-docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-central1-docker.pkg.dev"] + endpoint = ["http://us-central1-docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."quay.io"] + endpoint = ["http://quay:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"] + endpoint = ["http://gcr:${cache_port}"] +EOF +echo ----------------------------------------------------- + +kind create cluster --name kind${number} --config 
kind${number}.yaml +ipkind=$(docker inspect kind${number}-control-plane | jq -r '.[0].NetworkSettings.Networks[].IPAddress') +networkkind=$(echo ${ipkind} | awk -F. '{ print $1"."$2 }') +kubectl config set-cluster kind-kind${number} --server=https://${myip}:70${twodigits} --insecure-skip-tls-verify=true + +# Preload images +cat << EOF >> images.txt +quay.io/metallb/controller:v0.13.12 +quay.io/metallb/speaker:v0.13.12 +EOF +cat images.txt | while read image; do + docker pull $image || true + kind load docker-image $image --name kind${number} || true +done + +docker network connect "kind" "${reg_name}" || true +docker network connect "kind" docker || true +docker network connect "kind" us-docker || true +docker network connect "kind" us-central1-docker || true +docker network connect "kind" quay || true +docker network connect "kind" gcr || true +# Calico for ipv4 +curl -sL https://raw.githubusercontent.com/projectcalico/calico/v3.28.1/manifests/calico.yaml | sed 's/250m/50m/g' | kubectl --context kind-kind${number} apply -f - + +for i in 1 2 3 4 5; do kubectl --context=kind-kind${number} apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml && break || sleep 15; done +kubectl --context=kind-kind${number} create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)" +kubectl --context=kind-kind${number} -n metallb-system rollout status deploy controller || true + +cat << EOF | tee metallb${number}.yaml +apiVersion: metallb.io/v1beta1 +kind: IPAddressPool +metadata: + name: first-pool + namespace: metallb-system +spec: + addresses: + - ${networkkind}.1${twodigits}.1-${networkkind}.1${twodigits}.254 +--- +apiVersion: metallb.io/v1beta1 +kind: L2Advertisement +metadata: + name: empty + namespace: metallb-system +EOF + +printf "Create IPAddressPool in kind-kind${number}\n" +for i in {1..10}; do +kubectl --context=kind-kind${number} apply -f metallb${number}.yaml && break 
+sleep 2 +done + +# connect the registry to the cluster network if not already connected +printf "Renaming context kind-kind${number} to ${name}\n" +for i in {1..100}; do + (kubectl config get-contexts -oname | grep ${name}) && break + kubectl config rename-context kind-kind${number} ${name} && break + printf " $i"/100 + sleep 2 + [ $i -lt 100 ] || exit 1 +done + +# Document the local registry +# https://github.com/kubernetes/enhancements/tree/master/keps/sig-cluster-lifecycle/generic/1755-communicating-a-local-registry +cat </dev/null; then + myip=$(hostname -I | awk '{ print $1 }') +else + myip=$(ipconfig getifaddr en0) +fi + +# Function to determine the next available cluster number +get_next_cluster_number() { + if ! kind get clusters 2>&1 | grep "^kind" > /dev/null; then + echo 1 + else + highest_num=$(kind get clusters | grep "^kind" | tail -1 | cut -c 5-) + echo $((highest_num + 1)) + fi +} + +if [ -f /.dockerenv ]; then +myip=$HOST_IP +container=$(docker inspect $(docker ps -q) | jq -r ".[] | select(.Config.Hostname == \"$HOSTNAME\") | .Name" | cut -d/ -f2) +docker network connect "kind" $container || true +number=$(get_next_cluster_number) +twodigits=$(printf "%02d\n" $number) +fi + +reg_name='kind-registry' +reg_port='5000' +docker start "${reg_name}" 2>/dev/null || \ +docker run -d --restart=always -p "0.0.0.0:${reg_port}:5000" --name "${reg_name}" registry:2 + +cache_port='5000' +cat > registries < ${HOME}/.${cache_name}-config.yml </dev/null || \ +docker run -d --restart=always ${DEPLOY_EXTRA_PARAMS} -v ${HOME}/.${cache_name}-config.yml:/etc/docker/registry/config.yml --name "${cache_name}" registry:2 +done +mkdir -p /tmp/oidc + +cat <<'EOF' >/tmp/oidc/sa-signer-pkcs8.pub +-----BEGIN PUBLIC KEY----- +MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA53YiBcrn7+ZK0Vb4odeA +1riYdvEb8To4H6/HtF+OKzuCIXFQ+bRy7yMrDGITYpfYPrTZOgfdeTLZqOiAj+cL +395nvxdly83SUrdh7ItfOPRluuuiPHnFn111wpyjBw5nut4Kx+M5MksNfA1hU0Zw 
+zIM9OviX8iEF8xHWUtz4BAMDG8N6+zpLo0pAzaei5hKuLZ9dZOzHBC8VOW82cQMm +5X5uOKsCHMtNSjqYUNB1DxN6xxM+odGWT/6xthPGk6YCxmO28YHPFZfiS2eAIpD8 +2p/16KQKU6TkZSrldkYxiHIPhu+5f9faZJG7dB9pLN1SfdTBio4PK5Mz9muLUCv9 +ywIDAQAB +-----END PUBLIC KEY----- +EOF + +cat <<'EOF' >/tmp/oidc/sa-signer.key +-----BEGIN RSA PRIVATE KEY----- +MIIEpAIBAAKCAQEA53YiBcrn7+ZK0Vb4odeA1riYdvEb8To4H6/HtF+OKzuCIXFQ ++bRy7yMrDGITYpfYPrTZOgfdeTLZqOiAj+cL395nvxdly83SUrdh7ItfOPRluuui +PHnFn111wpyjBw5nut4Kx+M5MksNfA1hU0ZwzIM9OviX8iEF8xHWUtz4BAMDG8N6 ++zpLo0pAzaei5hKuLZ9dZOzHBC8VOW82cQMm5X5uOKsCHMtNSjqYUNB1DxN6xxM+ +odGWT/6xthPGk6YCxmO28YHPFZfiS2eAIpD82p/16KQKU6TkZSrldkYxiHIPhu+5 +f9faZJG7dB9pLN1SfdTBio4PK5Mz9muLUCv9ywIDAQABAoIBAB8tro+RMYUDRHjG +el9ypAxIeWEsQVNRQFYkW4ZUiNYSAgl3Ni0svX6xAg989peFVL+9pLVIcfDthJxY +FVlNCjBxyQ/YmwHFC9vQkARJEd6eLUXsj8INtS0ubbp1VxCQRDDL0C/0z7OSoJJh +SwboqjEiTJExA2a+RArmEDTBRzdi3t+kT8G23JcqOivrITt17K6bQYyJXw7/vUdc +r/R+hfd5TqVq92VddzDT7RNJAxsbPPXjGnESlq1GALBDs+uBGYsP0fiEJb2nicSv +z9fBnBeERhut1gcE0C0iLRQZb+3r8TitBtxrZv+0BHgXrkKtXDwWTqGEKOwC4dBn +7nxkH2ECgYEA6+/DOTABGYOWOQftFkJMjcugzDrjoGpuXuVOTb65T+3FHAzU93zy +3bt3wQxrlugluyy9Sc/PL3ck2LgUsPHZ+s7zsdGvvGALBD6bOSSKATz9JgjwifO8 +PgqUz1kXRwez2CtKLOOCFFtcIzEdWIzsa1ubNqLzgN7rD+XBkUc2uEcCgYEA+yTy +72EDMQVoIZOygytHsDNdy0iS2RsBbdurT27wkYuFpFUVWdbNSL+8haE+wJHseHcw +BD4WIMpU+hnS4p4OO8+6V7PiXOS5E/se91EJigZAoixgDUiC8ihojWgK9PYEavUo +hULWbayO59SxYWeUI4Ze0GP8Jw8vdB86ib4ulF0CgYEAgyzRuLjk05+iZODwQyDn +WSquov3W0rh51s7cw0LX2wWSQm8r9NGGYhs5kJ5sLwGxAKj2MNSWF4jBdrCZ6Gr+ +y4BGY0X209/+IAUC3jlfdSLIiF4OBlT6AvB1HfclhvtUVUp0OhLfnpvQ1UwYScRI +KcRLvovIoIzP2g3emfwjAz8CgYEAxUHhOhm1mwRHJNBQTuxok0HVMrze8n1eov39 +0RcvBvJSVp+pdHXdqX1HwqHCmxhCZuAeq8ZkNP8WvZYY6HwCbAIdt5MHgbT4lXQR +f2l8F5gPnhFCpExG5ZLNg/urV3oAQE4stHap21zEpdyOMhZb6Yc5424U+EzaFdgN +b3EcPtUCgYAkKvUlSnBbgiJz1iaN6fuTqH0efavuFGMhjNmG7GtpNXdgyl1OWIuc +Yu+tZtHXtKYf3B99GwPrFzw/7yfDwae5YeWmi2/pFTH96wv3brJBqkAWY8G5Rsmd +qF50p34vIFqUBniNRwSArx8t2dq/CuAMgLAtSjh70Q6ZAnCF85PD8Q== +-----END RSA PRIVATE KEY----- +EOF + +echo 
Contents of kind${number}.yaml +cat << EOF | tee kind${number}.yaml +kind: Cluster +apiVersion: kind.x-k8s.io/v1alpha4 +nodes: +- role: control-plane + image: ${kindest_node} + extraPortMappings: + - containerPort: 6443 + hostPort: 70${twodigits} + extraMounts: + - containerPath: /etc/kubernetes/oidc + hostPath: /tmp/oidc + labels: + ingress-ready: true + topology.kubernetes.io/region: ${region} + topology.kubernetes.io/zone: ${zone} +networking: + disableDefaultCNI: true + serviceSubnet: "10.$(echo $twodigits | sed 's/^0*//').0.0/16" + podSubnet: "10.1${twodigits}.0.0/16" +kubeadmConfigPatches: +- | + kind: ClusterConfiguration + apiServer: + extraArgs: + service-account-key-file: /etc/kubernetes/pki/sa.pub + service-account-key-file: /etc/kubernetes/oidc/sa-signer-pkcs8.pub + service-account-signing-key-file: /etc/kubernetes/oidc/sa-signer.key + service-account-issuer: https://solo-workshop-oidc.s3.us-east-1.amazonaws.com + api-audiences: sts.amazonaws.com + extraVolumes: + - name: oidc + hostPath: /etc/kubernetes/oidc + mountPath: /etc/kubernetes/oidc + readOnly: true + metadata: + name: config +containerdConfigPatches: +- |- + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:${reg_port}"] + endpoint = ["http://${reg_name}:${reg_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"] + endpoint = ["http://docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-docker.pkg.dev"] + endpoint = ["http://us-docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-central1-docker.pkg.dev"] + endpoint = ["http://us-central1-docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."quay.io"] + endpoint = ["http://quay:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"] + endpoint = ["http://gcr:${cache_port}"] +EOF +echo ----------------------------------------------------- + +kind create cluster --name kind${number} --config 
kind${number}.yaml +ipkind=$(docker inspect kind${number}-control-plane | jq -r '.[0].NetworkSettings.Networks[].IPAddress') +networkkind=$(echo ${ipkind} | awk -F. '{ print $1"."$2 }') +kubectl config set-cluster kind-kind${number} --server=https://${myip}:70${twodigits} --insecure-skip-tls-verify=true + +# Preload images +cat << EOF >> images.txt +quay.io/metallb/controller:v0.13.12 +quay.io/metallb/speaker:v0.13.12 +EOF +cat images.txt | while read image; do + docker pull $image || true + kind load docker-image $image --name kind${number} || true +done + +docker network connect "kind" "${reg_name}" || true +docker network connect "kind" docker || true +docker network connect "kind" us-docker || true +docker network connect "kind" us-central1-docker || true +docker network connect "kind" quay || true +docker network connect "kind" gcr || true +# Calico for ipv4 +curl -sL https://raw.githubusercontent.com/projectcalico/calico/v3.28.1/manifests/calico.yaml | sed 's/250m/50m/g' | kubectl --context kind-kind${number} apply -f - + +for i in 1 2 3 4 5; do kubectl --context=kind-kind${number} apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml && break || sleep 15; done +kubectl --context=kind-kind${number} create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)" +kubectl --context=kind-kind${number} -n metallb-system rollout status deploy controller || true + +cat << EOF | tee metallb${number}.yaml +apiVersion: metallb.io/v1beta1 +kind: IPAddressPool +metadata: + name: first-pool + namespace: metallb-system +spec: + addresses: + - ${networkkind}.1${twodigits}.1-${networkkind}.1${twodigits}.254 +--- +apiVersion: metallb.io/v1beta1 +kind: L2Advertisement +metadata: + name: empty + namespace: metallb-system +EOF + +printf "Create IPAddressPool in kind-kind${number}\n" +for i in {1..10}; do +kubectl --context=kind-kind${number} apply -f metallb${number}.yaml && break 
+sleep 2 +done + +# connect the registry to the cluster network if not already connected +printf "Renaming context kind-kind${number} to ${name}\n" +for i in {1..100}; do + (kubectl config get-contexts -oname | grep ${name}) && break + kubectl config rename-context kind-kind${number} ${name} && break + printf " $i"/100 + sleep 2 + [ $i -lt 100 ] || exit 1 +done + +# Document the local registry +# https://github.com/kubernetes/enhancements/tree/master/keps/sig-cluster-lifecycle/generic/1755-communicating-a-local-registry +cat </dev/null; then + myip=$(hostname -I | awk '{ print $1 }') +else + myip=$(ipconfig getifaddr en0) +fi + +# Function to determine the next available cluster number +get_next_cluster_number() { + if ! kind get clusters 2>&1 | grep "^kind" > /dev/null; then + echo 1 + else + highest_num=$(kind get clusters | grep "^kind" | tail -1 | cut -c 5-) + echo $((highest_num + 1)) + fi +} + +if [ -f /.dockerenv ]; then +myip=$HOST_IP +container=$(docker inspect $(docker ps -q) | jq -r ".[] | select(.Config.Hostname == \"$HOSTNAME\") | .Name" | cut -d/ -f2) +docker network connect "kind" $container || true +number=$(get_next_cluster_number) +twodigits=$(printf "%02d\n" $number) +fi + +reg_name='kind-registry' +reg_port='5000' +docker start "${reg_name}" 2>/dev/null || \ +docker run -d --restart=always -p "0.0.0.0:${reg_port}:5000" --name "${reg_name}" registry:2 + +cache_port='5000' +cat > registries < ${HOME}/.${cache_name}-config.yml </dev/null || \ +docker run -d --restart=always ${DEPLOY_EXTRA_PARAMS} -v ${HOME}/.${cache_name}-config.yml:/etc/docker/registry/config.yml --name "${cache_name}" registry:2 +done +mkdir -p /tmp/oidc + +cat <<'EOF' >/tmp/oidc/sa-signer-pkcs8.pub +-----BEGIN PUBLIC KEY----- +MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA53YiBcrn7+ZK0Vb4odeA +1riYdvEb8To4H6/HtF+OKzuCIXFQ+bRy7yMrDGITYpfYPrTZOgfdeTLZqOiAj+cL +395nvxdly83SUrdh7ItfOPRluuuiPHnFn111wpyjBw5nut4Kx+M5MksNfA1hU0Zw 
+zIM9OviX8iEF8xHWUtz4BAMDG8N6+zpLo0pAzaei5hKuLZ9dZOzHBC8VOW82cQMm +5X5uOKsCHMtNSjqYUNB1DxN6xxM+odGWT/6xthPGk6YCxmO28YHPFZfiS2eAIpD8 +2p/16KQKU6TkZSrldkYxiHIPhu+5f9faZJG7dB9pLN1SfdTBio4PK5Mz9muLUCv9 +ywIDAQAB +-----END PUBLIC KEY----- +EOF + +cat <<'EOF' >/tmp/oidc/sa-signer.key +-----BEGIN RSA PRIVATE KEY----- +MIIEpAIBAAKCAQEA53YiBcrn7+ZK0Vb4odeA1riYdvEb8To4H6/HtF+OKzuCIXFQ ++bRy7yMrDGITYpfYPrTZOgfdeTLZqOiAj+cL395nvxdly83SUrdh7ItfOPRluuui +PHnFn111wpyjBw5nut4Kx+M5MksNfA1hU0ZwzIM9OviX8iEF8xHWUtz4BAMDG8N6 ++zpLo0pAzaei5hKuLZ9dZOzHBC8VOW82cQMm5X5uOKsCHMtNSjqYUNB1DxN6xxM+ +odGWT/6xthPGk6YCxmO28YHPFZfiS2eAIpD82p/16KQKU6TkZSrldkYxiHIPhu+5 +f9faZJG7dB9pLN1SfdTBio4PK5Mz9muLUCv9ywIDAQABAoIBAB8tro+RMYUDRHjG +el9ypAxIeWEsQVNRQFYkW4ZUiNYSAgl3Ni0svX6xAg989peFVL+9pLVIcfDthJxY +FVlNCjBxyQ/YmwHFC9vQkARJEd6eLUXsj8INtS0ubbp1VxCQRDDL0C/0z7OSoJJh +SwboqjEiTJExA2a+RArmEDTBRzdi3t+kT8G23JcqOivrITt17K6bQYyJXw7/vUdc +r/R+hfd5TqVq92VddzDT7RNJAxsbPPXjGnESlq1GALBDs+uBGYsP0fiEJb2nicSv +z9fBnBeERhut1gcE0C0iLRQZb+3r8TitBtxrZv+0BHgXrkKtXDwWTqGEKOwC4dBn +7nxkH2ECgYEA6+/DOTABGYOWOQftFkJMjcugzDrjoGpuXuVOTb65T+3FHAzU93zy +3bt3wQxrlugluyy9Sc/PL3ck2LgUsPHZ+s7zsdGvvGALBD6bOSSKATz9JgjwifO8 +PgqUz1kXRwez2CtKLOOCFFtcIzEdWIzsa1ubNqLzgN7rD+XBkUc2uEcCgYEA+yTy +72EDMQVoIZOygytHsDNdy0iS2RsBbdurT27wkYuFpFUVWdbNSL+8haE+wJHseHcw +BD4WIMpU+hnS4p4OO8+6V7PiXOS5E/se91EJigZAoixgDUiC8ihojWgK9PYEavUo +hULWbayO59SxYWeUI4Ze0GP8Jw8vdB86ib4ulF0CgYEAgyzRuLjk05+iZODwQyDn +WSquov3W0rh51s7cw0LX2wWSQm8r9NGGYhs5kJ5sLwGxAKj2MNSWF4jBdrCZ6Gr+ +y4BGY0X209/+IAUC3jlfdSLIiF4OBlT6AvB1HfclhvtUVUp0OhLfnpvQ1UwYScRI +KcRLvovIoIzP2g3emfwjAz8CgYEAxUHhOhm1mwRHJNBQTuxok0HVMrze8n1eov39 +0RcvBvJSVp+pdHXdqX1HwqHCmxhCZuAeq8ZkNP8WvZYY6HwCbAIdt5MHgbT4lXQR +f2l8F5gPnhFCpExG5ZLNg/urV3oAQE4stHap21zEpdyOMhZb6Yc5424U+EzaFdgN +b3EcPtUCgYAkKvUlSnBbgiJz1iaN6fuTqH0efavuFGMhjNmG7GtpNXdgyl1OWIuc +Yu+tZtHXtKYf3B99GwPrFzw/7yfDwae5YeWmi2/pFTH96wv3brJBqkAWY8G5Rsmd +qF50p34vIFqUBniNRwSArx8t2dq/CuAMgLAtSjh70Q6ZAnCF85PD8Q== +-----END RSA PRIVATE KEY----- +EOF + +echo 
Contents of kind${number}.yaml +cat << EOF | tee kind${number}.yaml +kind: Cluster +apiVersion: kind.x-k8s.io/v1alpha4 +nodes: +- role: control-plane + image: ${kindest_node} + extraPortMappings: + - containerPort: 6443 + hostPort: 70${twodigits} + extraMounts: + - containerPath: /etc/kubernetes/oidc + hostPath: /tmp/oidc + labels: + ingress-ready: true + topology.kubernetes.io/region: ${region} + topology.kubernetes.io/zone: ${zone} +networking: + disableDefaultCNI: true + serviceSubnet: "10.$(echo $twodigits | sed 's/^0*//').0.0/16" + podSubnet: "10.1${twodigits}.0.0/16" +kubeadmConfigPatches: +- | + kind: ClusterConfiguration + apiServer: + extraArgs: + service-account-key-file: /etc/kubernetes/pki/sa.pub + service-account-key-file: /etc/kubernetes/oidc/sa-signer-pkcs8.pub + service-account-signing-key-file: /etc/kubernetes/oidc/sa-signer.key + service-account-issuer: https://solo-workshop-oidc.s3.us-east-1.amazonaws.com + api-audiences: sts.amazonaws.com + extraVolumes: + - name: oidc + hostPath: /etc/kubernetes/oidc + mountPath: /etc/kubernetes/oidc + readOnly: true + metadata: + name: config +containerdConfigPatches: +- |- + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:${reg_port}"] + endpoint = ["http://${reg_name}:${reg_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"] + endpoint = ["http://docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-docker.pkg.dev"] + endpoint = ["http://us-docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-central1-docker.pkg.dev"] + endpoint = ["http://us-central1-docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."quay.io"] + endpoint = ["http://quay:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"] + endpoint = ["http://gcr:${cache_port}"] +EOF +echo ----------------------------------------------------- + +kind create cluster --name kind${number} --config 
kind${number}.yaml +ipkind=$(docker inspect kind${number}-control-plane | jq -r '.[0].NetworkSettings.Networks[].IPAddress') +networkkind=$(echo ${ipkind} | awk -F. '{ print $1"."$2 }') +kubectl config set-cluster kind-kind${number} --server=https://${myip}:70${twodigits} --insecure-skip-tls-verify=true + +# Preload images +cat << EOF >> images.txt +quay.io/metallb/controller:v0.13.12 +quay.io/metallb/speaker:v0.13.12 +EOF +cat images.txt | while read image; do + docker pull $image || true + kind load docker-image $image --name kind${number} || true +done + +docker network connect "kind" "${reg_name}" || true +docker network connect "kind" docker || true +docker network connect "kind" us-docker || true +docker network connect "kind" us-central1-docker || true +docker network connect "kind" quay || true +docker network connect "kind" gcr || true +# Calico for ipv4 +curl -sL https://raw.githubusercontent.com/projectcalico/calico/v3.28.1/manifests/calico.yaml | sed 's/250m/50m/g' | kubectl --context kind-kind${number} apply -f - + +for i in 1 2 3 4 5; do kubectl --context=kind-kind${number} apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml && break || sleep 15; done +kubectl --context=kind-kind${number} create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)" +kubectl --context=kind-kind${number} -n metallb-system rollout status deploy controller || true + +cat << EOF | tee metallb${number}.yaml +apiVersion: metallb.io/v1beta1 +kind: IPAddressPool +metadata: + name: first-pool + namespace: metallb-system +spec: + addresses: + - ${networkkind}.1${twodigits}.1-${networkkind}.1${twodigits}.254 +--- +apiVersion: metallb.io/v1beta1 +kind: L2Advertisement +metadata: + name: empty + namespace: metallb-system +EOF + +printf "Create IPAddressPool in kind-kind${number}\n" +for i in {1..10}; do +kubectl --context=kind-kind${number} apply -f metallb${number}.yaml && break 
+sleep 2 +done + +# connect the registry to the cluster network if not already connected +printf "Renaming context kind-kind${number} to ${name}\n" +for i in {1..100}; do + (kubectl config get-contexts -oname | grep ${name}) && break + kubectl config rename-context kind-kind${number} ${name} && break + printf " $i"/100 + sleep 2 + [ $i -lt 100 ] || exit 1 +done + +# Document the local registry +# https://github.com/kubernetes/enhancements/tree/master/keps/sig-cluster-lifecycle/generic/1755-communicating-a-local-registry +cat </dev/null || true +source ./scripts/assert.sh +export MGMT=mgmt +export CLUSTER1=cluster1 +export CLUSTER2=cluster2 +bash ./data/steps/deploy-kind-clusters/deploy-mgmt.sh +bash ./data/steps/deploy-kind-clusters/deploy-cluster1.sh +bash ./data/steps/deploy-kind-clusters/deploy-cluster2.sh +./scripts/check.sh mgmt +./scripts/check.sh cluster1 +./scripts/check.sh cluster2 +kubectl config use-context ${MGMT} +cat <<'EOF' > ./test.js +const helpers = require('./tests/chai-exec'); + +describe("Clusters are healthy", () => { + const clusters = ["mgmt", "cluster1", "cluster2"]; + + clusters.forEach(cluster => { + it(`Cluster ${cluster} is healthy`, () => helpers.k8sObjectIsPresent({ context: cluster, namespace: "default", k8sType: "service", k8sObj: "kubernetes" })); + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-kind-clusters/tests/cluster-healthy.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +export registry=localhost:5000 +cat <<'EOF' > images.txt +docker.io/curlimages/curl +docker.io/alpine/openssl:3.3.1 +docker.io/bats/bats:v1.4.1 +docker.io/bitnami/postgresql:16.1.0-debian-11-r15 +docker.io/grafana/grafana:10.2.3 +docker.io/istio/examples-bookinfo-details-v1:1.20.2 +docker.io/istio/examples-bookinfo-productpage-v1:1.20.2 +docker.io/istio/examples-bookinfo-ratings-v1:1.20.2 
+docker.io/istio/examples-bookinfo-ratings-v2:1.18.0 +docker.io/istio/examples-bookinfo-reviews-v1:1.20.2 +docker.io/istio/examples-bookinfo-reviews-v2:1.20.2 +docker.io/istio/examples-bookinfo-reviews-v3:1.20.2 +docker.io/kennethreitz/httpbin +docker.io/redis:7.2.4-alpine +gcr.io/gloo-mesh/ext-auth-service:0.56.10 +gcr.io/gloo-mesh/gloo-mesh-agent:2.5.12 +gcr.io/gloo-mesh/gloo-mesh-apiserver:2.5.12 +gcr.io/gloo-mesh/gloo-mesh-envoy:2.5.12 +gcr.io/gloo-mesh/gloo-mesh-mgmt-server:2.5.12 +gcr.io/gloo-mesh/gloo-mesh-spire-controller:2.5.12 +gcr.io/gloo-mesh/gloo-mesh-ui:2.5.12 +gcr.io/gloo-mesh/gloo-otel-collector:2.5.12 +gcr.io/gloo-mesh/rate-limiter:0.11.11 +ghcr.io/spiffe/spire-server:1.8.6 +quay.io/kiwigrid/k8s-sidecar:1.25.2 +quay.io/prometheus-operator/prometheus-config-reloader:v0.70.0 +quay.io/prometheus-operator/prometheus-config-reloader:v0.71.2 +quay.io/prometheus-operator/prometheus-operator:v0.70.0 +quay.io/prometheus/alertmanager:v0.26.0 +quay.io/prometheus/node-exporter:v1.7.0 +quay.io/prometheus/prometheus:v2.48.1 +quay.io/prometheus/prometheus:v2.49.1 +quay.io/solo-io/kubectl:1.16.4 +registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20221220-controller-v1.5.1-58-g787ea74b6 +registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.10.1 +us-docker.pkg.dev/gloo-mesh/istio-workshops/install-cni:1.20.2-solo +us-docker.pkg.dev/gloo-mesh/istio-workshops/operator:1.20.2-solo +us-docker.pkg.dev/gloo-mesh/istio-workshops/pilot:1.20.2-solo +us-docker.pkg.dev/gloo-mesh/istio-workshops/proxyv2:1.20.2-solo +EOF + +cat images.txt | while read image; do + nohup sh -c "echo $image | xargs -P10 -n1 docker pull" nohup.out 2>nohup.err & +done + +cat images.txt | while read image; do + src=$(echo $image | sed 's/^docker\.io\///g' | sed 's/^library\///g') + dst=$(echo $image | awk -F/ '{ if(NF>3){ print $3"/"$4}else{if(NF>2){ print $2"/"$3}else{if($1=="docker.io"){print $2}else{print $1"/"$2}}}}' | sed 's/^library\///g') + docker pull $image + + id=$(docker images $src 
--format "{{.ID}}") + + docker tag $id ${registry}/$dst + docker push ${registry}/$dst + dst_dev=$(echo ${dst} | sed 's/gloo-platform-dev/gloo-mesh/') + docker tag $id ${registry}/$dst_dev + docker push ${registry}/$dst_dev +done + +export otel_collector_image=$(curl --silent -X GET http://${registry}/v2/_catalog | jq -er '.repositories[] | select ((.|contains("otel-collector")) and (.|startswith("gloo-mesh/")))') +export GLOO_MESH_VERSION=v2.5.12 +curl -sL https://run.solo.io/meshctl/install | sh - +export PATH=$HOME/.gloo-mesh/bin:$PATH +cat <<'EOF' > ./test.js +var chai = require('chai'); +var expect = chai.expect; + +describe("Required environment variables should contain value", () => { + afterEach(function(done){ + if(this.currentTest.currentRetry() > 0){ + process.stdout.write("."); + setTimeout(done, 1000); + } else { + done(); + } + }); + + it("Context environment variables should not be empty", () => { + expect(process.env.MGMT).not.to.be.empty + expect(process.env.CLUSTER1).not.to.be.empty + expect(process.env.CLUSTER2).not.to.be.empty + }); + + it("Gloo Mesh licence environment variables should not be empty", () => { + expect(process.env.GLOO_MESH_LICENSE_KEY).not.to.be.empty + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-and-register-gloo-mesh/tests/environment-variables.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context ${MGMT} create ns gloo-mesh + +helm upgrade --install gloo-platform-crds gloo-platform-crds \ + --repo https://storage.googleapis.com/gloo-platform/helm-charts \ + --namespace gloo-mesh \ + --kube-context ${MGMT} \ + --version 2.5.12 + +helm upgrade --install gloo-platform gloo-platform \ + --repo https://storage.googleapis.com/gloo-platform/helm-charts \ + --namespace gloo-mesh \ + --kube-context ${MGMT} \ + --version 2.5.12 \ + -f -< ./test.js + +const helpers = 
require('./tests/chai-exec'); + +describe("MGMT server is healthy", () => { + let cluster = process.env.MGMT; + let deployments = ["gloo-mesh-mgmt-server","gloo-mesh-redis","gloo-telemetry-gateway","prometheus-server"]; + deployments.forEach(deploy => { + it(deploy + ' pods are ready in ' + cluster, () => helpers.checkDeployment({ context: cluster, namespace: "gloo-mesh", k8sObj: deploy })); + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-and-register-gloo-mesh/tests/check-deployment.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +cat <<'EOF' > ./test.js +const chaiExec = require("@jsdevtools/chai-exec"); +var chai = require('chai'); +var expect = chai.expect; +chai.use(chaiExec); + +afterEach(function (done) { + if (this.currentTest.currentRetry() > 0) { + process.stdout.write("."); + setTimeout(done, 1000); + } else { + done(); + } +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-and-register-gloo-mesh/tests/get-gloo-mesh-mgmt-server-ip.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +export ENDPOINT_GLOO_MESH=$(kubectl --context ${MGMT} -n gloo-mesh get svc gloo-mesh-mgmt-server -o jsonpath='{.status.loadBalancer.ingress[0].*}'):9900 +export HOST_GLOO_MESH=$(echo ${ENDPOINT_GLOO_MESH%:*}) +export ENDPOINT_TELEMETRY_GATEWAY=$(kubectl --context ${MGMT} -n gloo-mesh get svc gloo-telemetry-gateway -o jsonpath='{.status.loadBalancer.ingress[0].*}'):4317 +export ENDPOINT_GLOO_MESH_UI=$(kubectl --context ${MGMT} -n gloo-mesh get svc gloo-mesh-ui -o jsonpath='{.status.loadBalancer.ingress[0].*}'):8090 +cat <<'EOF' > ./test.js +const dns = require('dns'); +const chaiHttp = require("chai-http"); +const chai = require("chai"); +const expect = chai.expect; 
+chai.use(chaiHttp); +const { waitOnFailedTest } = require('./tests/utils'); + +afterEach(function(done) { waitOnFailedTest(done, this.currentTest.currentRetry())}); + +describe("Address '" + process.env.HOST_GLOO_MESH + "' can be resolved in DNS", () => { + it(process.env.HOST_GLOO_MESH + ' can be resolved', (done) => { + return dns.lookup(process.env.HOST_GLOO_MESH, (err, address, family) => { + expect(address).to.be.an.ip; + done(); + }); + }); +}); +EOF +echo "executing test ./gloo-mesh-2-0/tests/can-resolve.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl apply --context ${MGMT} -f - < ca.crt +kubectl create secret generic relay-root-tls-secret -n gloo-mesh --context ${CLUSTER1} --from-file ca.crt=ca.crt +rm ca.crt + +kubectl get secret relay-identity-token-secret -n gloo-mesh --context ${MGMT} -o jsonpath='{.data.token}' | base64 -d > token +kubectl create secret generic relay-identity-token-secret -n gloo-mesh --context ${CLUSTER1} --from-file token=token +rm token + +helm upgrade --install gloo-platform-crds gloo-platform-crds \ + --repo https://storage.googleapis.com/gloo-platform/helm-charts \ + --namespace gloo-mesh \ + --kube-context ${CLUSTER1} \ + --version 2.5.12 + +helm upgrade --install gloo-platform gloo-platform \ + --repo https://storage.googleapis.com/gloo-platform/helm-charts \ + --namespace gloo-mesh \ + --kube-context ${CLUSTER1} \ + --version 2.5.12 \ + -f -< ca.crt +kubectl create secret generic relay-root-tls-secret -n gloo-mesh --context ${CLUSTER2} --from-file ca.crt=ca.crt +rm ca.crt + +kubectl get secret relay-identity-token-secret -n gloo-mesh --context ${MGMT} -o jsonpath='{.data.token}' | base64 -d > token +kubectl create secret generic relay-identity-token-secret -n gloo-mesh --context ${CLUSTER2} --from-file token=token +rm token + +helm upgrade --install gloo-platform-crds gloo-platform-crds \ + --repo 
https://storage.googleapis.com/gloo-platform/helm-charts \ + --namespace gloo-mesh \ + --kube-context ${CLUSTER2} \ + --version 2.5.12 + +helm upgrade --install gloo-platform gloo-platform \ + --repo https://storage.googleapis.com/gloo-platform/helm-charts \ + --namespace gloo-mesh \ + --kube-context ${CLUSTER2} \ + --version 2.5.12 \ + -f -< ./test.js +var chai = require('chai'); +var expect = chai.expect; +const helpers = require('./tests/chai-exec'); +describe("Cluster registration", () => { + it("cluster1 is registered", () => { + podName = helpers.getOutputForCommand({ command: "kubectl -n gloo-mesh get pods -l app=gloo-mesh-mgmt-server -o jsonpath='{.items[0].metadata.name}' --context " + process.env.MGMT }).replaceAll("'", ""); + command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh debug -q -i " + podName + " --image=" + process.env.registry + "/curlimages/curl -- curl -s http://localhost:9091/metrics" }).replaceAll("'", ""); + expect(command).to.contain("cluster1"); + }); + it("cluster2 is registered", () => { + podName = helpers.getOutputForCommand({ command: "kubectl -n gloo-mesh get pods -l app=gloo-mesh-mgmt-server -o jsonpath='{.items[0].metadata.name}' --context " + process.env.MGMT }).replaceAll("'", ""); + command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh debug -q -i " + podName + " --image=" + process.env.registry + "/curlimages/curl -- curl -s http://localhost:9091/metrics" }).replaceAll("'", ""); + expect(command).to.contain("cluster2"); + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-and-register-gloo-mesh/tests/cluster-registration.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +curl -L https://istio.io/downloadIstio | sh - + +if [ -d "istio-"*/ ]; then + cd istio-*/ + export 
PATH=$PWD/bin:$PATH + cd .. +fi +cat <<'EOF' > ./test.js +const chaiExec = require("@jsdevtools/chai-exec"); +var chai = require('chai'); +var expect = chai.expect; +chai.use(chaiExec); + +afterEach(function (done) { + if (this.currentTest.currentRetry() > 0) { + process.stdout.write("."); + setTimeout(done, 1000); + } else { + done(); + } +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/istio-lifecycle-manager-install/tests/istio-version.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +registry=localhost:5000 +kubectl --context ${CLUSTER1} create ns istio-gateways + +kubectl apply --context ${CLUSTER1} -f - < ./test.js + +const helpers = require('./tests/chai-exec'); + +const chaiExec = require("@jsdevtools/chai-exec"); +const helpersHttp = require('./tests/chai-http'); +const chai = require("chai"); +const expect = chai.expect; + +afterEach(function (done) { + if (this.currentTest.currentRetry() > 0) { + process.stdout.write("."); + setTimeout(done, 1000); + } else { + done(); + } +}); + +describe("Checking Istio installation", function() { + it('istiod pods are ready in cluster ' + process.env.CLUSTER1, () => helpers.checkDeploymentsWithLabels({ context: process.env.CLUSTER1, namespace: "istio-system", labels: "app=istiod", instances: 1 })); + it('gateway pods are ready in cluster ' + process.env.CLUSTER1, () => helpers.checkDeploymentsWithLabels({ context: process.env.CLUSTER1, namespace: "istio-gateways", labels: "app=istio-ingressgateway", instances: 2 })); + it('istiod pods are ready in cluster ' + process.env.CLUSTER2, () => helpers.checkDeploymentsWithLabels({ context: process.env.CLUSTER2, namespace: "istio-system", labels: "app=istiod", instances: 1 })); + it('gateway pods are ready in cluster ' + process.env.CLUSTER2, () => helpers.checkDeploymentsWithLabels({ context: process.env.CLUSTER2, namespace: 
"istio-gateways", labels: "app=istio-ingressgateway", instances: 2 })); + it("Gateways have an ip attached in cluster " + process.env.CLUSTER1, () => { + let cli = chaiExec("kubectl --context " + process.env.CLUSTER1 + " -n istio-gateways get svc -l app=istio-ingressgateway -o jsonpath='{.items}'"); + cli.stderr.should.be.empty; + let deployments = JSON.parse(cli.stdout.slice(1,-1)); + expect(deployments).to.have.lengthOf(2); + deployments.forEach((deployment) => { + expect(deployment.status.loadBalancer).to.have.property("ingress"); + }); + }); + it("Gateways have an ip attached in cluster " + process.env.CLUSTER2, () => { + let cli = chaiExec("kubectl --context " + process.env.CLUSTER2 + " -n istio-gateways get svc -l app=istio-ingressgateway -o jsonpath='{.items}'"); + cli.stderr.should.be.empty; + let deployments = JSON.parse(cli.stdout.slice(1,-1)); + expect(deployments).to.have.lengthOf(2); + deployments.forEach((deployment) => { + expect(deployment.status.loadBalancer).to.have.property("ingress"); + }); + }); +}); + +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/istio-lifecycle-manager-install/tests/istio-ready.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +timeout 2m bash -c "until [[ \$(kubectl --context ${CLUSTER1} -n istio-gateways get svc -l istio=ingressgateway -o json | jq '.items[0].status.loadBalancer | length') -gt 0 ]]; do + sleep 1 +done" +export HOST_GW_CLUSTER1="$(kubectl --context ${CLUSTER1} -n istio-gateways get svc -l istio=ingressgateway -o jsonpath='{.items[0].status.loadBalancer.ingress[0].*}')" +export HOST_GW_CLUSTER2="$(kubectl --context ${CLUSTER2} -n istio-gateways get svc -l istio=ingressgateway -o jsonpath='{.items[0].status.loadBalancer.ingress[0].*}')" +cat <<'EOF' > ./test.js +const dns = require('dns'); +const chaiHttp = require("chai-http"); +const chai = require("chai"); +const expect 
= chai.expect; +chai.use(chaiHttp); +const { waitOnFailedTest } = require('./tests/utils'); + +afterEach(function(done) { waitOnFailedTest(done, this.currentTest.currentRetry())}); + +describe("Address '" + process.env.HOST_GW_CLUSTER1 + "' can be resolved in DNS", () => { + it(process.env.HOST_GW_CLUSTER1 + ' can be resolved', (done) => { + return dns.lookup(process.env.HOST_GW_CLUSTER1, (err, address, family) => { + expect(address).to.be.an.ip; + done(); + }); + }); +}); +EOF +echo "executing test ./gloo-mesh-2-0/tests/can-resolve.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +cat <<'EOF' > ./test.js +const dns = require('dns'); +const chaiHttp = require("chai-http"); +const chai = require("chai"); +const expect = chai.expect; +chai.use(chaiHttp); +const { waitOnFailedTest } = require('./tests/utils'); + +afterEach(function(done) { waitOnFailedTest(done, this.currentTest.currentRetry())}); + +describe("Address '" + process.env.HOST_GW_CLUSTER2 + "' can be resolved in DNS", () => { + it(process.env.HOST_GW_CLUSTER2 + ' can be resolved', (done) => { + return dns.lookup(process.env.HOST_GW_CLUSTER2, (err, address, family) => { + expect(address).to.be.an.ip; + done(); + }); + }); +}); +EOF +echo "executing test ./gloo-mesh-2-0/tests/can-resolve.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +sed -i'' -e "s/image: docker.io/image: ${registry}/g" \ + data/steps/deploy-bookinfo/productpage-v1.yaml \ + data/steps/deploy-bookinfo/details-v1.yaml \ + data/steps/deploy-bookinfo/ratings-v1.yaml \ + data/steps/deploy-bookinfo/reviews-v1-v2.yaml \ + data/steps/deploy-bookinfo/reviews-v3.yaml +kubectl --context ${CLUSTER1} create ns bookinfo-frontends +kubectl --context ${CLUSTER1} create ns bookinfo-backends +kubectl --context ${CLUSTER1} label namespace 
bookinfo-frontends istio.io/rev=1-20 --overwrite +kubectl --context ${CLUSTER1} label namespace bookinfo-backends istio.io/rev=1-20 --overwrite + +# Deploy the frontend bookinfo service in the bookinfo-frontends namespace +kubectl --context ${CLUSTER1} -n bookinfo-frontends apply -f data/steps/deploy-bookinfo/productpage-v1.yaml + +# Deploy the backend bookinfo services in the bookinfo-backends namespace for all versions less than v3 +kubectl --context ${CLUSTER1} -n bookinfo-backends apply \ + -f data/steps/deploy-bookinfo/details-v1.yaml \ + -f data/steps/deploy-bookinfo/ratings-v1.yaml \ + -f data/steps/deploy-bookinfo/reviews-v1-v2.yaml + +# Update the reviews service to display where it is coming from +kubectl --context ${CLUSTER1} -n bookinfo-backends set env deploy/reviews-v1 CLUSTER_NAME=${CLUSTER1} +kubectl --context ${CLUSTER1} -n bookinfo-backends set env deploy/reviews-v2 CLUSTER_NAME=${CLUSTER1} +echo -n Waiting for bookinfo pods to be ready... +timeout -v 5m bash -c " +until [[ \$(kubectl --context ${CLUSTER1} -n bookinfo-frontends get deploy -o json | jq '[.items[].status.readyReplicas] | add') -eq 1 && \\ + \$(kubectl --context ${CLUSTER1} -n bookinfo-backends get deploy -o json | jq '[.items[].status.readyReplicas] | add') -eq 4 ]] 2>/dev/null +do + sleep 1 + echo -n . 
+done" +echo +kubectl --context ${CLUSTER2} create ns bookinfo-frontends +kubectl --context ${CLUSTER2} create ns bookinfo-backends +kubectl --context ${CLUSTER2} label namespace bookinfo-frontends istio.io/rev=1-20 --overwrite +kubectl --context ${CLUSTER2} label namespace bookinfo-backends istio.io/rev=1-20 --overwrite + +# Deploy the frontend bookinfo service in the bookinfo-frontends namespace +kubectl --context ${CLUSTER2} -n bookinfo-frontends apply -f data/steps/deploy-bookinfo/productpage-v1.yaml +# Deploy the backend bookinfo services in the bookinfo-backends namespace for all versions +kubectl --context ${CLUSTER2} -n bookinfo-backends apply \ + -f data/steps/deploy-bookinfo/details-v1.yaml \ + -f data/steps/deploy-bookinfo/ratings-v1.yaml \ + -f data/steps/deploy-bookinfo/reviews-v1-v2.yaml \ + -f data/steps/deploy-bookinfo/reviews-v3.yaml +# Update the reviews service to display where it is coming from +kubectl --context ${CLUSTER2} -n bookinfo-backends set env deploy/reviews-v1 CLUSTER_NAME=${CLUSTER2} +kubectl --context ${CLUSTER2} -n bookinfo-backends set env deploy/reviews-v2 CLUSTER_NAME=${CLUSTER2} +kubectl --context ${CLUSTER2} -n bookinfo-backends set env deploy/reviews-v3 CLUSTER_NAME=${CLUSTER2} + +echo -n Waiting for bookinfo pods to be ready... +timeout -v 5m bash -c " +until [[ \$(kubectl --context ${CLUSTER2} -n bookinfo-frontends get deploy -o json | jq '[.items[].status.readyReplicas] | add') -eq 1 && \\ + \$(kubectl --context ${CLUSTER2} -n bookinfo-backends get deploy -o json | jq '[.items[].status.readyReplicas] | add') -eq 5 ]] 2>/dev/null +do + sleep 1 + echo -n . 
+done" +echo +kubectl --context ${CLUSTER2} -n bookinfo-frontends get pods && kubectl --context ${CLUSTER2} -n bookinfo-backends get pods +cat <<'EOF' > ./test.js +const helpers = require('./tests/chai-exec'); + +describe("Bookinfo app", () => { + let cluster = process.env.CLUSTER1 + let deployments = ["productpage-v1"]; + deployments.forEach(deploy => { + it(deploy + ' pods are ready in ' + cluster, () => helpers.checkDeployment({ context: cluster, namespace: "bookinfo-frontends", k8sObj: deploy })); + }); + deployments = ["ratings-v1", "details-v1", "reviews-v1", "reviews-v2"]; + deployments.forEach(deploy => { + it(deploy + ' pods are ready in ' + cluster, () => helpers.checkDeployment({ context: cluster, namespace: "bookinfo-backends", k8sObj: deploy })); + }); + cluster = process.env.CLUSTER2 + deployments = ["productpage-v1"]; + deployments.forEach(deploy => { + it(deploy + ' pods are ready in ' + cluster, () => helpers.checkDeployment({ context: cluster, namespace: "bookinfo-frontends", k8sObj: deploy })); + }); + deployments = ["ratings-v1", "details-v1", "reviews-v1", "reviews-v2", "reviews-v3"]; + deployments.forEach(deploy => { + it(deploy + ' pods are ready in ' + cluster, () => helpers.checkDeployment({ context: cluster, namespace: "bookinfo-backends", k8sObj: deploy })); + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/deploy-bookinfo/tests/check-bookinfo.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context ${CLUSTER1} create ns httpbin +kubectl apply --context ${CLUSTER1} -f - </dev/null +do + sleep 1 + echo -n . 
+done" +echo +kubectl --context ${CLUSTER1} -n httpbin get pods +cat <<'EOF' > ./test.js +const helpers = require('./tests/chai-exec'); + +describe("httpbin app", () => { + let cluster = process.env.CLUSTER1 + + let deployments = ["not-in-mesh", "in-mesh"]; + + deployments.forEach(deploy => { + it(deploy + ' pods are ready in ' + cluster, () => helpers.checkDeployment({ context: cluster, namespace: "httpbin", k8sObj: deploy })); + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/httpbin/deploy-httpbin/tests/check-httpbin.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context ${CLUSTER1} create namespace gloo-mesh-addons +kubectl --context ${CLUSTER1} label namespace gloo-mesh-addons istio.io/rev=1-20 --overwrite +kubectl --context ${CLUSTER2} create namespace gloo-mesh-addons +kubectl --context ${CLUSTER2} label namespace gloo-mesh-addons istio.io/rev=1-20 --overwrite +helm upgrade --install gloo-platform gloo-platform \ + --repo https://storage.googleapis.com/gloo-platform/helm-charts \ + --namespace gloo-mesh-addons \ + --kube-context ${CLUSTER1} \ + --version 2.5.12 \ + -f -< ./test.js +const helpers = require('./tests/chai-exec'); + +describe("Gloo Platform add-ons cluster1 deployment", () => { + let cluster = process.env.CLUSTER1 + let deployments = ["ext-auth-service", "rate-limiter"]; + deployments.forEach(deploy => { + it(deploy + ' pods are ready in ' + cluster, () => helpers.checkDeployment({ context: cluster, namespace: "gloo-mesh-addons", k8sObj: deploy })); + }); +}); +describe("Gloo Platform add-ons cluster2 deployment", () => { + let cluster = process.env.CLUSTER2 + let deployments = ["ext-auth-service", "rate-limiter"]; + deployments.forEach(deploy => { + it(deploy + ' pods are ready in ' + cluster, () => helpers.checkDeployment({ context: cluster, namespace: "gloo-mesh-addons", k8sObj: 
deploy })); + }); +}); + +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-gloo-mesh-addons/tests/check-addons-deployments.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +cat <<'EOF' > ./test.js +const helpers = require('./tests/chai-exec'); + +describe("Gloo Platform add-ons cluster1 service", () => { + let cluster = process.env.CLUSTER1 + let services = ["ext-auth-service", "rate-limiter"]; + services.forEach(service => { + it(service + ' exists in ' + cluster, () => helpers.k8sObjectIsPresent({ context: cluster, namespace: "gloo-mesh-addons", k8sType: "service", k8sObj: service })); + }); +}); +describe("Gloo Platform add-ons cluster2 service", () => { + let cluster = process.env.CLUSTER2 + let services = ["ext-auth-service", "rate-limiter"]; + services.forEach(service => { + it(service + ' exists in ' + cluster, () => helpers.k8sObjectIsPresent({ context: cluster, namespace: "gloo-mesh-addons", k8sType: "service", k8sObj: service })); + }); +}); + +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-gloo-mesh-addons/tests/check-addons-services.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl apply --context ${MGMT} -f - < ./test.js +const helpers = require('./tests/chai-http'); + +describe("Productpage is available (HTTP)", () => { + it('/productpage is available in cluster1', () => helpers.checkURL({ host: `http://cluster1-bookinfo.example.com`, path: '/productpage', retCode: 200 })); +}) +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/gateway-expose/tests/productpage-available.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; 
exit 1; } +openssl req -x509 -nodes -days 365 -newkey rsa:2048 \ + -keyout tls.key -out tls.crt -subj "/CN=*" +kubectl --context ${CLUSTER1} -n istio-gateways create secret generic tls-secret \ + --from-file=tls.key=tls.key \ + --from-file=tls.crt=tls.crt + +kubectl --context ${CLUSTER2} -n istio-gateways create secret generic tls-secret \ + --from-file=tls.key=tls.key \ + --from-file=tls.crt=tls.crt +kubectl apply --context ${CLUSTER1} -f - < ./test.js +const helpers = require('./tests/chai-http'); + +describe("Productpage is available (HTTPS)", () => { + it('/productpage is available in cluster1', () => helpers.checkURL({ host: `https://cluster1-bookinfo.example.com`, path: '/productpage', retCode: 200 })); +}) +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/gateway-expose/tests/productpage-available-secure.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +cat <<'EOF' > ./test.js +var chai = require('chai'); +var expect = chai.expect; +const helpers = require('./tests/chai-exec'); + +describe("Otel metrics", () => { + it("cluster1 is sending metrics to telemetryGateway", () => { + podName = helpers.getOutputForCommand({ command: "kubectl -n gloo-mesh get pods -l app.kubernetes.io/name=prometheus -o jsonpath='{.items[0].metadata.name}' --context " + process.env.MGMT }).replaceAll("'", ""); + command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh debug -q -i " + podName + " --image=" + process.env.registry + "/curlimages/curl -- curl -s http://localhost:9090/api/v1/query?query=istio_requests_total" }).replaceAll("'", ""); + expect(command).to.contain("cluster\":\"cluster1"); + }); +}); + + +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/gateway-expose/tests/otel-metrics.test.js.liquid" +timeout --signal=INT 3m mocha 
./test.js --timeout 10000 --retries=150 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl apply --context ${CLUSTER1} -f - < ./test.js +var chai = require('chai'); +var expect = chai.expect; +const chaiHttp = require("chai-http"); +chai.use(chaiHttp); + +process.env.NODE_TLS_REJECT_UNAUTHORIZED = '0'; + +afterEach(function (done) { + if (this.currentTest.currentRetry() > 0) { + process.stdout.write("."); + setTimeout(done, 1000); + } else { + done(); + } +}); + +let searchTest="Sorry, product reviews are currently unavailable for this book."; + +describe("Reviews shouldn't be available", () => { + it("Checking text '" + searchTest + "' in cluster1", async () => { + await chai.request(`https://cluster1-bookinfo.example.com`) + .get('/productpage') + .send() + .then((res) => { + expect(res.text).to.contain(searchTest); + }); + }); + +}); + +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/traffic-policies/tests/traffic-policies-reviews-unavailable.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context ${CLUSTER1} -n bookinfo-frontends delete faultinjectionpolicy ratings-fault-injection +kubectl --context ${CLUSTER1} -n bookinfo-frontends delete routetable ratings +kubectl --context ${CLUSTER1} -n bookinfo-frontends delete retrytimeoutpolicy reviews-request-timeout +kubectl --context ${CLUSTER1} -n bookinfo-frontends delete routetable reviews +kubectl apply --context ${MGMT} -f - < ./test.js +const helpers = require('./tests/chai-exec'); + +describe("cacerts secrets have been created", () => { + const clusters = [process.env.CLUSTER1, process.env.CLUSTER2]; + clusters.forEach(cluster => { + it('Secret is present in ' + cluster, () => helpers.k8sObjectIsPresent({ context: cluster, namespace: "istio-system", k8sType: "secret", k8sObj: "cacerts" })); + }); +}); +EOF +echo 
"executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/root-trust-policy/tests/cacert-secrets-created.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=150 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +printf "Waiting for all pods needed for the test..." +printf "\n" +kubectl --context ${CLUSTER1} get deploy -n bookinfo-backends -oname|xargs -I {} kubectl --context ${CLUSTER1} rollout status -n bookinfo-backends {} +kubectl --context ${CLUSTER2} get deploy -n bookinfo-backends -oname|xargs -I {} kubectl --context ${CLUSTER2} rollout status -n bookinfo-backends {} +printf "\n" +cat <<'EOF' > ./test.js +const chaiExec = require("@jsdevtools/chai-exec"); +var chai = require('chai'); +var expect = chai.expect; +chai.use(chaiExec); + +afterEach(function (done) { + if (this.currentTest.currentRetry() > 0) { + process.stdout.write("."); + setTimeout(done, 1000); + } else { + done(); + } +}); + +const testerPodName = "tester-root-trust-policy"; +before(function (done) { + chaiExec(`kubectl --context ${process.env.CLUSTER1} -n gloo-mesh run --image=${process.env.registry}/alpine/openssl:3.3.1 ${testerPodName} --command --wait=false -- sleep infinity`); + chaiExec(`kubectl --context ${process.env.CLUSTER2} -n gloo-mesh run --image=${process.env.registry}/alpine/openssl:3.3.1 ${testerPodName} --command --wait=false -- sleep infinity`); + done(); +}); +after(function (done) { + chaiExec(`kubectl --context ${process.env.CLUSTER1} -n gloo-mesh delete pod ${testerPodName} --wait=false`); + chaiExec(`kubectl --context ${process.env.CLUSTER2} -n gloo-mesh delete pod ${testerPodName} --wait=false`); + done(); +}); + +describe("Certificate issued by Gloo Mesh", () => { + var expectedOutput = "i:O=gloo-mesh"; + + it('Gloo mesh is the organization for ' + process.env.CLUSTER1 + ' certificate', () => { + let cli = chaiExec(`kubectl --context ${process.env.CLUSTER1} exec -t -n gloo-mesh ${testerPodName} -- openssl 
s_client -showcerts -connect ratings.bookinfo-backends:9080 -alpn istio`); + + expect(cli).stdout.to.contain(expectedOutput); + expect(cli).stderr.not.to.be.empty; + }); + + + it('Gloo mesh is the organization for ' + process.env.CLUSTER2 + ' certificate', () => { + let cli = chaiExec(`kubectl --context ${process.env.CLUSTER2} exec -t -n gloo-mesh ${testerPodName} -- openssl s_client -showcerts -connect ratings.bookinfo-backends:9080 -alpn istio`); + + expect(cli).stdout.to.contain(expectedOutput); + expect(cli).stderr.not.to.be.empty; + }); + +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/root-trust-policy/tests/certificate-issued-by-gloo-mesh.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl apply --context ${CLUSTER1} -f - < ./test.js +const helpers = require('./tests/chai-exec'); + +describe("The productpage service should get responses from cluster1", () => { + const podName = helpers.getOutputForCommand({ command: "kubectl -n bookinfo-frontends get pods -l app=productpage -o jsonpath='{.items[0].metadata.name}' --context " + process.env.CLUSTER1 }).replaceAll("'", ""); + const command = "kubectl -n bookinfo-frontends exec " + podName + " --context " + process.env.CLUSTER1 + " -- python -c \"import requests; r = requests.get('http://reviews.global:9080/reviews/0'); print(r.text)\""; + it('Got a response from cluster1', () => helpers.genericCommand({ command: command, responseContains: "cluster1" })); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/east-west-virtual-destination/tests/reviews-from-cluster1.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +cat <<'EOF' > ./test.js +const helpers = require('./tests/chai-exec'); + +describe("The productpage
service should get responses from cluster2", () => { + const podName = helpers.getOutputForCommand({ command: "kubectl -n bookinfo-frontends get pods -l app=productpage -o jsonpath='{.items[0].metadata.name}' --context " + process.env.CLUSTER1 }).replaceAll("'", ""); + const command = "kubectl -n bookinfo-frontends exec " + podName + " --context " + process.env.CLUSTER1 + " -- python -c \"import requests; r = requests.get('http://reviews.global:9080/reviews/0'); print(r.text)\""; + it('Got a response from cluster2', () => helpers.genericCommand({ command: command, responseContains: "cluster2" })); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/east-west-virtual-destination/tests/reviews-from-cluster2.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl apply --context ${CLUSTER1} -f - < ./test.js +const helpers = require('./tests/chai-exec'); + +describe("The productpage service should get responses from cluster1", () => { + const podName = helpers.getOutputForCommand({ command: "kubectl -n bookinfo-frontends get pods -l app=productpage -o jsonpath='{.items[0].metadata.name}' --context " + process.env.CLUSTER1 }).replaceAll("'", ""); + const command = "kubectl -n bookinfo-frontends exec " + podName + " --context " + process.env.CLUSTER1 + " -- python -c \"import requests; r = requests.get('http://reviews.global:9080/reviews/0'); print(r.text)\""; + it('Got a response from cluster1', () => helpers.genericCommand({ command: command, responseContains: "cluster1" })); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/east-west-virtual-destination/tests/reviews-from-cluster1.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context $CLUSTER1 -n
bookinfo-frontends exec deploy/productpage-v1 -- python -c "import requests; r = requests.get('http://reviews.global:9080/reviews/0'); print(r.text)" +kubectl --context ${CLUSTER1} -n bookinfo-backends scale deploy/reviews-v1 --replicas=0 +kubectl --context ${CLUSTER1} -n bookinfo-backends scale deploy/reviews-v2 --replicas=0 +kubectl --context ${CLUSTER1} -n bookinfo-backends wait --for=jsonpath='{.spec.replicas}'=0 deploy/reviews-v1 +kubectl --context ${CLUSTER1} -n bookinfo-backends wait --for=jsonpath='{.spec.replicas}'=0 deploy/reviews-v2 +cat <<'EOF' > ./test.js +const helpers = require('./tests/chai-exec'); + +describe("The productpage service should get responses from cluster2", () => { + const podName = helpers.getOutputForCommand({ command: "kubectl -n bookinfo-frontends get pods -l app=productpage -o jsonpath='{.items[0].metadata.name}' --context " + process.env.CLUSTER1 }).replaceAll("'", ""); + const command = "kubectl -n bookinfo-frontends exec " + podName + " --context " + process.env.CLUSTER1 + " -- python -c \"import requests; r = requests.get('http://reviews.global:9080/reviews/0'); print(r.text)\""; + it('Got a response from cluster2', () => helpers.genericCommand({ command: command, responseContains: "cluster2" })); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/east-west-virtual-destination/tests/reviews-from-cluster2.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context $CLUSTER1 -n bookinfo-frontends exec deploy/productpage-v1 -- python -c "import requests; r = requests.get('http://reviews.global:9080/reviews/0'); print(r.text)" +kubectl --context ${CLUSTER1} -n bookinfo-backends scale deploy/reviews-v1 --replicas=1 +kubectl --context ${CLUSTER1} -n bookinfo-backends scale deploy/reviews-v2 --replicas=1 +kubectl --context ${CLUSTER1} -n bookinfo-backends wait 
--for=jsonpath='{.status.readyReplicas}'=1 deploy/reviews-v1 +kubectl --context ${CLUSTER1} -n bookinfo-backends wait --for=jsonpath='{.status.readyReplicas}'=1 deploy/reviews-v2 +kubectl --context ${CLUSTER1} -n bookinfo-backends patch deploy reviews-v1 --patch '{"spec": {"template": {"spec": {"containers": [{"name": "reviews","command": ["sleep", "20h"]}]}}}}' +kubectl --context ${CLUSTER1} -n bookinfo-backends patch deploy reviews-v2 --patch '{"spec": {"template": {"spec": {"containers": [{"name": "reviews","command": ["sleep", "20h"]}]}}}}' +kubectl --context ${CLUSTER1} -n bookinfo-backends rollout status deploy/reviews-v1 +kubectl --context ${CLUSTER1} -n bookinfo-backends rollout status deploy/reviews-v2 +cat <<'EOF' > ./test.js +const helpers = require('./tests/chai-exec'); + +describe("The productpage service should get responses from cluster2", () => { + const podName = helpers.getOutputForCommand({ command: "kubectl -n bookinfo-frontends get pods -l app=productpage -o jsonpath='{.items[0].metadata.name}' --context " + process.env.CLUSTER1 }).replaceAll("'", ""); + const command = "kubectl -n bookinfo-frontends exec " + podName + " --context " + process.env.CLUSTER1 + " -- python -c \"import requests; r = requests.get('http://reviews.global:9080/reviews/0'); print(r.text)\""; + it('Got a response from cluster2', () => helpers.genericCommand({ command: command, responseContains: "cluster2" })); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/east-west-virtual-destination/tests/reviews-from-cluster2.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context $CLUSTER1 -n bookinfo-frontends exec deploy/productpage-v1 -- python -c "import requests; r = requests.get('http://reviews.global:9080/reviews/0'); print(r.text)" +kubectl --context ${CLUSTER1} -n bookinfo-backends patch deployment 
reviews-v1 --type json -p '[{"op": "remove", "path": "/spec/template/spec/containers/0/command"}]' +kubectl --context ${CLUSTER1} -n bookinfo-backends patch deployment reviews-v2 --type json -p '[{"op": "remove", "path": "/spec/template/spec/containers/0/command"}]' +kubectl --context ${CLUSTER1} -n bookinfo-backends rollout status deploy/reviews-v1 +kubectl --context ${CLUSTER1} -n bookinfo-backends rollout status deploy/reviews-v2 +kubectl --context ${CLUSTER1} -n bookinfo-backends delete virtualdestination reviews +kubectl --context ${CLUSTER1} -n bookinfo-backends delete failoverpolicy failover +kubectl --context ${CLUSTER1} -n bookinfo-backends delete outlierdetectionpolicy outlier-detection +(timeout 2s kubectl --context ${CLUSTER1} -n httpbin rollout status deploy/in-mesh) || (kubectl --context ${CLUSTER1} -n httpbin rollout restart deploy/in-mesh && kubectl --context ${CLUSTER1} -n httpbin rollout status deploy/in-mesh) +cat <<'EOF' > ./test.js +var chai = require('chai'); +var expect = chai.expect; +const helpers = require('./tests/chai-exec'); +describe("Communication allowed", () => { + it("Response code should be 200", () => { + const podName = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n httpbin get pods -l app=not-in-mesh -o jsonpath='{.items[0].metadata.name}'" }).replaceAll("'", ""); + const command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n httpbin debug -i -q " + podName + " --image=" + process.env.registry + "/curlimages/curl -- curl -s -o /dev/null -w \"%{http_code}\" http://reviews.bookinfo-backends:9080/reviews/0" }).replaceAll("'", ""); + expect(command).to.contain("200"); + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/httpbin/zero-trust/tests/not-in-mesh-to-in-mesh-allowed.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js 
--timeout 120000; exit 1; } +cat <<'EOF' > ./test.js +var chai = require('chai'); +var expect = chai.expect; +const helpers = require('./tests/chai-exec'); +describe("Communication allowed", () => { + it("Response code should be 200", () => { + const podName = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n httpbin get pods -l app=in-mesh -o jsonpath='{.items[0].metadata.name}'" }).replaceAll("'", ""); + const command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n httpbin debug -i -q " + podName + " --image=" + process.env.registry + "/curlimages/curl -- curl -s -o /dev/null -w \"%{http_code}\" http://reviews.bookinfo-backends:9080/reviews/0" }).replaceAll("'", ""); + expect(command).to.contain("200"); + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/httpbin/zero-trust/tests/in-mesh-to-in-mesh-allowed.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl apply --context ${CLUSTER1} -f - < ./test.js +var chai = require('chai'); +var expect = chai.expect; +const helpers = require('./tests/chai-exec'); +describe("Communication not allowed", () => { + it("Response code shouldn't be 200", () => { + const podName = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n httpbin get pods -l app=not-in-mesh -o jsonpath='{.items[0].metadata.name}'" }).replaceAll("'", ""); + const command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n httpbin debug -i -q " + podName + " --image=" + process.env.registry + "/curlimages/curl -- curl -s -o /dev/null -w \"%{http_code}\" --max-time 3 http://reviews.bookinfo-backends:9080/reviews/0" }).replaceAll("'", ""); + expect(command).not.to.contain("200"); + }); +}); +EOF +echo "executing test 
dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/httpbin/zero-trust/tests/not-in-mesh-to-in-mesh-not-allowed.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +cat <<'EOF' > ./test.js +var chai = require('chai'); +var expect = chai.expect; +const helpers = require('./tests/chai-exec'); +describe("Communication not allowed", () => { + it("Response code shouldn't be 200", () => { + const podName = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n httpbin get pods -l app=in-mesh -o jsonpath='{.items[0].metadata.name}'" }).replaceAll("'", ""); + const command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n httpbin debug -i -q " + podName + " --image=" + process.env.registry + "/curlimages/curl -- curl -s -o /dev/null -w \"%{http_code}\" --max-time 3 http://reviews.bookinfo-backends:9080/reviews/0" }).replaceAll("'", ""); + expect(command).not.to.contain("200"); + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/httpbin/zero-trust/tests/in-mesh-to-in-mesh-not-allowed.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl apply --context ${CLUSTER1} -f - < ./test.js +var chai = require('chai'); +var expect = chai.expect; +const helpers = require('./tests/chai-exec'); + +describe("Communication status", () => { + + it("Response code shouldn't be 200 accessing ratings", () => { + const command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n bookinfo-frontends exec deploy/productpage-v1 -- python -c \"import requests; r = requests.get('http://ratings.bookinfo-backends:9080/ratings/0', timeout=3); print(r.status_code)\"" }).replaceAll("'", ""); + expect(command).not.to.contain("200"); 
+ }); + + it("Response code should be 200 accessing reviews with GET", () => { + const command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n bookinfo-frontends exec deploy/productpage-v1 -- python -c \"import requests; r = requests.get('http://reviews.bookinfo-backends:9080/reviews/0'); print(r.status_code)\"" }).replaceAll("'", ""); + expect(command).to.contain("200"); + }); + + it("Response code should be 403 accessing reviews with HEAD", () => { + const command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n bookinfo-frontends exec deploy/productpage-v1 -- python -c \"import requests; r = requests.head('http://reviews.bookinfo-backends:9080/reviews/0'); print(r.status_code)\"" }).replaceAll("'", ""); + expect(command).to.contain("403"); + }); + + it("Response code should be 200 accessing details", () => { + const command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n bookinfo-frontends exec deploy/productpage-v1 -- python -c \"import requests; r = requests.get('http://details.bookinfo-backends:9080/details/0'); print(r.status_code)\"" }).replaceAll("'", ""); + expect(command).to.contain("200"); + }); +}); + +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/httpbin/zero-trust/tests/bookinfo-access.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl apply --context ${CLUSTER1} -f - < ./test.js +const helpers = require('./tests/chai-exec'); + +describe("kube-prometheus-stack deployments are ready", () => { + it('kube-prometheus-stack-kube-state-metrics pods are ready', () => helpers.checkDeployment({ context: process.env.MGMT, namespace: "monitoring", k8sObj: "kube-prometheus-stack-kube-state-metrics" })); + it('kube-prometheus-stack-grafana pods are ready', () => 
helpers.checkDeployment({ context: process.env.MGMT, namespace: "monitoring", k8sObj: "kube-prometheus-stack-grafana" })); + it('kube-prometheus-stack-operator pods are ready', () => helpers.checkDeployment({ context: process.env.MGMT, namespace: "monitoring", k8sObj: "kube-prometheus-stack-operator" })); +}); + +describe("kube-prometheus-stack daemonset is ready", () => { + it('kube-prometheus-stack-prometheus-node-exporter pods are ready', () => helpers.checkDaemonSet({ context: process.env.MGMT, namespace: "monitoring", k8sObj: "kube-prometheus-stack-prometheus-node-exporter" })); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/gloo-platform-observability/tests/grafana-installed.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +PROD_PROMETHEUS_IP=$(kubectl get svc kube-prometheus-stack-prometheus -n monitoring -o jsonpath='{.status.loadBalancer.ingress[0].ip}') +helm upgrade --install gloo-platform gloo-platform \ + --repo https://storage.googleapis.com/gloo-platform/helm-charts \ + --namespace gloo-mesh \ + --kube-context ${CLUSTER1} \ + --reuse-values \ + --version 2.5.12 \ + --values - < /vm/resolv.conf" +docker exec vm1 cp /vm/resolv.conf /etc/resolv.conf +docker exec vm1 apt update -y +docker exec vm1 apt-get install -y iputils-ping curl iproute2 iptables python3 sudo dnsutils +cluster1_cidr=$(kubectl --context ${CLUSTER1} -n kube-system get pod -l component=kube-controller-manager -o jsonpath='{.items[0].spec.containers[0].command}' | jq -r '.[] | select(. | startswith("--cluster-cidr="))' | cut -d= -f2) +cluster2_cidr=$(kubectl --context ${CLUSTER2} -n kube-system get pod -l component=kube-controller-manager -o jsonpath='{.items[0].spec.containers[0].command}' | jq -r '.[] | select(. 
| startswith("--cluster-cidr="))' | cut -d= -f2) + +docker exec vm1 $(kubectl --context ${CLUSTER1} get nodes -o=jsonpath='{range .items[*]}{"ip route add "}{"'${cluster1_cidr}' via "}{.status.addresses[?(@.type=="InternalIP")].address}{"\n"}{end}') +docker exec vm1 $(kubectl --context ${CLUSTER2} get nodes -o=jsonpath='{range .items[*]}{"ip route add "}{"'${cluster2_cidr}' via "}{.status.addresses[?(@.type=="InternalIP")].address}{"\n"}{end}') +docker cp $HOME/.gloo-mesh/bin/meshctl vm1:/usr/local/bin/ +kubectl apply --context ${CLUSTER1} -f - < ./test.js +const helpers = require('./tests/chai-exec'); + +describe("The VM should be able to access the productpage service", () => { + const command = 'docker exec vm1 curl -s -o /dev/null -w "%{http_code}" productpage.bookinfo-frontends.svc.cluster.local:9080/productpage'; + it("Got the expected status code 200", () => helpers.genericCommand({ command: command, responseContains: "200" })); +}) + +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/vm-integration-spire/tests/vm-access-productpage.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +docker exec -d vm1 python3 -m http.server 9999 +kubectl --context ${CLUSTER1} -n bookinfo-frontends exec $(kubectl --context ${CLUSTER1} -n bookinfo-frontends get pods -l app=productpage -o jsonpath='{.items[0].metadata.name}') -- python -c "import requests; r = requests.get('http://${VM_APP}.virtualmachines.ext.cluster.local:9999'); print(r.text)" +cat <<'EOF' > ./test.js +const helpers = require('./tests/chai-exec'); + +describe("The productpage service should be able to access the VM", () => { + const podName = helpers.getOutputForCommand({ command: "kubectl -n bookinfo-frontends get pods -l app=productpage -o jsonpath='{.items[0].metadata.name}' --context " + process.env.CLUSTER1 }).replaceAll("'", ""); + const command = "kubectl 
-n bookinfo-frontends exec " + podName + " --context " + process.env.CLUSTER1 + " -- python -c \"import requests; r = requests.get('http://" + process.env.VM_APP + ".virtualmachines.ext.cluster.local:9999'); print(r.status_code)\""; + it('Got the expected status code 200', () => helpers.genericCommand({ command: command, responseContains: "200" })); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/vm-integration-spire/tests/productpage-access-vm.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +docker exec vm1 apt-get update +docker exec vm1 apt-get install -y mariadb-server +docker exec vm1 sed -i '/bind-address/c\bind-address = 0.0.0.0' /etc/mysql/mariadb.conf.d/50-server.cnf +docker exec vm1 systemctl start mysql + +docker exec -i vm1 mysql < ./test.js +const helpers = require('./tests/chai-http'); + +describe("The ratings service should use the database running on the VM", () => { + it('Got reviews v2 with ratings in cluster1', () => helpers.checkBody({ host: `https://cluster1-bookinfo.example.com`, path: '/productpage', body: 'text-black', match: true })); +}) + +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/vm-integration-spire/tests/ratings-using-vm.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context ${CLUSTER1} -n "${VM_NAMESPACE}" delete externalworkload ${VM_APP} +kubectl --context ${CLUSTER1} delete namespace "${VM_NAMESPACE}" +kubectl --context ${CLUSTER1} -n bookinfo-backends delete -f https://raw.githubusercontent.com/istio/istio/master/samples/bookinfo/platform/kube/bookinfo-ratings-v2-mysql-vm.yaml +kubectl --context ${CLUSTER1} -n bookinfo-backends scale deploy/ratings-v1 --replicas=1 +kubectl apply --context ${MGMT} -f - < 
./test.js +var chai = require('chai'); +var expect = chai.expect; +const helpers = require('./tests/chai-exec'); + +describe("Communication status", () => { + it("Productpage can send requests to httpbin.org", () => { + const command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n bookinfo-frontends exec deploy/productpage-v1 -- python -c \"import requests; r = requests.get('http://httpbin.org/get'); print(r.status_code)\"" }).replaceAll("'", ""); + expect(command).to.contain("200"); + }); +}); + +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/secure-egress/tests/productpage-to-httpbin-allowed.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl apply --context ${MGMT} -f - < ./test.js +var chai = require('chai'); +var expect = chai.expect; +const helpers = require('./tests/chai-exec'); + +describe("Communication not allowed", () => { + it("Productpage can NOT send requests to httpbin.org", () => { + const command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n bookinfo-frontends exec deploy/productpage-v1 -- python -c \"import requests; r = requests.get('http://httpbin.org/get', timeout=5); print(r.text)\"" }).replaceAll("'", ""); + expect(command).not.to.contain("User-Agent"); + }); +}); + +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/secure-egress/tests/productpage-to-httpbin-not-allowed.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl apply --context ${CLUSTER1} -f - < ./test.js +var chai = require('chai'); +var expect = chai.expect; +const helpers = require('./tests/chai-exec'); + +describe("Communication status", () => { + it("Productpage can send requests 
to httpbin.org", () => { + const command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n bookinfo-frontends exec deploy/productpage-v1 -- python -c \"import requests; r = requests.get('http://httpbin.org/get'); print(r.status_code)\"" }).replaceAll("'", ""); + expect(command).to.contain("200"); + }); +}); + +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/secure-egress/tests/productpage-to-httpbin-allowed.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl apply --context ${CLUSTER1} -f - < ./test.js +var chai = require('chai'); +var expect = chai.expect; +const helpers = require('./tests/chai-exec'); + +describe("Communication status", () => { + it("Productpage can send GET requests to httpbin.org", () => { + const command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n bookinfo-frontends exec deploy/productpage-v1 -- python -c \"import requests; r = requests.get('http://httpbin.org/get'); print(r.status_code)\"" }).replaceAll("'", ""); + expect(command).to.contain("200"); + }); + + it("Productpage can't send POST requests to httpbin.org", () => { + const command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n bookinfo-frontends exec deploy/productpage-v1 -- python -c \"import requests; r = requests.post('http://httpbin.org/post'); print(r.status_code)\"" }).replaceAll("'", ""); + expect(command).to.contain("403"); + }); +}); + +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/secure-egress/tests/productpage-to-httpbin-only-get-allowed.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context ${CLUSTER1} -n 
bookinfo-frontends delete networkpolicy restrict-egress +kubectl --context ${CLUSTER1} -n bookinfo-frontends delete externalservice httpbin +kubectl --context ${CLUSTER1} -n istio-gateways delete accesspolicy allow-get-httpbin diff --git a/gloo-mesh/enterprise/2-5/airgap/default/scripts/configure-domain-rewrite.sh b/gloo-mesh/enterprise/2-5/airgap/default/scripts/configure-domain-rewrite.sh index be6dbd6d8b..d6e684c9da 100755 --- a/gloo-mesh/enterprise/2-5/airgap/default/scripts/configure-domain-rewrite.sh +++ b/gloo-mesh/enterprise/2-5/airgap/default/scripts/configure-domain-rewrite.sh @@ -90,4 +90,4 @@ done # If the loop exits, it means the check failed consistently for 1 minute echo "DNS rewrite rule verification failed." -exit 1 +exit 1 \ No newline at end of file diff --git a/gloo-mesh/enterprise/2-5/airgap/default/scripts/register-domain.sh b/gloo-mesh/enterprise/2-5/airgap/default/scripts/register-domain.sh index f9084487e8..1cb84cd86a 100755 --- a/gloo-mesh/enterprise/2-5/airgap/default/scripts/register-domain.sh +++ b/gloo-mesh/enterprise/2-5/airgap/default/scripts/register-domain.sh @@ -14,7 +14,9 @@ hosts_file="/etc/hosts" # Function to check if the input is a valid IP address is_ip() { if [[ $1 =~ ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$ ]]; then - return 0 # 0 = true + return 0 # 0 = true - valid IPv4 address + elif [[ $1 =~ ^[0-9a-f]+[:]+[0-9a-f]*[:]*[0-9a-f]*[:]*[0-9a-f]*[:]*[0-9a-f]*[:]*[0-9a-f]*[:]*[0-9]*$ ]]; then + return 0 # 0 = true - valid IPv6 address else return 1 # 1 = false fi @@ -38,14 +40,15 @@ else fi # Check if the entry already exists -if grep -q "$hostname" "$hosts_file"; then +if grep -q "$hostname\$" "$hosts_file"; then # Update the existing entry with the new IP tempfile=$(mktemp) - sed "s/^.*$hostname/$new_ip $hostname/" "$hosts_file" > "$tempfile" + sed "s/^.*$hostname\$/$new_ip $hostname/" "$hosts_file" > "$tempfile" sudo cp "$tempfile" "$hosts_file" + rm "$tempfile" echo "Updated $hostname in $hosts_file with new IP: $new_ip" else # 
Add a new entry if it doesn't exist echo "$new_ip $hostname" | sudo tee -a "$hosts_file" > /dev/null echo "Added $hostname to $hosts_file with IP: $new_ip" -fi \ No newline at end of file +fi diff --git a/gloo-mesh/enterprise/2-5/airgap/default/tests/chai-exec.js b/gloo-mesh/enterprise/2-5/airgap/default/tests/chai-exec.js index 67ba62f095..020262437f 100644 --- a/gloo-mesh/enterprise/2-5/airgap/default/tests/chai-exec.js +++ b/gloo-mesh/enterprise/2-5/airgap/default/tests/chai-exec.js @@ -139,7 +139,11 @@ global = { }, k8sObjectIsPresent: ({ context, namespace, k8sType, k8sObj }) => { - let command = "kubectl --context " + context + " -n " + namespace + " get " + k8sType + " " + k8sObj + " -o name"; + // covers both namespace scoped and cluster scoped objects + let command = "kubectl --context " + context + " get " + k8sType + " " + k8sObj + " -o name"; + if (namespace) { + command = "kubectl --context " + context + " -n " + namespace + " get " + k8sType + " " + k8sObj + " -o name"; + } debugLog(`Executing command: ${command}`); let cli = chaiExec(command); @@ -176,7 +180,6 @@ global = { debugLog(`Command output (stdout): ${cli.stdout}`); return cli.stdout; }, - curlInPod: ({ curlCommand, podName, namespace }) => { debugLog(`Executing curl command: ${curlCommand} on pod: ${podName} in namespace: ${namespace}`); const cli = chaiExec(curlCommand); diff --git a/gloo-mesh/enterprise/2-5/airgap/default/tests/chai-http.js b/gloo-mesh/enterprise/2-5/airgap/default/tests/chai-http.js index 67f43db003..92bf579690 100644 --- a/gloo-mesh/enterprise/2-5/airgap/default/tests/chai-http.js +++ b/gloo-mesh/enterprise/2-5/airgap/default/tests/chai-http.js @@ -25,7 +25,30 @@ global = { .send() .then(async function (res) { debugLog(`Response status code: ${res.status}`); - expect(res).to.have.status(retCode); + expect(res).to.have.property('status', retCode); + }); + }, + + checkURLWithIP: ({ ip, host, protocol = "http", path = "", headers = [], certFile = '', keyFile = '', retCode 
}) => { + debugLog(`Checking URL with IP: ${ip}, Host: ${host}, Path: ${path} with expected return code: ${retCode}`); + + let cert = certFile ? fs.readFileSync(certFile) : ''; + let key = keyFile ? fs.readFileSync(keyFile) : ''; + + let url = `${protocol}://${ip}`; + + // Use chai-http to make a request to the IP address, but set the Host header + let request = chai.request(url).head(path).redirects(0).cert(cert).key(key).set('Host', host); + + debugLog(`Setting headers: ${JSON.stringify(headers)}`); + headers.forEach(header => request.set(header.key, header.value)); + + return request + .send() + .then(async function (res) { + debugLog(`Response status code: ${res.status}`); + debugLog(`Response ${JSON.stringify(res)}`); + expect(res).to.have.property('status', retCode); }); }, @@ -124,7 +147,7 @@ global = { .send() .then(async function (res) { debugLog(`Response status code: ${res.status}`); - expect(res).to.have.status(retCode); + expect(res).to.have.property('status', retCode); }); } }; diff --git a/gloo-mesh/enterprise/2-5/airgap/default/tests/proxies-changes.test.js.liquid b/gloo-mesh/enterprise/2-5/airgap/default/tests/proxies-changes.test.js.liquid new file mode 100644 index 0000000000..1934ea13b6 --- /dev/null +++ b/gloo-mesh/enterprise/2-5/airgap/default/tests/proxies-changes.test.js.liquid @@ -0,0 +1,58 @@ +{%- assign version_1_18_or_after = "1.18.0" | minimumGlooGatewayVersion %} +const { execSync } = require('child_process'); +const { expect } = require('chai'); +const { diff } = require('jest-diff'); + +function delay(ms) { + return new Promise(resolve => setTimeout(resolve, ms)); +} + +describe('Gloo snapshot stability test', function() { + let contextName = process.env.{{ context | default: "CLUSTER1" }}; + let delaySeconds = {{ delay | default: 5 }}; + + let firstSnapshot; + + it('should retrieve initial snapshot', function() { + const output = execSync( + `kubectl --context ${contextName} -n gloo-system exec deploy/gloo -- wget -O - localhost:{% 
if version_1_18_or_after %}9095{% else %}9091{% endif %}/snapshots/proxies -q`, + { encoding: 'utf8' } + ); + + try { + firstSnapshot = JSON.parse(output); + } catch (err) { + throw new Error('Failed to parse JSON output from initial snapshot: ' + err.message); + } + expect(firstSnapshot).to.be.an('object'); + }); + + it('should not change after the given delay', async function() { + await delay(delaySeconds * 1000); + + let secondSnapshot; + try { + const output2 = execSync( + `kubectl --context ${contextName} -n gloo-system exec deploy/gloo -- wget -O - localhost:{% if version_1_18_or_after %}9095{% else %}9091{% endif %}/snapshots/proxies -q`, + { encoding: 'utf8' } + ); + secondSnapshot = JSON.parse(output2); + } catch (err) { + throw new Error('Failed to retrieve or parse the second snapshot: ' + err.message); + } + + const firstJson = JSON.stringify(firstSnapshot, null, 2); + const secondJson = JSON.stringify(secondSnapshot, null, 2); + + // Show only 2 lines of context around each change + const diffOutput = diff(firstJson, secondJson, { contextLines: 2, expand: false }); + + if (! diffOutput.includes("Compared values have no visual difference.")) { + console.error('Differences found between snapshots:\n' + diffOutput); + throw new Error('Snapshots differ after the delay.'); + } else { + console.log('No differences found. 
The snapshots are stable.'); + } + }); +}); + diff --git a/gloo-mesh/enterprise/2-5/default/README.md b/gloo-mesh/enterprise/2-5/default/README.md index bd6a70431f..90346b0812 100644 --- a/gloo-mesh/enterprise/2-5/default/README.md +++ b/gloo-mesh/enterprise/2-5/default/README.md @@ -15,7 +15,7 @@ source ./scripts/assert.sh ## Table of Contents * [Introduction](#introduction) -* [Lab 1 - Deploy KinD clusters](#lab-1---deploy-kind-clusters-) +* [Lab 1 - Deploy KinD Cluster(s)](#lab-1---deploy-kind-cluster(s)-) * [Lab 2 - Deploy and register Gloo Mesh](#lab-2---deploy-and-register-gloo-mesh-) * [Lab 3 - Deploy Istio using Gloo Mesh Lifecycle Manager](#lab-3---deploy-istio-using-gloo-mesh-lifecycle-manager-) * [Lab 4 - Deploy the Bookinfo demo app](#lab-4---deploy-the-bookinfo-demo-app-) @@ -68,7 +68,7 @@ You can find more information about Gloo Mesh Enterprise in the official documen -## Lab 1 - Deploy KinD clusters +## Lab 1 - Deploy KinD Cluster(s) Clone this repository and go to the directory where this `README.md` file is. @@ -81,14 +81,13 @@ export CLUSTER1=cluster1 export CLUSTER2=cluster2 ``` -Run the following commands to deploy three Kubernetes clusters using [Kind](https://kind.sigs.k8s.io/): +Deploy the KinD clusters: ```bash -./scripts/deploy-aws-with-calico.sh 1 mgmt -./scripts/deploy-aws-with-calico.sh 2 cluster1 us-west us-west-1 -./scripts/deploy-aws-with-calico.sh 3 cluster2 us-west us-west-2 +bash ./data/steps/deploy-kind-clusters/deploy-mgmt.sh +bash ./data/steps/deploy-kind-clusters/deploy-cluster1.sh +bash ./data/steps/deploy-kind-clusters/deploy-cluster2.sh ``` - Then run the following commands to wait for all the Pods to be ready: ```bash @@ -99,27 +98,8 @@ Then run the following commands to wait for all the Pods to be ready: **Note:** If you run the `check.sh` script immediately after the `deploy.sh` script, you may see a jsonpath error. If that happens, simply wait a few seconds and try again. 
-Once the `check.sh` script completes, when you execute the `kubectl get pods -A` command, you should see the following: - -``` -NAMESPACE NAME READY STATUS RESTARTS AGE -kube-system calico-kube-controllers-59d85c5c84-sbk4k 1/1 Running 0 4h26m -kube-system calico-node-przxs 1/1 Running 0 4h26m -kube-system coredns-6955765f44-ln8f5 1/1 Running 0 4h26m -kube-system coredns-6955765f44-s7xxx 1/1 Running 0 4h26m -kube-system etcd-cluster1-control-plane 1/1 Running 0 4h27m -kube-system kube-apiserver-cluster1-control-plane 1/1 Running 0 4h27m -kube-system kube-controller-manager-cluster1-control-plane1/1 Running 0 4h27m -kube-system kube-proxy-ksvzw 1/1 Running 0 4h26m -kube-system kube-scheduler-cluster1-control-plane 1/1 Running 0 4h27m -local-path-storage local-path-provisioner-58f6947c7-lfmdx 1/1 Running 0 4h26m -metallb-system controller-5c9894b5cd-cn9x2 1/1 Running 0 4h26m -metallb-system speaker-d7jkp 1/1 Running 0 4h26m -``` - -**Note:** The CNI pods might be different, depending on which CNI you have deployed. - -You can see that your currently connected to this cluster by executing the `kubectl config get-contexts` command: +Once the `check.sh` script completes, execute the `kubectl get pods -A` command, and verify that all pods are in a running state. 
+ You can see that you're currently connected to this cluster by executing the `kubectl config get-contexts` command: ``` CURRENT NAME CLUSTER AUTHINFO NAMESPACE @@ -138,7 +118,8 @@ cat <<'EOF' > ./test.js const helpers = require('./tests/chai-exec'); describe("Clusters are healthy", () => { - const clusters = [process.env.MGMT, process.env.CLUSTER1, process.env.CLUSTER2]; + const clusters = ["mgmt", "cluster1", "cluster2"]; + clusters.forEach(cluster => { it(`Cluster ${cluster} is healthy`, () => helpers.k8sObjectIsPresent({ context: cluster, namespace: "default", k8sType: "service", k8sObj: "kubernetes" })); }); @@ -150,6 +131,7 @@ timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || + ## Lab 2 - Deploy and register Gloo Mesh [VIDEO LINK](https://youtu.be/djfFiepK4GY "Video Link") @@ -190,6 +172,7 @@ EOF echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-and-register-gloo-mesh/tests/environment-variables.test.js.liquid" timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } --> + Run the following commands to deploy the Gloo Mesh management plane: ```bash @@ -485,6 +468,7 @@ timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || + ## Lab 3 - Deploy Istio using Gloo Mesh Lifecycle Manager [VIDEO LINK](https://youtu.be/f76-KOEjqHs "Video Link") @@ -3411,6 +3395,8 @@ ATTEMPTS=0 while [ $ATTEMPTS -lt $MAX_ATTEMPTS ]; do kubectl --context ${CLUSTER1} -n gloo-mesh rollout restart deploy gloo-spire-server kubectl --context ${CLUSTER1} -n gloo-mesh rollout status deploy gloo-spire-server + sleep 30 + export JOIN_TOKEN=$(meshctl external-workload gen-token --kubecontext ${CLUSTER1} --trust-domain ${CLUSTER1} --ttl 3600 --ext-workload virtualmachines/${VM_APP} --plain=true | grep -ioE "${uuid_regex_partial}") timeout 1m docker exec vm1 meshctl ew onboard --install \ --attestor token \ diff --git
a/gloo-mesh/enterprise/2-5/default/data/steps/deploy-kind-clusters/deploy-cluster1.sh b/gloo-mesh/enterprise/2-5/default/data/steps/deploy-kind-clusters/deploy-cluster1.sh new file mode 100644 index 0000000000..3fda068282 --- /dev/null +++ b/gloo-mesh/enterprise/2-5/default/data/steps/deploy-kind-clusters/deploy-cluster1.sh @@ -0,0 +1,292 @@ +#!/usr/bin/env bash +set -o errexit + +number="2" +name="cluster1" +region="" +zone="" +twodigits=$(printf "%02d\n" $number) + +kindest_node=${KINDEST_NODE} + +if [ -z "$kindest_node" ]; then + export k8s_version="1.28.0" + + [[ ${k8s_version::1} != 'v' ]] && export k8s_version=v${k8s_version} + kindest_node_ver=$(curl --silent "https://registry.hub.docker.com/v2/repositories/kindest/node/tags?page_size=100" \ + | jq -r '.results | .[] | select(.name==env.k8s_version) | .name+"@"+.digest') + + if [ -z "$kindest_node_ver" ]; then + echo "Incorrect Kubernetes version provided: ${k8s_version}." + exit 1 + fi + kindest_node=kindest/node:${kindest_node_ver} +fi +echo "Using KinD image: ${kindest_node}" + +if [ -z "$3" ]; then + case $name in + cluster1) + region=us-west-1 + ;; + cluster2) + region=us-west-2 + ;; + *) + region=us-east-1 + ;; + esac +fi + +if [ -z "$4" ]; then + case $name in + cluster1) + zone=us-west-1a + ;; + cluster2) + zone=us-west-2a + ;; + *) + zone=us-east-1a + ;; + esac +fi + +if hostname -I 2>/dev/null; then + myip=$(hostname -I | awk '{ print $1 }') +else + myip=$(ipconfig getifaddr en0) +fi + +# Function to determine the next available cluster number +get_next_cluster_number() { + if ! 
kind get clusters 2>&1 | grep "^kind" > /dev/null; then + echo 1 + else + highest_num=$(kind get clusters | grep "^kind" | tail -1 | cut -c 5-) + echo $((highest_num + 1)) + fi +} + +if [ -f /.dockerenv ]; then +myip=$HOST_IP +container=$(docker inspect $(docker ps -q) | jq -r ".[] | select(.Config.Hostname == \"$HOSTNAME\") | .Name" | cut -d/ -f2) +docker network connect "kind" $container || true +number=$(get_next_cluster_number) +twodigits=$(printf "%02d\n" $number) +fi + +reg_name='kind-registry' +reg_port='5000' +docker start "${reg_name}" 2>/dev/null || \ +docker run -d --restart=always -p "0.0.0.0:${reg_port}:5000" --name "${reg_name}" registry:2 + +cache_port='5000' +cat > registries < ${HOME}/.${cache_name}-config.yml </dev/null || \ +docker run -d --restart=always ${DEPLOY_EXTRA_PARAMS} -v ${HOME}/.${cache_name}-config.yml:/etc/docker/registry/config.yml --name "${cache_name}" registry:2 +done +mkdir -p /tmp/oidc + +cat <<'EOF' >/tmp/oidc/sa-signer-pkcs8.pub +-----BEGIN PUBLIC KEY----- +MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA53YiBcrn7+ZK0Vb4odeA +1riYdvEb8To4H6/HtF+OKzuCIXFQ+bRy7yMrDGITYpfYPrTZOgfdeTLZqOiAj+cL +395nvxdly83SUrdh7ItfOPRluuuiPHnFn111wpyjBw5nut4Kx+M5MksNfA1hU0Zw +zIM9OviX8iEF8xHWUtz4BAMDG8N6+zpLo0pAzaei5hKuLZ9dZOzHBC8VOW82cQMm +5X5uOKsCHMtNSjqYUNB1DxN6xxM+odGWT/6xthPGk6YCxmO28YHPFZfiS2eAIpD8 +2p/16KQKU6TkZSrldkYxiHIPhu+5f9faZJG7dB9pLN1SfdTBio4PK5Mz9muLUCv9 +ywIDAQAB +-----END PUBLIC KEY----- +EOF + +cat <<'EOF' >/tmp/oidc/sa-signer.key +-----BEGIN RSA PRIVATE KEY----- +MIIEpAIBAAKCAQEA53YiBcrn7+ZK0Vb4odeA1riYdvEb8To4H6/HtF+OKzuCIXFQ ++bRy7yMrDGITYpfYPrTZOgfdeTLZqOiAj+cL395nvxdly83SUrdh7ItfOPRluuui +PHnFn111wpyjBw5nut4Kx+M5MksNfA1hU0ZwzIM9OviX8iEF8xHWUtz4BAMDG8N6 ++zpLo0pAzaei5hKuLZ9dZOzHBC8VOW82cQMm5X5uOKsCHMtNSjqYUNB1DxN6xxM+ +odGWT/6xthPGk6YCxmO28YHPFZfiS2eAIpD82p/16KQKU6TkZSrldkYxiHIPhu+5 +f9faZJG7dB9pLN1SfdTBio4PK5Mz9muLUCv9ywIDAQABAoIBAB8tro+RMYUDRHjG +el9ypAxIeWEsQVNRQFYkW4ZUiNYSAgl3Ni0svX6xAg989peFVL+9pLVIcfDthJxY 
+FVlNCjBxyQ/YmwHFC9vQkARJEd6eLUXsj8INtS0ubbp1VxCQRDDL0C/0z7OSoJJh +SwboqjEiTJExA2a+RArmEDTBRzdi3t+kT8G23JcqOivrITt17K6bQYyJXw7/vUdc +r/R+hfd5TqVq92VddzDT7RNJAxsbPPXjGnESlq1GALBDs+uBGYsP0fiEJb2nicSv +z9fBnBeERhut1gcE0C0iLRQZb+3r8TitBtxrZv+0BHgXrkKtXDwWTqGEKOwC4dBn +7nxkH2ECgYEA6+/DOTABGYOWOQftFkJMjcugzDrjoGpuXuVOTb65T+3FHAzU93zy +3bt3wQxrlugluyy9Sc/PL3ck2LgUsPHZ+s7zsdGvvGALBD6bOSSKATz9JgjwifO8 +PgqUz1kXRwez2CtKLOOCFFtcIzEdWIzsa1ubNqLzgN7rD+XBkUc2uEcCgYEA+yTy +72EDMQVoIZOygytHsDNdy0iS2RsBbdurT27wkYuFpFUVWdbNSL+8haE+wJHseHcw +BD4WIMpU+hnS4p4OO8+6V7PiXOS5E/se91EJigZAoixgDUiC8ihojWgK9PYEavUo +hULWbayO59SxYWeUI4Ze0GP8Jw8vdB86ib4ulF0CgYEAgyzRuLjk05+iZODwQyDn +WSquov3W0rh51s7cw0LX2wWSQm8r9NGGYhs5kJ5sLwGxAKj2MNSWF4jBdrCZ6Gr+ +y4BGY0X209/+IAUC3jlfdSLIiF4OBlT6AvB1HfclhvtUVUp0OhLfnpvQ1UwYScRI +KcRLvovIoIzP2g3emfwjAz8CgYEAxUHhOhm1mwRHJNBQTuxok0HVMrze8n1eov39 +0RcvBvJSVp+pdHXdqX1HwqHCmxhCZuAeq8ZkNP8WvZYY6HwCbAIdt5MHgbT4lXQR +f2l8F5gPnhFCpExG5ZLNg/urV3oAQE4stHap21zEpdyOMhZb6Yc5424U+EzaFdgN +b3EcPtUCgYAkKvUlSnBbgiJz1iaN6fuTqH0efavuFGMhjNmG7GtpNXdgyl1OWIuc +Yu+tZtHXtKYf3B99GwPrFzw/7yfDwae5YeWmi2/pFTH96wv3brJBqkAWY8G5Rsmd +qF50p34vIFqUBniNRwSArx8t2dq/CuAMgLAtSjh70Q6ZAnCF85PD8Q== +-----END RSA PRIVATE KEY----- +EOF + +echo Contents of kind${number}.yaml +cat << EOF | tee kind${number}.yaml +kind: Cluster +apiVersion: kind.x-k8s.io/v1alpha4 +nodes: +- role: control-plane + image: ${kindest_node} + extraPortMappings: + - containerPort: 6443 + hostPort: 70${twodigits} + extraMounts: + - containerPath: /etc/kubernetes/oidc + hostPath: /tmp/oidc + labels: + ingress-ready: true + topology.kubernetes.io/region: ${region} + topology.kubernetes.io/zone: ${zone} +networking: + disableDefaultCNI: true + serviceSubnet: "10.$(echo $twodigits | sed 's/^0*//').0.0/16" + podSubnet: "10.1${twodigits}.0.0/16" +kubeadmConfigPatches: +- | + kind: ClusterConfiguration + apiServer: + extraArgs: + service-account-key-file: /etc/kubernetes/pki/sa.pub + service-account-key-file: 
/etc/kubernetes/oidc/sa-signer-pkcs8.pub + service-account-signing-key-file: /etc/kubernetes/oidc/sa-signer.key + service-account-issuer: https://solo-workshop-oidc.s3.us-east-1.amazonaws.com + api-audiences: sts.amazonaws.com + extraVolumes: + - name: oidc + hostPath: /etc/kubernetes/oidc + mountPath: /etc/kubernetes/oidc + readOnly: true + metadata: + name: config +containerdConfigPatches: +- |- + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:${reg_port}"] + endpoint = ["http://${reg_name}:${reg_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"] + endpoint = ["http://docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-docker.pkg.dev"] + endpoint = ["http://us-docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-central1-docker.pkg.dev"] + endpoint = ["http://us-central1-docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."quay.io"] + endpoint = ["http://quay:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"] + endpoint = ["http://gcr:${cache_port}"] +EOF +echo ----------------------------------------------------- + +kind create cluster --name kind${number} --config kind${number}.yaml +ipkind=$(docker inspect kind${number}-control-plane | jq -r '.[0].NetworkSettings.Networks[].IPAddress') +networkkind=$(echo ${ipkind} | awk -F. 
'{ print $1"."$2 }') +kubectl config set-cluster kind-kind${number} --server=https://${myip}:70${twodigits} --insecure-skip-tls-verify=true + +# Preload images +cat << EOF >> images.txt +quay.io/metallb/controller:v0.13.12 +quay.io/metallb/speaker:v0.13.12 +EOF +cat images.txt | while read image; do + docker pull $image || true + kind load docker-image $image --name kind${number} || true +done + +docker network connect "kind" "${reg_name}" || true +docker network connect "kind" docker || true +docker network connect "kind" us-docker || true +docker network connect "kind" us-central1-docker || true +docker network connect "kind" quay || true +docker network connect "kind" gcr || true +# Calico for ipv4 +curl -sL https://raw.githubusercontent.com/projectcalico/calico/v3.28.1/manifests/calico.yaml | sed 's/250m/50m/g' | kubectl --context kind-kind${number} apply -f - + +for i in 1 2 3 4 5; do kubectl --context=kind-kind${number} apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml && break || sleep 15; done +kubectl --context=kind-kind${number} create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)" +kubectl --context=kind-kind${number} -n metallb-system rollout status deploy controller || true + +cat << EOF | tee metallb${number}.yaml +apiVersion: metallb.io/v1beta1 +kind: IPAddressPool +metadata: + name: first-pool + namespace: metallb-system +spec: + addresses: + - ${networkkind}.1${twodigits}.1-${networkkind}.1${twodigits}.254 +--- +apiVersion: metallb.io/v1beta1 +kind: L2Advertisement +metadata: + name: empty + namespace: metallb-system +EOF + +printf "Create IPAddressPool in kind-kind${number}\n" +for i in {1..10}; do +kubectl --context=kind-kind${number} apply -f metallb${number}.yaml && break +sleep 2 +done + +# connect the registry to the cluster network if not already connected +printf "Renaming context kind-kind${number} to ${name}\n" +for i in {1..100}; do 
+ (kubectl config get-contexts -oname | grep ${name}) && break + kubectl config rename-context kind-kind${number} ${name} && break + printf " $i"/100 + sleep 2 + [ $i -lt 100 ] || exit 1 +done + +# Document the local registry +# https://github.com/kubernetes/enhancements/tree/master/keps/sig-cluster-lifecycle/generic/1755-communicating-a-local-registry +cat </dev/null; then + myip=$(hostname -I | awk '{ print $1 }') +else + myip=$(ipconfig getifaddr en0) +fi + +# Function to determine the next available cluster number +get_next_cluster_number() { + if ! kind get clusters 2>&1 | grep "^kind" > /dev/null; then + echo 1 + else + highest_num=$(kind get clusters | grep "^kind" | tail -1 | cut -c 5-) + echo $((highest_num + 1)) + fi +} + +if [ -f /.dockerenv ]; then +myip=$HOST_IP +container=$(docker inspect $(docker ps -q) | jq -r ".[] | select(.Config.Hostname == \"$HOSTNAME\") | .Name" | cut -d/ -f2) +docker network connect "kind" $container || true +number=$(get_next_cluster_number) +twodigits=$(printf "%02d\n" $number) +fi + +reg_name='kind-registry' +reg_port='5000' +docker start "${reg_name}" 2>/dev/null || \ +docker run -d --restart=always -p "0.0.0.0:${reg_port}:5000" --name "${reg_name}" registry:2 + +cache_port='5000' +cat > registries < ${HOME}/.${cache_name}-config.yml </dev/null || \ +docker run -d --restart=always ${DEPLOY_EXTRA_PARAMS} -v ${HOME}/.${cache_name}-config.yml:/etc/docker/registry/config.yml --name "${cache_name}" registry:2 +done +mkdir -p /tmp/oidc + +cat <<'EOF' >/tmp/oidc/sa-signer-pkcs8.pub +-----BEGIN PUBLIC KEY----- +MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA53YiBcrn7+ZK0Vb4odeA +1riYdvEb8To4H6/HtF+OKzuCIXFQ+bRy7yMrDGITYpfYPrTZOgfdeTLZqOiAj+cL +395nvxdly83SUrdh7ItfOPRluuuiPHnFn111wpyjBw5nut4Kx+M5MksNfA1hU0Zw +zIM9OviX8iEF8xHWUtz4BAMDG8N6+zpLo0pAzaei5hKuLZ9dZOzHBC8VOW82cQMm +5X5uOKsCHMtNSjqYUNB1DxN6xxM+odGWT/6xthPGk6YCxmO28YHPFZfiS2eAIpD8 +2p/16KQKU6TkZSrldkYxiHIPhu+5f9faZJG7dB9pLN1SfdTBio4PK5Mz9muLUCv9 +ywIDAQAB +-----END PUBLIC 
KEY-----
+EOF
+
+cat <<'EOF' >/tmp/oidc/sa-signer.key
+-----BEGIN RSA PRIVATE KEY-----
+MIIEpAIBAAKCAQEA53YiBcrn7+ZK0Vb4odeA1riYdvEb8To4H6/HtF+OKzuCIXFQ
++bRy7yMrDGITYpfYPrTZOgfdeTLZqOiAj+cL395nvxdly83SUrdh7ItfOPRluuui
+PHnFn111wpyjBw5nut4Kx+M5MksNfA1hU0ZwzIM9OviX8iEF8xHWUtz4BAMDG8N6
++zpLo0pAzaei5hKuLZ9dZOzHBC8VOW82cQMm5X5uOKsCHMtNSjqYUNB1DxN6xxM+
+odGWT/6xthPGk6YCxmO28YHPFZfiS2eAIpD82p/16KQKU6TkZSrldkYxiHIPhu+5
+f9faZJG7dB9pLN1SfdTBio4PK5Mz9muLUCv9ywIDAQABAoIBAB8tro+RMYUDRHjG
+el9ypAxIeWEsQVNRQFYkW4ZUiNYSAgl3Ni0svX6xAg989peFVL+9pLVIcfDthJxY
+FVlNCjBxyQ/YmwHFC9vQkARJEd6eLUXsj8INtS0ubbp1VxCQRDDL0C/0z7OSoJJh
+SwboqjEiTJExA2a+RArmEDTBRzdi3t+kT8G23JcqOivrITt17K6bQYyJXw7/vUdc
+r/R+hfd5TqVq92VddzDT7RNJAxsbPPXjGnESlq1GALBDs+uBGYsP0fiEJb2nicSv
+z9fBnBeERhut1gcE0C0iLRQZb+3r8TitBtxrZv+0BHgXrkKtXDwWTqGEKOwC4dBn
+7nxkH2ECgYEA6+/DOTABGYOWOQftFkJMjcugzDrjoGpuXuVOTb65T+3FHAzU93zy
+3bt3wQxrlugluyy9Sc/PL3ck2LgUsPHZ+s7zsdGvvGALBD6bOSSKATz9JgjwifO8
+PgqUz1kXRwez2CtKLOOCFFtcIzEdWIzsa1ubNqLzgN7rD+XBkUc2uEcCgYEA+yTy
+72EDMQVoIZOygytHsDNdy0iS2RsBbdurT27wkYuFpFUVWdbNSL+8haE+wJHseHcw
+BD4WIMpU+hnS4p4OO8+6V7PiXOS5E/se91EJigZAoixgDUiC8ihojWgK9PYEavUo
+hULWbayO59SxYWeUI4Ze0GP8Jw8vdB86ib4ulF0CgYEAgyzRuLjk05+iZODwQyDn
+WSquov3W0rh51s7cw0LX2wWSQm8r9NGGYhs5kJ5sLwGxAKj2MNSWF4jBdrCZ6Gr+
+y4BGY0X209/+IAUC3jlfdSLIiF4OBlT6AvB1HfclhvtUVUp0OhLfnpvQ1UwYScRI
+KcRLvovIoIzP2g3emfwjAz8CgYEAxUHhOhm1mwRHJNBQTuxok0HVMrze8n1eov39
+0RcvBvJSVp+pdHXdqX1HwqHCmxhCZuAeq8ZkNP8WvZYY6HwCbAIdt5MHgbT4lXQR
+f2l8F5gPnhFCpExG5ZLNg/urV3oAQE4stHap21zEpdyOMhZb6Yc5424U+EzaFdgN
+b3EcPtUCgYAkKvUlSnBbgiJz1iaN6fuTqH0efavuFGMhjNmG7GtpNXdgyl1OWIuc
+Yu+tZtHXtKYf3B99GwPrFzw/7yfDwae5YeWmi2/pFTH96wv3brJBqkAWY8G5Rsmd
+qF50p34vIFqUBniNRwSArx8t2dq/CuAMgLAtSjh70Q6ZAnCF85PD8Q==
+-----END RSA PRIVATE KEY-----
+EOF
+
+echo Contents of kind${number}.yaml
+cat << EOF | tee kind${number}.yaml
+kind: Cluster
+apiVersion: kind.x-k8s.io/v1alpha4
+nodes:
+- role: control-plane
+  image: ${kindest_node}
+  extraPortMappings:
+  - containerPort: 6443
+    hostPort: 70${twodigits}
+  extraMounts:
+  - containerPath: /etc/kubernetes/oidc
+    hostPath: /tmp/oidc
+  labels:
+    ingress-ready: true
+    topology.kubernetes.io/region: ${region}
+    topology.kubernetes.io/zone: ${zone}
+networking:
+  disableDefaultCNI: true
+  serviceSubnet: "10.$(echo $twodigits | sed 's/^0*//').0.0/16"
+  podSubnet: "10.1${twodigits}.0.0/16"
+kubeadmConfigPatches:
+- |
+  kind: ClusterConfiguration
+  apiServer:
+    extraArgs:
+      service-account-key-file: /etc/kubernetes/pki/sa.pub
+      service-account-key-file: /etc/kubernetes/oidc/sa-signer-pkcs8.pub
+      service-account-signing-key-file: /etc/kubernetes/oidc/sa-signer.key
+      service-account-issuer: https://solo-workshop-oidc.s3.us-east-1.amazonaws.com
+      api-audiences: sts.amazonaws.com
+    extraVolumes:
+    - name: oidc
+      hostPath: /etc/kubernetes/oidc
+      mountPath: /etc/kubernetes/oidc
+      readOnly: true
+  metadata:
+    name: config
+containerdConfigPatches:
+- |-
+  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:${reg_port}"]
+    endpoint = ["http://${reg_name}:${reg_port}"]
+  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
+    endpoint = ["http://docker:${cache_port}"]
+  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-docker.pkg.dev"]
+    endpoint = ["http://us-docker:${cache_port}"]
+  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-central1-docker.pkg.dev"]
+    endpoint = ["http://us-central1-docker:${cache_port}"]
+  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."quay.io"]
+    endpoint = ["http://quay:${cache_port}"]
+  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"]
+    endpoint = ["http://gcr:${cache_port}"]
+EOF
+echo -----------------------------------------------------
+
+kind create cluster --name kind${number} --config kind${number}.yaml
+ipkind=$(docker inspect kind${number}-control-plane | jq -r '.[0].NetworkSettings.Networks[].IPAddress')
+networkkind=$(echo ${ipkind} | awk -F. '{ print $1"."$2 }')
+kubectl config set-cluster kind-kind${number} --server=https://${myip}:70${twodigits} --insecure-skip-tls-verify=true
+
+# Preload images
+cat << EOF >> images.txt
+quay.io/metallb/controller:v0.13.12
+quay.io/metallb/speaker:v0.13.12
+EOF
+cat images.txt | while read image; do
+  docker pull $image || true
+  kind load docker-image $image --name kind${number} || true
+done
+
+docker network connect "kind" "${reg_name}" || true
+docker network connect "kind" docker || true
+docker network connect "kind" us-docker || true
+docker network connect "kind" us-central1-docker || true
+docker network connect "kind" quay || true
+docker network connect "kind" gcr || true
+# Calico for ipv4
+curl -sL https://raw.githubusercontent.com/projectcalico/calico/v3.28.1/manifests/calico.yaml | sed 's/250m/50m/g' | kubectl --context kind-kind${number} apply -f -
+
+for i in 1 2 3 4 5; do kubectl --context=kind-kind${number} apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml && break || sleep 15; done
+kubectl --context=kind-kind${number} create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
+kubectl --context=kind-kind${number} -n metallb-system rollout status deploy controller || true
+
+cat << EOF | tee metallb${number}.yaml
+apiVersion: metallb.io/v1beta1
+kind: IPAddressPool
+metadata:
+  name: first-pool
+  namespace: metallb-system
+spec:
+  addresses:
+  - ${networkkind}.1${twodigits}.1-${networkkind}.1${twodigits}.254
+---
+apiVersion: metallb.io/v1beta1
+kind: L2Advertisement
+metadata:
+  name: empty
+  namespace: metallb-system
+EOF
+
+printf "Create IPAddressPool in kind-kind${number}\n"
+for i in {1..10}; do
+kubectl --context=kind-kind${number} apply -f metallb${number}.yaml && break
+sleep 2
+done
+
+# connect the registry to the cluster network if not already connected
+printf "Renaming context kind-kind${number} to ${name}\n"
+for i in {1..100}; do
+  (kubectl config get-contexts -oname | grep ${name}) && break
+  kubectl config rename-context kind-kind${number} ${name} && break
+  printf " $i"/100
+  sleep 2
+  [ $i -lt 100 ] || exit 1
+done
+
+# Document the local registry
+# https://github.com/kubernetes/enhancements/tree/master/keps/sig-cluster-lifecycle/generic/1755-communicating-a-local-registry
+cat </dev/null; then
+  myip=$(hostname -I | awk '{ print $1 }')
+else
+  myip=$(ipconfig getifaddr en0)
+fi
+
+# Function to determine the next available cluster number
+get_next_cluster_number() {
+  if ! kind get clusters 2>&1 | grep "^kind" > /dev/null; then
+    echo 1
+  else
+    highest_num=$(kind get clusters | grep "^kind" | tail -1 | cut -c 5-)
+    echo $((highest_num + 1))
+  fi
+}
+
+if [ -f /.dockerenv ]; then
+myip=$HOST_IP
+container=$(docker inspect $(docker ps -q) | jq -r ".[] | select(.Config.Hostname == \"$HOSTNAME\") | .Name" | cut -d/ -f2)
+docker network connect "kind" $container || true
+number=$(get_next_cluster_number)
+twodigits=$(printf "%02d\n" $number)
+fi
+
+reg_name='kind-registry'
+reg_port='5000'
+docker start "${reg_name}" 2>/dev/null || \
+docker run -d --restart=always -p "0.0.0.0:${reg_port}:5000" --name "${reg_name}" registry:2
+
+cache_port='5000'
+cat > registries < ${HOME}/.${cache_name}-config.yml </dev/null || \
+docker run -d --restart=always ${DEPLOY_EXTRA_PARAMS} -v ${HOME}/.${cache_name}-config.yml:/etc/docker/registry/config.yml --name "${cache_name}" registry:2
+done
+mkdir -p /tmp/oidc
+
+cat <<'EOF' >/tmp/oidc/sa-signer-pkcs8.pub
+-----BEGIN PUBLIC KEY-----
+MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA53YiBcrn7+ZK0Vb4odeA
+1riYdvEb8To4H6/HtF+OKzuCIXFQ+bRy7yMrDGITYpfYPrTZOgfdeTLZqOiAj+cL
+395nvxdly83SUrdh7ItfOPRluuuiPHnFn111wpyjBw5nut4Kx+M5MksNfA1hU0Zw
+zIM9OviX8iEF8xHWUtz4BAMDG8N6+zpLo0pAzaei5hKuLZ9dZOzHBC8VOW82cQMm
+5X5uOKsCHMtNSjqYUNB1DxN6xxM+odGWT/6xthPGk6YCxmO28YHPFZfiS2eAIpD8
+2p/16KQKU6TkZSrldkYxiHIPhu+5f9faZJG7dB9pLN1SfdTBio4PK5Mz9muLUCv9
+ywIDAQAB
+-----END PUBLIC KEY-----
+EOF
+
+cat <<'EOF' >/tmp/oidc/sa-signer.key
+-----BEGIN RSA PRIVATE KEY-----
+MIIEpAIBAAKCAQEA53YiBcrn7+ZK0Vb4odeA1riYdvEb8To4H6/HtF+OKzuCIXFQ
++bRy7yMrDGITYpfYPrTZOgfdeTLZqOiAj+cL395nvxdly83SUrdh7ItfOPRluuui
+PHnFn111wpyjBw5nut4Kx+M5MksNfA1hU0ZwzIM9OviX8iEF8xHWUtz4BAMDG8N6
++zpLo0pAzaei5hKuLZ9dZOzHBC8VOW82cQMm5X5uOKsCHMtNSjqYUNB1DxN6xxM+
+odGWT/6xthPGk6YCxmO28YHPFZfiS2eAIpD82p/16KQKU6TkZSrldkYxiHIPhu+5
+f9faZJG7dB9pLN1SfdTBio4PK5Mz9muLUCv9ywIDAQABAoIBAB8tro+RMYUDRHjG
+el9ypAxIeWEsQVNRQFYkW4ZUiNYSAgl3Ni0svX6xAg989peFVL+9pLVIcfDthJxY
+FVlNCjBxyQ/YmwHFC9vQkARJEd6eLUXsj8INtS0ubbp1VxCQRDDL0C/0z7OSoJJh
+SwboqjEiTJExA2a+RArmEDTBRzdi3t+kT8G23JcqOivrITt17K6bQYyJXw7/vUdc
+r/R+hfd5TqVq92VddzDT7RNJAxsbPPXjGnESlq1GALBDs+uBGYsP0fiEJb2nicSv
+z9fBnBeERhut1gcE0C0iLRQZb+3r8TitBtxrZv+0BHgXrkKtXDwWTqGEKOwC4dBn
+7nxkH2ECgYEA6+/DOTABGYOWOQftFkJMjcugzDrjoGpuXuVOTb65T+3FHAzU93zy
+3bt3wQxrlugluyy9Sc/PL3ck2LgUsPHZ+s7zsdGvvGALBD6bOSSKATz9JgjwifO8
+PgqUz1kXRwez2CtKLOOCFFtcIzEdWIzsa1ubNqLzgN7rD+XBkUc2uEcCgYEA+yTy
+72EDMQVoIZOygytHsDNdy0iS2RsBbdurT27wkYuFpFUVWdbNSL+8haE+wJHseHcw
+BD4WIMpU+hnS4p4OO8+6V7PiXOS5E/se91EJigZAoixgDUiC8ihojWgK9PYEavUo
+hULWbayO59SxYWeUI4Ze0GP8Jw8vdB86ib4ulF0CgYEAgyzRuLjk05+iZODwQyDn
+WSquov3W0rh51s7cw0LX2wWSQm8r9NGGYhs5kJ5sLwGxAKj2MNSWF4jBdrCZ6Gr+
+y4BGY0X209/+IAUC3jlfdSLIiF4OBlT6AvB1HfclhvtUVUp0OhLfnpvQ1UwYScRI
+KcRLvovIoIzP2g3emfwjAz8CgYEAxUHhOhm1mwRHJNBQTuxok0HVMrze8n1eov39
+0RcvBvJSVp+pdHXdqX1HwqHCmxhCZuAeq8ZkNP8WvZYY6HwCbAIdt5MHgbT4lXQR
+f2l8F5gPnhFCpExG5ZLNg/urV3oAQE4stHap21zEpdyOMhZb6Yc5424U+EzaFdgN
+b3EcPtUCgYAkKvUlSnBbgiJz1iaN6fuTqH0efavuFGMhjNmG7GtpNXdgyl1OWIuc
+Yu+tZtHXtKYf3B99GwPrFzw/7yfDwae5YeWmi2/pFTH96wv3brJBqkAWY8G5Rsmd
+qF50p34vIFqUBniNRwSArx8t2dq/CuAMgLAtSjh70Q6ZAnCF85PD8Q==
+-----END RSA PRIVATE KEY-----
+EOF
+
+echo Contents of kind${number}.yaml
+cat << EOF | tee kind${number}.yaml
+kind: Cluster
+apiVersion: kind.x-k8s.io/v1alpha4
+nodes:
+- role: control-plane
+  image: ${kindest_node}
+  extraPortMappings:
+  - containerPort: 6443
+    hostPort: 70${twodigits}
+  extraMounts:
+  - containerPath: /etc/kubernetes/oidc
+    hostPath: /tmp/oidc
+  labels:
+    ingress-ready: true
+    topology.kubernetes.io/region: ${region}
+    topology.kubernetes.io/zone: ${zone}
+networking:
+  disableDefaultCNI: true
+  serviceSubnet: "10.$(echo $twodigits | sed 's/^0*//').0.0/16"
+  podSubnet: "10.1${twodigits}.0.0/16"
+kubeadmConfigPatches:
+- |
+  kind: ClusterConfiguration
+  apiServer:
+    extraArgs:
+      service-account-key-file: /etc/kubernetes/pki/sa.pub
+      service-account-key-file: /etc/kubernetes/oidc/sa-signer-pkcs8.pub
+      service-account-signing-key-file: /etc/kubernetes/oidc/sa-signer.key
+      service-account-issuer: https://solo-workshop-oidc.s3.us-east-1.amazonaws.com
+      api-audiences: sts.amazonaws.com
+    extraVolumes:
+    - name: oidc
+      hostPath: /etc/kubernetes/oidc
+      mountPath: /etc/kubernetes/oidc
+      readOnly: true
+  metadata:
+    name: config
+containerdConfigPatches:
+- |-
+  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:${reg_port}"]
+    endpoint = ["http://${reg_name}:${reg_port}"]
+  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
+    endpoint = ["http://docker:${cache_port}"]
+  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-docker.pkg.dev"]
+    endpoint = ["http://us-docker:${cache_port}"]
+  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-central1-docker.pkg.dev"]
+    endpoint = ["http://us-central1-docker:${cache_port}"]
+  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."quay.io"]
+    endpoint = ["http://quay:${cache_port}"]
+  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"]
+    endpoint = ["http://gcr:${cache_port}"]
+EOF
+echo -----------------------------------------------------
+
+kind create cluster --name kind${number} --config kind${number}.yaml
+ipkind=$(docker inspect kind${number}-control-plane | jq -r '.[0].NetworkSettings.Networks[].IPAddress')
+networkkind=$(echo ${ipkind} | awk -F. '{ print $1"."$2 }')
+kubectl config set-cluster kind-kind${number} --server=https://${myip}:70${twodigits} --insecure-skip-tls-verify=true
+
+# Preload images
+cat << EOF >> images.txt
+quay.io/metallb/controller:v0.13.12
+quay.io/metallb/speaker:v0.13.12
+EOF
+cat images.txt | while read image; do
+  docker pull $image || true
+  kind load docker-image $image --name kind${number} || true
+done
+
+docker network connect "kind" "${reg_name}" || true
+docker network connect "kind" docker || true
+docker network connect "kind" us-docker || true
+docker network connect "kind" us-central1-docker || true
+docker network connect "kind" quay || true
+docker network connect "kind" gcr || true
+# Calico for ipv4
+curl -sL https://raw.githubusercontent.com/projectcalico/calico/v3.28.1/manifests/calico.yaml | sed 's/250m/50m/g' | kubectl --context kind-kind${number} apply -f -
+
+for i in 1 2 3 4 5; do kubectl --context=kind-kind${number} apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml && break || sleep 15; done
+kubectl --context=kind-kind${number} create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
+kubectl --context=kind-kind${number} -n metallb-system rollout status deploy controller || true
+
+cat << EOF | tee metallb${number}.yaml
+apiVersion: metallb.io/v1beta1
+kind: IPAddressPool
+metadata:
+  name: first-pool
+  namespace: metallb-system
+spec:
+  addresses:
+  - ${networkkind}.1${twodigits}.1-${networkkind}.1${twodigits}.254
+---
+apiVersion: metallb.io/v1beta1
+kind: L2Advertisement
+metadata:
+  name: empty
+  namespace: metallb-system
+EOF
+
+printf "Create IPAddressPool in kind-kind${number}\n"
+for i in {1..10}; do
+kubectl --context=kind-kind${number} apply -f metallb${number}.yaml && break
+sleep 2
+done
+
+# connect the registry to the cluster network if not already connected
+printf "Renaming context kind-kind${number} to ${name}\n"
+for i in {1..100}; do
+  (kubectl config get-contexts -oname | grep ${name}) && break
+  kubectl config rename-context kind-kind${number} ${name} && break
+  printf " $i"/100
+  sleep 2
+  [ $i -lt 100 ] || exit 1
+done
+
+# Document the local registry
+# https://github.com/kubernetes/enhancements/tree/master/keps/sig-cluster-lifecycle/generic/1755-communicating-a-local-registry
+cat </dev/null || true
+source ./scripts/assert.sh
+export MGMT=mgmt
+export CLUSTER1=cluster1
+export CLUSTER2=cluster2
+bash ./data/steps/deploy-kind-clusters/deploy-mgmt.sh
+bash ./data/steps/deploy-kind-clusters/deploy-cluster1.sh
+bash ./data/steps/deploy-kind-clusters/deploy-cluster2.sh
+./scripts/check.sh mgmt
+./scripts/check.sh cluster1
+./scripts/check.sh cluster2
+kubectl config use-context ${MGMT}
+cat <<'EOF' > ./test.js
+const helpers = require('./tests/chai-exec');
+
+describe("Clusters are healthy", () => {
+  const clusters = ["mgmt", "cluster1", "cluster2"];
+
+  clusters.forEach(cluster => {
+    it(`Cluster ${cluster} is healthy`, () => helpers.k8sObjectIsPresent({ context: cluster, namespace: "default", k8sType: "service", k8sObj: "kubernetes" }));
+  });
+});
+EOF
+echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-kind-clusters/tests/cluster-healthy.test.js.liquid"
+timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+export GLOO_MESH_VERSION=v2.5.12
+curl -sL https://run.solo.io/meshctl/install | sh -
+export PATH=$HOME/.gloo-mesh/bin:$PATH
+cat <<'EOF' > ./test.js
+var chai = require('chai');
+var expect = chai.expect;
+
+describe("Required environment variables should contain value", () => {
+  afterEach(function(done){
+    if(this.currentTest.currentRetry() > 0){
+      process.stdout.write(".");
+      setTimeout(done, 1000);
+    } else {
+      done();
+    }
+  });
+
+  it("Context environment variables should not be empty", () => {
+    expect(process.env.MGMT).not.to.be.empty
+    expect(process.env.CLUSTER1).not.to.be.empty
+    expect(process.env.CLUSTER2).not.to.be.empty
+  });
+
+  it("Gloo Mesh licence environment variables should not be empty", () => {
+    expect(process.env.GLOO_MESH_LICENSE_KEY).not.to.be.empty
+  });
+});
+EOF
+echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-and-register-gloo-mesh/tests/environment-variables.test.js.liquid"
+timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+kubectl --context ${MGMT} create ns gloo-mesh
+
+helm upgrade --install gloo-platform-crds gloo-platform-crds \
+  --repo https://storage.googleapis.com/gloo-platform/helm-charts \
+  --namespace gloo-mesh \
+  --kube-context ${MGMT} \
+  --version 2.5.12
+
+helm upgrade --install gloo-platform gloo-platform \
+  --repo https://storage.googleapis.com/gloo-platform/helm-charts \
+  --namespace gloo-mesh \
+  --kube-context ${MGMT} \
+  --version 2.5.12 \
+  -f -< ./test.js
+
+const helpers = require('./tests/chai-exec');
+
+describe("MGMT server is healthy", () => {
+  let cluster = process.env.MGMT;
+  let deployments = ["gloo-mesh-mgmt-server","gloo-mesh-redis","gloo-telemetry-gateway","prometheus-server"];
+  deployments.forEach(deploy => {
+    it(deploy + ' pods are ready in ' + cluster, () => helpers.checkDeployment({ context: cluster, namespace: "gloo-mesh", k8sObj: deploy }));
+  });
+});
+EOF
+echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-and-register-gloo-mesh/tests/check-deployment.test.js.liquid"
+timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+cat <<'EOF' > ./test.js
+const chaiExec = require("@jsdevtools/chai-exec");
+var chai = require('chai');
+var expect = chai.expect;
+chai.use(chaiExec);
+
+afterEach(function (done) {
+  if (this.currentTest.currentRetry() > 0) {
+    process.stdout.write(".");
+    setTimeout(done, 1000);
+  } else {
+    done();
+  }
+});
+EOF
+echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-and-register-gloo-mesh/tests/get-gloo-mesh-mgmt-server-ip.test.js.liquid"
+timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+export ENDPOINT_GLOO_MESH=$(kubectl --context ${MGMT} -n gloo-mesh get svc gloo-mesh-mgmt-server -o jsonpath='{.status.loadBalancer.ingress[0].*}'):9900
+export HOST_GLOO_MESH=$(echo ${ENDPOINT_GLOO_MESH%:*})
+export ENDPOINT_TELEMETRY_GATEWAY=$(kubectl --context ${MGMT} -n gloo-mesh get svc gloo-telemetry-gateway -o jsonpath='{.status.loadBalancer.ingress[0].*}'):4317
+export ENDPOINT_GLOO_MESH_UI=$(kubectl --context ${MGMT} -n gloo-mesh get svc gloo-mesh-ui -o jsonpath='{.status.loadBalancer.ingress[0].*}'):8090
+cat <<'EOF' > ./test.js
+const dns = require('dns');
+const chaiHttp = require("chai-http");
+const chai = require("chai");
+const expect = chai.expect;
+chai.use(chaiHttp);
+const { waitOnFailedTest } = require('./tests/utils');
+
+afterEach(function(done) { waitOnFailedTest(done, this.currentTest.currentRetry())});
+
+describe("Address '" + process.env.HOST_GLOO_MESH + "' can be resolved in DNS", () => {
+  it(process.env.HOST_GLOO_MESH + ' can be resolved', (done) => {
+    return dns.lookup(process.env.HOST_GLOO_MESH, (err, address, family) => {
+      expect(address).to.be.an.ip;
+      done();
+    });
+  });
+});
+EOF
+echo "executing test ./gloo-mesh-2-0/tests/can-resolve.test.js.liquid"
+timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+kubectl apply --context ${MGMT} -f - < ca.crt
+kubectl create secret generic relay-root-tls-secret -n gloo-mesh --context ${CLUSTER1} --from-file ca.crt=ca.crt
+rm ca.crt
+
+kubectl get secret relay-identity-token-secret -n gloo-mesh --context ${MGMT} -o jsonpath='{.data.token}' | base64 -d > token
+kubectl create secret generic relay-identity-token-secret -n gloo-mesh --context ${CLUSTER1} --from-file token=token
+rm token
+
+helm upgrade --install gloo-platform-crds gloo-platform-crds \
+  --repo https://storage.googleapis.com/gloo-platform/helm-charts \
+  --namespace gloo-mesh \
+  --kube-context ${CLUSTER1} \
+  --version 2.5.12
+
+helm upgrade --install gloo-platform gloo-platform \
+  --repo https://storage.googleapis.com/gloo-platform/helm-charts \
+  --namespace gloo-mesh \
+  --kube-context ${CLUSTER1} \
+  --version 2.5.12 \
+  -f -< ca.crt
+kubectl create secret generic relay-root-tls-secret -n gloo-mesh --context ${CLUSTER2} --from-file ca.crt=ca.crt
+rm ca.crt
+
+kubectl get secret relay-identity-token-secret -n gloo-mesh --context ${MGMT} -o jsonpath='{.data.token}' | base64 -d > token
+kubectl create secret generic relay-identity-token-secret -n gloo-mesh --context ${CLUSTER2} --from-file token=token
+rm token
+
+helm upgrade --install gloo-platform-crds gloo-platform-crds \
+  --repo https://storage.googleapis.com/gloo-platform/helm-charts \
+  --namespace gloo-mesh \
+  --kube-context ${CLUSTER2} \
+  --version 2.5.12
+
+helm upgrade --install gloo-platform gloo-platform \
+  --repo https://storage.googleapis.com/gloo-platform/helm-charts \
+  --namespace gloo-mesh \
+  --kube-context ${CLUSTER2} \
+  --version 2.5.12 \
+  -f -< ./test.js
+var chai = require('chai');
+var expect = chai.expect;
+const helpers = require('./tests/chai-exec');
+describe("Cluster registration", () => {
+  it("cluster1 is registered", () => {
+    podName = helpers.getOutputForCommand({ command: "kubectl -n gloo-mesh get pods -l app=gloo-mesh-mgmt-server -o jsonpath='{.items[0].metadata.name}' --context " + process.env.MGMT }).replaceAll("'", "");
+    command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh debug -q -i " + podName + " --image=curlimages/curl -- curl -s http://localhost:9091/metrics" }).replaceAll("'", "");
+    expect(command).to.contain("cluster1");
+  });
+  it("cluster2 is registered", () => {
+    podName = helpers.getOutputForCommand({ command: "kubectl -n gloo-mesh get pods -l app=gloo-mesh-mgmt-server -o jsonpath='{.items[0].metadata.name}' --context " + process.env.MGMT }).replaceAll("'", "");
+    command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh debug -q -i " + podName + " --image=curlimages/curl -- curl -s http://localhost:9091/metrics" }).replaceAll("'", "");
+    expect(command).to.contain("cluster2");
+  });
+});
+EOF
+echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-and-register-gloo-mesh/tests/cluster-registration.test.js.liquid"
+timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+curl -L https://istio.io/downloadIstio | sh -
+
+if [ -d "istio-"*/ ]; then
+  cd istio-*/
+  export PATH=$PWD/bin:$PATH
+  cd ..
+fi
+cat <<'EOF' > ./test.js
+const chaiExec = require("@jsdevtools/chai-exec");
+var chai = require('chai');
+var expect = chai.expect;
+chai.use(chaiExec);
+
+afterEach(function (done) {
+  if (this.currentTest.currentRetry() > 0) {
+    process.stdout.write(".");
+    setTimeout(done, 1000);
+  } else {
+    done();
+  }
+});
+EOF
+echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/istio-lifecycle-manager-install/tests/istio-version.test.js.liquid"
+timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+kubectl --context ${CLUSTER1} create ns istio-gateways
+
+kubectl apply --context ${CLUSTER1} -f - < ./test.js
+
+const helpers = require('./tests/chai-exec');
+
+const chaiExec = require("@jsdevtools/chai-exec");
+const helpersHttp = require('./tests/chai-http');
+const chai = require("chai");
+const expect = chai.expect;
+
+afterEach(function (done) {
+  if (this.currentTest.currentRetry() > 0) {
+    process.stdout.write(".");
+    setTimeout(done, 1000);
+  } else {
+    done();
+  }
+});
+
+describe("Checking Istio installation", function() {
+  it('istiod pods are ready in cluster ' + process.env.CLUSTER1, () => helpers.checkDeploymentsWithLabels({ context: process.env.CLUSTER1, namespace: "istio-system", labels: "app=istiod", instances: 1 }));
+  it('gateway pods are ready in cluster ' + process.env.CLUSTER1, () => helpers.checkDeploymentsWithLabels({ context: process.env.CLUSTER1, namespace: "istio-gateways", labels: "app=istio-ingressgateway", instances: 2 }));
+  it('istiod pods are ready in cluster ' + process.env.CLUSTER2, () => helpers.checkDeploymentsWithLabels({ context: process.env.CLUSTER2, namespace: "istio-system", labels: "app=istiod", instances: 1 }));
+  it('gateway pods are ready in cluster ' + process.env.CLUSTER2, () => helpers.checkDeploymentsWithLabels({ context: process.env.CLUSTER2, namespace: "istio-gateways", labels: "app=istio-ingressgateway", instances: 2 }));
+  it("Gateways have an ip attached in cluster " + process.env.CLUSTER1, () => {
+    let cli = chaiExec("kubectl --context " + process.env.CLUSTER1 + " -n istio-gateways get svc -l app=istio-ingressgateway -o jsonpath='{.items}'");
+    cli.stderr.should.be.empty;
+    let deployments = JSON.parse(cli.stdout.slice(1,-1));
+    expect(deployments).to.have.lengthOf(2);
+    deployments.forEach((deployment) => {
+      expect(deployment.status.loadBalancer).to.have.property("ingress");
+    });
+  });
+  it("Gateways have an ip attached in cluster " + process.env.CLUSTER2, () => {
+    let cli = chaiExec("kubectl --context " + process.env.CLUSTER2 + " -n istio-gateways get svc -l app=istio-ingressgateway -o jsonpath='{.items}'");
+    cli.stderr.should.be.empty;
+    let deployments = JSON.parse(cli.stdout.slice(1,-1));
+    expect(deployments).to.have.lengthOf(2);
+    deployments.forEach((deployment) => {
+      expect(deployment.status.loadBalancer).to.have.property("ingress");
+    });
+  });
+});
+
+EOF
+echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/istio-lifecycle-manager-install/tests/istio-ready.test.js.liquid"
+timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+timeout 2m bash -c "until [[ \$(kubectl --context ${CLUSTER1} -n istio-gateways get svc -l istio=ingressgateway -o json | jq '.items[0].status.loadBalancer | length') -gt 0 ]]; do
+  sleep 1
+done"
+export HOST_GW_CLUSTER1="$(kubectl --context ${CLUSTER1} -n istio-gateways get svc -l istio=ingressgateway -o jsonpath='{.items[0].status.loadBalancer.ingress[0].*}')"
+export HOST_GW_CLUSTER2="$(kubectl --context ${CLUSTER2} -n istio-gateways get svc -l istio=ingressgateway -o jsonpath='{.items[0].status.loadBalancer.ingress[0].*}')"
+cat <<'EOF' > ./test.js
+const dns = require('dns');
+const chaiHttp = require("chai-http");
+const chai = require("chai");
+const expect = chai.expect;
+chai.use(chaiHttp);
+const { waitOnFailedTest } = require('./tests/utils');
+
+afterEach(function(done) { waitOnFailedTest(done, this.currentTest.currentRetry())});
+
+describe("Address '" + process.env.HOST_GW_CLUSTER1 + "' can be resolved in DNS", () => {
+  it(process.env.HOST_GW_CLUSTER1 + ' can be resolved', (done) => {
+    return dns.lookup(process.env.HOST_GW_CLUSTER1, (err, address, family) => {
+      expect(address).to.be.an.ip;
+      done();
+    });
+  });
+});
+EOF
+echo "executing test ./gloo-mesh-2-0/tests/can-resolve.test.js.liquid"
+timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+cat <<'EOF' > ./test.js
+const dns = require('dns');
+const chaiHttp = require("chai-http");
+const chai = require("chai");
+const expect = chai.expect;
+chai.use(chaiHttp);
+const { waitOnFailedTest } = require('./tests/utils');
+
+afterEach(function(done) { waitOnFailedTest(done, this.currentTest.currentRetry())});
+
+describe("Address '" + process.env.HOST_GW_CLUSTER2 + "' can be resolved in DNS", () => {
+  it(process.env.HOST_GW_CLUSTER2 + ' can be resolved', (done) => {
+    return dns.lookup(process.env.HOST_GW_CLUSTER2, (err, address, family) => {
+      expect(address).to.be.an.ip;
+      done();
+    });
+  });
+});
+EOF
+echo "executing test ./gloo-mesh-2-0/tests/can-resolve.test.js.liquid"
+timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+kubectl --context ${CLUSTER1} create ns bookinfo-frontends
+kubectl --context ${CLUSTER1} create ns bookinfo-backends
+kubectl --context ${CLUSTER1} label namespace bookinfo-frontends istio.io/rev=1-20 --overwrite
+kubectl --context ${CLUSTER1} label namespace bookinfo-backends istio.io/rev=1-20 --overwrite
+
+# Deploy the frontend bookinfo service in the bookinfo-frontends namespace
+kubectl --context ${CLUSTER1} -n bookinfo-frontends apply -f data/steps/deploy-bookinfo/productpage-v1.yaml
+
+# Deploy the backend bookinfo services in the bookinfo-backends namespace for all versions less than v3
+kubectl --context ${CLUSTER1} -n bookinfo-backends apply \
+  -f data/steps/deploy-bookinfo/details-v1.yaml \
+  -f data/steps/deploy-bookinfo/ratings-v1.yaml \
+  -f data/steps/deploy-bookinfo/reviews-v1-v2.yaml
+
+# Update the reviews service to display where it is coming from
+kubectl --context ${CLUSTER1} -n bookinfo-backends set env deploy/reviews-v1 CLUSTER_NAME=${CLUSTER1}
+kubectl --context ${CLUSTER1} -n bookinfo-backends set env deploy/reviews-v2 CLUSTER_NAME=${CLUSTER1}
+echo -n Waiting for bookinfo pods to be ready...
+timeout -v 5m bash -c "
+until [[ \$(kubectl --context ${CLUSTER1} -n bookinfo-frontends get deploy -o json | jq '[.items[].status.readyReplicas] | add') -eq 1 && \\
+  \$(kubectl --context ${CLUSTER1} -n bookinfo-backends get deploy -o json | jq '[.items[].status.readyReplicas] | add') -eq 4 ]] 2>/dev/null
+do
+  sleep 1
+  echo -n .
+done"
+echo
+kubectl --context ${CLUSTER2} create ns bookinfo-frontends
+kubectl --context ${CLUSTER2} create ns bookinfo-backends
+kubectl --context ${CLUSTER2} label namespace bookinfo-frontends istio.io/rev=1-20 --overwrite
+kubectl --context ${CLUSTER2} label namespace bookinfo-backends istio.io/rev=1-20 --overwrite
+
+# Deploy the frontend bookinfo service in the bookinfo-frontends namespace
+kubectl --context ${CLUSTER2} -n bookinfo-frontends apply -f data/steps/deploy-bookinfo/productpage-v1.yaml
+# Deploy the backend bookinfo services in the bookinfo-backends namespace for all versions
+kubectl --context ${CLUSTER2} -n bookinfo-backends apply \
+  -f data/steps/deploy-bookinfo/details-v1.yaml \
+  -f data/steps/deploy-bookinfo/ratings-v1.yaml \
+  -f data/steps/deploy-bookinfo/reviews-v1-v2.yaml \
+  -f data/steps/deploy-bookinfo/reviews-v3.yaml
+# Update the reviews service to display where it is coming from
+kubectl --context ${CLUSTER2} -n bookinfo-backends set env deploy/reviews-v1 CLUSTER_NAME=${CLUSTER2}
+kubectl --context ${CLUSTER2} -n bookinfo-backends set env deploy/reviews-v2 CLUSTER_NAME=${CLUSTER2}
+kubectl --context ${CLUSTER2} -n bookinfo-backends set env deploy/reviews-v3 CLUSTER_NAME=${CLUSTER2}
+
+echo -n Waiting for bookinfo pods to be ready...
+timeout -v 5m bash -c "
+until [[ \$(kubectl --context ${CLUSTER2} -n bookinfo-frontends get deploy -o json | jq '[.items[].status.readyReplicas] | add') -eq 1 && \\
+  \$(kubectl --context ${CLUSTER2} -n bookinfo-backends get deploy -o json | jq '[.items[].status.readyReplicas] | add') -eq 5 ]] 2>/dev/null
+do
+  sleep 1
+  echo -n .
+done"
+echo
+kubectl --context ${CLUSTER2} -n bookinfo-frontends get pods && kubectl --context ${CLUSTER2} -n bookinfo-backends get pods
+cat <<'EOF' > ./test.js
+const helpers = require('./tests/chai-exec');
+
+describe("Bookinfo app", () => {
+  let cluster = process.env.CLUSTER1
+  let deployments = ["productpage-v1"];
+  deployments.forEach(deploy => {
+    it(deploy + ' pods are ready in ' + cluster, () => helpers.checkDeployment({ context: cluster, namespace: "bookinfo-frontends", k8sObj: deploy }));
+  });
+  deployments = ["ratings-v1", "details-v1", "reviews-v1", "reviews-v2"];
+  deployments.forEach(deploy => {
+    it(deploy + ' pods are ready in ' + cluster, () => helpers.checkDeployment({ context: cluster, namespace: "bookinfo-backends", k8sObj: deploy }));
+  });
+  cluster = process.env.CLUSTER2
+  deployments = ["productpage-v1"];
+  deployments.forEach(deploy => {
+    it(deploy + ' pods are ready in ' + cluster, () => helpers.checkDeployment({ context: cluster, namespace: "bookinfo-frontends", k8sObj: deploy }));
+  });
+  deployments = ["ratings-v1", "details-v1", "reviews-v1", "reviews-v2", "reviews-v3"];
+  deployments.forEach(deploy => {
+    it(deploy + ' pods are ready in ' + cluster, () => helpers.checkDeployment({ context: cluster, namespace: "bookinfo-backends", k8sObj: deploy }));
+  });
+});
+EOF
+echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/deploy-bookinfo/tests/check-bookinfo.test.js.liquid"
+timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+kubectl --context ${CLUSTER1} create ns httpbin
+kubectl apply --context ${CLUSTER1} -f - </dev/null
+do
+  sleep 1
+  echo -n .
+done"
+echo
+kubectl --context ${CLUSTER1} -n httpbin get pods
+cat <<'EOF' > ./test.js
+const helpers = require('./tests/chai-exec');
+
+describe("httpbin app", () => {
+  let cluster = process.env.CLUSTER1
+
+  let deployments = ["not-in-mesh", "in-mesh"];
+
+  deployments.forEach(deploy => {
+    it(deploy + ' pods are ready in ' + cluster, () => helpers.checkDeployment({ context: cluster, namespace: "httpbin", k8sObj: deploy }));
+  });
+});
+EOF
+echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/httpbin/deploy-httpbin/tests/check-httpbin.test.js.liquid"
+timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+kubectl --context ${CLUSTER1} create namespace gloo-mesh-addons
+kubectl --context ${CLUSTER1} label namespace gloo-mesh-addons istio.io/rev=1-20 --overwrite
+kubectl --context ${CLUSTER2} create namespace gloo-mesh-addons
+kubectl --context ${CLUSTER2} label namespace gloo-mesh-addons istio.io/rev=1-20 --overwrite
+helm upgrade --install gloo-platform gloo-platform \
+  --repo https://storage.googleapis.com/gloo-platform/helm-charts \
+  --namespace gloo-mesh-addons \
+  --kube-context ${CLUSTER1} \
+  --version 2.5.12 \
+  -f -< ./test.js
+const helpers = require('./tests/chai-exec');
+
+describe("Gloo Platform add-ons cluster1 deployment", () => {
+  let cluster = process.env.CLUSTER1
+  let deployments = ["ext-auth-service", "rate-limiter"];
+  deployments.forEach(deploy => {
+    it(deploy + ' pods are ready in ' + cluster, () => helpers.checkDeployment({ context: cluster, namespace: "gloo-mesh-addons", k8sObj: deploy }));
+  });
+});
+describe("Gloo Platform add-ons cluster2 deployment", () => {
+  let cluster = process.env.CLUSTER2
+  let deployments = ["ext-auth-service", "rate-limiter"];
+  deployments.forEach(deploy => {
+    it(deploy + ' pods are ready in ' + cluster, () => helpers.checkDeployment({ context: cluster, namespace: "gloo-mesh-addons", k8sObj: deploy }));
+  });
+});
+
+EOF
+echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-gloo-mesh-addons/tests/check-addons-deployments.test.js.liquid"
+timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+cat <<'EOF' > ./test.js
+const helpers = require('./tests/chai-exec');
+
+describe("Gloo Platform add-ons cluster1 service", () => {
+  let cluster = process.env.CLUSTER1
+  let services = ["ext-auth-service", "rate-limiter"];
+  services.forEach(service => {
+    it(service + ' exists in ' + cluster, () => helpers.k8sObjectIsPresent({ context: cluster, namespace: "gloo-mesh-addons", k8sType: "service", k8sObj: service }));
+  });
+});
+describe("Gloo Platform add-ons cluster2 service", () => {
+  let cluster = process.env.CLUSTER2
+  let services = ["ext-auth-service", "rate-limiter"];
+  services.forEach(service => {
+    it(service + ' exists in ' + cluster, () => helpers.k8sObjectIsPresent({ context: cluster, namespace: "gloo-mesh-addons", k8sType: "service", k8sObj: service }));
+  });
+});
+
+EOF
+echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-gloo-mesh-addons/tests/check-addons-services.test.js.liquid"
+timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+kubectl apply --context ${MGMT} -f - < ./test.js
+const helpers = require('./tests/chai-http');
+
+describe("Productpage is available (HTTP)", () => {
+  it('/productpage is available in cluster1', () => helpers.checkURL({ host: `http://cluster1-bookinfo.example.com`, path: '/productpage', retCode: 200 }));
+})
+EOF
+echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/gateway-expose/tests/productpage-available.test.js.liquid"
+timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
+  -keyout tls.key -out tls.crt -subj "/CN=*"
+kubectl --context ${CLUSTER1} -n istio-gateways create secret generic tls-secret \
+  --from-file=tls.key=tls.key \
+  --from-file=tls.crt=tls.crt
+
+kubectl --context ${CLUSTER2} -n istio-gateways create secret generic tls-secret \
+  --from-file=tls.key=tls.key \
+  --from-file=tls.crt=tls.crt
+kubectl apply --context ${CLUSTER1} -f - < ./test.js
+const helpers = require('./tests/chai-http');
+
+describe("Productpage is available (HTTPS)", () => {
+  it('/productpage is available in cluster1', () => helpers.checkURL({ host: `https://cluster1-bookinfo.example.com`, path: '/productpage', retCode: 200 }));
+})
+EOF
+echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/gateway-expose/tests/productpage-available-secure.test.js.liquid"
+timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+cat <<'EOF' > ./test.js
+var chai = require('chai');
+var expect = chai.expect;
+const helpers = require('./tests/chai-exec');
+
+describe("Otel metrics", () => {
+  it("cluster1 is sending metrics to telemetryGateway", () => {
+    podName = helpers.getOutputForCommand({ command: "kubectl -n gloo-mesh get pods -l app.kubernetes.io/name=prometheus -o jsonpath='{.items[0].metadata.name}' --context " + process.env.MGMT }).replaceAll("'", "");
+    command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh debug -q -i " + podName + " --image=curlimages/curl -- curl -s http://localhost:9090/api/v1/query?query=istio_requests_total" }).replaceAll("'", "");
+    expect(command).to.contain("cluster\":\"cluster1");
+  });
+});
+
+
+EOF
+echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/gateway-expose/tests/otel-metrics.test.js.liquid"
+timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=150 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+kubectl apply --context ${CLUSTER1} -f - < ./test.js
+var chai = require('chai');
+var expect = chai.expect;
+const chaiHttp = require("chai-http");
+chai.use(chaiHttp);
+
+process.env.NODE_TLS_REJECT_UNAUTHORIZED = '0';
+
+afterEach(function (done) {
+  if (this.currentTest.currentRetry() > 0) {
+    process.stdout.write(".");
+    setTimeout(done, 1000);
+  } else {
+    done();
+  }
+});
+
+let searchTest="Sorry, product reviews are currently unavailable for this book.";
+
+describe("Reviews shouldn't be available", () => {
+  it("Checking text '" + searchTest + "' in cluster1", async () => {
+    await chai.request(`https://cluster1-bookinfo.example.com`)
+      .get('/productpage')
+      .send()
+      .then((res) => {
+        expect(res.text).to.contain(searchTest);
+      });
+  });
+
+});
+
+EOF
+echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/traffic-policies/tests/traffic-policies-reviews-unavailable.test.js.liquid"
+timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+kubectl --context ${CLUSTER1} -n bookinfo-frontends delete faultinjectionpolicy ratings-fault-injection
+kubectl --context ${CLUSTER1} -n bookinfo-frontends delete routetable ratings
+kubectl --context ${CLUSTER1} -n bookinfo-frontends delete retrytimeoutpolicy reviews-request-timeout
+kubectl --context ${CLUSTER1} -n bookinfo-frontends delete routetable reviews
+kubectl apply --context ${MGMT} -f - < ./test.js
+const helpers = require('./tests/chai-exec');
+
+describe("cacerts secrets have been created", () => {
+  const clusters = [process.env.CLUSTER1, process.env.CLUSTER2];
+  clusters.forEach(cluster => {
+    it('Secret is present in ' + cluster, () => helpers.k8sObjectIsPresent({ context: cluster, namespace: "istio-system", k8sType: "secret", k8sObj: "cacerts" }));
+  });
+});
+EOF
+echo "executing test
dist/gloo-mesh-2-0-workshop/build/templates/steps/root-trust-policy/tests/cacert-secrets-created.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=150 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +printf "Waiting for all pods needed for the test..." +printf "\n" +kubectl --context ${CLUSTER1} get deploy -n bookinfo-backends -oname|xargs -I {} kubectl --context ${CLUSTER1} rollout status -n bookinfo-backends {} +kubectl --context ${CLUSTER2} get deploy -n bookinfo-backends -oname|xargs -I {} kubectl --context ${CLUSTER2} rollout status -n bookinfo-backends {} +printf "\n" +cat <<'EOF' > ./test.js +const chaiExec = require("@jsdevtools/chai-exec"); +var chai = require('chai'); +var expect = chai.expect; +chai.use(chaiExec); + +afterEach(function (done) { + if (this.currentTest.currentRetry() > 0) { + process.stdout.write("."); + setTimeout(done, 1000); + } else { + done(); + } +}); + +const testerPodName = "tester-root-trust-policy"; +before(function (done) { + chaiExec(`kubectl --context ${process.env.CLUSTER1} -n gloo-mesh run --image=alpine/openssl:3.3.1 ${testerPodName} --command --wait=false -- sleep infinity`); + chaiExec(`kubectl --context ${process.env.CLUSTER2} -n gloo-mesh run --image=alpine/openssl:3.3.1 ${testerPodName} --command --wait=false -- sleep infinity`); + done(); +}); +after(function (done) { + chaiExec(`kubectl --context ${process.env.CLUSTER1} -n gloo-mesh delete pod ${testerPodName} --wait=false`); + chaiExec(`kubectl --context ${process.env.CLUSTER2} -n gloo-mesh delete pod ${testerPodName} --wait=false`); + done(); +}); + +describe("Certificate issued by Gloo Mesh", () => { + var expectedOutput = "i:O=gloo-mesh"; + + it('Gloo mesh is the organization for ' + process.env.CLUSTER1 + ' certificate', () => { + let cli = chaiExec(`kubectl --context ${process.env.CLUSTER1} exec -t -n gloo-mesh ${testerPodName} -- openssl s_client -showcerts -connect ratings.bookinfo-backends:9080 -alpn 
istio`); + + expect(cli).stdout.to.contain(expectedOutput); + expect(cli).stderr.not.to.be.empty; + }); + + + it('Gloo mesh is the organization for ' + process.env.CLUSTER2 + ' certificate', () => { + let cli = chaiExec(`kubectl --context ${process.env.CLUSTER2} exec -t -n gloo-mesh ${testerPodName} -- openssl s_client -showcerts -connect ratings.bookinfo-backends:9080 -alpn istio`); + + expect(cli).stdout.to.contain(expectedOutput); + expect(cli).stderr.not.to.be.empty; + }); + +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/root-trust-policy/tests/certificate-issued-by-gloo-mesh.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl apply --context ${CLUSTER1} -f - < ./test.js +const helpers = require('./tests/chai-exec'); + +describe("The productpage service should get responses from cluster1", () => { + const podName = helpers.getOutputForCommand({ command: "kubectl -n bookinfo-frontends get pods -l app=productpage -o jsonpath='{.items[0].metadata.name}' --context " + process.env.CLUSTER1 }).replaceAll("'", ""); + const command = "kubectl -n bookinfo-frontends exec " + podName + " --context " + process.env.CLUSTER1 + " -- python -c \"import requests; r = requests.get('http://reviews.global:9080/reviews/0'); print(r.text)\""; + it('Got a response from cluster1', () => helpers.genericCommand({ command: command, responseContains: "cluster1" })); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/east-west-virtual-destination/tests/reviews-from-cluster1.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +cat <<'EOF' > ./test.js +const helpers = require('./tests/chai-exec'); + +describe("The productpage service should get responses from cluster2", () => { + const podName
= helpers.getOutputForCommand({ command: "kubectl -n bookinfo-frontends get pods -l app=productpage -o jsonpath='{.items[0].metadata.name}' --context " + process.env.CLUSTER1 }).replaceAll("'", ""); + const command = "kubectl -n bookinfo-frontends exec " + podName + " --context " + process.env.CLUSTER1 + " -- python -c \"import requests; r = requests.get('http://reviews.global:9080/reviews/0'); print(r.text)\""; + it('Got a response from cluster2', () => helpers.genericCommand({ command: command, responseContains: "cluster2" })); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/east-west-virtual-destination/tests/reviews-from-cluster2.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl apply --context ${CLUSTER1} -f - < ./test.js +const helpers = require('./tests/chai-exec'); + +describe("The productpage service should get responses from cluster1", () => { + const podName = helpers.getOutputForCommand({ command: "kubectl -n bookinfo-frontends get pods -l app=productpage -o jsonpath='{.items[0].metadata.name}' --context " + process.env.CLUSTER1 }).replaceAll("'", ""); + const command = "kubectl -n bookinfo-frontends exec " + podName + " --context " + process.env.CLUSTER1 + " -- python -c \"import requests; r = requests.get('http://reviews.global:9080/reviews/0'); print(r.text)\""; + it('Got a response from cluster1', () => helpers.genericCommand({ command: command, responseContains: "cluster1" })); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/east-west-virtual-destination/tests/reviews-from-cluster1.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context $CLUSTER1 -n bookinfo-frontends exec deploy/productpage-v1 -- python -c "import
requests; r = requests.get('http://reviews.global:9080/reviews/0'); print(r.text)" +kubectl --context ${CLUSTER1} -n bookinfo-backends scale deploy/reviews-v1 --replicas=0 +kubectl --context ${CLUSTER1} -n bookinfo-backends scale deploy/reviews-v2 --replicas=0 +kubectl --context ${CLUSTER1} -n bookinfo-backends wait --for=jsonpath='{.spec.replicas}'=0 deploy/reviews-v1 +kubectl --context ${CLUSTER1} -n bookinfo-backends wait --for=jsonpath='{.spec.replicas}'=0 deploy/reviews-v2 +cat <<'EOF' > ./test.js +const helpers = require('./tests/chai-exec'); + +describe("The productpage service should get responses from cluster2", () => { + const podName = helpers.getOutputForCommand({ command: "kubectl -n bookinfo-frontends get pods -l app=productpage -o jsonpath='{.items[0].metadata.name}' --context " + process.env.CLUSTER1 }).replaceAll("'", ""); + const command = "kubectl -n bookinfo-frontends exec " + podName + " --context " + process.env.CLUSTER1 + " -- python -c \"import requests; r = requests.get('http://reviews.global:9080/reviews/0'); print(r.text)\""; + it('Got a response from cluster2', () => helpers.genericCommand({ command: command, responseContains: "cluster2" })); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/east-west-virtual-destination/tests/reviews-from-cluster2.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context $CLUSTER1 -n bookinfo-frontends exec deploy/productpage-v1 -- python -c "import requests; r = requests.get('http://reviews.global:9080/reviews/0'); print(r.text)" +kubectl --context ${CLUSTER1} -n bookinfo-backends scale deploy/reviews-v1 --replicas=1 +kubectl --context ${CLUSTER1} -n bookinfo-backends scale deploy/reviews-v2 --replicas=1 +kubectl --context ${CLUSTER1} -n bookinfo-backends wait --for=jsonpath='{.status.readyReplicas}'=1 deploy/reviews-v1 +kubectl --context 
${CLUSTER1} -n bookinfo-backends wait --for=jsonpath='{.status.readyReplicas}'=1 deploy/reviews-v2 +kubectl --context ${CLUSTER1} -n bookinfo-backends patch deploy reviews-v1 --patch '{"spec": {"template": {"spec": {"containers": [{"name": "reviews","command": ["sleep", "20h"]}]}}}}' +kubectl --context ${CLUSTER1} -n bookinfo-backends patch deploy reviews-v2 --patch '{"spec": {"template": {"spec": {"containers": [{"name": "reviews","command": ["sleep", "20h"]}]}}}}' +kubectl --context ${CLUSTER1} -n bookinfo-backends rollout status deploy/reviews-v1 +kubectl --context ${CLUSTER1} -n bookinfo-backends rollout status deploy/reviews-v2 +cat <<'EOF' > ./test.js +const helpers = require('./tests/chai-exec'); + +describe("The productpage service should get responses from cluster2", () => { + const podName = helpers.getOutputForCommand({ command: "kubectl -n bookinfo-frontends get pods -l app=productpage -o jsonpath='{.items[0].metadata.name}' --context " + process.env.CLUSTER1 }).replaceAll("'", ""); + const command = "kubectl -n bookinfo-frontends exec " + podName + " --context " + process.env.CLUSTER1 + " -- python -c \"import requests; r = requests.get('http://reviews.global:9080/reviews/0'); print(r.text)\""; + it('Got a response from cluster2', () => helpers.genericCommand({ command: command, responseContains: "cluster2" })); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/east-west-virtual-destination/tests/reviews-from-cluster2.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context $CLUSTER1 -n bookinfo-frontends exec deploy/productpage-v1 -- python -c "import requests; r = requests.get('http://reviews.global:9080/reviews/0'); print(r.text)" +kubectl --context ${CLUSTER1} -n bookinfo-backends patch deployment reviews-v1 --type json -p '[{"op": "remove", "path": 
"/spec/template/spec/containers/0/command"}]' +kubectl --context ${CLUSTER1} -n bookinfo-backends patch deployment reviews-v2 --type json -p '[{"op": "remove", "path": "/spec/template/spec/containers/0/command"}]' +kubectl --context ${CLUSTER1} -n bookinfo-backends rollout status deploy/reviews-v1 +kubectl --context ${CLUSTER1} -n bookinfo-backends rollout status deploy/reviews-v2 +kubectl --context ${CLUSTER1} -n bookinfo-backends delete virtualdestination reviews +kubectl --context ${CLUSTER1} -n bookinfo-backends delete failoverpolicy failover +kubectl --context ${CLUSTER1} -n bookinfo-backends delete outlierdetectionpolicy outlier-detection +(timeout 2s kubectl --context ${CLUSTER1} -n httpbin rollout status deploy/in-mesh) || (kubectl --context ${CLUSTER1} -n httpbin rollout restart deploy/in-mesh && kubectl --context ${CLUSTER1} -n httpbin rollout status deploy/in-mesh) +cat <<'EOF' > ./test.js +var chai = require('chai'); +var expect = chai.expect; +const helpers = require('./tests/chai-exec'); +describe("Communication allowed", () => { + it("Response code should be 200", () => { + const podName = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n httpbin get pods -l app=not-in-mesh -o jsonpath='{.items[0].metadata.name}'" }).replaceAll("'", ""); + const command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n httpbin debug -i -q " + podName + " --image=curlimages/curl -- curl -s -o /dev/null -w \"%{http_code}\" http://reviews.bookinfo-backends:9080/reviews/0" }).replaceAll("'", ""); + expect(command).to.contain("200"); + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/httpbin/zero-trust/tests/not-in-mesh-to-in-mesh-allowed.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +cat <<'EOF' > ./test.js +var chai = require('chai'); 
+var expect = chai.expect; +const helpers = require('./tests/chai-exec'); +describe("Communication allowed", () => { + it("Response code should be 200", () => { + const podName = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n httpbin get pods -l app=in-mesh -o jsonpath='{.items[0].metadata.name}'" }).replaceAll("'", ""); + const command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n httpbin debug -i -q " + podName + " --image=curlimages/curl -- curl -s -o /dev/null -w \"%{http_code}\" http://reviews.bookinfo-backends:9080/reviews/0" }).replaceAll("'", ""); + expect(command).to.contain("200"); + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/httpbin/zero-trust/tests/in-mesh-to-in-mesh-allowed.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl apply --context ${CLUSTER1} -f - < ./test.js +var chai = require('chai'); +var expect = chai.expect; +const helpers = require('./tests/chai-exec'); +describe("Communication not allowed", () => { + it("Response code shouldn't be 200", () => { + const podName = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n httpbin get pods -l app=not-in-mesh -o jsonpath='{.items[0].metadata.name}'" }).replaceAll("'", ""); + const command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n httpbin debug -i -q " + podName + " --image=curlimages/curl -- curl -s -o /dev/null -w \"%{http_code}\" --max-time 3 http://reviews.bookinfo-backends:9080/reviews/0" }).replaceAll("'", ""); + expect(command).not.to.contain("200"); + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/httpbin/zero-trust/tests/not-in-mesh-to-in-mesh-not-allowed.test.js.liquid" +timeout --signal=INT 3m mocha 
./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +cat <<'EOF' > ./test.js +var chai = require('chai'); +var expect = chai.expect; +const helpers = require('./tests/chai-exec'); +describe("Communication not allowed", () => { + it("Response code shouldn't be 200", () => { + const podName = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n httpbin get pods -l app=in-mesh -o jsonpath='{.items[0].metadata.name}'" }).replaceAll("'", ""); + const command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n httpbin debug -i -q " + podName + " --image=curlimages/curl -- curl -s -o /dev/null -w \"%{http_code}\" --max-time 3 http://reviews.bookinfo-backends:9080/reviews/0" }).replaceAll("'", ""); + expect(command).not.to.contain("200"); + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/httpbin/zero-trust/tests/in-mesh-to-in-mesh-not-allowed.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl apply --context ${CLUSTER1} -f - < ./test.js +var chai = require('chai'); +var expect = chai.expect; +const helpers = require('./tests/chai-exec'); + +describe("Communication status", () => { + + it("Response code shouldn't be 200 accessing ratings", () => { + const command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n bookinfo-frontends exec deploy/productpage-v1 -- python -c \"import requests; r = requests.get('http://ratings.bookinfo-backends:9080/ratings/0', timeout=3); print(r.status_code)\"" }).replaceAll("'", ""); + expect(command).not.to.contain("200"); + }); + + it("Response code should be 200 accessing reviews with GET", () => { + const command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n 
bookinfo-frontends exec deploy/productpage-v1 -- python -c \"import requests; r = requests.get('http://reviews.bookinfo-backends:9080/reviews/0'); print(r.status_code)\"" }).replaceAll("'", ""); + expect(command).to.contain("200"); + }); + + it("Response code should be 403 accessing reviews with HEAD", () => { + const command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n bookinfo-frontends exec deploy/productpage-v1 -- python -c \"import requests; r = requests.head('http://reviews.bookinfo-backends:9080/reviews/0'); print(r.status_code)\"" }).replaceAll("'", ""); + expect(command).to.contain("403"); + }); + + it("Response code should be 200 accessing details", () => { + const command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n bookinfo-frontends exec deploy/productpage-v1 -- python -c \"import requests; r = requests.get('http://details.bookinfo-backends:9080/details/0'); print(r.status_code)\"" }).replaceAll("'", ""); + expect(command).to.contain("200"); + }); +}); + +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/httpbin/zero-trust/tests/bookinfo-access.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl apply --context ${CLUSTER1} -f - < ./test.js +const helpers = require('./tests/chai-exec'); + +describe("kube-prometheus-stack deployments are ready", () => { + it('kube-prometheus-stack-kube-state-metrics pods are ready', () => helpers.checkDeployment({ context: process.env.MGMT, namespace: "monitoring", k8sObj: "kube-prometheus-stack-kube-state-metrics" })); + it('kube-prometheus-stack-grafana pods are ready', () => helpers.checkDeployment({ context: process.env.MGMT, namespace: "monitoring", k8sObj: "kube-prometheus-stack-grafana" })); + it('kube-prometheus-stack-operator pods are ready', () => helpers.checkDeployment({ 
context: process.env.MGMT, namespace: "monitoring", k8sObj: "kube-prometheus-stack-operator" })); +}); + +describe("kube-prometheus-stack daemonset is ready", () => { + it('kube-prometheus-stack-prometheus-node-exporter pods are ready', () => helpers.checkDaemonSet({ context: process.env.MGMT, namespace: "monitoring", k8sObj: "kube-prometheus-stack-prometheus-node-exporter" })); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/gloo-platform-observability/tests/grafana-installed.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +PROD_PROMETHEUS_IP=$(kubectl get svc kube-prometheus-stack-prometheus -n monitoring -o jsonpath='{.status.loadBalancer.ingress[0].ip}') +helm upgrade --install gloo-platform gloo-platform \ + --repo https://storage.googleapis.com/gloo-platform/helm-charts \ + --namespace gloo-mesh \ + --kube-context ${CLUSTER1} \ + --reuse-values \ + --version 2.5.12 \ + --values - < /vm/resolv.conf" +docker exec vm1 cp /vm/resolv.conf /etc/resolv.conf +docker exec vm1 apt update -y +docker exec vm1 apt-get install -y iputils-ping curl iproute2 iptables python3 sudo dnsutils +cluster1_cidr=$(kubectl --context ${CLUSTER1} -n kube-system get pod -l component=kube-controller-manager -o jsonpath='{.items[0].spec.containers[0].command}' | jq -r '.[] | select(. | startswith("--cluster-cidr="))' | cut -d= -f2) +cluster2_cidr=$(kubectl --context ${CLUSTER2} -n kube-system get pod -l component=kube-controller-manager -o jsonpath='{.items[0].spec.containers[0].command}' | jq -r '.[] | select(. 
| startswith("--cluster-cidr="))' | cut -d= -f2) + +docker exec vm1 $(kubectl --context ${CLUSTER1} get nodes -o=jsonpath='{range .items[*]}{"ip route add "}{"'${cluster1_cidr}' via "}{.status.addresses[?(@.type=="InternalIP")].address}{"\n"}{end}') +docker exec vm1 $(kubectl --context ${CLUSTER2} get nodes -o=jsonpath='{range .items[*]}{"ip route add "}{"'${cluster2_cidr}' via "}{.status.addresses[?(@.type=="InternalIP")].address}{"\n"}{end}') +docker cp $HOME/.gloo-mesh/bin/meshctl vm1:/usr/local/bin/ +kubectl apply --context ${CLUSTER1} -f - < ./test.js +const helpers = require('./tests/chai-exec'); + +describe("The VM should be able to access the productpage service", () => { + const command = 'docker exec vm1 curl -s -o /dev/null -w "%{http_code}" productpage.bookinfo-frontends.svc.cluster.local:9080/productpage'; + it("Got the expected status code 200", () => helpers.genericCommand({ command: command, responseContains: "200" })); +}) + +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/vm-integration-spire/tests/vm-access-productpage.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +docker exec -d vm1 python3 -m http.server 9999 +kubectl --context ${CLUSTER1} -n bookinfo-frontends exec $(kubectl --context ${CLUSTER1} -n bookinfo-frontends get pods -l app=productpage -o jsonpath='{.items[0].metadata.name}') -- python -c "import requests; r = requests.get('http://${VM_APP}.virtualmachines.ext.cluster.local:9999'); print(r.text)" +cat <<'EOF' > ./test.js +const helpers = require('./tests/chai-exec'); + +describe("The productpage service should be able to access the VM", () => { + const podName = helpers.getOutputForCommand({ command: "kubectl -n bookinfo-frontends get pods -l app=productpage -o jsonpath='{.items[0].metadata.name}' --context " + process.env.CLUSTER1 }).replaceAll("'", ""); + const command = "kubectl 
-n bookinfo-frontends exec " + podName + " --context " + process.env.CLUSTER1 + " -- python -c \"import requests; r = requests.get('http://" + process.env.VM_APP + ".virtualmachines.ext.cluster.local:9999'); print(r.status_code)\""; + it('Got the expected status code 200', () => helpers.genericCommand({ command: command, responseContains: "200" })); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/vm-integration-spire/tests/productpage-access-vm.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +docker exec vm1 apt-get update +docker exec vm1 apt-get install -y mariadb-server +docker exec vm1 sed -i '/bind-address/c\bind-address = 0.0.0.0' /etc/mysql/mariadb.conf.d/50-server.cnf +docker exec vm1 systemctl start mysql + +docker exec -i vm1 mysql < ./test.js +const helpers = require('./tests/chai-http'); + +describe("The ratings service should use the database running on the VM", () => { + it('Got reviews v2 with ratings in cluster1', () => helpers.checkBody({ host: `https://cluster1-bookinfo.example.com`, path: '/productpage', body: 'text-black', match: true })); +}) + +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/vm-integration-spire/tests/ratings-using-vm.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context ${CLUSTER1} -n "${VM_NAMESPACE}" delete externalworkload ${VM_APP} +kubectl --context ${CLUSTER1} delete namespace "${VM_NAMESPACE}" +kubectl --context ${CLUSTER1} -n bookinfo-backends delete -f https://raw.githubusercontent.com/istio/istio/master/samples/bookinfo/platform/kube/bookinfo-ratings-v2-mysql-vm.yaml +kubectl --context ${CLUSTER1} -n bookinfo-backends scale deploy/ratings-v1 --replicas=1 +kubectl apply --context ${MGMT} -f - < 
./test.js +var chai = require('chai'); +var expect = chai.expect; +const helpers = require('./tests/chai-exec'); + +describe("Communication status", () => { + it("Productpage can send requests to httpbin.org", () => { + const command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n bookinfo-frontends exec deploy/productpage-v1 -- python -c \"import requests; r = requests.get('http://httpbin.org/get'); print(r.status_code)\"" }).replaceAll("'", ""); + expect(command).to.contain("200"); + }); +}); + +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/secure-egress/tests/productpage-to-httpbin-allowed.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl apply --context ${MGMT} -f - < ./test.js +var chai = require('chai'); +var expect = chai.expect; +const helpers = require('./tests/chai-exec'); + +describe("Communication not allowed", () => { + it("Productpage can NOT send requests to httpbin.org", () => { + const command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n bookinfo-frontends exec deploy/productpage-v1 -- python -c \"import requests; r = requests.get('http://httpbin.org/get', timeout=5); print(r.text)\"" }).replaceAll("'", ""); + expect(command).not.to.contain("User-Agent"); + }); +}); + +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/secure-egress/tests/productpage-to-httpbin-not-allowed.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl apply --context ${CLUSTER1} -f - < ./test.js +var chai = require('chai'); +var expect = chai.expect; +const helpers = require('./tests/chai-exec'); + +describe("Communication status", () => { + it("Productpage can send requests 
to httpbin.org", () => { + const command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n bookinfo-frontends exec deploy/productpage-v1 -- python -c \"import requests; r = requests.get('http://httpbin.org/get'); print(r.status_code)\"" }).replaceAll("'", ""); + expect(command).to.contain("200"); + }); +}); + +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/secure-egress/tests/productpage-to-httpbin-allowed.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl apply --context ${CLUSTER1} -f - < ./test.js +var chai = require('chai'); +var expect = chai.expect; +const helpers = require('./tests/chai-exec'); + +describe("Communication status", () => { + it("Productpage can send GET requests to httpbin.org", () => { + const command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n bookinfo-frontends exec deploy/productpage-v1 -- python -c \"import requests; r = requests.get('http://httpbin.org/get'); print(r.status_code)\"" }).replaceAll("'", ""); + expect(command).to.contain("200"); + }); + + it("Productpage can't send POST requests to httpbin.org", () => { + const command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n bookinfo-frontends exec deploy/productpage-v1 -- python -c \"import requests; r = requests.post('http://httpbin.org/post'); print(r.status_code)\"" }).replaceAll("'", ""); + expect(command).to.contain("403"); + }); +}); + +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/secure-egress/tests/productpage-to-httpbin-only-get-allowed.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context ${CLUSTER1} -n 
bookinfo-frontends delete networkpolicy restrict-egress +kubectl --context ${CLUSTER1} -n bookinfo-frontends delete externalservice httpbin +kubectl --context ${CLUSTER1} -n istio-gateways delete accesspolicy allow-get-httpbin diff --git a/gloo-mesh/enterprise/2-5/default/scripts/configure-domain-rewrite.sh b/gloo-mesh/enterprise/2-5/default/scripts/configure-domain-rewrite.sh index be6dbd6d8b..d6e684c9da 100755 --- a/gloo-mesh/enterprise/2-5/default/scripts/configure-domain-rewrite.sh +++ b/gloo-mesh/enterprise/2-5/default/scripts/configure-domain-rewrite.sh @@ -90,4 +90,4 @@ done # If the loop exits, it means the check failed consistently for 1 minute echo "DNS rewrite rule verification failed." -exit 1 +exit 1 \ No newline at end of file diff --git a/gloo-mesh/enterprise/2-5/default/scripts/register-domain.sh b/gloo-mesh/enterprise/2-5/default/scripts/register-domain.sh index f9084487e8..1cb84cd86a 100755 --- a/gloo-mesh/enterprise/2-5/default/scripts/register-domain.sh +++ b/gloo-mesh/enterprise/2-5/default/scripts/register-domain.sh @@ -14,7 +14,9 @@ hosts_file="/etc/hosts" # Function to check if the input is a valid IP address is_ip() { if [[ $1 =~ ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$ ]]; then - return 0 # 0 = true + return 0 # 0 = true - valid IPv4 address + elif [[ $1 =~ ^[0-9a-f]+[:]+[0-9a-f]*[:]*[0-9a-f]*[:]*[0-9a-f]*[:]*[0-9a-f]*[:]*[0-9a-f]*[:]*[0-9]*$ ]]; then + return 0 # 0 = true - valid IPv6 address else return 1 # 1 = false fi @@ -38,14 +40,15 @@ else fi # Check if the entry already exists -if grep -q "$hostname" "$hosts_file"; then +if grep -q "$hostname\$" "$hosts_file"; then # Update the existing entry with the new IP tempfile=$(mktemp) - sed "s/^.*$hostname/$new_ip $hostname/" "$hosts_file" > "$tempfile" + sed "s/^.*$hostname\$/$new_ip $hostname/" "$hosts_file" > "$tempfile" sudo cp "$tempfile" "$hosts_file" + rm "$tempfile" echo "Updated $hostname in $hosts_file with new IP: $new_ip" else # Add a new entry if it doesn't exist echo "$new_ip 
$hostname" | sudo tee -a "$hosts_file" > /dev/null echo "Added $hostname to $hosts_file with IP: $new_ip" -fi \ No newline at end of file +fi diff --git a/gloo-mesh/enterprise/2-5/default/tests/chai-exec.js b/gloo-mesh/enterprise/2-5/default/tests/chai-exec.js index 67ba62f095..020262437f 100644 --- a/gloo-mesh/enterprise/2-5/default/tests/chai-exec.js +++ b/gloo-mesh/enterprise/2-5/default/tests/chai-exec.js @@ -139,7 +139,11 @@ global = { }, k8sObjectIsPresent: ({ context, namespace, k8sType, k8sObj }) => { - let command = "kubectl --context " + context + " -n " + namespace + " get " + k8sType + " " + k8sObj + " -o name"; + // covers both namespace scoped and cluster scoped objects + let command = "kubectl --context " + context + " get " + k8sType + " " + k8sObj + " -o name"; + if (namespace) { + command = "kubectl --context " + context + " -n " + namespace + " get " + k8sType + " " + k8sObj + " -o name"; + } debugLog(`Executing command: ${command}`); let cli = chaiExec(command); @@ -176,7 +180,6 @@ global = { debugLog(`Command output (stdout): ${cli.stdout}`); return cli.stdout; }, - curlInPod: ({ curlCommand, podName, namespace }) => { debugLog(`Executing curl command: ${curlCommand} on pod: ${podName} in namespace: ${namespace}`); const cli = chaiExec(curlCommand); diff --git a/gloo-mesh/enterprise/2-5/default/tests/chai-http.js b/gloo-mesh/enterprise/2-5/default/tests/chai-http.js index 67f43db003..92bf579690 100644 --- a/gloo-mesh/enterprise/2-5/default/tests/chai-http.js +++ b/gloo-mesh/enterprise/2-5/default/tests/chai-http.js @@ -25,7 +25,30 @@ global = { .send() .then(async function (res) { debugLog(`Response status code: ${res.status}`); - expect(res).to.have.status(retCode); + expect(res).to.have.property('status', retCode); + }); + }, + + checkURLWithIP: ({ ip, host, protocol = "http", path = "", headers = [], certFile = '', keyFile = '', retCode }) => { + debugLog(`Checking URL with IP: ${ip}, Host: ${host}, Path: ${path} with expected return code: 
${retCode}`); + + let cert = certFile ? fs.readFileSync(certFile) : ''; + let key = keyFile ? fs.readFileSync(keyFile) : ''; + + let url = `${protocol}://${ip}`; + + // Use chai-http to make a request to the IP address, but set the Host header + let request = chai.request(url).head(path).redirects(0).cert(cert).key(key).set('Host', host); + + debugLog(`Setting headers: ${JSON.stringify(headers)}`); + headers.forEach(header => request.set(header.key, header.value)); + + return request + .send() + .then(async function (res) { + debugLog(`Response status code: ${res.status}`); + debugLog(`Response ${JSON.stringify(res)}`); + expect(res).to.have.property('status', retCode); }); }, @@ -124,7 +147,7 @@ global = { .send() .then(async function (res) { debugLog(`Response status code: ${res.status}`); - expect(res).to.have.status(retCode); + expect(res).to.have.property('status', retCode); }); } }; diff --git a/gloo-mesh/enterprise/2-5/default/tests/proxies-changes.test.js.liquid b/gloo-mesh/enterprise/2-5/default/tests/proxies-changes.test.js.liquid new file mode 100644 index 0000000000..1934ea13b6 --- /dev/null +++ b/gloo-mesh/enterprise/2-5/default/tests/proxies-changes.test.js.liquid @@ -0,0 +1,58 @@ +{%- assign version_1_18_or_after = "1.18.0" | minimumGlooGatewayVersion %} +const { execSync } = require('child_process'); +const { expect } = require('chai'); +const { diff } = require('jest-diff'); + +function delay(ms) { + return new Promise(resolve => setTimeout(resolve, ms)); +} + +describe('Gloo snapshot stability test', function() { + let contextName = process.env.{{ context | default: "CLUSTER1" }}; + let delaySeconds = {{ delay | default: 5 }}; + + let firstSnapshot; + + it('should retrieve initial snapshot', function() { + const output = execSync( + `kubectl --context ${contextName} -n gloo-system exec deploy/gloo -- wget -O - localhost:{% if version_1_18_or_after %}9095{% else %}9091{% endif %}/snapshots/proxies -q`, + { encoding: 'utf8' } + ); + + try { + 
firstSnapshot = JSON.parse(output); + } catch (err) { + throw new Error('Failed to parse JSON output from initial snapshot: ' + err.message); + } + expect(firstSnapshot).to.be.an('object'); + }); + + it('should not change after the given delay', async function() { + await delay(delaySeconds * 1000); + + let secondSnapshot; + try { + const output2 = execSync( + `kubectl --context ${contextName} -n gloo-system exec deploy/gloo -- wget -O - localhost:{% if version_1_18_or_after %}9095{% else %}9091{% endif %}/snapshots/proxies -q`, + { encoding: 'utf8' } + ); + secondSnapshot = JSON.parse(output2); + } catch (err) { + throw new Error('Failed to retrieve or parse the second snapshot: ' + err.message); + } + + const firstJson = JSON.stringify(firstSnapshot, null, 2); + const secondJson = JSON.stringify(secondSnapshot, null, 2); + + // Show only 2 lines of context around each change + const diffOutput = diff(firstJson, secondJson, { contextLines: 2, expand: false }); + + if (! diffOutput.includes("Compared values have no visual difference.")) { + console.error('Differences found between snapshots:\n' + diffOutput); + throw new Error('Snapshots differ after the delay.'); + } else { + console.log('No differences found. 
The snapshots are stable.'); + } + }); +}); + diff --git a/gloo-mesh/enterprise/2-5/gitops/default/README.md b/gloo-mesh/enterprise/2-5/gitops/default/README.md index d35124ac73..6ba435f37c 100644 --- a/gloo-mesh/enterprise/2-5/gitops/default/README.md +++ b/gloo-mesh/enterprise/2-5/gitops/default/README.md @@ -15,7 +15,7 @@ source ./scripts/assert.sh ## Table of Contents * [Introduction](#introduction) -* [Lab 1 - Deploy KinD clusters](#lab-1---deploy-kind-clusters-) +* [Lab 1 - Deploy KinD Cluster(s)](#lab-1---deploy-kind-clusters-) * [Lab 2 - Deploy Gitea](#lab-2---deploy-gitea-) * [Lab 3 - Deploy Argo CD](#lab-3---deploy-argo-cd-) * [Lab 4 - Deploy and register Gloo Mesh](#lab-4---deploy-and-register-gloo-mesh-) @@ -70,7 +70,7 @@ You can find more information about Gloo Mesh Enterprise in the official documen -## Lab 1 - Deploy KinD clusters +## Lab 1 - Deploy KinD Cluster(s) Clone this repository and go to the directory where this `README.md` file is. @@ -83,14 +83,13 @@ export CLUSTER1=cluster1 export CLUSTER2=cluster2 ``` -Run the following commands to deploy three Kubernetes clusters using [Kind](https://kind.sigs.k8s.io/): +Deploy the KinD clusters: ```bash -./scripts/deploy-aws-with-calico.sh 1 mgmt -./scripts/deploy-aws-with-calico.sh 2 cluster1 us-west us-west-1 -./scripts/deploy-aws-with-calico.sh 3 cluster2 us-west us-west-2 +bash ./data/steps/deploy-kind-clusters/deploy-mgmt.sh +bash ./data/steps/deploy-kind-clusters/deploy-cluster1.sh +bash ./data/steps/deploy-kind-clusters/deploy-cluster2.sh ``` - Then run the following commands to wait for all the Pods to be ready: ```bash @@ -101,27 +100,8 @@ Then run the following commands to wait for all the Pods to be ready: **Note:** If you run the `check.sh` script immediately after the `deploy.sh` script, you may see a jsonpath error. If that happens, simply wait a few seconds and try again. 
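The "wait a few seconds and try again" advice in the note above is easy to script. As a sketch (a hypothetical `retry` helper, not something shipped with the workshop scripts), re-run a command with a fixed delay until it succeeds or the attempt budget runs out:

```bash
#!/usr/bin/env bash
# retry <attempts> <delay-seconds> <command...>: re-run a command until it
# succeeds, sleeping between attempts; returns non-zero once attempts are spent.
retry() {
  local attempts=$1 delay=$2 i
  shift 2
  for ((i = 1; i <= attempts; i++)); do
    "$@" && return 0
    sleep "$delay"
  done
  return 1
}

# Demo: a command that fails twice, then succeeds on the third attempt.
tries=0
flaky() { tries=$((tries + 1)); [ "$tries" -ge 3 ]; }
retry 5 0 flaky && echo "succeeded after $tries attempts"
```

For instance, `retry 10 5 ./scripts/check.sh cluster1` would poll the cluster every five seconds instead of re-running the check by hand.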
-Once the `check.sh` script completes, when you execute the `kubectl get pods -A` command, you should see the following: - -``` -NAMESPACE NAME READY STATUS RESTARTS AGE -kube-system calico-kube-controllers-59d85c5c84-sbk4k 1/1 Running 0 4h26m -kube-system calico-node-przxs 1/1 Running 0 4h26m -kube-system coredns-6955765f44-ln8f5 1/1 Running 0 4h26m -kube-system coredns-6955765f44-s7xxx 1/1 Running 0 4h26m -kube-system etcd-cluster1-control-plane 1/1 Running 0 4h27m -kube-system kube-apiserver-cluster1-control-plane 1/1 Running 0 4h27m -kube-system kube-controller-manager-cluster1-control-plane1/1 Running 0 4h27m -kube-system kube-proxy-ksvzw 1/1 Running 0 4h26m -kube-system kube-scheduler-cluster1-control-plane 1/1 Running 0 4h27m -local-path-storage local-path-provisioner-58f6947c7-lfmdx 1/1 Running 0 4h26m -metallb-system controller-5c9894b5cd-cn9x2 1/1 Running 0 4h26m -metallb-system speaker-d7jkp 1/1 Running 0 4h26m -``` - -**Note:** The CNI pods might be different, depending on which CNI you have deployed. - -You can see that your currently connected to this cluster by executing the `kubectl config get-contexts` command: +Once the `check.sh` script completes, execute the `kubectl get pods -A` command, and verify that all pods are in a running state. 
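Rather than eyeballing the `kubectl get pods -A` output, the "all pods running" check can be scripted by filtering the STATUS column. This is a sketch under two assumptions: the sample text stands in for live `kubectl get pods -A --no-headers` output, and the default column order (NAMESPACE, NAME, READY, STATUS, ...) applies:

```bash
#!/usr/bin/env bash
# Count pods whose STATUS column (field 4) is neither Running nor Completed.
# $sample is a stand-in for live output of: kubectl get pods -A --no-headers
sample='kube-system     coredns-6955765f44-ln8f5      1/1   Running             0   4h26m
kube-system     etcd-cluster1-control-plane   1/1   Running             0   4h27m
metallb-system  speaker-d7jkp                 0/1   ContainerCreating   0   10s'

not_ready=$(printf '%s\n' "$sample" | awk '$4 != "Running" && $4 != "Completed" { n++ } END { print n + 0 }')
echo "pods not ready: $not_ready"
```

Against a live cluster, replace the `printf` with the real `kubectl` command and treat a non-zero count as a failed check.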
+ You can see that you're currently connected to this cluster by executing the `kubectl config get-contexts` command: ``` CURRENT NAME CLUSTER AUTHINFO NAMESPACE @@ -140,7 +120,8 @@ cat <<'EOF' > ./test.js const helpers = require('./tests/chai-exec'); describe("Clusters are healthy", () => { - const clusters = [process.env.MGMT, process.env.CLUSTER1, process.env.CLUSTER2]; + const clusters = ["mgmt", "cluster1", "cluster2"]; + clusters.forEach(cluster => { it(`Cluster ${cluster} is healthy`, () => helpers.k8sObjectIsPresent({ context: cluster, namespace: "default", k8sType: "service", k8sObj: "kubernetes" })); }); @@ -152,6 +133,7 @@ timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || + ## Lab 2 - Deploy Gitea GitOps is a DevOps automation technique based on Git. To implement it, your processes depend @@ -1191,6 +1173,7 @@ timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || + ## Lab 5 - Deploy Istio using Gloo Mesh Lifecycle Manager [VIDEO LINK](https://youtu.be/f76-KOEjqHs "Video Link") @@ -5074,6 +5057,8 @@ ATTEMPTS=0 while [ $ATTEMPTS -lt $MAX_ATTEMPTS ]; do kubectl --context ${CLUSTER1} -n gloo-mesh rollout restart deploy gloo-spire-server kubectl --context ${CLUSTER1} -n gloo-mesh rollout status deploy gloo-spire-server + sleep 30 + export JOIN_TOKEN=$(meshctl external-workload gen-token --kubecontext ${CLUSTER1} --trust-domain ${CLUSTER1} --ttl 3600 --ext-workload virtualmachines/${VM_APP} --plain=true | grep -ioE "${uuid_regex_partial}") timeout 1m docker exec vm1 meshctl ew onboard --install \ --attestor token \ diff --git a/gloo-mesh/enterprise/2-5/gitops/default/data/steps/deploy-kind-clusters/deploy-cluster1.sh b/gloo-mesh/enterprise/2-5/gitops/default/data/steps/deploy-kind-clusters/deploy-cluster1.sh new file mode 100644 index 0000000000..3fda068282 --- /dev/null +++ b/gloo-mesh/enterprise/2-5/gitops/default/data/steps/deploy-kind-clusters/deploy-cluster1.sh @@ -0,0 +1,292 @@ +#!/usr/bin/env bash
+set -o errexit + +number="2" +name="cluster1" +region="" +zone="" +twodigits=$(printf "%02d\n" $number) + +kindest_node=${KINDEST_NODE} + +if [ -z "$kindest_node" ]; then + export k8s_version="1.28.0" + + [[ ${k8s_version::1} != 'v' ]] && export k8s_version=v${k8s_version} + kindest_node_ver=$(curl --silent "https://registry.hub.docker.com/v2/repositories/kindest/node/tags?page_size=100" \ + | jq -r '.results | .[] | select(.name==env.k8s_version) | .name+"@"+.digest') + + if [ -z "$kindest_node_ver" ]; then + echo "Incorrect Kubernetes version provided: ${k8s_version}." + exit 1 + fi + kindest_node=kindest/node:${kindest_node_ver} +fi +echo "Using KinD image: ${kindest_node}" + +if [ -z "$3" ]; then + case $name in + cluster1) + region=us-west-1 + ;; + cluster2) + region=us-west-2 + ;; + *) + region=us-east-1 + ;; + esac +fi + +if [ -z "$4" ]; then + case $name in + cluster1) + zone=us-west-1a + ;; + cluster2) + zone=us-west-2a + ;; + *) + zone=us-east-1a + ;; + esac +fi + +if hostname -I 2>/dev/null; then + myip=$(hostname -I | awk '{ print $1 }') +else + myip=$(ipconfig getifaddr en0) +fi + +# Function to determine the next available cluster number +get_next_cluster_number() { + if ! 
kind get clusters 2>&1 | grep "^kind" > /dev/null; then + echo 1 + else + highest_num=$(kind get clusters | grep "^kind" | tail -1 | cut -c 5-) + echo $((highest_num + 1)) + fi +} + +if [ -f /.dockerenv ]; then +myip=$HOST_IP +container=$(docker inspect $(docker ps -q) | jq -r ".[] | select(.Config.Hostname == \"$HOSTNAME\") | .Name" | cut -d/ -f2) +docker network connect "kind" $container || true +number=$(get_next_cluster_number) +twodigits=$(printf "%02d\n" $number) +fi + +reg_name='kind-registry' +reg_port='5000' +docker start "${reg_name}" 2>/dev/null || \ +docker run -d --restart=always -p "0.0.0.0:${reg_port}:5000" --name "${reg_name}" registry:2 + +cache_port='5000' +cat > registries < ${HOME}/.${cache_name}-config.yml </dev/null || \ +docker run -d --restart=always ${DEPLOY_EXTRA_PARAMS} -v ${HOME}/.${cache_name}-config.yml:/etc/docker/registry/config.yml --name "${cache_name}" registry:2 +done +mkdir -p /tmp/oidc + +cat <<'EOF' >/tmp/oidc/sa-signer-pkcs8.pub +-----BEGIN PUBLIC KEY----- +MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA53YiBcrn7+ZK0Vb4odeA +1riYdvEb8To4H6/HtF+OKzuCIXFQ+bRy7yMrDGITYpfYPrTZOgfdeTLZqOiAj+cL +395nvxdly83SUrdh7ItfOPRluuuiPHnFn111wpyjBw5nut4Kx+M5MksNfA1hU0Zw +zIM9OviX8iEF8xHWUtz4BAMDG8N6+zpLo0pAzaei5hKuLZ9dZOzHBC8VOW82cQMm +5X5uOKsCHMtNSjqYUNB1DxN6xxM+odGWT/6xthPGk6YCxmO28YHPFZfiS2eAIpD8 +2p/16KQKU6TkZSrldkYxiHIPhu+5f9faZJG7dB9pLN1SfdTBio4PK5Mz9muLUCv9 +ywIDAQAB +-----END PUBLIC KEY----- +EOF + +cat <<'EOF' >/tmp/oidc/sa-signer.key +-----BEGIN RSA PRIVATE KEY----- +MIIEpAIBAAKCAQEA53YiBcrn7+ZK0Vb4odeA1riYdvEb8To4H6/HtF+OKzuCIXFQ ++bRy7yMrDGITYpfYPrTZOgfdeTLZqOiAj+cL395nvxdly83SUrdh7ItfOPRluuui +PHnFn111wpyjBw5nut4Kx+M5MksNfA1hU0ZwzIM9OviX8iEF8xHWUtz4BAMDG8N6 ++zpLo0pAzaei5hKuLZ9dZOzHBC8VOW82cQMm5X5uOKsCHMtNSjqYUNB1DxN6xxM+ +odGWT/6xthPGk6YCxmO28YHPFZfiS2eAIpD82p/16KQKU6TkZSrldkYxiHIPhu+5 +f9faZJG7dB9pLN1SfdTBio4PK5Mz9muLUCv9ywIDAQABAoIBAB8tro+RMYUDRHjG +el9ypAxIeWEsQVNRQFYkW4ZUiNYSAgl3Ni0svX6xAg989peFVL+9pLVIcfDthJxY 
+FVlNCjBxyQ/YmwHFC9vQkARJEd6eLUXsj8INtS0ubbp1VxCQRDDL0C/0z7OSoJJh +SwboqjEiTJExA2a+RArmEDTBRzdi3t+kT8G23JcqOivrITt17K6bQYyJXw7/vUdc +r/R+hfd5TqVq92VddzDT7RNJAxsbPPXjGnESlq1GALBDs+uBGYsP0fiEJb2nicSv +z9fBnBeERhut1gcE0C0iLRQZb+3r8TitBtxrZv+0BHgXrkKtXDwWTqGEKOwC4dBn +7nxkH2ECgYEA6+/DOTABGYOWOQftFkJMjcugzDrjoGpuXuVOTb65T+3FHAzU93zy +3bt3wQxrlugluyy9Sc/PL3ck2LgUsPHZ+s7zsdGvvGALBD6bOSSKATz9JgjwifO8 +PgqUz1kXRwez2CtKLOOCFFtcIzEdWIzsa1ubNqLzgN7rD+XBkUc2uEcCgYEA+yTy +72EDMQVoIZOygytHsDNdy0iS2RsBbdurT27wkYuFpFUVWdbNSL+8haE+wJHseHcw +BD4WIMpU+hnS4p4OO8+6V7PiXOS5E/se91EJigZAoixgDUiC8ihojWgK9PYEavUo +hULWbayO59SxYWeUI4Ze0GP8Jw8vdB86ib4ulF0CgYEAgyzRuLjk05+iZODwQyDn +WSquov3W0rh51s7cw0LX2wWSQm8r9NGGYhs5kJ5sLwGxAKj2MNSWF4jBdrCZ6Gr+ +y4BGY0X209/+IAUC3jlfdSLIiF4OBlT6AvB1HfclhvtUVUp0OhLfnpvQ1UwYScRI +KcRLvovIoIzP2g3emfwjAz8CgYEAxUHhOhm1mwRHJNBQTuxok0HVMrze8n1eov39 +0RcvBvJSVp+pdHXdqX1HwqHCmxhCZuAeq8ZkNP8WvZYY6HwCbAIdt5MHgbT4lXQR +f2l8F5gPnhFCpExG5ZLNg/urV3oAQE4stHap21zEpdyOMhZb6Yc5424U+EzaFdgN +b3EcPtUCgYAkKvUlSnBbgiJz1iaN6fuTqH0efavuFGMhjNmG7GtpNXdgyl1OWIuc +Yu+tZtHXtKYf3B99GwPrFzw/7yfDwae5YeWmi2/pFTH96wv3brJBqkAWY8G5Rsmd +qF50p34vIFqUBniNRwSArx8t2dq/CuAMgLAtSjh70Q6ZAnCF85PD8Q== +-----END RSA PRIVATE KEY----- +EOF + +echo Contents of kind${number}.yaml +cat << EOF | tee kind${number}.yaml +kind: Cluster +apiVersion: kind.x-k8s.io/v1alpha4 +nodes: +- role: control-plane + image: ${kindest_node} + extraPortMappings: + - containerPort: 6443 + hostPort: 70${twodigits} + extraMounts: + - containerPath: /etc/kubernetes/oidc + hostPath: /tmp/oidc + labels: + ingress-ready: true + topology.kubernetes.io/region: ${region} + topology.kubernetes.io/zone: ${zone} +networking: + disableDefaultCNI: true + serviceSubnet: "10.$(echo $twodigits | sed 's/^0*//').0.0/16" + podSubnet: "10.1${twodigits}.0.0/16" +kubeadmConfigPatches: +- | + kind: ClusterConfiguration + apiServer: + extraArgs: + service-account-key-file: /etc/kubernetes/pki/sa.pub + service-account-key-file: 
/etc/kubernetes/oidc/sa-signer-pkcs8.pub + service-account-signing-key-file: /etc/kubernetes/oidc/sa-signer.key + service-account-issuer: https://solo-workshop-oidc.s3.us-east-1.amazonaws.com + api-audiences: sts.amazonaws.com + extraVolumes: + - name: oidc + hostPath: /etc/kubernetes/oidc + mountPath: /etc/kubernetes/oidc + readOnly: true + metadata: + name: config +containerdConfigPatches: +- |- + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:${reg_port}"] + endpoint = ["http://${reg_name}:${reg_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"] + endpoint = ["http://docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-docker.pkg.dev"] + endpoint = ["http://us-docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-central1-docker.pkg.dev"] + endpoint = ["http://us-central1-docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."quay.io"] + endpoint = ["http://quay:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"] + endpoint = ["http://gcr:${cache_port}"] +EOF +echo ----------------------------------------------------- + +kind create cluster --name kind${number} --config kind${number}.yaml +ipkind=$(docker inspect kind${number}-control-plane | jq -r '.[0].NetworkSettings.Networks[].IPAddress') +networkkind=$(echo ${ipkind} | awk -F. 
'{ print $1"."$2 }') +kubectl config set-cluster kind-kind${number} --server=https://${myip}:70${twodigits} --insecure-skip-tls-verify=true + +# Preload images +cat << EOF >> images.txt +quay.io/metallb/controller:v0.13.12 +quay.io/metallb/speaker:v0.13.12 +EOF +cat images.txt | while read image; do + docker pull $image || true + kind load docker-image $image --name kind${number} || true +done + +docker network connect "kind" "${reg_name}" || true +docker network connect "kind" docker || true +docker network connect "kind" us-docker || true +docker network connect "kind" us-central1-docker || true +docker network connect "kind" quay || true +docker network connect "kind" gcr || true +# Calico for ipv4 +curl -sL https://raw.githubusercontent.com/projectcalico/calico/v3.28.1/manifests/calico.yaml | sed 's/250m/50m/g' | kubectl --context kind-kind${number} apply -f - + +for i in 1 2 3 4 5; do kubectl --context=kind-kind${number} apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml && break || sleep 15; done +kubectl --context=kind-kind${number} create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)" +kubectl --context=kind-kind${number} -n metallb-system rollout status deploy controller || true + +cat << EOF | tee metallb${number}.yaml +apiVersion: metallb.io/v1beta1 +kind: IPAddressPool +metadata: + name: first-pool + namespace: metallb-system +spec: + addresses: + - ${networkkind}.1${twodigits}.1-${networkkind}.1${twodigits}.254 +--- +apiVersion: metallb.io/v1beta1 +kind: L2Advertisement +metadata: + name: empty + namespace: metallb-system +EOF + +printf "Create IPAddressPool in kind-kind${number}\n" +for i in {1..10}; do +kubectl --context=kind-kind${number} apply -f metallb${number}.yaml && break +sleep 2 +done + +# connect the registry to the cluster network if not already connected +printf "Renaming context kind-kind${number} to ${name}\n" +for i in {1..100}; do 
+ (kubectl config get-contexts -oname | grep ${name}) && break + kubectl config rename-context kind-kind${number} ${name} && break + printf " $i"/100 + sleep 2 + [ $i -lt 100 ] || exit 1 +done + +# Document the local registry +# https://github.com/kubernetes/enhancements/tree/master/keps/sig-cluster-lifecycle/generic/1755-communicating-a-local-registry +cat </dev/null; then + myip=$(hostname -I | awk '{ print $1 }') +else + myip=$(ipconfig getifaddr en0) +fi + +# Function to determine the next available cluster number +get_next_cluster_number() { + if ! kind get clusters 2>&1 | grep "^kind" > /dev/null; then + echo 1 + else + highest_num=$(kind get clusters | grep "^kind" | tail -1 | cut -c 5-) + echo $((highest_num + 1)) + fi +} + +if [ -f /.dockerenv ]; then +myip=$HOST_IP +container=$(docker inspect $(docker ps -q) | jq -r ".[] | select(.Config.Hostname == \"$HOSTNAME\") | .Name" | cut -d/ -f2) +docker network connect "kind" $container || true +number=$(get_next_cluster_number) +twodigits=$(printf "%02d\n" $number) +fi + +reg_name='kind-registry' +reg_port='5000' +docker start "${reg_name}" 2>/dev/null || \ +docker run -d --restart=always -p "0.0.0.0:${reg_port}:5000" --name "${reg_name}" registry:2 + +cache_port='5000' +cat > registries < ${HOME}/.${cache_name}-config.yml </dev/null || \ +docker run -d --restart=always ${DEPLOY_EXTRA_PARAMS} -v ${HOME}/.${cache_name}-config.yml:/etc/docker/registry/config.yml --name "${cache_name}" registry:2 +done +mkdir -p /tmp/oidc + +cat <<'EOF' >/tmp/oidc/sa-signer-pkcs8.pub +-----BEGIN PUBLIC KEY----- +MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA53YiBcrn7+ZK0Vb4odeA +1riYdvEb8To4H6/HtF+OKzuCIXFQ+bRy7yMrDGITYpfYPrTZOgfdeTLZqOiAj+cL +395nvxdly83SUrdh7ItfOPRluuuiPHnFn111wpyjBw5nut4Kx+M5MksNfA1hU0Zw +zIM9OviX8iEF8xHWUtz4BAMDG8N6+zpLo0pAzaei5hKuLZ9dZOzHBC8VOW82cQMm +5X5uOKsCHMtNSjqYUNB1DxN6xxM+odGWT/6xthPGk6YCxmO28YHPFZfiS2eAIpD8 +2p/16KQKU6TkZSrldkYxiHIPhu+5f9faZJG7dB9pLN1SfdTBio4PK5Mz9muLUCv9 +ywIDAQAB +-----END PUBLIC 
KEY----- +EOF + +cat <<'EOF' >/tmp/oidc/sa-signer.key +-----BEGIN RSA PRIVATE KEY----- +MIIEpAIBAAKCAQEA53YiBcrn7+ZK0Vb4odeA1riYdvEb8To4H6/HtF+OKzuCIXFQ ++bRy7yMrDGITYpfYPrTZOgfdeTLZqOiAj+cL395nvxdly83SUrdh7ItfOPRluuui +PHnFn111wpyjBw5nut4Kx+M5MksNfA1hU0ZwzIM9OviX8iEF8xHWUtz4BAMDG8N6 ++zpLo0pAzaei5hKuLZ9dZOzHBC8VOW82cQMm5X5uOKsCHMtNSjqYUNB1DxN6xxM+ +odGWT/6xthPGk6YCxmO28YHPFZfiS2eAIpD82p/16KQKU6TkZSrldkYxiHIPhu+5 +f9faZJG7dB9pLN1SfdTBio4PK5Mz9muLUCv9ywIDAQABAoIBAB8tro+RMYUDRHjG +el9ypAxIeWEsQVNRQFYkW4ZUiNYSAgl3Ni0svX6xAg989peFVL+9pLVIcfDthJxY +FVlNCjBxyQ/YmwHFC9vQkARJEd6eLUXsj8INtS0ubbp1VxCQRDDL0C/0z7OSoJJh +SwboqjEiTJExA2a+RArmEDTBRzdi3t+kT8G23JcqOivrITt17K6bQYyJXw7/vUdc +r/R+hfd5TqVq92VddzDT7RNJAxsbPPXjGnESlq1GALBDs+uBGYsP0fiEJb2nicSv +z9fBnBeERhut1gcE0C0iLRQZb+3r8TitBtxrZv+0BHgXrkKtXDwWTqGEKOwC4dBn +7nxkH2ECgYEA6+/DOTABGYOWOQftFkJMjcugzDrjoGpuXuVOTb65T+3FHAzU93zy +3bt3wQxrlugluyy9Sc/PL3ck2LgUsPHZ+s7zsdGvvGALBD6bOSSKATz9JgjwifO8 +PgqUz1kXRwez2CtKLOOCFFtcIzEdWIzsa1ubNqLzgN7rD+XBkUc2uEcCgYEA+yTy +72EDMQVoIZOygytHsDNdy0iS2RsBbdurT27wkYuFpFUVWdbNSL+8haE+wJHseHcw +BD4WIMpU+hnS4p4OO8+6V7PiXOS5E/se91EJigZAoixgDUiC8ihojWgK9PYEavUo +hULWbayO59SxYWeUI4Ze0GP8Jw8vdB86ib4ulF0CgYEAgyzRuLjk05+iZODwQyDn +WSquov3W0rh51s7cw0LX2wWSQm8r9NGGYhs5kJ5sLwGxAKj2MNSWF4jBdrCZ6Gr+ +y4BGY0X209/+IAUC3jlfdSLIiF4OBlT6AvB1HfclhvtUVUp0OhLfnpvQ1UwYScRI +KcRLvovIoIzP2g3emfwjAz8CgYEAxUHhOhm1mwRHJNBQTuxok0HVMrze8n1eov39 +0RcvBvJSVp+pdHXdqX1HwqHCmxhCZuAeq8ZkNP8WvZYY6HwCbAIdt5MHgbT4lXQR +f2l8F5gPnhFCpExG5ZLNg/urV3oAQE4stHap21zEpdyOMhZb6Yc5424U+EzaFdgN +b3EcPtUCgYAkKvUlSnBbgiJz1iaN6fuTqH0efavuFGMhjNmG7GtpNXdgyl1OWIuc +Yu+tZtHXtKYf3B99GwPrFzw/7yfDwae5YeWmi2/pFTH96wv3brJBqkAWY8G5Rsmd +qF50p34vIFqUBniNRwSArx8t2dq/CuAMgLAtSjh70Q6ZAnCF85PD8Q== +-----END RSA PRIVATE KEY----- +EOF + +echo Contents of kind${number}.yaml +cat << EOF | tee kind${number}.yaml +kind: Cluster +apiVersion: kind.x-k8s.io/v1alpha4 +nodes: +- role: control-plane + image: ${kindest_node} + extraPortMappings: + - containerPort: 6443 + 
hostPort: 70${twodigits} + extraMounts: + - containerPath: /etc/kubernetes/oidc + hostPath: /tmp/oidc + labels: + ingress-ready: true + topology.kubernetes.io/region: ${region} + topology.kubernetes.io/zone: ${zone} +networking: + disableDefaultCNI: true + serviceSubnet: "10.$(echo $twodigits | sed 's/^0*//').0.0/16" + podSubnet: "10.1${twodigits}.0.0/16" +kubeadmConfigPatches: +- | + kind: ClusterConfiguration + apiServer: + extraArgs: + service-account-key-file: /etc/kubernetes/pki/sa.pub + service-account-key-file: /etc/kubernetes/oidc/sa-signer-pkcs8.pub + service-account-signing-key-file: /etc/kubernetes/oidc/sa-signer.key + service-account-issuer: https://solo-workshop-oidc.s3.us-east-1.amazonaws.com + api-audiences: sts.amazonaws.com + extraVolumes: + - name: oidc + hostPath: /etc/kubernetes/oidc + mountPath: /etc/kubernetes/oidc + readOnly: true + metadata: + name: config +containerdConfigPatches: +- |- + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:${reg_port}"] + endpoint = ["http://${reg_name}:${reg_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"] + endpoint = ["http://docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-docker.pkg.dev"] + endpoint = ["http://us-docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-central1-docker.pkg.dev"] + endpoint = ["http://us-central1-docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."quay.io"] + endpoint = ["http://quay:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"] + endpoint = ["http://gcr:${cache_port}"] +EOF +echo ----------------------------------------------------- + +kind create cluster --name kind${number} --config kind${number}.yaml +ipkind=$(docker inspect kind${number}-control-plane | jq -r '.[0].NetworkSettings.Networks[].IPAddress') +networkkind=$(echo ${ipkind} | awk -F. 
'{ print $1"."$2 }') +kubectl config set-cluster kind-kind${number} --server=https://${myip}:70${twodigits} --insecure-skip-tls-verify=true + +# Preload images +cat << EOF >> images.txt +quay.io/metallb/controller:v0.13.12 +quay.io/metallb/speaker:v0.13.12 +EOF +cat images.txt | while read image; do + docker pull $image || true + kind load docker-image $image --name kind${number} || true +done + +docker network connect "kind" "${reg_name}" || true +docker network connect "kind" docker || true +docker network connect "kind" us-docker || true +docker network connect "kind" us-central1-docker || true +docker network connect "kind" quay || true +docker network connect "kind" gcr || true +# Calico for ipv4 +curl -sL https://raw.githubusercontent.com/projectcalico/calico/v3.28.1/manifests/calico.yaml | sed 's/250m/50m/g' | kubectl --context kind-kind${number} apply -f - + +for i in 1 2 3 4 5; do kubectl --context=kind-kind${number} apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml && break || sleep 15; done +kubectl --context=kind-kind${number} create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)" +kubectl --context=kind-kind${number} -n metallb-system rollout status deploy controller || true + +cat << EOF | tee metallb${number}.yaml +apiVersion: metallb.io/v1beta1 +kind: IPAddressPool +metadata: + name: first-pool + namespace: metallb-system +spec: + addresses: + - ${networkkind}.1${twodigits}.1-${networkkind}.1${twodigits}.254 +--- +apiVersion: metallb.io/v1beta1 +kind: L2Advertisement +metadata: + name: empty + namespace: metallb-system +EOF + +printf "Create IPAddressPool in kind-kind${number}\n" +for i in {1..10}; do +kubectl --context=kind-kind${number} apply -f metallb${number}.yaml && break +sleep 2 +done + +# connect the registry to the cluster network if not already connected +printf "Renaming context kind-kind${number} to ${name}\n" +for i in {1..100}; do 
+ (kubectl config get-contexts -oname | grep ${name}) && break + kubectl config rename-context kind-kind${number} ${name} && break + printf " $i"/100 + sleep 2 + [ $i -lt 100 ] || exit 1 +done + +# Document the local registry +# https://github.com/kubernetes/enhancements/tree/master/keps/sig-cluster-lifecycle/generic/1755-communicating-a-local-registry +cat </dev/null; then + myip=$(hostname -I | awk '{ print $1 }') +else + myip=$(ipconfig getifaddr en0) +fi + +# Function to determine the next available cluster number +get_next_cluster_number() { + if ! kind get clusters 2>&1 | grep "^kind" > /dev/null; then + echo 1 + else + highest_num=$(kind get clusters | grep "^kind" | tail -1 | cut -c 5-) + echo $((highest_num + 1)) + fi +} + +if [ -f /.dockerenv ]; then +myip=$HOST_IP +container=$(docker inspect $(docker ps -q) | jq -r ".[] | select(.Config.Hostname == \"$HOSTNAME\") | .Name" | cut -d/ -f2) +docker network connect "kind" $container || true +number=$(get_next_cluster_number) +twodigits=$(printf "%02d\n" $number) +fi + +reg_name='kind-registry' +reg_port='5000' +docker start "${reg_name}" 2>/dev/null || \ +docker run -d --restart=always -p "0.0.0.0:${reg_port}:5000" --name "${reg_name}" registry:2 + +cache_port='5000' +cat > registries < ${HOME}/.${cache_name}-config.yml </dev/null || \ +docker run -d --restart=always ${DEPLOY_EXTRA_PARAMS} -v ${HOME}/.${cache_name}-config.yml:/etc/docker/registry/config.yml --name "${cache_name}" registry:2 +done +mkdir -p /tmp/oidc + +cat <<'EOF' >/tmp/oidc/sa-signer-pkcs8.pub +-----BEGIN PUBLIC KEY----- +MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA53YiBcrn7+ZK0Vb4odeA +1riYdvEb8To4H6/HtF+OKzuCIXFQ+bRy7yMrDGITYpfYPrTZOgfdeTLZqOiAj+cL +395nvxdly83SUrdh7ItfOPRluuuiPHnFn111wpyjBw5nut4Kx+M5MksNfA1hU0Zw +zIM9OviX8iEF8xHWUtz4BAMDG8N6+zpLo0pAzaei5hKuLZ9dZOzHBC8VOW82cQMm +5X5uOKsCHMtNSjqYUNB1DxN6xxM+odGWT/6xthPGk6YCxmO28YHPFZfiS2eAIpD8 +2p/16KQKU6TkZSrldkYxiHIPhu+5f9faZJG7dB9pLN1SfdTBio4PK5Mz9muLUCv9 +ywIDAQAB +-----END PUBLIC 
KEY----- +EOF + +cat <<'EOF' >/tmp/oidc/sa-signer.key +-----BEGIN RSA PRIVATE KEY----- +MIIEpAIBAAKCAQEA53YiBcrn7+ZK0Vb4odeA1riYdvEb8To4H6/HtF+OKzuCIXFQ ++bRy7yMrDGITYpfYPrTZOgfdeTLZqOiAj+cL395nvxdly83SUrdh7ItfOPRluuui +PHnFn111wpyjBw5nut4Kx+M5MksNfA1hU0ZwzIM9OviX8iEF8xHWUtz4BAMDG8N6 ++zpLo0pAzaei5hKuLZ9dZOzHBC8VOW82cQMm5X5uOKsCHMtNSjqYUNB1DxN6xxM+ +odGWT/6xthPGk6YCxmO28YHPFZfiS2eAIpD82p/16KQKU6TkZSrldkYxiHIPhu+5 +f9faZJG7dB9pLN1SfdTBio4PK5Mz9muLUCv9ywIDAQABAoIBAB8tro+RMYUDRHjG +el9ypAxIeWEsQVNRQFYkW4ZUiNYSAgl3Ni0svX6xAg989peFVL+9pLVIcfDthJxY +FVlNCjBxyQ/YmwHFC9vQkARJEd6eLUXsj8INtS0ubbp1VxCQRDDL0C/0z7OSoJJh +SwboqjEiTJExA2a+RArmEDTBRzdi3t+kT8G23JcqOivrITt17K6bQYyJXw7/vUdc +r/R+hfd5TqVq92VddzDT7RNJAxsbPPXjGnESlq1GALBDs+uBGYsP0fiEJb2nicSv +z9fBnBeERhut1gcE0C0iLRQZb+3r8TitBtxrZv+0BHgXrkKtXDwWTqGEKOwC4dBn +7nxkH2ECgYEA6+/DOTABGYOWOQftFkJMjcugzDrjoGpuXuVOTb65T+3FHAzU93zy +3bt3wQxrlugluyy9Sc/PL3ck2LgUsPHZ+s7zsdGvvGALBD6bOSSKATz9JgjwifO8 +PgqUz1kXRwez2CtKLOOCFFtcIzEdWIzsa1ubNqLzgN7rD+XBkUc2uEcCgYEA+yTy +72EDMQVoIZOygytHsDNdy0iS2RsBbdurT27wkYuFpFUVWdbNSL+8haE+wJHseHcw +BD4WIMpU+hnS4p4OO8+6V7PiXOS5E/se91EJigZAoixgDUiC8ihojWgK9PYEavUo +hULWbayO59SxYWeUI4Ze0GP8Jw8vdB86ib4ulF0CgYEAgyzRuLjk05+iZODwQyDn +WSquov3W0rh51s7cw0LX2wWSQm8r9NGGYhs5kJ5sLwGxAKj2MNSWF4jBdrCZ6Gr+ +y4BGY0X209/+IAUC3jlfdSLIiF4OBlT6AvB1HfclhvtUVUp0OhLfnpvQ1UwYScRI +KcRLvovIoIzP2g3emfwjAz8CgYEAxUHhOhm1mwRHJNBQTuxok0HVMrze8n1eov39 +0RcvBvJSVp+pdHXdqX1HwqHCmxhCZuAeq8ZkNP8WvZYY6HwCbAIdt5MHgbT4lXQR +f2l8F5gPnhFCpExG5ZLNg/urV3oAQE4stHap21zEpdyOMhZb6Yc5424U+EzaFdgN +b3EcPtUCgYAkKvUlSnBbgiJz1iaN6fuTqH0efavuFGMhjNmG7GtpNXdgyl1OWIuc +Yu+tZtHXtKYf3B99GwPrFzw/7yfDwae5YeWmi2/pFTH96wv3brJBqkAWY8G5Rsmd +qF50p34vIFqUBniNRwSArx8t2dq/CuAMgLAtSjh70Q6ZAnCF85PD8Q== +-----END RSA PRIVATE KEY----- +EOF + +echo Contents of kind${number}.yaml +cat << EOF | tee kind${number}.yaml +kind: Cluster +apiVersion: kind.x-k8s.io/v1alpha4 +nodes: +- role: control-plane + image: ${kindest_node} + extraPortMappings: + - containerPort: 6443 + 
hostPort: 70${twodigits} + extraMounts: + - containerPath: /etc/kubernetes/oidc + hostPath: /tmp/oidc + labels: + ingress-ready: true + topology.kubernetes.io/region: ${region} + topology.kubernetes.io/zone: ${zone} +networking: + disableDefaultCNI: true + serviceSubnet: "10.$(echo $twodigits | sed 's/^0*//').0.0/16" + podSubnet: "10.1${twodigits}.0.0/16" +kubeadmConfigPatches: +- | + kind: ClusterConfiguration + apiServer: + extraArgs: + service-account-key-file: /etc/kubernetes/pki/sa.pub + service-account-key-file: /etc/kubernetes/oidc/sa-signer-pkcs8.pub + service-account-signing-key-file: /etc/kubernetes/oidc/sa-signer.key + service-account-issuer: https://solo-workshop-oidc.s3.us-east-1.amazonaws.com + api-audiences: sts.amazonaws.com + extraVolumes: + - name: oidc + hostPath: /etc/kubernetes/oidc + mountPath: /etc/kubernetes/oidc + readOnly: true + metadata: + name: config +containerdConfigPatches: +- |- + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:${reg_port}"] + endpoint = ["http://${reg_name}:${reg_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"] + endpoint = ["http://docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-docker.pkg.dev"] + endpoint = ["http://us-docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-central1-docker.pkg.dev"] + endpoint = ["http://us-central1-docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."quay.io"] + endpoint = ["http://quay:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"] + endpoint = ["http://gcr:${cache_port}"] +EOF +echo ----------------------------------------------------- + +kind create cluster --name kind${number} --config kind${number}.yaml +ipkind=$(docker inspect kind${number}-control-plane | jq -r '.[0].NetworkSettings.Networks[].IPAddress') +networkkind=$(echo ${ipkind} | awk -F. 
'{ print $1"."$2 }')
+kubectl config set-cluster kind-kind${number} --server=https://${myip}:70${twodigits} --insecure-skip-tls-verify=true
+
+# Preload images
+cat << EOF >> images.txt
+quay.io/metallb/controller:v0.13.12
+quay.io/metallb/speaker:v0.13.12
+EOF
+cat images.txt | while read image; do
+  docker pull $image || true
+  kind load docker-image $image --name kind${number} || true
+done
+
+docker network connect "kind" "${reg_name}" || true
+docker network connect "kind" docker || true
+docker network connect "kind" us-docker || true
+docker network connect "kind" us-central1-docker || true
+docker network connect "kind" quay || true
+docker network connect "kind" gcr || true
+# Calico for ipv4
+curl -sL https://raw.githubusercontent.com/projectcalico/calico/v3.28.1/manifests/calico.yaml | sed 's/250m/50m/g' | kubectl --context kind-kind${number} apply -f -
+
+for i in 1 2 3 4 5; do kubectl --context=kind-kind${number} apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml && break || sleep 15; done
+kubectl --context=kind-kind${number} create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
+kubectl --context=kind-kind${number} -n metallb-system rollout status deploy controller || true
+
+cat << EOF | tee metallb${number}.yaml
+apiVersion: metallb.io/v1beta1
+kind: IPAddressPool
+metadata:
+  name: first-pool
+  namespace: metallb-system
+spec:
+  addresses:
+  - ${networkkind}.1${twodigits}.1-${networkkind}.1${twodigits}.254
+---
+apiVersion: metallb.io/v1beta1
+kind: L2Advertisement
+metadata:
+  name: empty
+  namespace: metallb-system
+EOF
+
+printf "Create IPAddressPool in kind-kind${number}\n"
+for i in {1..10}; do
+kubectl --context=kind-kind${number} apply -f metallb${number}.yaml && break
+sleep 2
+done
+
+# connect the registry to the cluster network if not already connected
+printf "Renaming context kind-kind${number} to ${name}\n"
+for i in {1..100}; do
+ (kubectl config get-contexts -oname | grep ${name}) && break + kubectl config rename-context kind-kind${number} ${name} && break + printf " $i"/100 + sleep 2 + [ $i -lt 100 ] || exit 1 +done + +# Document the local registry +# https://github.com/kubernetes/enhancements/tree/master/keps/sig-cluster-lifecycle/generic/1755-communicating-a-local-registry +cat </dev/null || true +source ./scripts/assert.sh +export MGMT=mgmt +export CLUSTER1=cluster1 +export CLUSTER2=cluster2 +bash ./data/steps/deploy-kind-clusters/deploy-mgmt.sh +bash ./data/steps/deploy-kind-clusters/deploy-cluster1.sh +bash ./data/steps/deploy-kind-clusters/deploy-cluster2.sh +./scripts/check.sh mgmt +./scripts/check.sh cluster1 +./scripts/check.sh cluster2 +kubectl config use-context ${MGMT} +cat <<'EOF' > ./test.js +const helpers = require('./tests/chai-exec'); + +describe("Clusters are healthy", () => { + const clusters = ["mgmt", "cluster1", "cluster2"]; + + clusters.forEach(cluster => { + it(`Cluster ${cluster} is healthy`, () => helpers.k8sObjectIsPresent({ context: cluster, namespace: "default", k8sType: "service", k8sObj: "kubernetes" })); + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-kind-clusters/tests/cluster-healthy.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +export GITEA_HTTP=http://git.example.com:3180 + +helm upgrade --install gitea gitea \ + --repo https://dl.gitea.com/charts/ \ + --version 10.4.1 \ + --kube-context ${MGMT} \ + --namespace gitea \ + --create-namespace \ + --wait \ + -f -< ./test.js +const chaiExec = require("@jsdevtools/chai-exec"); +var chai = require('chai'); +var expect = chai.expect; +chai.use(chaiExec); + +afterEach(function (done) { + if (this.currentTest.currentRetry() > 0) { + process.stdout.write("."); + setTimeout(done, 1000); + } else { + done(); + } +}); + +describe("Gitea load balancer IP 
address", () => { + it("is assigned", () => { + let cli = chaiExec("kubectl --context " + process.env.MGMT + " -n gitea get svc gitea-http -o jsonpath='{.status.loadBalancer}'"); + expect(cli).to.exit.with.code(0); + expect(cli).output.to.contain('"ingress"'); + }); +}); + +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-hosted-git/tests/get-gitea-http-ip.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +GITEA_ADMIN_TOKEN=$(curl -Ss ${GITEA_HTTP}/api/v1/users/gitea_admin/tokens \ + -H "Content-Type: application/json" \ + -d '{"name": "workshop", "scopes": ["write:admin", "write:repository"]}' \ + -u 'gitea_admin:r8sA8CPHD9!bt6d' \ + | jq -r .sha1) +echo export GITEA_ADMIN_TOKEN=${GITEA_ADMIN_TOKEN} >> ~/.env + +curl -i ${GITEA_HTTP}/api/v1/admin/users \ + -H "accept: application/json" -H "Content-Type: application/json" \ + -H "Authorization: token ${GITEA_ADMIN_TOKEN}" \ + -d '{ + "username": "gloo-gitops", + "password": "password", + "email": "gloo-gitops@solo.io", + "full_name": "Solo.io GitOps User", + "must_change_password": false + }' +ARGOCD_WEBHOOK_SECRET=$(shuf -ern32 {A..Z} {a..z} {0..9} | paste -sd "\0" -) + +helm upgrade --install argo-cd argo-cd \ + --repo https://argoproj.github.io/argo-helm \ + --version 7.5.2 \ + --kube-context ${MGMT} \ + --namespace argocd \ + --create-namespace \ + --wait \ + -f -< ${GITOPS_ARGOCD}/argo-cd.yaml +apiVersion: argoproj.io/v1alpha1 +kind: AppProject +metadata: + name: argo-cd + annotations: + argocd.argoproj.io/sync-wave: "-1" + finalizers: + - resources-finalizer.argocd.argoproj.io +spec: + sourceRepos: + - '*' + destinations: + - namespace: '*' + server: '*' + clusterResourceWhitelist: + - group: '*' + kind: '*' +--- +apiVersion: argoproj.io/v1alpha1 +kind: Application +metadata: + name: argocd-${MGMT} + finalizers: + - resources-finalizer.argocd.argoproj.io/background 
+spec: + project: argo-cd + sources: + - repoURL: ${GITEA_HTTP}/gloo-gitops/gitops-repo.git + targetRevision: HEAD + path: argo-cd + destination: + name: ${MGMT} + namespace: argocd + syncPolicy: + automated: + allowEmpty: true + prune: true + syncOptions: + - ApplyOutOfSyncOnly=true +EOF + +kubectl --context ${MGMT} -n argocd create -f ${GITOPS_ARGOCD}/argo-cd.yaml +git -C ${GITOPS_REPO_LOCAL} add . +git -C ${GITOPS_REPO_LOCAL} commit -m "Manage argo-cd config" +git -C ${GITOPS_REPO_LOCAL} push +cat <<'EOF' > ./test.js +const chaiExec = require("@jsdevtools/chai-exec"); +var chai = require('chai'); +var expect = chai.expect; +chai.use(chaiExec); + +afterEach(function (done) { + if (this.currentTest.currentRetry() > 0) { + process.stdout.write("."); + setTimeout(done, 1000); + } else { + done(); + } +}); + +describe("Argo CD config", () => { + it("syncs to mgmt cluster", () => { + let cli = chaiExec(process.env.HOME + "/bin/argocd --kube-context " + process.env.MGMT + " app get argocd-" + process.env.MGMT); + expect(cli).to.exit.with.code(0); + expect(cli).to.have.output.that.matches(new RegExp("\\bServer:\\s+" + process.env.MGMT + "\\b")); + expect(cli).to.have.output.that.matches(new RegExp("\\bRepo:\\s+" + process.env.GITEA_HTTP + "/gloo-gitops/gitops-repo.git\\b")); + expect(cli).to.have.output.that.matches(new RegExp("\\bPath:\\s+argo-cd\\b")); + expect(cli).to.have.output.that.matches(new RegExp("\\bHealth Status:\\s+Healthy\\b")); + }); +}); + +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-argo-cd/tests/argo-cd-sync-repo.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +curl -i ${GITEA_HTTP}/api/v1/repos/gloo-gitops/gitops-repo/hooks \ + -H "accept: application/json" -H "Content-Type: application/json" \ + -H "Authorization: token ${GITEA_ADMIN_TOKEN}" \ + -d '{ + "active": true, + "type": "gitea", + 
"branch_filter": "*", + "config": { + "content_type": "json", + "url": "'http://${ARGOCD_HTTP_IP}:3280/api/webhook'", + "secret": "'${ARGOCD_WEBHOOK_SECRET}'" + }, + "events": [ + "push" + ] + }' +cat < ${GITOPS_ARGOCD}/nginx.yaml +apiVersion: v1 +kind: Pod +metadata: + name: nginx + namespace: default +spec: + containers: + - image: nginx:1.25.3 + name: nginx +EOF +git -C ${GITOPS_REPO_LOCAL} add . +git -C ${GITOPS_REPO_LOCAL} commit -m "Add nginx" +git -C ${GITOPS_REPO_LOCAL} push +echo -n Waiting for Argo CD to sync... +timeout -v 5m bash -c "until [[ \$(kubectl --context ${MGMT} -n default get pod nginx 2>/dev/null) ]]; do + sleep 1 + echo -n . +done" +echo +timeout 2m bash -c "until [[ \$(kubectl --context ${MGMT} -n default wait --for=condition=ready pod/nginx --timeout=30s 2>/dev/null) ]]; do + sleep 1 +done" +if [[ ! $(kubectl --context ${MGMT} -n default wait --for=condition=ready pod/nginx --timeout=30s) ]]; then + echo "nginx did not become ready" + exit 1 +fi +until kubectl --context ${MGMT} -n default wait --for=condition=ready pod/nginx --timeout=30s 2>/dev/null; do sleep 1; done +git -C ${GITOPS_REPO_LOCAL} revert --no-commit HEAD +git -C ${GITOPS_REPO_LOCAL} commit -m "Delete nginx" +git -C ${GITOPS_REPO_LOCAL} push + +kubectl --context ${MGMT} -n default wait --for=delete pod/nginx --timeout=30s +export GLOO_MESH_VERSION=v2.5.12 +curl -sL https://run.solo.io/meshctl/install | sh - +export PATH=$HOME/.gloo-mesh/bin:$PATH +cat <<'EOF' > ./test.js +var chai = require('chai'); +var expect = chai.expect; + +describe("Required environment variables should contain value", () => { + afterEach(function(done){ + if(this.currentTest.currentRetry() > 0){ + process.stdout.write("."); + setTimeout(done, 1000); + } else { + done(); + } + }); + + it("Context environment variables should not be empty", () => { + expect(process.env.MGMT).not.to.be.empty + expect(process.env.CLUSTER1).not.to.be.empty + expect(process.env.CLUSTER2).not.to.be.empty + }); + + it("Gloo 
Mesh licence environment variables should not be empty", () => { + expect(process.env.GLOO_MESH_LICENSE_KEY).not.to.be.empty + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-and-register-gloo-mesh/tests/environment-variables.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +export GITOPS_PLATFORM=${GITOPS_REPO_LOCAL}/platform +mkdir -p ${GITOPS_PLATFORM}/${MGMT} +cat < ${GITOPS_ARGOCD}/platform.yaml +apiVersion: argoproj.io/v1alpha1 +kind: AppProject +metadata: + name: platform + annotations: + argocd.argoproj.io/sync-wave: "-1" + finalizers: + - resources-finalizer.argocd.argoproj.io +spec: + sourceRepos: + - '*' + destinations: + - namespace: '*' + server: '*' + clusterResourceWhitelist: + - group: '*' + kind: '*' +--- +apiVersion: argoproj.io/v1alpha1 +kind: ApplicationSet +metadata: + name: platform +spec: + generators: + - list: + elements: + - cluster: ${MGMT} + - cluster: ${CLUSTER1} + - cluster: ${CLUSTER2} + template: + metadata: + name: platform-{{cluster}} + finalizers: + - resources-finalizer.argocd.argoproj.io/background + spec: + project: platform + source: + repoURL: ${GITEA_HTTP}/gloo-gitops/gitops-repo.git + targetRevision: HEAD + path: platform/{{cluster}} + destination: + name: '{{cluster}}' + namespace: default + syncPolicy: + automated: + allowEmpty: true + prune: true + syncOptions: + - ApplyOutOfSyncOnly=true +EOF +mkdir -p ${GITOPS_PLATFORM}/argo-cd + +cat < ${GITOPS_PLATFORM}/argo-cd/gloo-platform-mgmt-installation.yaml +apiVersion: argoproj.io/v1alpha1 +kind: Application +metadata: + name: gloo-platform-mgmt-installation + annotations: + argocd.argoproj.io/sync-wave: "0" + finalizers: + - resources-finalizer.argocd.argoproj.io/background +spec: + project: platform + destination: + name: ${MGMT} + namespace: gloo-mesh + syncPolicy: + automated: + allowEmpty: true + prune: true + 
syncOptions:
+    - CreateNamespace=true
+  ignoreDifferences:
+  - kind: Secret
+    jsonPointers:
+    - /data/ca.crt
+    - /data/tls.crt
+    - /data/tls.key
+    - /data/token
+  - group: certificate.cert-manager.io
+    kind: Certificate
+    jsonPointers:
+    - /spec/duration
+    - /spec/renewBefore
+  sources:
+  - chart: gloo-platform-crds
+    repoURL: https://storage.googleapis.com/gloo-platform/helm-charts
+    targetRevision: 2.5.12
+    helm:
+      releaseName: gloo-platform-crds
+      parameters:
+      - name: "featureGates.ExternalWorkloads"
+        value: "true"
+  - chart: gloo-platform
+    repoURL: https://storage.googleapis.com/gloo-platform/helm-charts
+    targetRevision: 2.5.12
+    helm:
+      releaseName: gloo-platform
+      valueFiles:
+      - \$values/platform/argo-cd/gloo-platform-mgmt-installation-values.yaml
+  - repoURL: http://$(kubectl --context ${MGMT} -n gitea get svc gitea-http -o jsonpath='{.status.loadBalancer.ingress[0].*}'):3180/gloo-gitops/gitops-repo.git
+    targetRevision: HEAD
+    ref: values
+EOF
+cat <<EOF > ${GITOPS_PLATFORM}/argo-cd/gloo-platform-mgmt-installation-values.yaml
+licensing:
+  glooTrialLicenseKey: ${GLOO_MESH_LICENSE_KEY}
+common:
+  cluster: mgmt
+glooInsightsEngine:
+  enabled: false
+glooMgmtServer:
+  enabled: true
+  ports:
+    healthcheck: 8091
+prometheus:
+  enabled: true
+  skipAutoMigration: true
+redis:
+  deployment:
+    enabled: true
+telemetryGateway:
+  enabled: true
+  service:
+    type: LoadBalancer
+glooUi:
+  enabled: true
+  serviceType: LoadBalancer
+telemetryCollector:
+  enabled: true
+  config:
+    exporters:
+      otlp:
+        endpoint: gloo-telemetry-gateway:4317
+featureGates:
+  ExternalWorkloads: true
+EOF
+cat <<EOF >${GITOPS_PLATFORM}/argo-cd/kustomization.yaml
+namespace: argocd
+resources:
+- gloo-platform-mgmt-installation.yaml
+EOF
+
+cat <<EOF >${GITOPS_PLATFORM}/${MGMT}/kustomization.yaml
+resources:
+- ../argo-cd
+EOF
+git -C ${GITOPS_REPO_LOCAL} add .
+git -C ${GITOPS_REPO_LOCAL} commit -m "Gloo Platform management server"
+git -C ${GITOPS_REPO_LOCAL} push
+echo -n Waiting for Argo CD to sync...
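All of the waiting steps in this workshop follow the same retry-until-deadline shape (`timeout` wrapping an `until` loop). As an aside, that pattern can be factored into a small reusable helper; the sketch below is illustrative only and is not part of the workshop scripts:

```bash
#!/bin/bash
# retry <attempts> <delay-seconds> <command...>
# Runs the command until it succeeds or the attempts are exhausted.
retry() {
  local attempts=$1 delay=$2 i
  shift 2
  for ((i = 1; i <= attempts; i++)); do
    "$@" && return 0
    sleep "$delay"
  done
  return 1
}

# Succeeds on the first attempt, so the success branch runs.
retry 3 0 true && echo "synced"
# Never succeeds, so the fallback branch runs after two attempts.
retry 2 0 false || echo "timed out"
```

With such a helper, a wait like the one below could be expressed as `retry 300 1 kubectl --context ${MGMT} -n argocd get application gloo-platform-mgmt-installation`.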
+timeout -v 5m bash -c "until [[ \$(kubectl --context ${MGMT} -n argocd get application gloo-platform-mgmt-installation 2>/dev/null) ]]; do + sleep 1 + echo -n . +done" +echo +timeout 2m bash -c "until [[ \$(kubectl --context ${MGMT} -n gloo-mesh rollout status deploy/gloo-mesh-mgmt-server 2>/dev/null) ]]; do + sleep 1 +done" +if [[ ! $(kubectl --context ${MGMT} -n gloo-mesh rollout status deploy/gloo-mesh-mgmt-server --timeout 10s) ]]; then + echo "Gloo Mesh Management Server did not deploy" + exit 1 +fi +until kubectl --context ${MGMT} -n gloo-mesh rollout status deploy/gloo-mesh-mgmt-server 2>/dev/null; do sleep 1; done +kubectl wait --context ${MGMT} --for=condition=Ready -n gloo-mesh --all pod +timeout 2m bash -c "until [[ \$(kubectl --context ${MGMT} -n gloo-mesh get svc gloo-mesh-mgmt-server -o json | jq '.status.loadBalancer | length') -gt 0 ]]; do + sleep 1 +done" +cat <<'EOF' > ./test.js + +const helpers = require('./tests/chai-exec'); + +describe("MGMT server is healthy", () => { + let cluster = process.env.MGMT; + let deployments = ["gloo-mesh-mgmt-server","gloo-mesh-redis","gloo-telemetry-gateway","prometheus-server"]; + deployments.forEach(deploy => { + it(deploy + ' pods are ready in ' + cluster, () => helpers.checkDeployment({ context: cluster, namespace: "gloo-mesh", k8sObj: deploy })); + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-and-register-gloo-mesh/tests/check-deployment.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +cat <<'EOF' > ./test.js +const chaiExec = require("@jsdevtools/chai-exec"); +var chai = require('chai'); +var expect = chai.expect; +chai.use(chaiExec); + +afterEach(function (done) { + if (this.currentTest.currentRetry() > 0) { + process.stdout.write("."); + setTimeout(done, 1000); + } else { + done(); + } +}); +EOF +echo "executing test 
dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-and-register-gloo-mesh/tests/get-gloo-mesh-mgmt-server-ip.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +export ENDPOINT_GLOO_MESH=$(kubectl --context ${MGMT} -n gloo-mesh get svc gloo-mesh-mgmt-server -o jsonpath='{.status.loadBalancer.ingress[0].*}'):9900 +export HOST_GLOO_MESH=$(echo ${ENDPOINT_GLOO_MESH%:*}) +export ENDPOINT_TELEMETRY_GATEWAY=$(kubectl --context ${MGMT} -n gloo-mesh get svc gloo-telemetry-gateway -o jsonpath='{.status.loadBalancer.ingress[0].*}'):4317 +export ENDPOINT_GLOO_MESH_UI=$(kubectl --context ${MGMT} -n gloo-mesh get svc gloo-mesh-ui -o jsonpath='{.status.loadBalancer.ingress[0].*}'):8090 +cat <<'EOF' > ./test.js +const dns = require('dns'); +const chaiHttp = require("chai-http"); +const chai = require("chai"); +const expect = chai.expect; +chai.use(chaiHttp); +const { waitOnFailedTest } = require('./tests/utils'); + +afterEach(function(done) { waitOnFailedTest(done, this.currentTest.currentRetry())}); + +describe("Address '" + process.env.HOST_GLOO_MESH + "' can be resolved in DNS", () => { + it(process.env.HOST_GLOO_MESH + ' can be resolved', (done) => { + return dns.lookup(process.env.HOST_GLOO_MESH, (err, address, family) => { + expect(address).to.be.an.ip; + done(); + }); + }); +}); +EOF +echo "executing test ./gloo-mesh-2-0/tests/can-resolve.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +cat < ${GITOPS_PLATFORM}/${MGMT}/cluster1.yaml +apiVersion: admin.gloo.solo.io/v2 +kind: KubernetesCluster +metadata: + name: cluster1 + namespace: gloo-mesh +spec: + clusterDomain: cluster.local +EOF + +cat < ${GITOPS_PLATFORM}/${MGMT}/cluster2.yaml +apiVersion: admin.gloo.solo.io/v2 +kind: KubernetesCluster +metadata: + name: cluster2 + namespace: gloo-mesh +spec: + 
clusterDomain: cluster.local
+EOF
+
+cat <<EOF >>${GITOPS_PLATFORM}/${MGMT}/kustomization.yaml
+- cluster1.yaml
+- cluster2.yaml
+EOF
+mkdir -p ${GITOPS_PLATFORM}/${CLUSTER1}
+
+cat <<EOF >${GITOPS_PLATFORM}/${CLUSTER1}/ns-gloo-mesh.yaml
+apiVersion: v1
+kind: Namespace
+metadata:
+  name: gloo-mesh
+EOF
+
+cat <<EOF >${GITOPS_PLATFORM}/${CLUSTER1}/relay-secrets.yaml
+apiVersion: v1
+kind: Secret
+metadata:
+  name: relay-root-tls-secret
+  namespace: gloo-mesh
+data:
+  ca.crt: $(kubectl --context ${MGMT} -n gloo-mesh get secret relay-root-tls-secret -o jsonpath='{.data.ca\.crt}')
+---
+apiVersion: v1
+kind: Secret
+metadata:
+  name: relay-identity-token-secret
+  namespace: gloo-mesh
+data:
+  token: $(kubectl --context ${MGMT} -n gloo-mesh get secret relay-identity-token-secret -o jsonpath='{.data.token}')
+EOF
+
+cat <<EOF >${GITOPS_PLATFORM}/${CLUSTER1}/kustomization.yaml
+commonAnnotations:
+  argocd.argoproj.io/sync-wave: "1"
+resources:
+- ns-gloo-mesh.yaml
+- relay-secrets.yaml
+EOF
+cp -r ${GITOPS_PLATFORM}/${CLUSTER1} ${GITOPS_PLATFORM}/${CLUSTER2}
+cat <<EOF >${GITOPS_PLATFORM}/argo-cd/gloo-platform-agents-installation.yaml
+apiVersion: argoproj.io/v1alpha1
+kind: ApplicationSet
+metadata:
+  name: gloo-platform-agents-installation
+spec:
+  generators:
+  - list:
+      elements:
+      - cluster: ${CLUSTER1}
+      - cluster: ${CLUSTER2}
+  template:
+    metadata:
+      name: gloo-platform-{{cluster}}-installation
+      annotations:
+        argocd.argoproj.io/sync-wave: "2"
+      finalizers:
+      - resources-finalizer.argocd.argoproj.io/background
+    spec:
+      project: platform
+      destination:
+        name: '{{cluster}}'
+        namespace: gloo-mesh
+      syncPolicy:
+        automated:
+          prune: true
+      ignoreDifferences:
+      - group: apiextensions.k8s.io
+        kind: CustomResourceDefinition
+        name: istiooperators.install.istio.io
+        jsonPointers:
+        - /metadata/labels
+      - kind: Secret
+        name: postgresql
+        jsonPointers:
+        - /data/postgres-password
+      - group: certificate.cert-manager.io
+        kind: Certificate
+        jsonPointers:
+        - /spec/duration
+        - /spec/renewBefore
sources: + - chart: gloo-platform-crds + repoURL: https://storage.googleapis.com/gloo-platform/helm-charts + targetRevision: 2.5.12 + helm: + releaseName: gloo-platform-crds + parameters: + - name: "featureGates.ExternalWorkloads" + value: "true" + - chart: gloo-platform + repoURL: https://storage.googleapis.com/gloo-platform/helm-charts + targetRevision: 2.5.12 + helm: + releaseName: gloo-platform + valueFiles: + - \$values/platform/argo-cd/gloo-platform-agents-installation-values.yaml + parameters: + - name: common.cluster + value: '{{cluster}}' + - name: "glooSpireServer.server.trustDomain" + value: '{{cluster}}' + - repoURL: http://$(kubectl --context ${MGMT} -n gitea get svc gitea-http -o jsonpath='{.status.loadBalancer.ingress[0].*}'):3180/gloo-gitops/gitops-repo.git + targetRevision: HEAD + ref: values +EOF +cat < ${GITOPS_PLATFORM}/argo-cd/gloo-platform-agents-installation-values.yaml +common: + cluster: undefined +glooAgent: + enabled: true + relay: + serverAddress: "${ENDPOINT_GLOO_MESH}" + authority: gloo-mesh-mgmt-server.gloo-mesh +telemetryCollector: + enabled: true + config: + exporters: + otlp: + endpoint: "${ENDPOINT_TELEMETRY_GATEWAY}" +EOF +cat <>${GITOPS_PLATFORM}/argo-cd/kustomization.yaml +- gloo-platform-agents-installation.yaml +EOF +git -C ${GITOPS_REPO_LOCAL} add . +git -C ${GITOPS_REPO_LOCAL} commit -m "Onboard workload clusters" +git -C ${GITOPS_REPO_LOCAL} push +echo -n Waiting for Argo CD to sync... +timeout -v 5m bash -c "until [[ \$(kubectl --context ${MGMT} -n gloo-mesh get kubernetescluster cluster1 2>/dev/null) ]]; do + sleep 1 + echo -n . 
+done"
+echo
+mkdir -p ${GITOPS_PLATFORM}/${MGMT}/workspaces
+cat <<EOF > ${GITOPS_PLATFORM}/${MGMT}/workspaces/workspace-global.yaml
+apiVersion: admin.gloo.solo.io/v2
+kind: WorkspaceSettings
+metadata:
+  name: global
+  namespace: gloo-mesh
+spec:
+  options:
+    eastWestGateways:
+    - selector:
+        labels:
+          istio: eastwestgateway
+EOF
+
+cat <<EOF >${GITOPS_PLATFORM}/${MGMT}/workspaces/kustomization.yaml
+commonAnnotations:
+  argocd.argoproj.io/sync-wave: "2"
+resources:
+- workspace-global.yaml
+EOF
+
+cat <<EOF >>${GITOPS_PLATFORM}/${MGMT}/kustomization.yaml
+- workspaces
+EOF
+
+git -C ${GITOPS_REPO_LOCAL} add .
+git -C ${GITOPS_REPO_LOCAL} commit -m "Indicate east-west gateway"
+git -C ${GITOPS_REPO_LOCAL} push
+cat <<'EOF' > ./test.js
+var chai = require('chai');
+var expect = chai.expect;
+const helpers = require('./tests/chai-exec');
+describe("Cluster registration", () => {
+  it("cluster1 is registered", () => {
+    podName = helpers.getOutputForCommand({ command: "kubectl -n gloo-mesh get pods -l app=gloo-mesh-mgmt-server -o jsonpath='{.items[0].metadata.name}' --context " + process.env.MGMT }).replaceAll("'", "");
+    command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh debug -q -i " + podName + " --image=curlimages/curl -- curl -s http://localhost:9091/metrics" }).replaceAll("'", "");
+    expect(command).to.contain("cluster1");
+  });
+  it("cluster2 is registered", () => {
+    podName = helpers.getOutputForCommand({ command: "kubectl -n gloo-mesh get pods -l app=gloo-mesh-mgmt-server -o jsonpath='{.items[0].metadata.name}' --context " + process.env.MGMT }).replaceAll("'", "");
+    command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh debug -q -i " + podName + " --image=curlimages/curl -- curl -s http://localhost:9091/metrics" }).replaceAll("'", "");
+    expect(command).to.contain("cluster2");
+  });
+});
+EOF
+echo "executing test
dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-and-register-gloo-mesh/tests/cluster-registration.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +curl -L https://istio.io/downloadIstio | sh - + +if [ -d "istio-"*/ ]; then + cd istio-*/ + export PATH=$PWD/bin:$PATH + cd .. +fi +cat <<'EOF' > ./test.js +const chaiExec = require("@jsdevtools/chai-exec"); +var chai = require('chai'); +var expect = chai.expect; +chai.use(chaiExec); + +afterEach(function (done) { + if (this.currentTest.currentRetry() > 0) { + process.stdout.write("."); + setTimeout(done, 1000); + } else { + done(); + } +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/istio-lifecycle-manager-install/tests/istio-version.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +export GITOPS_GATEWAYS=${GITOPS_REPO_LOCAL}/gateways +mkdir -p ${GITOPS_GATEWAYS} +cat < ${GITOPS_ARGOCD}/gateways.yaml +apiVersion: argoproj.io/v1alpha1 +kind: AppProject +metadata: + name: gateways + annotations: + argocd.argoproj.io/sync-wave: "-1" + finalizers: + - resources-finalizer.argocd.argoproj.io +spec: + sourceRepos: + - '*' + destinations: + - namespace: '*' + server: '*' + clusterResourceWhitelist: + - group: '*' + kind: '*' +--- +apiVersion: argoproj.io/v1alpha1 +kind: ApplicationSet +metadata: + name: gateways +spec: + generators: + - list: + elements: + - cluster: ${MGMT} + - cluster: ${CLUSTER1} + - cluster: ${CLUSTER2} + template: + metadata: + name: gateways-{{cluster}} + finalizers: + - resources-finalizer.argocd.argoproj.io/background + spec: + project: gateways + source: + repoURL: ${GITEA_HTTP}/gloo-gitops/gitops-repo.git + targetRevision: HEAD + path: gateways/{{cluster}} + destination: + name: '{{cluster}}' + namespace: gloo-mesh + syncPolicy: + automated: 
+ allowEmpty: true + prune: true + syncOptions: + - ApplyOutOfSyncOnly=true +EOF +mkdir -p ${GITOPS_GATEWAYS}/base/gateway-services + +cat < ${GITOPS_GATEWAYS}/base/gateway-services/ns.yaml +apiVersion: v1 +kind: Namespace +metadata: + name: istio-gateways + labels: + istio.io/rev: 1-20 +EOF + +cat <${GITOPS_GATEWAYS}/base/gateway-services/kustomization.yaml +commonAnnotations: + argocd.argoproj.io/sync-wave: "3" +resources: +- ns.yaml +EOF + +cat < ${GITOPS_GATEWAYS}/base/gateway-services/ingress.yaml +apiVersion: v1 +kind: Service +metadata: + labels: + app: istio-ingressgateway + istio: ingressgateway + name: istio-ingressgateway + namespace: istio-gateways +spec: + ports: + - name: http2 + port: 80 + protocol: TCP + targetPort: 8080 + - name: https + port: 443 + protocol: TCP + targetPort: 8443 + selector: + app: istio-ingressgateway + istio: ingressgateway + revision: 1-20 + type: LoadBalancer +EOF + +cat < ${GITOPS_GATEWAYS}/base/gateway-services/east-west.yaml +apiVersion: v1 +kind: Service +metadata: + labels: + app: istio-ingressgateway + istio: eastwestgateway + topology.istio.io/network: cluster1 + name: istio-eastwestgateway + namespace: istio-gateways +spec: + ports: + - name: status-port + port: 15021 + protocol: TCP + targetPort: 15021 + - name: tls + port: 15443 + protocol: TCP + targetPort: 15443 + - name: https + port: 16443 + protocol: TCP + targetPort: 16443 + - name: tls-spire + port: 8081 + protocol: TCP + targetPort: 8081 + - name: tls-otel + port: 4317 + protocol: TCP + targetPort: 4317 + - name: grpc-cacert + port: 31338 + protocol: TCP + targetPort: 31338 + - name: grpc-ew-bootstrap + port: 31339 + protocol: TCP + targetPort: 31339 + - name: tcp-istiod + port: 15012 + protocol: TCP + targetPort: 15012 + - name: tcp-webhook + port: 15017 + protocol: TCP + targetPort: 15017 + selector: + app: istio-ingressgateway + istio: eastwestgateway + revision: 1-20 + topology.istio.io/network: cluster1 + type: LoadBalancer +EOF + +cat 
<>${GITOPS_GATEWAYS}/base/gateway-services/kustomization.yaml +- ingress.yaml +- east-west.yaml +EOF +mkdir -p ${GITOPS_GATEWAYS}/${CLUSTER1}/services + +cat < ${GITOPS_GATEWAYS}/${CLUSTER1}/services/kustomization.yaml +patches: +- target: + kind: Namespace + name: istio-system + patch: |- + - op: replace + path: /metadata/labels/topology.istio.io~1network + value: cluster1 +- target: + kind: Service + name: istio-eastwestgateway + patch: |- + - op: replace + path: /metadata/labels/topology.istio.io~1network + value: cluster1 + - op: replace + path: /spec/selector/topology.istio.io~1network + value: cluster1 +resources: +- ../../base/gateway-services +EOF + +cat <${GITOPS_GATEWAYS}/${CLUSTER1}/kustomization.yaml +resources: +- services +EOF + +mkdir -p ${GITOPS_GATEWAYS}/${CLUSTER2}/services + +cat < ${GITOPS_GATEWAYS}/${CLUSTER2}/services/kustomization.yaml +patches: +- target: + kind: Namespace + name: istio-system + patch: |- + - op: replace + path: /metadata/labels/topology.istio.io~1network + value: cluster2 +- target: + kind: Service + name: istio-eastwestgateway + patch: |- + - op: replace + path: /metadata/labels/topology.istio.io~1network + value: cluster2 + - op: replace + path: /spec/selector/topology.istio.io~1network + value: cluster2 +resources: +- ../../base/gateway-services +EOF + +cat <${GITOPS_GATEWAYS}/${CLUSTER2}/kustomization.yaml +resources: +- services +EOF +git -C ${GITOPS_REPO_LOCAL} add . 
+git -C ${GITOPS_REPO_LOCAL} commit -m "Gateway services" +git -C ${GITOPS_REPO_LOCAL} push +mkdir -p ${GITOPS_PLATFORM}/${MGMT}/istio + +cat < ${GITOPS_PLATFORM}/${MGMT}/istio/ilm-cluster1.yaml +apiVersion: admin.gloo.solo.io/v2 +kind: IstioLifecycleManager +metadata: + name: cluster1-installation + namespace: gloo-mesh +spec: + installations: + - clusters: + - name: cluster1 + defaultRevision: true + revision: 1-20 + istioOperatorSpec: + profile: minimal + hub: us-docker.pkg.dev/gloo-mesh/istio-workshops + tag: 1.20.2-solo + namespace: istio-system + values: + global: + meshID: mesh1 + multiCluster: + clusterName: cluster1 + network: cluster1 + cni: + excludeNamespaces: + - istio-system + - kube-system + logLevel: info + meshConfig: + accessLogFile: /dev/stdout + defaultConfig: + proxyMetadata: + ISTIO_META_DNS_CAPTURE: "true" + ISTIO_META_DNS_AUTO_ALLOCATE: "true" + components: + pilot: + k8s: + env: + - name: PILOT_ENABLE_K8S_SELECT_WORKLOAD_ENTRIES + value: "false" + - name: PILOT_ENABLE_IP_AUTOALLOCATE + value: "true" + cni: + enabled: true + namespace: kube-system + ingressGateways: + - name: istio-ingressgateway + enabled: false +EOF + +cat < ${GITOPS_PLATFORM}/${MGMT}/istio/ilm-cluster2.yaml +apiVersion: admin.gloo.solo.io/v2 +kind: IstioLifecycleManager +metadata: + name: cluster2-installation + namespace: gloo-mesh +spec: + installations: + - clusters: + - name: cluster2 + defaultRevision: true + revision: 1-20 + istioOperatorSpec: + profile: minimal + hub: us-docker.pkg.dev/gloo-mesh/istio-workshops + tag: 1.20.2-solo + namespace: istio-system + values: + global: + meshID: mesh1 + multiCluster: + clusterName: cluster2 + network: cluster2 + cni: + excludeNamespaces: + - istio-system + - kube-system + logLevel: info + meshConfig: + accessLogFile: /dev/stdout + defaultConfig: + proxyMetadata: + ISTIO_META_DNS_CAPTURE: "true" + ISTIO_META_DNS_AUTO_ALLOCATE: "true" + components: + pilot: + k8s: + env: + - name: PILOT_ENABLE_K8S_SELECT_WORKLOAD_ENTRIES + 
value: "false" + - name: PILOT_ENABLE_IP_AUTOALLOCATE + value: "true" + cni: + enabled: true + namespace: kube-system + ingressGateways: + - name: istio-ingressgateway + enabled: false +EOF + +cat <${GITOPS_PLATFORM}/${MGMT}/istio/kustomization.yaml +commonAnnotations: + argocd.argoproj.io/sync-wave: "3" +resources: +- ilm-cluster1.yaml +- ilm-cluster2.yaml +EOF + +cat <>${GITOPS_PLATFORM}/${MGMT}/kustomization.yaml +- istio +EOF +mkdir -p ${GITOPS_GATEWAYS}/${MGMT} + +cat < ${GITOPS_GATEWAYS}/${MGMT}/glm-cluster1.yaml +apiVersion: admin.gloo.solo.io/v2 +kind: GatewayLifecycleManager +metadata: + name: cluster1-ingress + namespace: gloo-mesh +spec: + installations: + - clusters: + - name: cluster1 + activeGateway: false + gatewayRevision: 1-20 + istioOperatorSpec: + profile: empty + hub: us-docker.pkg.dev/gloo-mesh/istio-workshops + tag: 1.20.2-solo + values: + gateways: + istio-ingressgateway: + customService: true + components: + ingressGateways: + - name: istio-ingressgateway + namespace: istio-gateways + enabled: true + label: + istio: ingressgateway +--- +apiVersion: admin.gloo.solo.io/v2 +kind: GatewayLifecycleManager +metadata: + name: cluster1-eastwest + namespace: gloo-mesh +spec: + installations: + - clusters: + - name: cluster1 + activeGateway: false + gatewayRevision: 1-20 + istioOperatorSpec: + profile: empty + hub: us-docker.pkg.dev/gloo-mesh/istio-workshops + tag: 1.20.2-solo + values: + gateways: + istio-ingressgateway: + customService: true + components: + ingressGateways: + - name: istio-eastwestgateway + namespace: istio-gateways + enabled: true + label: + istio: eastwestgateway + topology.istio.io/network: cluster1 + k8s: + env: + - name: ISTIO_META_ROUTER_MODE + value: "sni-dnat" + - name: ISTIO_META_REQUESTED_NETWORK_VIEW + value: cluster1 +EOF + +cat < ${GITOPS_GATEWAYS}/${MGMT}/glm-cluster2.yaml +apiVersion: admin.gloo.solo.io/v2 +kind: GatewayLifecycleManager +metadata: + name: cluster2-ingress + namespace: gloo-mesh +spec: + installations: 
+ - clusters: + - name: cluster2 + activeGateway: false + gatewayRevision: 1-20 + istioOperatorSpec: + profile: empty + hub: us-docker.pkg.dev/gloo-mesh/istio-workshops + tag: 1.20.2-solo + values: + gateways: + istio-ingressgateway: + customService: true + components: + ingressGateways: + - name: istio-ingressgateway + namespace: istio-gateways + enabled: true + label: + istio: ingressgateway +--- +apiVersion: admin.gloo.solo.io/v2 +kind: GatewayLifecycleManager +metadata: + name: cluster2-eastwest + namespace: gloo-mesh +spec: + installations: + - clusters: + - name: cluster2 + activeGateway: false + gatewayRevision: 1-20 + istioOperatorSpec: + profile: empty + hub: us-docker.pkg.dev/gloo-mesh/istio-workshops + tag: 1.20.2-solo + values: + gateways: + istio-ingressgateway: + customService: true + components: + ingressGateways: + - name: istio-eastwestgateway + namespace: istio-gateways + enabled: true + label: + istio: eastwestgateway + topology.istio.io/network: cluster2 + k8s: + env: + - name: ISTIO_META_ROUTER_MODE + value: "sni-dnat" + - name: ISTIO_META_REQUESTED_NETWORK_VIEW + value: cluster2 +EOF + +cat <>${GITOPS_GATEWAYS}/${MGMT}/kustomization.yaml +resources: +- glm-cluster2.yaml +- glm-cluster1.yaml +EOF +git -C ${GITOPS_REPO_LOCAL} add . +git -C ${GITOPS_REPO_LOCAL} commit -m "Istio and gateway lifecycle managers" +git -C ${GITOPS_REPO_LOCAL} push +echo -n Waiting for Argo CD to sync... +timeout -v 5m bash -c "until [[ \$(kubectl --context ${MGMT} -n gloo-mesh get ilm cluster1-installation 2>/dev/null) ]]; do + sleep 1 + echo -n . 
+done" +echo +until kubectl --context ${MGMT} -n gloo-mesh wait --timeout=180s --for=jsonpath='{.status.clusters.cluster1.installations.*.state}'=HEALTHY istiolifecyclemanagers/cluster1-installation; do + echo "Waiting for the Istio installation to complete" + sleep 1 +done +timeout 2m bash -c "until [[ \$(kubectl --context ${CLUSTER1} -n istio-system get deploy -o json | jq '[.items[].status.readyReplicas] | add') -ge 1 ]]; do + sleep 1 +done" +timeout 2m bash -c "until [[ \$(kubectl --context ${CLUSTER1} -n istio-gateways get deploy -o json | jq '[.items[].status.readyReplicas] | add') -eq 2 ]]; do + sleep 1 +done" +until kubectl --context ${MGMT} -n gloo-mesh wait --timeout=180s --for=jsonpath='{.status.clusters.cluster2.installations.*.state}'=HEALTHY istiolifecyclemanagers/cluster2-installation; do + echo "Waiting for the Istio installation to complete" + sleep 1 +done +timeout 2m bash -c "until [[ \$(kubectl --context ${CLUSTER2} -n istio-system get deploy -o json | jq '[.items[].status.readyReplicas] | add') -ge 1 ]]; do + sleep 1 +done" +timeout 2m bash -c "until [[ \$(kubectl --context ${CLUSTER2} -n istio-gateways get deploy -o json | jq '[.items[].status.readyReplicas] | add') -eq 2 ]]; do + sleep 1 +done" +cat <<'EOF' > ./test.js + +const helpers = require('./tests/chai-exec'); + +const chaiExec = require("@jsdevtools/chai-exec"); +const helpersHttp = require('./tests/chai-http'); +const chai = require("chai"); +const expect = chai.expect; + +afterEach(function (done) { + if (this.currentTest.currentRetry() > 0) { + process.stdout.write("."); + setTimeout(done, 1000); + } else { + done(); + } +}); + +describe("Checking Istio installation", function() { + it('istiod pods are ready in cluster ' + process.env.CLUSTER1, () => helpers.checkDeploymentsWithLabels({ context: process.env.CLUSTER1, namespace: "istio-system", labels: "app=istiod", instances: 1 })); + it('gateway pods are ready in cluster ' + process.env.CLUSTER1, () => 
helpers.checkDeploymentsWithLabels({ context: process.env.CLUSTER1, namespace: "istio-gateways", labels: "app=istio-ingressgateway", instances: 2 })); + it('istiod pods are ready in cluster ' + process.env.CLUSTER2, () => helpers.checkDeploymentsWithLabels({ context: process.env.CLUSTER2, namespace: "istio-system", labels: "app=istiod", instances: 1 })); + it('gateway pods are ready in cluster ' + process.env.CLUSTER2, () => helpers.checkDeploymentsWithLabels({ context: process.env.CLUSTER2, namespace: "istio-gateways", labels: "app=istio-ingressgateway", instances: 2 })); + it("Gateways have an ip attached in cluster " + process.env.CLUSTER1, () => { + let cli = chaiExec("kubectl --context " + process.env.CLUSTER1 + " -n istio-gateways get svc -l app=istio-ingressgateway -o jsonpath='{.items}'"); + cli.stderr.should.be.empty; + let deployments = JSON.parse(cli.stdout.slice(1,-1)); + expect(deployments).to.have.lengthOf(2); + deployments.forEach((deployment) => { + expect(deployment.status.loadBalancer).to.have.property("ingress"); + }); + }); + it("Gateways have an ip attached in cluster " + process.env.CLUSTER2, () => { + let cli = chaiExec("kubectl --context " + process.env.CLUSTER2 + " -n istio-gateways get svc -l app=istio-ingressgateway -o jsonpath='{.items}'"); + cli.stderr.should.be.empty; + let deployments = JSON.parse(cli.stdout.slice(1,-1)); + expect(deployments).to.have.lengthOf(2); + deployments.forEach((deployment) => { + expect(deployment.status.loadBalancer).to.have.property("ingress"); + }); + }); +}); + +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/istio-lifecycle-manager-install/tests/istio-ready.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +timeout 2m bash -c "until [[ \$(kubectl --context ${CLUSTER1} -n istio-gateways get svc -l istio=ingressgateway -o json | jq '.items[0].status.loadBalancer | 
length') -gt 0 ]]; do + sleep 1 +done" +export HOST_GW_CLUSTER1="$(kubectl --context ${CLUSTER1} -n istio-gateways get svc -l istio=ingressgateway -o jsonpath='{.items[0].status.loadBalancer.ingress[0].*}')" +export HOST_GW_CLUSTER2="$(kubectl --context ${CLUSTER2} -n istio-gateways get svc -l istio=ingressgateway -o jsonpath='{.items[0].status.loadBalancer.ingress[0].*}')" +cat <<'EOF' > ./test.js +const dns = require('dns'); +const chaiHttp = require("chai-http"); +const chai = require("chai"); +const expect = chai.expect; +chai.use(chaiHttp); +const { waitOnFailedTest } = require('./tests/utils'); + +afterEach(function(done) { waitOnFailedTest(done, this.currentTest.currentRetry())}); + +describe("Address '" + process.env.HOST_GW_CLUSTER1 + "' can be resolved in DNS", () => { + it(process.env.HOST_GW_CLUSTER1 + ' can be resolved', (done) => { + return dns.lookup(process.env.HOST_GW_CLUSTER1, (err, address, family) => { + expect(address).to.be.an.ip; + done(); + }); + }); +}); +EOF +echo "executing test ./gloo-mesh-2-0/tests/can-resolve.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +cat <<'EOF' > ./test.js +const dns = require('dns'); +const chaiHttp = require("chai-http"); +const chai = require("chai"); +const expect = chai.expect; +chai.use(chaiHttp); +const { waitOnFailedTest } = require('./tests/utils'); + +afterEach(function(done) { waitOnFailedTest(done, this.currentTest.currentRetry())}); + +describe("Address '" + process.env.HOST_GW_CLUSTER2 + "' can be resolved in DNS", () => { + it(process.env.HOST_GW_CLUSTER2 + ' can be resolved', (done) => { + return dns.lookup(process.env.HOST_GW_CLUSTER2, (err, address, family) => { + expect(address).to.be.an.ip; + done(); + }); + }); +}); +EOF +echo "executing test ./gloo-mesh-2-0/tests/can-resolve.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { 
DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +export GITOPS_BOOKINFO=${GITOPS_REPO_LOCAL}/bookinfo +mkdir -p ${GITOPS_BOOKINFO} +cat < ${GITOPS_ARGOCD}/bookinfo.yaml +apiVersion: argoproj.io/v1alpha1 +kind: AppProject +metadata: + name: bookinfo + annotations: + argocd.argoproj.io/sync-wave: "-1" + finalizers: + - resources-finalizer.argocd.argoproj.io +spec: + sourceRepos: + - '*' + destinations: + - namespace: '*' + server: '*' + clusterResourceWhitelist: + - group: '*' + kind: '*' +--- +apiVersion: argoproj.io/v1alpha1 +kind: ApplicationSet +metadata: + name: bookinfo +spec: + generators: + - list: + elements: + - cluster: ${CLUSTER1} + - cluster: ${CLUSTER2} + template: + metadata: + name: bookinfo-{{cluster}} + finalizers: + - resources-finalizer.argocd.argoproj.io + spec: + project: bookinfo + source: + repoURL: ${GITEA_HTTP}/gloo-gitops/gitops-repo.git + targetRevision: HEAD + path: bookinfo/{{cluster}} + destination: + name: '{{cluster}}' + namespace: default + syncPolicy: + automated: + allowEmpty: true + prune: true + syncOptions: + - ApplyOutOfSyncOnly=true +EOF +mkdir -p ${GITOPS_BOOKINFO}/base/frontends +cp data/steps/deploy-bookinfo/productpage-v1.yaml ${GITOPS_BOOKINFO}/base/frontends/ + +mkdir -p ${GITOPS_BOOKINFO}/base/backends +cp data/steps/deploy-bookinfo/details-v1.yaml data/steps/deploy-bookinfo/ratings-v1.yaml data/steps/deploy-bookinfo/reviews-v1-v2.yaml \ + ${GITOPS_BOOKINFO}/base/backends/ +cat <${GITOPS_BOOKINFO}/base/frontends/ns.yaml +apiVersion: v1 +kind: Namespace +metadata: + name: bookinfo-frontends + labels: + istio.io/rev: 1-20 +EOF + +cat <${GITOPS_BOOKINFO}/base/backends/ns.yaml +apiVersion: v1 +kind: Namespace +metadata: + name: bookinfo-backends + labels: + istio.io/rev: 1-20 +EOF +cat <${GITOPS_BOOKINFO}/base/frontends/kustomization.yaml +resources: +- ns.yaml +- productpage-v1.yaml +EOF + +cat <${GITOPS_BOOKINFO}/base/backends/kustomization.yaml +resources: +- ns.yaml +- details-v1.yaml +- ratings-v1.yaml +- 
reviews-v1-v2.yaml +EOF +mkdir -p ${GITOPS_BOOKINFO}/${CLUSTER1}/frontends ${GITOPS_BOOKINFO}/${CLUSTER1}/backends + +cat <${GITOPS_BOOKINFO}/${CLUSTER1}/frontends/kustomization.yaml +namespace: bookinfo-frontends +resources: +- ../../base/frontends +EOF + +cat < ${GITOPS_BOOKINFO}/${CLUSTER1}/backends/kustomization.yaml +namespace: bookinfo-backends +patches: +- target: + kind: Deployment + name: reviews-v1 + patch: |- + - op: add + path: /spec/template/spec/containers/0/env/- + value: + name: CLUSTER_NAME + value: ${CLUSTER1} +- target: + kind: Deployment + name: reviews-v2 + patch: |- + - op: add + path: /spec/template/spec/containers/0/env/- + value: + name: CLUSTER_NAME + value: ${CLUSTER1} +resources: +- ../../base/backends +EOF + +cat <${GITOPS_BOOKINFO}/${CLUSTER1}/kustomization.yaml +resources: +- frontends +- backends +EOF +git -C ${GITOPS_REPO_LOCAL} add . +git -C ${GITOPS_REPO_LOCAL} commit -m "Bookinfo on ${CLUSTER1}" +git -C ${GITOPS_REPO_LOCAL} push +echo -n Waiting for bookinfo pods to be ready... +timeout -v 5m bash -c " +until [[ \$(kubectl --context ${CLUSTER1} -n bookinfo-frontends get deploy -o json | jq '[.items[].status.readyReplicas] | add') -eq 1 && \\ + \$(kubectl --context ${CLUSTER1} -n bookinfo-backends get deploy -o json | jq '[.items[].status.readyReplicas] | add') -eq 4 ]] 2>/dev/null +do + sleep 1 + echo -n . 
+done" +echo +cp -r ${GITOPS_BOOKINFO}/${CLUSTER1} ${GITOPS_BOOKINFO}/${CLUSTER2} +cat < ${GITOPS_BOOKINFO}/${CLUSTER2}/backends/kustomization.yaml +namespace: bookinfo-backends +patches: +- target: + kind: Deployment + name: reviews-v1 + patch: |- + - op: add + path: /spec/template/spec/containers/0/env/- + value: + name: CLUSTER_NAME + value: ${CLUSTER2} +- target: + kind: Deployment + name: reviews-v2 + patch: |- + - op: add + path: /spec/template/spec/containers/0/env/- + value: + name: CLUSTER_NAME + value: ${CLUSTER2} +- target: + kind: Deployment + name: reviews-v3 + patch: |- + - op: add + path: /spec/template/spec/containers/0/env/- + value: + name: CLUSTER_NAME + value: ${CLUSTER2} +resources: +- ../../base/backends +EOF +git -C ${GITOPS_REPO_LOCAL} add . +git -C ${GITOPS_REPO_LOCAL} commit -m "Bookinfo on ${CLUSTER2}" +git -C ${GITOPS_REPO_LOCAL} push +echo -n Waiting for bookinfo pods to be ready... +timeout -v 5m bash -c " +until [[ \$(kubectl --context ${CLUSTER2} -n bookinfo-frontends get deploy -o json | jq '[.items[].status.readyReplicas] | add') -eq 1 && \\ + \$(kubectl --context ${CLUSTER2} -n bookinfo-backends get deploy -o json | jq '[.items[].status.readyReplicas] | add') -eq 4 ]] 2>/dev/null +do + sleep 1 + echo -n . +done" +echo +git -C ${GITOPS_REPO_LOCAL} checkout -b reviews-v3 +cp data/steps/deploy-bookinfo/reviews-v3.yaml ${GITOPS_BOOKINFO}/${CLUSTER2}/backends/reviews-v3.yaml +cat <>${GITOPS_BOOKINFO}/${CLUSTER2}/backends/kustomization.yaml +- reviews-v3.yaml +EOF +git -C ${GITOPS_REPO_LOCAL} add . 
+git -C ${GITOPS_REPO_LOCAL} commit -m "v3 of reviews service"
+git -C ${GITOPS_REPO_LOCAL} push -u origin reviews-v3
+git -C ${GITOPS_REPO_LOCAL} checkout main
+{ PR_ID=$(curl -Ss ${GITEA_HTTP}/api/v1/repos/gloo-gitops/gitops-repo/pulls \
+  -H "accept: application/json" -H "Content-Type: application/json" \
+  -H "Authorization: token ${GITEA_ADMIN_TOKEN}" \
+  -d '{
+      "title": "Add v3 of bookinfo reviews",
+      "base": "main",
+      "head": "reviews-v3"
+    }' | tee /dev/fd/3 | jq '.id'); } 3>&1
+kubectl --context ${CLUSTER2} -n bookinfo-frontends get pods && kubectl --context ${CLUSTER2} -n bookinfo-backends get pods
+curl -i ${GITEA_HTTP}/api/v1/repos/gloo-gitops/gitops-repo/pulls/${PR_ID}/merge \
+  --fail-with-body \
+  -H "accept: application/json" -H "Content-Type: application/json" \
+  -H "Authorization: token ${GITEA_ADMIN_TOKEN}" \
+  -d '{ "do": "merge" }'
+until [[ $? -eq 0 ]]; do
+  attempt=$((attempt+1))
+  sleep 2
+  echo "Retrying merge command ($attempt)..."
+  if [[ $attempt -lt 5 ]]; then
+    curl -i ${GITEA_HTTP}/api/v1/repos/gloo-gitops/gitops-repo/pulls/${PR_ID}/merge \
+      --fail-with-body \
+      -H "accept: application/json" -H "Content-Type: application/json" \
+      -H "Authorization: token ${GITEA_ADMIN_TOKEN}" \
+      -d '{ "do": "merge" }'
+  fi
+done
+sleep 2
+git -C ${GITOPS_REPO_LOCAL} checkout main
+git -C ${GITOPS_REPO_LOCAL} fetch
+git -C ${GITOPS_REPO_LOCAL} pull
+echo -n Waiting for bookinfo pods to be ready...
+timeout -v 5m bash -c "
+until [[ \$(kubectl --context ${CLUSTER2} -n bookinfo-frontends get deploy -o json | jq '[.items[].status.readyReplicas] | add') -eq 1 && \\
+        \$(kubectl --context ${CLUSTER2} -n bookinfo-backends get deploy -o json | jq '[.items[].status.readyReplicas] | add') -eq 5 ]] 2>/dev/null
+do
+  sleep 1
+  echo -n .
+done" +echo +kubectl --context ${CLUSTER2} -n bookinfo-frontends get pods && kubectl --context ${CLUSTER2} -n bookinfo-backends get pods +cat <<'EOF' > ./test.js +const helpers = require('./tests/chai-exec'); + +describe("Bookinfo app", () => { + let cluster = process.env.CLUSTER1 + let deployments = ["productpage-v1"]; + deployments.forEach(deploy => { + it(deploy + ' pods are ready in ' + cluster, () => helpers.checkDeployment({ context: cluster, namespace: "bookinfo-frontends", k8sObj: deploy })); + }); + deployments = ["ratings-v1", "details-v1", "reviews-v1", "reviews-v2"]; + deployments.forEach(deploy => { + it(deploy + ' pods are ready in ' + cluster, () => helpers.checkDeployment({ context: cluster, namespace: "bookinfo-backends", k8sObj: deploy })); + }); + cluster = process.env.CLUSTER2 + deployments = ["productpage-v1"]; + deployments.forEach(deploy => { + it(deploy + ' pods are ready in ' + cluster, () => helpers.checkDeployment({ context: cluster, namespace: "bookinfo-frontends", k8sObj: deploy })); + }); + deployments = ["ratings-v1", "details-v1", "reviews-v1", "reviews-v2", "reviews-v3"]; + deployments.forEach(deploy => { + it(deploy + ' pods are ready in ' + cluster, () => helpers.checkDeployment({ context: cluster, namespace: "bookinfo-backends", k8sObj: deploy })); + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/deploy-bookinfo/tests/check-bookinfo.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +export GITOPS_HTTPBIN=${GITOPS_REPO_LOCAL}/httpbin +mkdir -p ${GITOPS_HTTPBIN} +cat < ${GITOPS_ARGOCD}/httpbin.yaml +apiVersion: argoproj.io/v1alpha1 +kind: AppProject +metadata: + name: httpbin + annotations: + argocd.argoproj.io/sync-wave: "-1" + finalizers: + - resources-finalizer.argocd.argoproj.io +spec: + sourceRepos: + - '*' + destinations: + - namespace: '*' + server: '*' + 
clusterResourceWhitelist: + - group: '*' + kind: '*' +--- +apiVersion: argoproj.io/v1alpha1 +kind: ApplicationSet +metadata: + name: httpbin +spec: + generators: + - list: + elements: + - cluster: ${CLUSTER1} + template: + metadata: + name: httpbin-{{cluster}} + finalizers: + - resources-finalizer.argocd.argoproj.io + spec: + project: httpbin + source: + repoURL: ${GITEA_HTTP}/gloo-gitops/gitops-repo.git + targetRevision: HEAD + path: httpbin/{{cluster}} + destination: + name: '{{cluster}}' + namespace: default + syncPolicy: + automated: + allowEmpty: true + prune: true + syncOptions: + - ApplyOutOfSyncOnly=true +EOF +mkdir -p ${GITOPS_HTTPBIN}/base + +cat <${GITOPS_HTTPBIN}/base/ns.yaml +apiVersion: v1 +kind: Namespace +metadata: + name: httpbin +EOF + +cat < ${GITOPS_HTTPBIN}/base/not-in-mesh.yaml + +apiVersion: v1 +kind: ServiceAccount +metadata: + name: not-in-mesh + namespace: httpbin +--- +apiVersion: v1 +kind: Service +metadata: + name: not-in-mesh + namespace: httpbin + labels: + app: not-in-mesh + service: not-in-mesh +spec: + ports: + - name: http + port: 8000 + targetPort: 80 + selector: + app: not-in-mesh +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: not-in-mesh + namespace: httpbin +spec: + replicas: 1 + selector: + matchLabels: + app: not-in-mesh + version: v1 + template: + metadata: + labels: + app: not-in-mesh + version: v1 + spec: + serviceAccountName: not-in-mesh + containers: + - image: docker.io/kennethreitz/httpbin + imagePullPolicy: IfNotPresent + name: not-in-mesh + ports: + - name: http + containerPort: 80 + livenessProbe: + httpGet: + path: /status/200 + port: http + readinessProbe: + httpGet: + path: /status/200 + port: http + +EOF +cat < ${GITOPS_HTTPBIN}/base/in-mesh.yaml + +apiVersion: v1 +kind: ServiceAccount +metadata: + name: in-mesh + namespace: httpbin +--- +apiVersion: v1 +kind: Service +metadata: + name: in-mesh + namespace: httpbin + labels: + app: in-mesh + service: in-mesh +spec: + ports: + - name: http + 
port: 8000 + targetPort: 80 + selector: + app: in-mesh +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: in-mesh + namespace: httpbin +spec: + replicas: 1 + selector: + matchLabels: + app: in-mesh + version: v1 + template: + metadata: + labels: + app: in-mesh + version: v1 + istio.io/rev: 1-20 + spec: + serviceAccountName: in-mesh + containers: + - image: docker.io/kennethreitz/httpbin + imagePullPolicy: IfNotPresent + name: in-mesh + ports: + - name: http + containerPort: 80 + livenessProbe: + httpGet: + path: /status/200 + port: http + readinessProbe: + httpGet: + path: /status/200 + port: http + +EOF +cat <${GITOPS_HTTPBIN}/base/kustomization.yaml +resources: +- ns.yaml +- not-in-mesh.yaml +- in-mesh.yaml +EOF + +mkdir -p ${GITOPS_HTTPBIN}/${CLUSTER1} + +cat <${GITOPS_HTTPBIN}/${CLUSTER1}/kustomization.yaml +namespace: httpbin +resources: +- ../base +EOF +git -C ${GITOPS_REPO_LOCAL} add . +git -C ${GITOPS_REPO_LOCAL} commit -m "httpbin on ${CLUSTER1}" +git -C ${GITOPS_REPO_LOCAL} push +echo -n Waiting for httpbin pods to be ready... +timeout -v 5m bash -c " +until [[ \$(kubectl --context ${CLUSTER1} -n httpbin get deploy -o json | jq '[.items[].status.readyReplicas] | add') -eq 2 ]] 2>/dev/null +do + sleep 1 + echo -n . 
+done" +echo +kubectl --context ${CLUSTER1} -n httpbin get pods +cat <<'EOF' > ./test.js +const helpers = require('./tests/chai-exec'); + +describe("httpbin app", () => { + let cluster = process.env.CLUSTER1 + + let deployments = ["not-in-mesh", "in-mesh"]; + + deployments.forEach(deploy => { + it(deploy + ' pods are ready in ' + cluster, () => helpers.checkDeployment({ context: cluster, namespace: "httpbin", k8sObj: deploy })); + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/httpbin/deploy-httpbin/tests/check-httpbin.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +cat <${GITOPS_PLATFORM}/${CLUSTER1}/ns-gloo-mesh-addons.yaml +apiVersion: v1 +kind: Namespace +metadata: + name: gloo-mesh-addons + labels: + istio.io/rev: 1-20 +EOF + +cat <>${GITOPS_PLATFORM}/${CLUSTER1}/kustomization.yaml +- ns-gloo-mesh-addons.yaml +EOF + +cp ${GITOPS_PLATFORM}/${CLUSTER1}/ns-gloo-mesh-addons.yaml ${GITOPS_PLATFORM}/${CLUSTER2}/ + +cat <>${GITOPS_PLATFORM}/${CLUSTER2}/kustomization.yaml +- ns-gloo-mesh-addons.yaml +EOF +cat < ${GITOPS_PLATFORM}/argo-cd/gloo-platform-addons-installation.yaml +apiVersion: argoproj.io/v1alpha1 +kind: ApplicationSet +metadata: + name: gloo-platform-addons +spec: + generators: + - list: + elements: + - cluster: ${CLUSTER1} + - cluster: ${CLUSTER2} + template: + metadata: + name: gloo-platform-addons-{{cluster}} + annotations: + argocd.argoproj.io/sync-wave: "2" + finalizers: + - resources-finalizer.argocd.argoproj.io/background + spec: + project: platform + destination: + name: '{{cluster}}' + namespace: gloo-mesh-addons + syncPolicy: + automated: + prune: true + ignoreDifferences: + - kind: Secret + name: ext-auth-service-signing-key + jsonPointers: + - /data/signing-key + sources: + - chart: gloo-platform + repoURL: https://storage.googleapis.com/gloo-platform/helm-charts + targetRevision: 2.5.12 + 
helm: + releaseName: gloo-platform + valueFiles: + - \$values/platform/argo-cd/gloo-platform-addons-installation-values.yaml + parameters: + - name: common.cluster + value: '{{cluster}}' + - repoURL: http://$(kubectl --context ${MGMT} -n gitea get svc gitea-http -o jsonpath='{.status.loadBalancer.ingress[0].*}'):3180/gloo-gitops/gitops-repo.git + targetRevision: HEAD + ref: values +EOF +cat < ${GITOPS_PLATFORM}/argo-cd/gloo-platform-addons-installation-values.yaml +common: + cluster: undefined +glooAgent: + enabled: false +extAuthService: + enabled: true + extAuth: + apiKeyStorage: + name: redis + enabled: true + config: + connection: + host: redis.gloo-mesh-addons:6379 + secretKey: ThisIsSecret +rateLimiter: + enabled: true +EOF +cat <>${GITOPS_PLATFORM}/argo-cd/kustomization.yaml +- gloo-platform-addons-installation.yaml +EOF +cat < ${GITOPS_PLATFORM}/${CLUSTER1}/ext-auth-server.yaml +apiVersion: admin.gloo.solo.io/v2 +kind: ExtAuthServer +metadata: + name: ext-auth-server + namespace: gloo-mesh-addons +spec: + destinationServer: + ref: + cluster: cluster1 + name: ext-auth-service + namespace: gloo-mesh-addons + port: + name: grpc + requestBody: {} # Needed if some an extauth plugin must access the body of the requests +EOF +cat < ${GITOPS_PLATFORM}/${CLUSTER1}/rate-limit-server-settings.yaml +apiVersion: admin.gloo.solo.io/v2 +kind: RateLimitServerSettings +metadata: + name: rate-limit-server + namespace: gloo-mesh-addons +spec: + destinationServer: + ref: + cluster: cluster1 + name: rate-limiter + namespace: gloo-mesh-addons + port: + name: grpc +EOF +cat <>${GITOPS_PLATFORM}/${CLUSTER1}/kustomization.yaml +- ext-auth-server.yaml +- rate-limit-server-settings.yaml +EOF +git -C ${GITOPS_REPO_LOCAL} add . +git -C ${GITOPS_REPO_LOCAL} commit -m "Gloo Platform add-ons" +git -C ${GITOPS_REPO_LOCAL} push +echo -n Waiting for Argo CD to sync... 
+timeout -v 5m bash -c "until [[ \$(kubectl --context ${CLUSTER1} -n gloo-mesh-addons get eas ext-auth-server 2>/dev/null) ]]; do
+  sleep 1
+  echo -n .
+done"
+echo
+cat <<'EOF' > ./test.js
+const helpers = require('./tests/chai-exec');
+
+describe("Gloo Platform add-ons cluster1 deployment", () => {
+  let cluster = process.env.CLUSTER1
+  let deployments = ["ext-auth-service", "rate-limiter"];
+  deployments.forEach(deploy => {
+    it(deploy + ' pods are ready in ' + cluster, () => helpers.checkDeployment({ context: cluster, namespace: "gloo-mesh-addons", k8sObj: deploy }));
+  });
+});
+describe("Gloo Platform add-ons cluster2 deployment", () => {
+  let cluster = process.env.CLUSTER2
+  let deployments = ["ext-auth-service", "rate-limiter"];
+  deployments.forEach(deploy => {
+    it(deploy + ' pods are ready in ' + cluster, () => helpers.checkDeployment({ context: cluster, namespace: "gloo-mesh-addons", k8sObj: deploy }));
+  });
+});
+
+EOF
+echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-gloo-mesh-addons/tests/check-addons-deployments.test.js.liquid"
+timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+cat <<'EOF' > ./test.js
+const helpers = require('./tests/chai-exec');
+
+describe("Gloo Platform add-ons cluster1 service", () => {
+  let cluster = process.env.CLUSTER1
+  let services = ["ext-auth-service", "rate-limiter"];
+  services.forEach(service => {
+    it(service + ' exists in ' + cluster, () => helpers.k8sObjectIsPresent({ context: cluster, namespace: "gloo-mesh-addons", k8sType: "service", k8sObj: service }));
+  });
+});
+describe("Gloo Platform add-ons cluster2 service", () => {
+  let cluster = process.env.CLUSTER2
+  let services = ["ext-auth-service", "rate-limiter"];
+  services.forEach(service => {
+    it(service + ' exists in ' + cluster, () => helpers.k8sObjectIsPresent({ context: cluster, namespace: "gloo-mesh-addons", k8sType: "service", k8sObj: service }));
+  });
+});
+
+EOF
+echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-gloo-mesh-addons/tests/check-addons-services.test.js.liquid"
+timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+mkdir -p ${GITOPS_PLATFORM}/${MGMT}/workspaces
+cat <<EOF > ${GITOPS_PLATFORM}/${MGMT}/workspaces/gateways.yaml
+apiVersion: admin.gloo.solo.io/v2
+kind: Workspace
+metadata:
+  name: gateways
+  namespace: gloo-mesh
+spec:
+  workloadClusters:
+  - name: cluster1
+    namespaces:
+    - name: istio-gateways
+    - name: gloo-mesh-addons
+  - name: cluster2
+    namespaces:
+    - name: istio-gateways
+    - name: gloo-mesh-addons
+EOF
+cat <<EOF > ${GITOPS_GATEWAYS}/${CLUSTER1}/workspace-settings.yaml
+apiVersion: admin.gloo.solo.io/v2
+kind: WorkspaceSettings
+metadata:
+  name: gateways
+  namespace: gloo-mesh-addons
+spec:
+  importFrom:
+  - workspaces:
+    - selector:
+        allow_ingress: "true"
+    resources:
+    - kind: SERVICE
+    - kind: ALL
+      labels:
+        expose: "true"
+  exportTo:
+  - workspaces:
+    - selector:
+        allow_ingress: "true"
+    resources:
+    - kind: SERVICE
+EOF
+if [ ! -f ${GITOPS_PLATFORM}/${MGMT}/workspaces/kustomization.yaml ]; then
+  cat <<EOF >${GITOPS_PLATFORM}/${MGMT}/workspaces/kustomization.yaml
+resources:
+EOF
+fi
+
+cat <<EOF >>${GITOPS_PLATFORM}/${MGMT}/workspaces/kustomization.yaml
+- gateways.yaml
+EOF
+
+if [ $(yq 'contains({"resources": ["workspaces"]})' ${GITOPS_PLATFORM}/${MGMT}/kustomization.yaml) = false ]; then
+  cat <<EOF >>${GITOPS_PLATFORM}/${MGMT}/kustomization.yaml
+- workspaces
+EOF
+fi
+
+cat <<EOF >>${GITOPS_GATEWAYS}/${CLUSTER1}/kustomization.yaml
+- workspace-settings.yaml
+EOF
+
+git -C ${GITOPS_REPO_LOCAL} add .
+git -C ${GITOPS_REPO_LOCAL} commit -m "Gateways workspace"
+git -C ${GITOPS_REPO_LOCAL} push
+echo -n Waiting for Argo CD to sync...
+timeout -v 5m bash -c "until [[ \$(kubectl --context ${MGMT} -n gloo-mesh get workspace gateways 2>/dev/null) ]]; do
+  sleep 1
+  echo -n .
+done"
+echo
+mkdir -p ${GITOPS_PLATFORM}/${MGMT}/workspaces
+cat <<EOF > ${GITOPS_PLATFORM}/${MGMT}/workspaces/bookinfo.yaml
+apiVersion: admin.gloo.solo.io/v2
+kind: Workspace
+metadata:
+  name: bookinfo
+  namespace: gloo-mesh
+  labels:
+    allow_ingress: "true"
+spec:
+  workloadClusters:
+  - name: cluster1
+    namespaces:
+    - name: bookinfo-frontends
+    - name: bookinfo-backends
+  - name: cluster2
+    namespaces:
+    - name: bookinfo-frontends
+    - name: bookinfo-backends
+EOF
+cat <<EOF > ${GITOPS_BOOKINFO}/${CLUSTER1}/workspace-settings.yaml
+apiVersion: admin.gloo.solo.io/v2
+kind: WorkspaceSettings
+metadata:
+  name: bookinfo
+  namespace: bookinfo-frontends
+spec:
+  importFrom:
+  - workspaces:
+    - name: gateways
+    resources:
+    - kind: SERVICE
+  exportTo:
+  - workspaces:
+    - name: gateways
+    resources:
+    - kind: SERVICE
+      labels:
+        app: productpage
+    - kind: SERVICE
+      labels:
+        app: reviews
+    - kind: SERVICE
+      labels:
+        app: ratings
+    - kind: ALL
+      labels:
+        expose: "true"
+EOF
+if [ ! -f ${GITOPS_PLATFORM}/${MGMT}/workspaces/kustomization.yaml ]; then
+  cat <<EOF >${GITOPS_PLATFORM}/${MGMT}/workspaces/kustomization.yaml
+resources:
+EOF
+fi
+
+cat <<EOF >>${GITOPS_PLATFORM}/${MGMT}/workspaces/kustomization.yaml
+- bookinfo.yaml
+EOF
+
+if [ $(yq 'contains({"resources": ["workspaces"]})' ${GITOPS_PLATFORM}/${MGMT}/kustomization.yaml) = false ]; then
+  cat <<EOF >>${GITOPS_PLATFORM}/${MGMT}/kustomization.yaml
+- workspaces
+EOF
+fi
+
+cat <<EOF >>${GITOPS_BOOKINFO}/${CLUSTER1}/kustomization.yaml
+- workspace-settings.yaml
+EOF
+
+git -C ${GITOPS_REPO_LOCAL} add .
+git -C ${GITOPS_REPO_LOCAL} commit -m "Bookinfo workspace"
+git -C ${GITOPS_REPO_LOCAL} push
+echo -n Waiting for Argo CD to sync...
+timeout -v 5m bash -c "until [[ \$(kubectl --context ${MGMT} -n gloo-mesh get workspace bookinfo 2>/dev/null) ]]; do
+  sleep 1
+  echo -n .
+done" +echo +cat < ${GITOPS_GATEWAYS}/${CLUSTER1}/virtualgateway.yaml +apiVersion: networking.gloo.solo.io/v2 +kind: VirtualGateway +metadata: + name: north-south-gw + namespace: istio-gateways +spec: + workloads: + - selector: + labels: + istio: ingressgateway + cluster: cluster1 + listeners: + - http: {} + port: + number: 80 + allowedRouteTables: + - host: '*' +EOF +cat < ${GITOPS_GATEWAYS}/${CLUSTER1}/routetable-main.yaml +apiVersion: networking.gloo.solo.io/v2 +kind: RouteTable +metadata: + name: main-bookinfo + namespace: istio-gateways +spec: + hosts: + - cluster1-bookinfo.example.com + - cluster2-bookinfo.example.com + virtualGateways: + - name: north-south-gw + namespace: istio-gateways + cluster: cluster1 + workloadSelectors: [] + http: + - name: root + matchers: + - uri: + prefix: / + delegate: + routeTables: + - labels: + expose: "true" + workspace: bookinfo + - labels: + expose: "true" + workspace: gateways + sortMethod: ROUTE_SPECIFICITY +--- +apiVersion: networking.gloo.solo.io/v2 +kind: RouteTable +metadata: + name: main-httpbin + namespace: istio-gateways +spec: + hosts: + - cluster1-httpbin.example.com + virtualGateways: + - name: north-south-gw + namespace: istio-gateways + cluster: cluster1 + workloadSelectors: [] + http: + - name: root + matchers: + - uri: + prefix: / + delegate: + routeTables: + - labels: + expose: "true" + workspace: httpbin + sortMethod: ROUTE_SPECIFICITY +EOF +cat <>${GITOPS_GATEWAYS}/${CLUSTER1}/kustomization.yaml +- virtualgateway.yaml +- routetable-main.yaml +EOF + +git -C ${GITOPS_REPO_LOCAL} add . 
+git -C ${GITOPS_REPO_LOCAL} commit -m "Virtual gateway and main route table"
+git -C ${GITOPS_REPO_LOCAL} push
+cat <<EOF > ${GITOPS_BOOKINFO}/${CLUSTER1}/routetable-productpage.yaml
+apiVersion: networking.gloo.solo.io/v2
+kind: RouteTable
+metadata:
+  name: productpage
+  namespace: bookinfo-frontends
+  labels:
+    expose: "true"
+spec:
+  http:
+    - name: productpage
+      matchers:
+      - uri:
+          exact: /productpage
+      - uri:
+          prefix: /static
+      - uri:
+          prefix: /api/v1/products
+      forwardTo:
+        destinations:
+          - ref:
+              name: productpage
+              namespace: bookinfo-frontends
+              cluster: cluster1
+            port:
+              number: 9080
+EOF
+cat <<EOF >>${GITOPS_BOOKINFO}/${CLUSTER1}/kustomization.yaml
+- routetable-productpage.yaml
+EOF
+
+git -C ${GITOPS_REPO_LOCAL} add .
+git -C ${GITOPS_REPO_LOCAL} commit -m "Bookinfo route table"
+git -C ${GITOPS_REPO_LOCAL} push
+echo -n Waiting for Argo CD to sync...
+timeout -v 5m bash -c "until [[ \$(kubectl --context ${CLUSTER1} -n bookinfo-frontends get rt productpage 2>/dev/null) ]]; do
+  sleep 1
+  echo -n .
+done"
+echo
+./scripts/register-domain.sh cluster1-bookinfo.example.com ${HOST_GW_CLUSTER1}
+./scripts/register-domain.sh cluster1-httpbin.example.com ${HOST_GW_CLUSTER1}
+./scripts/register-domain.sh cluster2-bookinfo.example.com ${HOST_GW_CLUSTER2}
+cat <<'EOF' > ./test.js
+const helpers = require('./tests/chai-http');
+
+describe("Productpage is available (HTTP)", () => {
+  it('/productpage is available in cluster1', () => helpers.checkURL({ host: `http://cluster1-bookinfo.example.com`, path: '/productpage', retCode: 200 }));
+})
+EOF
+echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/gateway-expose/tests/productpage-available.test.js.liquid"
+timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
+  -keyout tls.key -out tls.crt -subj "/CN=*"
+cat <<EOF >${GITOPS_GATEWAYS}/base/gateway-services/ingress-certs.yaml
+apiVersion: v1
+kind: Secret
+type: kubernetes.io/tls
+metadata:
+  name: tls-secret
+  namespace: istio-gateways
+stringData:
+  tls.crt: |
+$(cat tls.crt | sed 's/^/    /')
+  tls.key: |
+$(cat tls.key | sed 's/^/    /')
+EOF
+
+cat <<EOF >>${GITOPS_GATEWAYS}/base/gateway-services/kustomization.yaml
+- ingress-certs.yaml
+EOF
+cat <<EOF > ${GITOPS_GATEWAYS}/${CLUSTER1}/virtualgateway.yaml
+apiVersion: networking.gloo.solo.io/v2
+kind: VirtualGateway
+metadata:
+  name: north-south-gw
+  namespace: istio-gateways
+spec:
+  workloads:
+  - selector:
+      labels:
+        istio: ingressgateway
+      cluster: cluster1
+  listeners:
+  - http: {}
+    port:
+      number: 80
+# ---------------- Redirect to https --------------------
+    httpsRedirect: true
+# -------------------------------------------------------
+  - http: {}
+# ---------------- SSL config ---------------------------
+    port:
+      number: 443
+    tls:
+      parameters:
+        minimumProtocolVersion: TLSv1_3
+      mode: SIMPLE
+      secretName: tls-secret
+# -------------------------------------------------------
+    allowedRouteTables:
+    - host: '*'
+EOF
+git -C ${GITOPS_REPO_LOCAL} add .
+git -C ${GITOPS_REPO_LOCAL} commit -m "Secure the gateway"
+git -C ${GITOPS_REPO_LOCAL} push
+echo -n Waiting for Argo CD to sync...
+timeout -v 5m bash -c "until [[ \"\$(kubectl --context ${CLUSTER1} -n istio-gateways get vg north-south-gw -ojsonpath='{.spec.listeners[?(@.tls.mode==\"SIMPLE\")]}' 2>/dev/null)\" != \"\" ]]; do
+  sleep 1
+  echo -n .
+done"
+echo
+cat <<'EOF' > ./test.js
+const helpers = require('./tests/chai-http');
+
+describe("Productpage is available (HTTPS)", () => {
+  it('/productpage is available in cluster1', () => helpers.checkURL({ host: `https://cluster1-bookinfo.example.com`, path: '/productpage', retCode: 200 }));
+})
+EOF
+echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/gateway-expose/tests/productpage-available-secure.test.js.liquid"
+timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+cat <<'EOF' > ./test.js
+var chai = require('chai');
+var expect = chai.expect;
+const helpers = require('./tests/chai-exec');
+
+describe("Otel metrics", () => {
+  it("cluster1 is sending metrics to telemetryGateway", () => {
+    podName = helpers.getOutputForCommand({ command: "kubectl -n gloo-mesh get pods -l app.kubernetes.io/name=prometheus -o jsonpath='{.items[0].metadata.name}' --context " + process.env.MGMT }).replaceAll("'", "");
+    command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh debug -q -i " + podName + " --image=curlimages/curl -- curl -s http://localhost:9090/api/v1/query?query=istio_requests_total" }).replaceAll("'", "");
+    expect(command).to.contain("cluster\":\"cluster1");
+  });
+});
+EOF
+echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/gateway-expose/tests/otel-metrics.test.js.liquid"
+timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=150 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+cat <<EOF > ${GITOPS_BOOKINFO}/${CLUSTER1}/fault-injection.yaml
+apiVersion: resilience.policy.gloo.solo.io/v2
+kind: FaultInjectionPolicy
+metadata:
+  name: ratings-fault-injection
+  namespace: bookinfo-frontends
+spec:
+  applyToRoutes:
+  - route:
+      labels:
+        fault_injection: "true"
+  config:
+    delay:
+      fixedDelay: 2s
+      percentage: 100
+EOF
+cat <<EOF > ${GITOPS_BOOKINFO}/${CLUSTER1}/routetable-ratings.yaml
+apiVersion: networking.gloo.solo.io/v2
+kind: RouteTable
+metadata:
+  name: ratings
+  namespace: bookinfo-frontends
+spec:
+  hosts:
+  - 'ratings.bookinfo-backends.svc.cluster.local'
+  workloadSelectors:
+  - selector:
+      labels:
+        app: reviews
+  http:
+  - name: ratings
+    labels:
+      fault_injection: "true"
+    matchers:
+    - uri:
+        prefix: /
+    forwardTo:
+      destinations:
+      - ref:
+          name: ratings
+          namespace: bookinfo-backends
+        port:
+          number: 9080
+EOF
+cat <<EOF >>${GITOPS_BOOKINFO}/${CLUSTER1}/kustomization.yaml
+- fault-injection.yaml
+- routetable-ratings.yaml
+EOF
+
+git -C ${GITOPS_REPO_LOCAL} add .
+git -C ${GITOPS_REPO_LOCAL} commit -m "Ratings fault injection"
+git -C ${GITOPS_REPO_LOCAL} push
+echo -n Waiting for Argo CD to sync...
+timeout -v 5m bash -c "until [[ \$(kubectl --context ${CLUSTER1} -n bookinfo-frontends get rt ratings 2>/dev/null) ]]; do
+  sleep 1
+  echo -n .
+done"
+echo
+cat <<EOF > ${GITOPS_BOOKINFO}/${CLUSTER1}/retry-timeout.yaml
+apiVersion: resilience.policy.gloo.solo.io/v2
+kind: RetryTimeoutPolicy
+metadata:
+  name: reviews-request-timeout
+  namespace: bookinfo-frontends
+spec:
+  applyToRoutes:
+  - route:
+      labels:
+        request_timeout: "0.5s"
+  config:
+    requestTimeout: 0.5s
+EOF
+cat <<EOF > ${GITOPS_BOOKINFO}/${CLUSTER1}/routetable-reviews.yaml
+apiVersion: networking.gloo.solo.io/v2
+kind: RouteTable
+metadata:
+  name: reviews
+  namespace: bookinfo-frontends
+spec:
+  hosts:
+  - 'reviews.bookinfo-backends.svc.cluster.local'
+  workloadSelectors:
+  - selector:
+      labels:
+        app: productpage
+  http:
+  - name: reviews
+    labels:
+      request_timeout: "0.5s"
+    matchers:
+    - uri:
+        prefix: /
+    forwardTo:
+      destinations:
+      - ref:
+          name: reviews
+          namespace: bookinfo-backends
+        port:
+          number: 9080
+        subset:
+          version: v2
+EOF
+cat <<EOF >>${GITOPS_BOOKINFO}/${CLUSTER1}/kustomization.yaml
+- retry-timeout.yaml
+- routetable-reviews.yaml
+EOF
+
+git -C ${GITOPS_REPO_LOCAL} add .
+git -C ${GITOPS_REPO_LOCAL} commit -m "Reviews timeout retry"
+git -C ${GITOPS_REPO_LOCAL} push
+echo -n Waiting for Argo CD to sync...
+timeout -v 5m bash -c "until [[ \$(kubectl --context ${CLUSTER1} -n bookinfo-frontends get rt reviews 2>/dev/null) ]]; do
+  sleep 1
+  echo -n .
+done"
+echo
+git -C ${GITOPS_REPO_LOCAL} revert --no-commit HEAD~2..
+git -C ${GITOPS_REPO_LOCAL} commit -m "Revert traffic policies"
+git -C ${GITOPS_REPO_LOCAL} push
+cat <<EOF > ${GITOPS_PLATFORM}/${MGMT}/root-trust.yaml
+apiVersion: admin.gloo.solo.io/v2
+kind: RootTrustPolicy
+metadata:
+  name: root-trust-policy
+  namespace: gloo-mesh
+spec:
+  config:
+    mgmtServerCa:
+      generated: {}
+EOF
+cat <<EOF >>${GITOPS_PLATFORM}/${MGMT}/kustomization.yaml
+- root-trust.yaml
+EOF
+
+git -C ${GITOPS_REPO_LOCAL} add .
+git -C ${GITOPS_REPO_LOCAL} commit -m "Root trust policy"
+git -C ${GITOPS_REPO_LOCAL} push
+echo -n Waiting for Argo CD to sync...
+timeout -v 5m bash -c "until [[ \$(kubectl --context ${MGMT} -n gloo-mesh get rtp root-trust-policy 2>/dev/null) ]]; do + sleep 1 + echo -n . +done" +echo +until [[ $(kubectl --context ${MGMT} -n gloo-mesh get rtp root-trust-policy 2>/dev/null) ]]; do sleep 1; done + +bash ./data/steps/root-trust-policy/restart-istio-pods.sh ${CLUSTER1} +bash ./data/steps/root-trust-policy/restart-istio-pods.sh ${CLUSTER2} +cat <<'EOF' > ./test.js +const helpers = require('./tests/chai-exec'); + +describe("cacerts secrets have been created", () => { + const clusters = [process.env.CLUSTER1, process.env.CLUSTER2]; + clusters.forEach(cluster => { + it('Secret is present in ' + cluster, () => helpers.k8sObjectIsPresent({ context: cluster, namespace: "istio-system", k8sType: "secret", k8sObj: "cacerts" })); + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/root-trust-policy/tests/cacert-secrets-created.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=150 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +printf "Waiting for all pods needed for the test..." 
+printf "\n"
+kubectl --context ${CLUSTER1} get deploy -n bookinfo-backends -oname|xargs -I {} kubectl --context ${CLUSTER1} rollout status -n bookinfo-backends {}
+kubectl --context ${CLUSTER2} get deploy -n bookinfo-backends -oname|xargs -I {} kubectl --context ${CLUSTER2} rollout status -n bookinfo-backends {}
+printf "\n"
+cat <<'EOF' > ./test.js
+const chaiExec = require("@jsdevtools/chai-exec");
+var chai = require('chai');
+var expect = chai.expect;
+chai.use(chaiExec);
+
+afterEach(function (done) {
+  if (this.currentTest.currentRetry() > 0) {
+    process.stdout.write(".");
+    setTimeout(done, 1000);
+  } else {
+    done();
+  }
+});
+
+const testerPodName = "tester-root-trust-policy";
+before(function (done) {
+  chaiExec(`kubectl --context ${process.env.CLUSTER1} -n gloo-mesh run --image=alpine/openssl:3.3.1 ${testerPodName} --command --wait=false -- sleep infinity`);
+  chaiExec(`kubectl --context ${process.env.CLUSTER2} -n gloo-mesh run --image=alpine/openssl:3.3.1 ${testerPodName} --command --wait=false -- sleep infinity`);
+  done();
+});
+after(function (done) {
+  chaiExec(`kubectl --context ${process.env.CLUSTER1} -n gloo-mesh delete pod ${testerPodName} --wait=false`);
+  chaiExec(`kubectl --context ${process.env.CLUSTER2} -n gloo-mesh delete pod ${testerPodName} --wait=false`);
+  done();
+});
+
+describe("Certificate issued by Gloo Mesh", () => {
+  var expectedOutput = "i:O=gloo-mesh";
+
+  it('Gloo mesh is the organization for ' + process.env.CLUSTER1 + ' certificate', () => {
+    let cli = chaiExec(`kubectl --context ${process.env.CLUSTER1} exec -t -n gloo-mesh ${testerPodName} -- openssl s_client -showcerts -connect ratings.bookinfo-backends:9080 -alpn istio`);
+
+    expect(cli).stdout.to.contain(expectedOutput);
+    expect(cli).stderr.not.to.be.empty;
+  });
+
+  it('Gloo mesh is the organization for ' + process.env.CLUSTER2 + ' certificate', () => {
+    let cli = chaiExec(`kubectl --context ${process.env.CLUSTER2} exec -t -n gloo-mesh ${testerPodName} -- openssl s_client -showcerts -connect ratings.bookinfo-backends:9080 -alpn istio`);
+
+    expect(cli).stdout.to.contain(expectedOutput);
+    expect(cli).stderr.not.to.be.empty;
+  });
+
+});
+EOF
+echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/root-trust-policy/tests/certificate-issued-by-gloo-mesh.test.js.liquid"
+timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+cat <<EOF > ${GITOPS_BOOKINFO}/${CLUSTER1}/virtualdestination-reviews.yaml
+apiVersion: networking.gloo.solo.io/v2
+kind: VirtualDestination
+metadata:
+  name: reviews
+  namespace: bookinfo-backends
+spec:
+  hosts:
+  - reviews.global
+  services:
+  - namespace: bookinfo-backends
+    labels:
+      app: reviews
+  ports:
+  - number: 9080
+    protocol: HTTP
+EOF
+
+cat <<EOF >>${GITOPS_BOOKINFO}/${CLUSTER1}/kustomization.yaml
+- virtualdestination-reviews.yaml
+EOF
+git -C ${GITOPS_REPO_LOCAL} add .
+git -C ${GITOPS_REPO_LOCAL} commit -m "Route to reviews using virtual destination"
+git -C ${GITOPS_REPO_LOCAL} push
+echo -n Waiting for Argo CD to sync...
+timeout -v 5m bash -c "until [[ \$(kubectl --context ${CLUSTER1} -n bookinfo-backends get vd reviews 2>/dev/null) ]]; do
+  sleep 1
+  echo -n .
+done"
+echo
+kubectl --context $CLUSTER1 -n bookinfo-frontends exec deploy/productpage-v1 -- python -c "import requests; r = requests.get('http://reviews.global:9080/reviews/0'); print(r.text)"
+cat <<'EOF' > ./test.js
+const helpers = require('./tests/chai-exec');
+
+describe("The productpage service should get responses from cluster2", () => {
+  const podName = helpers.getOutputForCommand({ command: "kubectl -n bookinfo-frontends get pods -l app=productpage -o jsonpath='{.items[0].metadata.name}' --context " + process.env.CLUSTER1 }).replaceAll("'", "");
+  const command = "kubectl -n bookinfo-frontends exec " + podName + " --context " + process.env.CLUSTER1 + " -- python -c \"import requests; r = requests.get('http://reviews.global:9080/reviews/0'); print(r.text)\"";
+  it('Got a response from cluster1', () => helpers.genericCommand({ command: command, responseContains: "cluster1" }));
+});
+EOF
+echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/east-west-virtual-destination/tests/reviews-from-cluster1.test.js.liquid"
+timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+cat <<'EOF' > ./test.js
+const helpers = require('./tests/chai-exec');
+
+describe("The productpage service should get responses from cluster2", () => {
+  const podName = helpers.getOutputForCommand({ command: "kubectl -n bookinfo-frontends get pods -l app=productpage -o jsonpath='{.items[0].metadata.name}' --context " + process.env.CLUSTER1 }).replaceAll("'", "");
+  const command = "kubectl -n bookinfo-frontends exec " + podName + " --context " + process.env.CLUSTER1 + " -- python -c \"import requests; r = requests.get('http://reviews.global:9080/reviews/0'); print(r.text)\"";
+  it('Got a response from cluster2', () => helpers.genericCommand({ command: command, responseContains: "cluster2" }));
+});
+EOF
+echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/east-west-virtual-destination/tests/reviews-from-cluster2.test.js.liquid"
+timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+cat <<EOF > ${GITOPS_BOOKINFO}/${CLUSTER1}/failover-reviews.yaml
+apiVersion: resilience.policy.gloo.solo.io/v2
+kind: FailoverPolicy
+metadata:
+  name: failover
+  namespace: bookinfo-backends
+spec:
+  applyToDestinations:
+  - kind: VIRTUAL_DESTINATION
+    selector:
+      labels:
+        failover: "true"
+  config:
+    localityMappings: []
+EOF
+cat <<EOF > ${GITOPS_BOOKINFO}/${CLUSTER1}/outlierdetection-reviews.yaml
+apiVersion: resilience.policy.gloo.solo.io/v2
+kind: OutlierDetectionPolicy
+metadata:
+  name: outlier-detection
+  namespace: bookinfo-backends
+spec:
+  applyToDestinations:
+  - kind: VIRTUAL_DESTINATION
+    selector:
+      labels:
+        failover: "true"
+  config:
+    consecutiveErrors: 2
+    interval: 5s
+    baseEjectionTime: 30s
+    maxEjectionPercent: 100
+EOF
+cat <<EOF > ${GITOPS_BOOKINFO}/${CLUSTER1}/virtualdestination-reviews.yaml
+apiVersion: networking.gloo.solo.io/v2
+kind: VirtualDestination
+metadata:
+  name: reviews
+  namespace: bookinfo-backends
+  labels:
+    failover: "true"
+spec:
+  hosts:
+  - reviews.global
+  services:
+  - namespace: bookinfo-backends
+    labels:
+      app: reviews
+  ports:
+  - number: 9080
+    protocol: HTTP
+EOF
+cat <<EOF >>${GITOPS_BOOKINFO}/${CLUSTER1}/kustomization.yaml
+- failover-reviews.yaml
+- outlierdetection-reviews.yaml
+EOF
+
+git -C ${GITOPS_REPO_LOCAL} add .
+git -C ${GITOPS_REPO_LOCAL} commit -m "Manage reviews traffic with failover"
+git -C ${GITOPS_REPO_LOCAL} push
+echo -n Waiting for Argo CD to sync...
+timeout -v 5m bash -c "until [[ \$(kubectl --context ${CLUSTER1} -n bookinfo-backends get failoverpolicy failover 2>/dev/null) ]]; do
+  sleep 1
+  echo -n .
+done"
+echo
+cat <<'EOF' > ./test.js
+const helpers = require('./tests/chai-exec');
+
+describe("The productpage service should get responses from cluster2", () => {
+  const podName = helpers.getOutputForCommand({ command: "kubectl -n bookinfo-frontends get pods -l app=productpage -o jsonpath='{.items[0].metadata.name}' --context " + process.env.CLUSTER1 }).replaceAll("'", "");
+  const command = "kubectl -n bookinfo-frontends exec " + podName + " --context " + process.env.CLUSTER1 + " -- python -c \"import requests; r = requests.get('http://reviews.global:9080/reviews/0'); print(r.text)\"";
+  it('Got a response from cluster1', () => helpers.genericCommand({ command: command, responseContains: "cluster1" }));
+});
+EOF
+echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/east-west-virtual-destination/tests/reviews-from-cluster1.test.js.liquid"
+timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+kubectl --context $CLUSTER1 -n bookinfo-frontends exec deploy/productpage-v1 -- python -c "import requests; r = requests.get('http://reviews.global:9080/reviews/0'); print(r.text)"
+kubectl --context ${CLUSTER1} -n bookinfo-backends scale deploy/reviews-v1 --replicas=0
+kubectl --context ${CLUSTER1} -n bookinfo-backends scale deploy/reviews-v2 --replicas=0
+kubectl --context ${CLUSTER1} -n bookinfo-backends wait --for=jsonpath='{.spec.replicas}'=0 deploy/reviews-v1
+kubectl --context ${CLUSTER1} -n bookinfo-backends wait --for=jsonpath='{.spec.replicas}'=0 deploy/reviews-v2
+cat <<'EOF' > ./test.js
+const helpers = require('./tests/chai-exec');
+
+describe("The productpage service should get responses from cluster2", () => {
+  const podName = helpers.getOutputForCommand({ command: "kubectl -n bookinfo-frontends get pods -l app=productpage -o jsonpath='{.items[0].metadata.name}' --context " + process.env.CLUSTER1 }).replaceAll("'", "");
+  const command = "kubectl -n bookinfo-frontends exec " + podName + " --context " + process.env.CLUSTER1 + " -- python -c \"import requests; r = requests.get('http://reviews.global:9080/reviews/0'); print(r.text)\"";
+  it('Got a response from cluster2', () => helpers.genericCommand({ command: command, responseContains: "cluster2" }));
+});
+EOF
+echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/east-west-virtual-destination/tests/reviews-from-cluster2.test.js.liquid"
+timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+kubectl --context $CLUSTER1 -n bookinfo-frontends exec deploy/productpage-v1 -- python -c "import requests; r = requests.get('http://reviews.global:9080/reviews/0'); print(r.text)"
+kubectl --context ${CLUSTER1} -n bookinfo-backends scale deploy/reviews-v1 --replicas=1
+kubectl --context ${CLUSTER1} -n bookinfo-backends scale deploy/reviews-v2 --replicas=1
+kubectl --context ${CLUSTER1} -n bookinfo-backends wait --for=jsonpath='{.status.readyReplicas}'=1 deploy/reviews-v1
+kubectl --context ${CLUSTER1} -n bookinfo-backends wait --for=jsonpath='{.status.readyReplicas}'=1 deploy/reviews-v2
+kubectl --context ${CLUSTER1} -n bookinfo-backends patch deploy reviews-v1 --patch '{"spec": {"template": {"spec": {"containers": [{"name": "reviews","command": ["sleep", "20h"]}]}}}}'
+kubectl --context ${CLUSTER1} -n bookinfo-backends patch deploy reviews-v2 --patch '{"spec": {"template": {"spec": {"containers": [{"name": "reviews","command": ["sleep", "20h"]}]}}}}'
+kubectl --context ${CLUSTER1} -n bookinfo-backends rollout status deploy/reviews-v1
+kubectl --context ${CLUSTER1} -n bookinfo-backends rollout status deploy/reviews-v2
+cat <<'EOF' > ./test.js
+const helpers = require('./tests/chai-exec');
+
+describe("The productpage service should get responses from cluster2", () => {
+  const podName = helpers.getOutputForCommand({ command: "kubectl -n bookinfo-frontends get pods -l app=productpage -o jsonpath='{.items[0].metadata.name}' --context " + process.env.CLUSTER1 }).replaceAll("'", "");
+  const command = "kubectl -n bookinfo-frontends exec " + podName + " --context " + process.env.CLUSTER1 + " -- python -c \"import requests; r = requests.get('http://reviews.global:9080/reviews/0'); print(r.text)\"";
+  it('Got a response from cluster2', () => helpers.genericCommand({ command: command, responseContains: "cluster2" }));
+});
+EOF
+echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/east-west-virtual-destination/tests/reviews-from-cluster2.test.js.liquid"
+timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+kubectl --context $CLUSTER1 -n bookinfo-frontends exec deploy/productpage-v1 -- python -c "import requests; r = requests.get('http://reviews.global:9080/reviews/0'); print(r.text)"
+kubectl --context ${CLUSTER1} -n bookinfo-backends patch deployment reviews-v1 --type json -p '[{"op": "remove", "path": "/spec/template/spec/containers/0/command"}]'
+kubectl --context ${CLUSTER1} -n bookinfo-backends patch deployment reviews-v2 --type json -p '[{"op": "remove", "path": "/spec/template/spec/containers/0/command"}]'
+kubectl --context ${CLUSTER1} -n bookinfo-backends rollout status deploy/reviews-v1
+kubectl --context ${CLUSTER1} -n bookinfo-backends rollout status deploy/reviews-v2
+git -C ${GITOPS_REPO_LOCAL} revert --no-commit HEAD~2..
+git -C ${GITOPS_REPO_LOCAL} commit -m "Revert reviews virtual destination routing"
+git -C ${GITOPS_REPO_LOCAL} push
+(timeout 2s kubectl --context ${CLUSTER1} -n httpbin rollout status deploy/in-mesh) || (kubectl --context ${CLUSTER1} -n httpbin rollout restart deploy/in-mesh && kubectl --context ${CLUSTER1} -n httpbin rollout status deploy/in-mesh)
+cat <<'EOF' > ./test.js
+var chai = require('chai');
+var expect = chai.expect;
+const helpers = require('./tests/chai-exec');
+describe("Communication allowed", () => {
+  it("Response code should be 200", () => {
+    const podName = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n httpbin get pods -l app=not-in-mesh -o jsonpath='{.items[0].metadata.name}'" }).replaceAll("'", "");
+    const command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n httpbin debug -i -q " + podName + " --image=curlimages/curl -- curl -s -o /dev/null -w \"%{http_code}\" http://reviews.bookinfo-backends:9080/reviews/0" }).replaceAll("'", "");
+    expect(command).to.contain("200");
+  });
+});
+EOF
+echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/httpbin/zero-trust/tests/not-in-mesh-to-in-mesh-allowed.test.js.liquid"
+timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+cat <<'EOF' > ./test.js
+var chai = require('chai');
+var expect = chai.expect;
+const helpers = require('./tests/chai-exec');
+describe("Communication allowed", () => {
+  it("Response code should be 200", () => {
+    const podName = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n httpbin get pods -l app=in-mesh -o jsonpath='{.items[0].metadata.name}'" }).replaceAll("'", "");
+    const command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n httpbin debug -i -q " + podName + " --image=curlimages/curl -- curl -s -o /dev/null -w \"%{http_code}\" http://reviews.bookinfo-backends:9080/reviews/0" }).replaceAll("'", "");
+    expect(command).to.contain("200");
+  });
+});
+EOF
+echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/httpbin/zero-trust/tests/in-mesh-to-in-mesh-allowed.test.js.liquid"
+timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+cat <<EOF > ${GITOPS_BOOKINFO}/${CLUSTER1}/workspace-settings.yaml
+apiVersion: admin.gloo.solo.io/v2
+kind: WorkspaceSettings
+metadata:
+  name: bookinfo
+  namespace: bookinfo-frontends
+spec:
+  importFrom:
+  - workspaces:
+    - name: gateways
+    resources:
+    - kind: SERVICE
+  exportTo:
+  - workspaces:
+    - name: gateways
+    resources:
+    - kind: SERVICE
+      labels:
+        app: productpage
+    - kind: SERVICE
+      labels:
+        app: reviews
+    - kind: SERVICE
+      labels:
+        app: ratings
+    - kind: ALL
+      labels:
+        expose: "true"
+  options:
+    serviceIsolation:
+      enabled: true
+      trimProxyConfig: true
+EOF
+
+git -C ${GITOPS_REPO_LOCAL} add .
+git -C ${GITOPS_REPO_LOCAL} commit -m "Enable service isolation"
+git -C ${GITOPS_REPO_LOCAL} push
+echo -n Waiting for Argo CD to sync...
+timeout -v 5m bash -c "until [[ \"\$(kubectl --context ${CLUSTER1} -n bookinfo-frontends get workspacesettings bookinfo -ojsonpath='{.spec.options.serviceIsolation.enabled}' 2>/dev/null)\" = \"true\" ]]; do
+  sleep 1
+  echo -n .
+done"
+echo
+cat <<'EOF' > ./test.js
+var chai = require('chai');
+var expect = chai.expect;
+const helpers = require('./tests/chai-exec');
+describe("Communication not allowed", () => {
+  it("Response code shouldn't be 200", () => {
+    const podName = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n httpbin get pods -l app=not-in-mesh -o jsonpath='{.items[0].metadata.name}'" }).replaceAll("'", "");
+    const command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n httpbin debug -i -q " + podName + " --image=curlimages/curl -- curl -s -o /dev/null -w \"%{http_code}\" --max-time 3 http://reviews.bookinfo-backends:9080/reviews/0" }).replaceAll("'", "");
+    expect(command).not.to.contain("200");
+  });
+});
+EOF
+echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/httpbin/zero-trust/tests/not-in-mesh-to-in-mesh-not-allowed.test.js.liquid"
+timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+cat <<'EOF' > ./test.js
+var chai = require('chai');
+var expect = chai.expect;
+const helpers = require('./tests/chai-exec');
+describe("Communication not allowed", () => {
+  it("Response code shouldn't be 200", () => {
+    const podName = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n httpbin get pods -l app=in-mesh -o jsonpath='{.items[0].metadata.name}'" }).replaceAll("'", "");
+    const command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n httpbin debug -i -q " + podName + " --image=curlimages/curl -- curl -s -o /dev/null -w \"%{http_code}\" --max-time 3 http://reviews.bookinfo-backends:9080/reviews/0" }).replaceAll("'", "");
+    expect(command).not.to.contain("200");
+  });
+});
+EOF
+echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/httpbin/zero-trust/tests/in-mesh-to-in-mesh-not-allowed.test.js.liquid"
+timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+cat <<EOF > ${GITOPS_BOOKINFO}/${CLUSTER1}/accesspolicy-productpage.yaml
+apiVersion: security.policy.gloo.solo.io/v2
+kind: AccessPolicy
+metadata:
+  name: allow-productpage
+  namespace: bookinfo-frontends
+spec:
+  applyToDestinations:
+  - selector:
+      labels:
+        app: productpage
+  config:
+    authz:
+      allowedClients:
+      - serviceAccountSelector:
+          name: istio-ingressgateway-1-20-service-account
+          namespace: istio-gateways
+      - serviceAccountSelector:
+          name: istio-eastwestgateway-1-20-service-account
+          namespace: istio-gateways
+EOF
+cat <<EOF > ${GITOPS_BOOKINFO}/${CLUSTER1}/accesspolicy-details-reviews.yaml
+apiVersion: security.policy.gloo.solo.io/v2
+kind: AccessPolicy
+metadata:
+  name: allow-details-reviews
+  namespace: bookinfo-frontends
+spec:
+  applyToDestinations:
+  - selector:
+      labels:
+        app: details
+  - selector:
+      labels:
+        app: reviews
+  config:
+    authz:
+      allowedClients:
+      - serviceAccountSelector:
+          name: bookinfo-productpage
+      allowedMethods:
+      - GET
+EOF
+cat <<EOF > ${GITOPS_BOOKINFO}/${CLUSTER1}/accesspolicy-ratings.yaml
+apiVersion: security.policy.gloo.solo.io/v2
+kind: AccessPolicy
+metadata:
+  name: allow-ratings
+  namespace: bookinfo-frontends
+spec:
+  applyToDestinations:
+  - selector:
+      labels:
+        app: ratings
+  config:
+    authz:
+      allowedClients:
+      - serviceAccountSelector:
+          name: bookinfo-reviews
+EOF
+cat <<EOF >>${GITOPS_BOOKINFO}/${CLUSTER1}/kustomization.yaml
+- accesspolicy-productpage.yaml
+- accesspolicy-details-reviews.yaml
+- accesspolicy-ratings.yaml
+EOF
+
+git -C ${GITOPS_REPO_LOCAL} add .
+git -C ${GITOPS_REPO_LOCAL} commit -m "Access policies"
+git -C ${GITOPS_REPO_LOCAL} push
+echo -n Waiting for Argo CD to sync...
+timeout -v 5m bash -c "until [[ \$(kubectl --context ${CLUSTER1} -n bookinfo-frontends get accesspolicy allow-productpage 2>/dev/null) ]]; do
+  sleep 1
+  echo -n .
+done"
+echo
+cat <<'EOF' > ./test.js
+var chai = require('chai');
+var expect = chai.expect;
+const helpers = require('./tests/chai-exec');
+
+describe("Communication status", () => {
+
+  it("Response code shouldn't be 200 accessing ratings", () => {
+    const command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n bookinfo-frontends exec deploy/productpage-v1 -- python -c \"import requests; r = requests.get('http://ratings.bookinfo-backends:9080/ratings/0', timeout=3); print(r.status_code)\"" }).replaceAll("'", "");
+    expect(command).not.to.contain("200");
+  });
+
+  it("Response code should be 200 accessing reviews with GET", () => {
+    const command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n bookinfo-frontends exec deploy/productpage-v1 -- python -c \"import requests; r = requests.get('http://reviews.bookinfo-backends:9080/reviews/0'); print(r.status_code)\"" }).replaceAll("'", "");
+    expect(command).to.contain("200");
+  });
+
+  it("Response code should be 403 accessing reviews with HEAD", () => {
+    const command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n bookinfo-frontends exec deploy/productpage-v1 -- python -c \"import requests; r = requests.head('http://reviews.bookinfo-backends:9080/reviews/0'); print(r.status_code)\"" }).replaceAll("'", "");
+    expect(command).to.contain("403");
+  });
+
+  it("Response code should be 200 accessing details", () => {
+    const command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n bookinfo-frontends exec deploy/productpage-v1 -- python -c \"import requests; r = requests.get('http://details.bookinfo-backends:9080/details/0'); print(r.status_code)\"" }).replaceAll("'", "");
+    expect(command).to.contain("200");
+  });
+});
+EOF
+echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/httpbin/zero-trust/tests/bookinfo-access.test.js.liquid"
+timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+git -C ${GITOPS_REPO_LOCAL} revert --no-commit HEAD~2..
+git -C ${GITOPS_REPO_LOCAL} commit -m "Revert zero trust configuration"
+git -C ${GITOPS_REPO_LOCAL} push
+cat <<EOF > ${GITOPS_PLATFORM}/argo-cd/kube-prometheus-stack.yaml
+apiVersion: argoproj.io/v1alpha1
+kind: Application
+metadata:
+  name: kube-prometheus-stack
+  annotations:
+    argocd.argoproj.io/sync-wave: "0"
+  finalizers:
+  - resources-finalizer.argocd.argoproj.io/background
+spec:
+  project: platform
+  destination:
+    name: ${MGMT}
+    namespace: monitoring
+  syncPolicy:
+    automated:
+      allowEmpty: true
+      prune: true
+    syncOptions:
+    - CreateNamespace=true
+    - ServerSideApply=true
+  sources:
+  - chart: kube-prometheus-stack
+    repoURL: https://prometheus-community.github.io/helm-charts
+    targetRevision: 55.9.0
+    helm:
+      releaseName: kube-prometheus-stack
+      valueFiles:
+      - \$values/platform/argo-cd/kube-prometheus-stack-values.yaml
+  - repoURL: http://$(kubectl --context ${MGMT} -n gitea get svc gitea-http -o jsonpath='{.status.loadBalancer.ingress[0].*}'):3180/gloo-gitops/gitops-repo.git
+    targetRevision: HEAD
+    ref: values
+EOF
+
+cat <<EOF > ${GITOPS_PLATFORM}/argo-cd/kube-prometheus-stack-values.yaml
+prometheus:
+  service:
+    type: LoadBalancer
+  prometheusSpec:
+    enableRemoteWriteReceiver: true
+grafana:
+  service:
+    type: LoadBalancer
+    port: 3000
+  additionalDataSources:
+  - name: prometheus-GM
+    uid: prometheus-GM
+    type: prometheus
+    url: http://prometheus-server.gloo-mesh:80
+  grafana.ini:
+    auth.anonymous:
+      enabled: true
+  defaultDashboardsEnabled: false
+EOF
+
+cat <<EOF >>${GITOPS_PLATFORM}/argo-cd/kustomization.yaml
+- kube-prometheus-stack.yaml
+EOF
+
+git -C ${GITOPS_REPO_LOCAL} add .
+git -C ${GITOPS_REPO_LOCAL} commit -m "kube-prometheus-stack"
+git -C ${GITOPS_REPO_LOCAL} push
+echo -n Waiting for Argo CD to sync...
+timeout -v 5m bash -c "until [[ \$(kubectl --context ${MGMT} -n argocd get application kube-prometheus-stack 2>/dev/null) ]]; do
+  sleep 1
+  echo -n .
+done"
+echo
+timeout 2m bash -c "until [[ \$(kubectl --context ${MGMT} -n monitoring rollout status deploy/kube-prometheus-stack-grafana 2>/dev/null) ]]; do
+  sleep 1
+done"
+if [[ ! $(kubectl --context ${MGMT} -n monitoring rollout status deploy/kube-prometheus-stack-grafana --timeout 10s) ]]; then
+  echo "kube-prometheus-stack did not deploy"
+  exit 1
+fi
+cat <<'EOF' > ./test.js
+const helpers = require('./tests/chai-exec');
+
+describe("kube-prometheus-stack deployments are ready", () => {
+  it('kube-prometheus-stack-kube-state-metrics pods are ready', () => helpers.checkDeployment({ context: process.env.MGMT, namespace: "monitoring", k8sObj: "kube-prometheus-stack-kube-state-metrics" }));
+  it('kube-prometheus-stack-grafana pods are ready', () => helpers.checkDeployment({ context: process.env.MGMT, namespace: "monitoring", k8sObj: "kube-prometheus-stack-grafana" }));
+  it('kube-prometheus-stack-operator pods are ready', () => helpers.checkDeployment({ context: process.env.MGMT, namespace: "monitoring", k8sObj: "kube-prometheus-stack-operator" }));
+});
+
+describe("kube-prometheus-stack daemonset is ready", () => {
+  it('kube-prometheus-stack-prometheus-node-exporter pods are ready', () => helpers.checkDaemonSet({ context: process.env.MGMT, namespace: "monitoring", k8sObj: "kube-prometheus-stack-prometheus-node-exporter" }));
+});
+EOF
+echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/gloo-platform-observability/tests/grafana-installed.test.js.liquid"
+timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+PROD_PROMETHEUS_IP=$(kubectl get svc kube-prometheus-stack-prometheus -n monitoring -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
+cat <<EOF > ${GITOPS_PLATFORM}/argo-cd/gloo-platform-agents-installation-values-gloo.yaml
+telemetryCollectorCustomization:
+  extraProcessors:
+    filter/gloo:
+      metrics:
+        include:
+          match_type: regexp
+          metric_names:
+          - "gloo_mesh_.*"
+          - "relay_.*"
+  extraPipelines:
+    metrics/gloo:
+      receivers:
+      - prometheus
+      processors:
+      - filter/gloo
+      - batch
+      exporters:
+      - otlp
+EOF
+yq -i '(.spec.template.spec.sources[] | select(.chart == "gloo-platform")).helm.valueFiles += ["$values/platform/argo-cd/gloo-platform-agents-installation-values-gloo.yaml"]' \
+  ${GITOPS_PLATFORM}/argo-cd/gloo-platform-agents-installation.yaml
+git -C ${GITOPS_REPO_LOCAL} add .
+git -C ${GITOPS_REPO_LOCAL} commit -m "New Helm values for Gloo metrics"
+git -C ${GITOPS_REPO_LOCAL} push
+kubectl --context $CLUSTER1 rollout restart daemonset/gloo-telemetry-collector-agent -n gloo-mesh
+cat <<EOF > ${GITOPS_PLATFORM}/${MGMT}/cm-operational-dashboard.yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: operational-dashboard
+  namespace: monitoring
+  labels:
+    grafana_dashboard: "1"
+data:
+  operational-dashboard.json: |-
+$(cat data/steps/gloo-platform-observability/operational-dashboard.json | sed -e 's/^/    /;')
+EOF
+
+cat <<EOF >>${GITOPS_PLATFORM}/${MGMT}/kustomization.yaml
+- cm-operational-dashboard.yaml
+EOF
+
+git -C ${GITOPS_REPO_LOCAL} add .
+git -C ${GITOPS_REPO_LOCAL} commit -m "Gloo Platform operator dashboard"
+git -C ${GITOPS_REPO_LOCAL} push
+echo -n Waiting for Argo CD to sync...
+timeout -v 5m bash -c "until [[ \$(kubectl --context ${MGMT} -n monitoring get cm operational-dashboard 2>/dev/null) ]]; do
+  sleep 1
+  echo -n .
+done" +echo +cat < ${GITOPS_PLATFORM}/argo-cd/gloo-platform-agents-installation-values-istio.yaml +telemetryCollectorCustomization: + extraProcessors: + batch/istiod: + send_batch_size: 10000 + timeout: 10s + filter/istiod: + metrics: + include: + match_type: regexp + metric_names: + - "pilot.*" + - "process.*" + - "go.*" + - "container.*" + - "envoy.*" + - "galley.*" + - "sidecar.*" + # - "istio_build.*" re-enable this after this is fixed upstream + extraExporters: + prometheusremotewrite/production: + endpoint: http://${PROD_PROMETHEUS_IP}:9090/api/v1/write + extraPipelines: + metrics/istiod: + receivers: + - prometheus + processors: + - memory_limiter + - batch/istiod + - filter/istiod + exporters: + - prometheusremotewrite/production + +EOF +yq -i '(.spec.template.spec.sources[] | select(.chart == "gloo-platform")).helm.valueFiles += ["$values/platform/argo-cd/gloo-platform-agents-installation-values-istio.yaml"]' \ + ${GITOPS_PLATFORM}/argo-cd/gloo-platform-agents-installation.yaml +git -C ${GITOPS_REPO_LOCAL} add . +git -C ${GITOPS_REPO_LOCAL} commit -m "New Helm values for Istio metrics" +git -C ${GITOPS_REPO_LOCAL} push +kubectl --context $CLUSTER1 rollout restart daemonset/gloo-telemetry-collector-agent -n gloo-mesh +cat < ${GITOPS_PLATFORM}/${MGMT}/cm-istio-dashboard.yaml +apiVersion: v1 +kind: ConfigMap +metadata: + name: istio-control-plane-dashboard + namespace: monitoring + labels: + grafana_dashboard: "1" +data: + istio-control-plane-dashboard.json: |- +$(cat data/steps/gloo-platform-observability/istio-control-plane-dashboard.json | sed -e 's/^/ /;') +EOF + +cat <>${GITOPS_PLATFORM}/${MGMT}/kustomization.yaml +- cm-istio-dashboard.yaml +EOF + +git -C ${GITOPS_REPO_LOCAL} add . +git -C ${GITOPS_REPO_LOCAL} commit -m "Istio control plane dashboard" +git -C ${GITOPS_REPO_LOCAL} push +echo -n Waiting for Argo CD to sync... 
+timeout -v 5m bash -c "until [[ \$(kubectl --context ${MGMT} -n monitoring get cm istio-control-plane-dashboard 2>/dev/null) ]]; do + sleep 1 + echo -n . +done" +echo +cat < ${GITOPS_PLATFORM}/argo-cd/gloo-platform-agents-installation-values-spire.yaml +glooSpireServer: + enabled: true + controller: + verbose: true + server: + trustDomain: cluster1 +postgresql: + enabled: true + global: + postgresql: + auth: + database: spire + password: gloomesh + username: spire +telemetryCollectorCustomization: + pipelines: + metrics/otlp_relay: + enabled: true +prometheus: + skipAutoMigration: true +EOF + +yq -i '(.spec.template.spec.sources[] | select(.chart == "gloo-platform")).helm.valueFiles += ["$values/platform/argo-cd/gloo-platform-agents-installation-values-spire.yaml"]' \ + ${GITOPS_PLATFORM}/argo-cd/gloo-platform-agents-installation.yaml +git -C ${GITOPS_REPO_LOCAL} add . +git -C ${GITOPS_REPO_LOCAL} commit -m "Enable spire server" +git -C ${GITOPS_REPO_LOCAL} push +echo -n Waiting for Argo CD to sync... +timeout -v 5m bash -c "until [[ \$(kubectl --context ${CLUSTER1} -n gloo-mesh get deploy gloo-spire-server 2>/dev/null) ]]; do + sleep 1 + echo -n . +done" +echo +kubectl --context ${CLUSTER1} -n istio-system delete secrets cacerts +kubectl --context ${CLUSTER1} -n istio-system delete issuedcertificates,podbouncedirectives --all +kubectl --context ${CLUSTER1} -n gloo-mesh rollout status deploy +bash ./data/steps/root-trust-policy/restart-istio-pods.sh ${CLUSTER1} +kubectl --context ${CLUSTER1} -n gloo-mesh rollout restart deploy gloo-mesh-agent +printf "Waiting for all pods needed for the test..." 
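The VM-onboarding steps that follow gate on the Spire join token matching a UUID pattern before calling `meshctl`. That format check can be exercised on its own; a minimal sketch, using the same regex as the scripts (the `is_uuid` helper and the sample token are illustrative, not part of the workshop):

```shell
#!/usr/bin/env bash
# Illustrative only: the UUID pattern the onboarding loop uses to decide
# whether JOIN_TOKEN is usable. is_uuid is a hypothetical helper.
uuid_regex_partial="[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}"
uuid_regex="^${uuid_regex_partial}$"

# Returns 0 (success) only when the argument is a lowercase UUID.
is_uuid() { [[ $1 =~ ${uuid_regex} ]]; }

is_uuid "123e4567-e89b-12d3-a456-426614174000" && echo "valid token format"
is_uuid "not-a-token" || echo "rejected"
```

Note that the regex is anchored on both ends, so extra characters around an otherwise valid UUID are rejected too.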
+printf "\n"
+kubectl --context ${CLUSTER1} -n istio-gateways rollout status deploy
+kubectl --context ${CLUSTER1} -n gloo-mesh rollout status deploy
+printf "\n"
+export VM_APP="vm1"
+export VM_NAMESPACE="virtualmachines"
+export VM_NETWORK="vm-network"
+cat <<EOF >${GITOPS_BOOKINFO}/${CLUSTER1}/ns-virtualmachines.yaml
+apiVersion: v1
+kind: Namespace
+metadata:
+  name: ${VM_NAMESPACE}
+EOF
+cat <<EOF > ${GITOPS_PLATFORM}/${MGMT}/workspaces/bookinfo.yaml
+apiVersion: admin.gloo.solo.io/v2
+kind: Workspace
+metadata:
+  name: bookinfo
+  namespace: gloo-mesh
+  labels:
+    allow_ingress: "true"
+spec:
+  workloadClusters:
+  - name: cluster1
+    namespaces:
+    - name: bookinfo-frontends
+    - name: bookinfo-backends
+    - name: virtualmachines
+  - name: cluster2
+    namespaces:
+    - name: bookinfo-frontends
+    - name: bookinfo-backends
+EOF
+cat <<EOF > ${GITOPS_PLATFORM}/${MGMT}/workspaces/gateways.yaml
+apiVersion: admin.gloo.solo.io/v2
+kind: Workspace
+metadata:
+  name: gateways
+  namespace: gloo-mesh
+spec:
+  workloadClusters:
+  - name: cluster1
+    namespaces:
+    - name: istio-gateways
+    - name: gloo-mesh-addons
+    - name: gloo-mesh
+  - name: cluster2
+    namespaces:
+    - name: istio-gateways
+    - name: gloo-mesh-addons
+EOF
+docker run -d --name vm1 --network kind --privileged -v `pwd`/vm1:/vm djannot/ubuntu-systemd:22.04
+docker exec vm1 bash -c "sed 's/127.0.0.11/8.8.8.8/' /etc/resolv.conf > /vm/resolv.conf"
+docker exec vm1 cp /vm/resolv.conf /etc/resolv.conf
+docker exec vm1 apt update -y
+docker exec vm1 apt-get install -y iputils-ping curl iproute2 iptables python3 sudo dnsutils
+cluster1_cidr=$(kubectl --context ${CLUSTER1} -n kube-system get pod -l component=kube-controller-manager -o jsonpath='{.items[0].spec.containers[0].command}' | jq -r '.[] | select(. | startswith("--cluster-cidr="))' | cut -d= -f2)
+cluster2_cidr=$(kubectl --context ${CLUSTER2} -n kube-system get pod -l component=kube-controller-manager -o jsonpath='{.items[0].spec.containers[0].command}' | jq -r '.[] | select(. | startswith("--cluster-cidr="))' | cut -d= -f2)
+
+docker exec vm1 $(kubectl --context ${CLUSTER1} get nodes -o=jsonpath='{range .items[*]}{"ip route add "}{"'${cluster1_cidr}' via "}{.status.addresses[?(@.type=="InternalIP")].address}{"\n"}{end}')
+docker exec vm1 $(kubectl --context ${CLUSTER2} get nodes -o=jsonpath='{range .items[*]}{"ip route add "}{"'${cluster2_cidr}' via "}{.status.addresses[?(@.type=="InternalIP")].address}{"\n"}{end}')
+docker cp $HOME/.gloo-mesh/bin/meshctl vm1:/usr/local/bin/
+cat <<EOF > ${GITOPS_BOOKINFO}/${CLUSTER1}/externalworkload.yaml
+apiVersion: networking.gloo.solo.io/v2alpha1
+kind: ExternalWorkload
+metadata:
+  name: ${VM_APP}
+  namespace: virtualmachines
+  labels:
+    app: ${VM_APP}
+spec:
+  connectedClusters:
+    ${CLUSTER1}: virtualmachines
+  identitySelector:
+    joinToken:
+      enable: true
+  ports:
+  - name: http-vm
+    number: 9999
+  - name: tcp-db
+    number: 3306
+    protocol: TCP
+EOF
+cat <<EOF >>${GITOPS_BOOKINFO}/${CLUSTER1}/kustomization.yaml
+- ns-virtualmachines.yaml
+- externalworkload.yaml
+EOF
+
+git -C ${GITOPS_REPO_LOCAL} add .
+git -C ${GITOPS_REPO_LOCAL} commit -m "External workload"
+git -C ${GITOPS_REPO_LOCAL} push
+echo -n Waiting for Argo CD to sync...
+timeout -v 5m bash -c "until [[ \$(kubectl --context ${CLUSTER1} get ns ${VM_NAMESPACE} 2>/dev/null) ]]; do
+  sleep 1
+  echo -n .
+done"
+echo
+uuid_regex_partial="[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}"
+uuid_regex="^${uuid_regex_partial}$"
+start_time=$(date +%s) # Capture start time
+duration=120 # Set duration for 2 minutes (120 seconds)
+# Loop until JOIN_TOKEN matches the UUID format
+while [[ ! "${JOIN_TOKEN}" =~ ${uuid_regex} ]]; do
+  current_time=$(date +%s)
+  elapsed=$((current_time - start_time))
+  if [[ $elapsed -ge $duration ]]; then
+    echo "Timeout reached. Exiting loop."
+    break
+  fi
+
+  echo "Waiting for JOIN_TOKEN to have the correct format..."
+  export JOIN_TOKEN=$(meshctl external-workload gen-token --kubecontext ${CLUSTER1} --trust-domain ${CLUSTER1} --ttl 3600 --ext-workload virtualmachines/${VM_APP} --plain=true | grep -ioE "${uuid_regex_partial}")
+  sleep 1 # Pause for 1 second
+done
+[[ "${JOIN_TOKEN}" =~ ${uuid_regex} ]] || (echo "JOIN_TOKEN does not match the UUID format." && exit 1)
+export EW_GW_ADDR=$(kubectl --context ${CLUSTER1} -n istio-gateways get svc -l istio=eastwestgateway -o jsonpath='{.items[0].status.loadBalancer.ingress[0].*}')
+echo -n Waiting for EW be ready...
+timeout -v 1m bash -c "
+until nc -z ${EW_GW_ADDR} 31338;
+do
+  sleep 1
+  echo -n .
+done"
+echo
+export GLOO_AGENT_URL=https://storage.googleapis.com/gloo-platform/vm/v2.5.12/gloo-workload-agent.deb
+export ISTIO_URL=https://storage.googleapis.com/solo-workshops/istio-binaries/1.20.2/istio-sidecar.deb
+echo -n Trying to onboard the VM...
+MAX_ATTEMPTS=10
+ATTEMPTS=0
+while [ $ATTEMPTS -lt $MAX_ATTEMPTS ]; do
+  kubectl --context ${CLUSTER1} -n gloo-mesh rollout restart deploy gloo-spire-server
+  kubectl --context ${CLUSTER1} -n gloo-mesh rollout status deploy gloo-spire-server
+  sleep 30
+
+  export JOIN_TOKEN=$(meshctl external-workload gen-token --kubecontext ${CLUSTER1} --trust-domain ${CLUSTER1} --ttl 3600 --ext-workload virtualmachines/${VM_APP} --plain=true | grep -ioE "${uuid_regex_partial}")
+  timeout 1m docker exec vm1 meshctl ew onboard --install \
+    --attestor token \
+    --join-token ${JOIN_TOKEN} \
+    --cluster ${CLUSTER1} \
+    --gateway-addr ${EW_GW_ADDR} \
+    --gateway istio-gateways/istio-eastwestgateway-1-20 \
+    --trust-domain ${CLUSTER1} \
+    --istio-rev 1-20 \
+    --network vm-network \
+    --gloo ${GLOO_AGENT_URL} \
+    --istio ${ISTIO_URL} \
+    --ext-workload virtualmachines/${VM_APP} | tee output.log
+  cat output.log | grep "Onboarding complete!"
+  if [ $? -eq 0 ]; then
+    break
+  fi
+  ATTEMPTS=$((ATTEMPTS + 1))
+  echo "Onboarding failed, retrying... (${ATTEMPTS}/${MAX_ATTEMPTS})"
+  sleep 2
+done
+if [ $ATTEMPTS -eq $MAX_ATTEMPTS ]; then
+  echo "Onboarding failed after $MAX_ATTEMPTS attempts"
+  exit 1
+fi
+docker exec vm1 curl -v localhost:15000/clusters | grep productpage.bookinfo-frontends.svc.cluster.local
+docker exec vm1 curl -I productpage.bookinfo-frontends.svc.cluster.local:9080/productpage
+cat <<'EOF' > ./test.js
+const helpers = require('./tests/chai-exec');
+
+describe("The VM should be able to access the productpage service", () => {
+  const command = 'docker exec vm1 curl -s -o /dev/null -w "%{http_code}" productpage.bookinfo-frontends.svc.cluster.local:9080/productpage';
+  it("Got the expected status code 200", () => helpers.genericCommand({ command: command, responseContains: "200" }));
+})
+
+EOF
+echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/vm-integration-spire/tests/vm-access-productpage.test.js.liquid"
+timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+docker exec -d vm1 python3 -m http.server 9999
+kubectl --context ${CLUSTER1} -n bookinfo-frontends exec $(kubectl --context ${CLUSTER1} -n bookinfo-frontends get pods -l app=productpage -o jsonpath='{.items[0].metadata.name}') -- python -c "import requests; r = requests.get('http://${VM_APP}.virtualmachines.ext.cluster.local:9999'); print(r.text)"
+cat <<'EOF' > ./test.js
+const helpers = require('./tests/chai-exec');
+
+describe("The productpage service should be able to access the VM", () => {
+  const podName = helpers.getOutputForCommand({ command: "kubectl -n bookinfo-frontends get pods -l app=productpage -o jsonpath='{.items[0].metadata.name}' --context " + process.env.CLUSTER1 }).replaceAll("'", "");
+  const command = "kubectl -n bookinfo-frontends exec " + podName + " --context " + process.env.CLUSTER1 + " -- python -c \"import requests; r = requests.get('http://" + process.env.VM_APP + ".virtualmachines.ext.cluster.local:9999'); print(r.status_code)\"";
+  it('Got the expected status code 200', () => helpers.genericCommand({ command: command, responseContains: "200" }));
+});
+EOF
+echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/vm-integration-spire/tests/productpage-access-vm.test.js.liquid"
+timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+docker exec vm1 apt-get update
+docker exec vm1 apt-get install -y mariadb-server
+docker exec vm1 sed -i '/bind-address/c\bind-address = 0.0.0.0' /etc/mysql/mariadb.conf.d/50-server.cnf
+docker exec vm1 systemctl start mysql
+
+docker exec -i vm1 mysql <>${GITOPS_BOOKINFO}/${CLUSTER1}/backends/kustomization.yaml
+- ratings-v2-mysql-vm.yaml
+EOF
+
+yq -i '. |= ({"replicas":[{"name":"ratings-v1","count":0}]}) + .' ${GITOPS_BOOKINFO}/${CLUSTER1}/backends/kustomization.yaml
+
+git -C ${GITOPS_REPO_LOCAL} add .
+git -C ${GITOPS_REPO_LOCAL} commit -m "New ratings version with external database"
+git -C ${GITOPS_REPO_LOCAL} push
+echo -n Waiting for Argo CD to sync...
+timeout -v 5m bash -c "until [[ \$(kubectl --context ${CLUSTER1} -n bookinfo-backends get deploy ratings-v2-mysql-vm 2>/dev/null) ]]; do
+  sleep 1
+  echo -n .
+done"
+echo
+kubectl --context ${CLUSTER1} -n bookinfo-backends wait --for=delete pod -l app=ratings,version=v1
+cat <<'EOF' > ./test.js
+const helpers = require('./tests/chai-http');
+
+describe("The ratings service should use the database running on the VM", () => {
+  it('Got reviews v2 with ratings in cluster1', () => helpers.checkBody({ host: `https://cluster1-bookinfo.example.com`, path: '/productpage', body: 'text-black', match: true }));
+})
+
+EOF
+echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/vm-integration-spire/tests/ratings-using-vm.test.js.liquid"
+timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+git -C ${GITOPS_REPO_LOCAL} revert --no-commit HEAD~2..
+git -C ${GITOPS_REPO_LOCAL} commit -m "Revert external workload"
+git -C ${GITOPS_REPO_LOCAL} push
+docker rm -f vm1
+cat <<'EOF' > ./test.js
+var chai = require('chai');
+var expect = chai.expect;
+const helpers = require('./tests/chai-exec');
+
+describe("Communication status", () => {
+  it("Productpage can send requests to httpbin.org", () => {
+    const command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n bookinfo-frontends exec deploy/productpage-v1 -- python -c \"import requests; r = requests.get('http://httpbin.org/get'); print(r.status_code)\"" }).replaceAll("'", "");
+    expect(command).to.contain("200");
+  });
+});
+
+EOF
+echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/secure-egress/tests/productpage-to-httpbin-allowed.test.js.liquid"
+timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+cat <<EOF > ${GITOPS_GATEWAYS}/${MGMT}/glm-cluster1-egress.yaml
+apiVersion: admin.gloo.solo.io/v2
+kind: GatewayLifecycleManager
+metadata:
+  name: cluster1-egress
+  namespace: gloo-mesh
+spec:
+  installations:
+  - clusters:
+    - name: cluster1
+      activeGateway: false
+    gatewayRevision: 1-20
+    istioOperatorSpec:
+      profile: empty
+      hub: us-docker.pkg.dev/gloo-mesh/istio-workshops
+      tag: 1.20.2-solo
+      components:
+        egressGateways:
+        - enabled: true
+          label:
+            istio: egressgateway
+          name: istio-egressgateway
+          namespace: istio-gateways
+EOF
+
+cat <<EOF >>${GITOPS_GATEWAYS}/${MGMT}/kustomization.yaml
+- glm-cluster1-egress.yaml
+EOF
+
+git -C ${GITOPS_REPO_LOCAL} add .
+git -C ${GITOPS_REPO_LOCAL} commit -m "cluster1 egress gateway lifecycle manager"
+git -C ${GITOPS_REPO_LOCAL} push
+echo -n Waiting for Argo CD to sync...
+timeout -v 5m bash -c "until [[ \$(kubectl --context ${MGMT} -n gloo-mesh get glm cluster1-egress 2>/dev/null) ]]; do
+  sleep 1
+  echo -n .
+done"
+echo
+ATTEMPTS=1
+until [[ $(kubectl --context $CLUSTER1 -n istio-gateways get deploy -l istio=egressgateway -o json | jq '[.items[].status.readyReplicas] | add') -ge 1 ]] || [ $ATTEMPTS -gt 120 ]; do
+  printf "."
+  ATTEMPTS=$((ATTEMPTS + 1))
+  sleep 1
+done
+cat <<EOF > ${GITOPS_GATEWAYS}/${CLUSTER1}/virtualgateway-egress.yaml
+apiVersion: networking.gloo.solo.io/v2
+kind: VirtualGateway
+metadata:
+  name: egress-gw
+  namespace: istio-gateways
+spec:
+  listeners:
+  - exposedExternalServices:
+    - host: httpbin.org
+    appProtocol: HTTPS
+    port:
+      number: 443
+    tls:
+      mode: ISTIO_MUTUAL
+    http: {}
+  workloads:
+  - selector:
+      labels:
+        app: istio-egressgateway
+        istio: egressgateway
+EOF
+cat <<EOF > ${GITOPS_BOOKINFO}/${CLUSTER1}/frontends/networkpolicy.yaml
+apiVersion: networking.k8s.io/v1
+kind: NetworkPolicy
+metadata:
+  name: restrict-egress
+  namespace: bookinfo-frontends
+spec:
+  podSelector: {}
+  policyTypes:
+  - Egress
+  egress:
+  - to:
+    - namespaceSelector:
+        matchLabels: {}
+      podSelector:
+        matchLabels: {}
+  - to:
+    - ipBlock:
+        cidr: $(kubectl --context ${CLUSTER2} -n istio-gateways get svc -l istio=eastwestgateway -o jsonpath='{.items[].status.loadBalancer.ingress[0].*}')/32
+    ports:
+    - protocol: TCP
+      port: 15443
+      endPort: 15443
+EOF
+cat <<EOF >>${GITOPS_GATEWAYS}/${CLUSTER1}/kustomization.yaml
+- virtualgateway-egress.yaml
+EOF
+
+cat <<EOF >>${GITOPS_BOOKINFO}/${CLUSTER1}/frontends/kustomization.yaml
+- networkpolicy.yaml
+EOF
+
+git -C ${GITOPS_REPO_LOCAL} add .
+git -C ${GITOPS_REPO_LOCAL} commit -m "cluster1 egress VirtualGateway and network policy"
+git -C ${GITOPS_REPO_LOCAL} push
+echo -n Waiting for Argo CD to sync...
+timeout -v 5m bash -c "until [[ \$(kubectl --context ${CLUSTER1} -n bookinfo-frontends get netpol restrict-egress 2>/dev/null) ]]; do
+  sleep 1
+  echo -n .
+done"
+echo
+cat <<'EOF' > ./test.js
+var chai = require('chai');
+var expect = chai.expect;
+const helpers = require('./tests/chai-exec');
+
+describe("Communication not allowed", () => {
+  it("Productpage can NOT send requests to httpbin.org", () => {
+    const command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n bookinfo-frontends exec deploy/productpage-v1 -- python -c \"import requests; r = requests.get('http://httpbin.org/get', timeout=5); print(r.text)\"" }).replaceAll("'", "");
+    expect(command).not.to.contain("User-Agent");
+  });
+});
+
+EOF
+echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/secure-egress/tests/productpage-to-httpbin-not-allowed.test.js.liquid"
+timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+cat <<EOF > ${GITOPS_BOOKINFO}/${CLUSTER1}/frontends/externalservice.yaml
+apiVersion: networking.gloo.solo.io/v2
+kind: ExternalService
+metadata:
+  name: httpbin
+  namespace: bookinfo-frontends
+  labels:
+    expose: 'true'
+spec:
+  hosts:
+  - httpbin.org
+  ports:
+  - clientsideTls: {}
+    egressGatewayRoutes:
+      portMatch: 80
+      virtualGatewayRefs:
+      - cluster: cluster1
+        name: egress-gw
+        namespace: istio-gateways
+    name: https
+    number: 443
+    protocol: HTTPS
+EOF
+
+cat <<EOF >>${GITOPS_BOOKINFO}/${CLUSTER1}/frontends/kustomization.yaml
+- externalservice.yaml
+EOF
+
+git -C ${GITOPS_REPO_LOCAL} add .
+git -C ${GITOPS_REPO_LOCAL} commit -m "httpbin external service"
+git -C ${GITOPS_REPO_LOCAL} push
+echo -n Waiting for Argo CD to sync...
+timeout -v 5m bash -c "until [[ \$(kubectl --context ${CLUSTER1} -n bookinfo-frontends get externalservice httpbin 2>/dev/null) ]]; do
+  sleep 1
+  echo -n .
+done"
+echo
+kubectl --context ${CLUSTER1} -n bookinfo-frontends exec $(kubectl --context ${CLUSTER1} -n bookinfo-frontends get pods -l app=productpage -o jsonpath='{.items[0].metadata.name}') -- python -c "import requests; r = requests.get('http://httpbin.org/get'); print(r.text)"
+cat <<'EOF' > ./test.js
+var chai = require('chai');
+var expect = chai.expect;
+const helpers = require('./tests/chai-exec');
+
+describe("Communication status", () => {
+  it("Productpage can send requests to httpbin.org", () => {
+    const command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n bookinfo-frontends exec deploy/productpage-v1 -- python -c \"import requests; r = requests.get('http://httpbin.org/get'); print(r.status_code)\"" }).replaceAll("'", "");
+    expect(command).to.contain("200");
+  });
+});
+
+EOF
+echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/secure-egress/tests/productpage-to-httpbin-allowed.test.js.liquid"
+timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+cat <<EOF > ${GITOPS_GATEWAYS}/${CLUSTER1}/accesspolicy-allow-get-httpbin.yaml
+apiVersion: security.policy.gloo.solo.io/v2
+kind: AccessPolicy
+metadata:
+  name: allow-get-httpbin
+  namespace: istio-gateways
+spec:
+  applyToDestinations:
+  - kind: EXTERNAL_SERVICE
+    selector:
+      name: httpbin
+      namespace: bookinfo-frontends
+      cluster: cluster1
+  config:
+    authz:
+      allowedClients:
+      - serviceAccountSelector:
+          name: bookinfo-productpage
+      allowedMethods:
+      - GET
+    enforcementLayers:
+      mesh: true
+      cni: false
+EOF
+
+cat <<EOF >>${GITOPS_GATEWAYS}/${CLUSTER1}/kustomization.yaml
+- accesspolicy-allow-get-httpbin.yaml
+EOF
+
+git -C ${GITOPS_REPO_LOCAL} add .
+git -C ${GITOPS_REPO_LOCAL} commit -m "httpbin access policy"
+git -C ${GITOPS_REPO_LOCAL} push
+echo -n Waiting for Argo CD to sync...
+timeout -v 5m bash -c "until [[ \$(kubectl --context ${CLUSTER1} -n istio-gateways get accesspolicy allow-get-httpbin 2>/dev/null) ]]; do
+  sleep 1
+  echo -n .
+done"
+echo
+cat <<'EOF' > ./test.js
+var chai = require('chai');
+var expect = chai.expect;
+const helpers = require('./tests/chai-exec');
+
+describe("Communication status", () => {
+  it("Productpage can send GET requests to httpbin.org", () => {
+    const command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n bookinfo-frontends exec deploy/productpage-v1 -- python -c \"import requests; r = requests.get('http://httpbin.org/get'); print(r.status_code)\"" }).replaceAll("'", "");
+    expect(command).to.contain("200");
+  });
+
+  it("Productpage can't send POST requests to httpbin.org", () => {
+    const command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n bookinfo-frontends exec deploy/productpage-v1 -- python -c \"import requests; r = requests.post('http://httpbin.org/post'); print(r.status_code)\"" }).replaceAll("'", "");
+    expect(command).to.contain("403");
+  });
+});
+
+EOF
+echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/secure-egress/tests/productpage-to-httpbin-only-get-allowed.test.js.liquid"
+timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+git -C ${GITOPS_REPO_LOCAL} revert --no-commit HEAD~4..
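These cleanup steps rely on `git revert --no-commit <ref>..`, which stages the inverse of every commit after `<ref>` so the whole range can be undone in a single commit. The behavior can be seen in a throwaway repository; a sketch assuming only standard git (the scratch repo, file name, and messages are illustrative):

```shell
#!/usr/bin/env bash
# Demonstrates `git revert --no-commit <ref>..` in a scratch repo:
# every commit after <ref> is reverted into the index, then committed once.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
echo base > state.txt && git add state.txt && git commit -qm "base"
echo change-1 > state.txt && git commit -qam "change 1"
echo change-2 > state.txt && git commit -qam "change 2"
# Stage the inverse of the last two commits, then record one revert commit
git revert --no-commit HEAD~2..
git commit -qm "Revert last two changes"
cat state.txt   # back to "base"
```

The same pattern with `HEAD~4..` undoes the last four GitOps commits at once, which is why the revert above is followed by exactly one `commit` and `push`.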
+git -C ${GITOPS_REPO_LOCAL} commit -m "Revert egress resources" +git -C ${GITOPS_REPO_LOCAL} push diff --git a/gloo-mesh/enterprise/2-5/gitops/default/scripts/configure-domain-rewrite.sh b/gloo-mesh/enterprise/2-5/gitops/default/scripts/configure-domain-rewrite.sh index be6dbd6d8b..d6e684c9da 100755 --- a/gloo-mesh/enterprise/2-5/gitops/default/scripts/configure-domain-rewrite.sh +++ b/gloo-mesh/enterprise/2-5/gitops/default/scripts/configure-domain-rewrite.sh @@ -90,4 +90,4 @@ done # If the loop exits, it means the check failed consistently for 1 minute echo "DNS rewrite rule verification failed." -exit 1 +exit 1 \ No newline at end of file diff --git a/gloo-mesh/enterprise/2-5/gitops/default/scripts/register-domain.sh b/gloo-mesh/enterprise/2-5/gitops/default/scripts/register-domain.sh index f9084487e8..1cb84cd86a 100755 --- a/gloo-mesh/enterprise/2-5/gitops/default/scripts/register-domain.sh +++ b/gloo-mesh/enterprise/2-5/gitops/default/scripts/register-domain.sh @@ -14,7 +14,9 @@ hosts_file="/etc/hosts" # Function to check if the input is a valid IP address is_ip() { if [[ $1 =~ ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$ ]]; then - return 0 # 0 = true + return 0 # 0 = true - valid IPv4 address + elif [[ $1 =~ ^[0-9a-f]+[:]+[0-9a-f]*[:]*[0-9a-f]*[:]*[0-9a-f]*[:]*[0-9a-f]*[:]*[0-9a-f]*[:]*[0-9]*$ ]]; then + return 0 # 0 = true - valid IPv6 address else return 1 # 1 = false fi @@ -38,14 +40,15 @@ else fi # Check if the entry already exists -if grep -q "$hostname" "$hosts_file"; then +if grep -q "$hostname\$" "$hosts_file"; then # Update the existing entry with the new IP tempfile=$(mktemp) - sed "s/^.*$hostname/$new_ip $hostname/" "$hosts_file" > "$tempfile" + sed "s/^.*$hostname\$/$new_ip $hostname/" "$hosts_file" > "$tempfile" sudo cp "$tempfile" "$hosts_file" + rm "$tempfile" echo "Updated $hostname in $hosts_file with new IP: $new_ip" else # Add a new entry if it doesn't exist echo "$new_ip $hostname" | sudo tee -a "$hosts_file" > /dev/null echo "Added $hostname to 
$hosts_file with IP: $new_ip" -fi \ No newline at end of file +fi diff --git a/gloo-mesh/enterprise/2-5/gitops/default/tests/chai-exec.js b/gloo-mesh/enterprise/2-5/gitops/default/tests/chai-exec.js index 67ba62f095..020262437f 100644 --- a/gloo-mesh/enterprise/2-5/gitops/default/tests/chai-exec.js +++ b/gloo-mesh/enterprise/2-5/gitops/default/tests/chai-exec.js @@ -139,7 +139,11 @@ global = { }, k8sObjectIsPresent: ({ context, namespace, k8sType, k8sObj }) => { - let command = "kubectl --context " + context + " -n " + namespace + " get " + k8sType + " " + k8sObj + " -o name"; + // covers both namespace scoped and cluster scoped objects + let command = "kubectl --context " + context + " get " + k8sType + " " + k8sObj + " -o name"; + if (namespace) { + command = "kubectl --context " + context + " -n " + namespace + " get " + k8sType + " " + k8sObj + " -o name"; + } debugLog(`Executing command: ${command}`); let cli = chaiExec(command); @@ -176,7 +180,6 @@ global = { debugLog(`Command output (stdout): ${cli.stdout}`); return cli.stdout; }, - curlInPod: ({ curlCommand, podName, namespace }) => { debugLog(`Executing curl command: ${curlCommand} on pod: ${podName} in namespace: ${namespace}`); const cli = chaiExec(curlCommand); diff --git a/gloo-mesh/enterprise/2-5/gitops/default/tests/chai-http.js b/gloo-mesh/enterprise/2-5/gitops/default/tests/chai-http.js index 67f43db003..92bf579690 100644 --- a/gloo-mesh/enterprise/2-5/gitops/default/tests/chai-http.js +++ b/gloo-mesh/enterprise/2-5/gitops/default/tests/chai-http.js @@ -25,7 +25,30 @@ global = { .send() .then(async function (res) { debugLog(`Response status code: ${res.status}`); - expect(res).to.have.status(retCode); + expect(res).to.have.property('status', retCode); + }); + }, + + checkURLWithIP: ({ ip, host, protocol = "http", path = "", headers = [], certFile = '', keyFile = '', retCode }) => { + debugLog(`Checking URL with IP: ${ip}, Host: ${host}, Path: ${path} with expected return code: ${retCode}`); + + let 
cert = certFile ? fs.readFileSync(certFile) : ''; + let key = keyFile ? fs.readFileSync(keyFile) : ''; + + let url = `${protocol}://${ip}`; + + // Use chai-http to make a request to the IP address, but set the Host header + let request = chai.request(url).head(path).redirects(0).cert(cert).key(key).set('Host', host); + + debugLog(`Setting headers: ${JSON.stringify(headers)}`); + headers.forEach(header => request.set(header.key, header.value)); + + return request + .send() + .then(async function (res) { + debugLog(`Response status code: ${res.status}`); + debugLog(`Response ${JSON.stringify(res)}`); + expect(res).to.have.property('status', retCode); }); }, @@ -124,7 +147,7 @@ global = { .send() .then(async function (res) { debugLog(`Response status code: ${res.status}`); - expect(res).to.have.status(retCode); + expect(res).to.have.property('status', retCode); }); } }; diff --git a/gloo-mesh/enterprise/2-5/gitops/default/tests/proxies-changes.test.js.liquid b/gloo-mesh/enterprise/2-5/gitops/default/tests/proxies-changes.test.js.liquid new file mode 100644 index 0000000000..1934ea13b6 --- /dev/null +++ b/gloo-mesh/enterprise/2-5/gitops/default/tests/proxies-changes.test.js.liquid @@ -0,0 +1,58 @@ +{%- assign version_1_18_or_after = "1.18.0" | minimumGlooGatewayVersion %} +const { execSync } = require('child_process'); +const { expect } = require('chai'); +const { diff } = require('jest-diff'); + +function delay(ms) { + return new Promise(resolve => setTimeout(resolve, ms)); +} + +describe('Gloo snapshot stability test', function() { + let contextName = process.env.{{ context | default: "CLUSTER1" }}; + let delaySeconds = {{ delay | default: 5 }}; + + let firstSnapshot; + + it('should retrieve initial snapshot', function() { + const output = execSync( + `kubectl --context ${contextName} -n gloo-system exec deploy/gloo -- wget -O - localhost:{% if version_1_18_or_after %}9095{% else %}9091{% endif %}/snapshots/proxies -q`, + { encoding: 'utf8' } + ); + + try { + 
firstSnapshot = JSON.parse(output); + } catch (err) { + throw new Error('Failed to parse JSON output from initial snapshot: ' + err.message); + } + expect(firstSnapshot).to.be.an('object'); + }); + + it('should not change after the given delay', async function() { + await delay(delaySeconds * 1000); + + let secondSnapshot; + try { + const output2 = execSync( + `kubectl --context ${contextName} -n gloo-system exec deploy/gloo -- wget -O - localhost:{% if version_1_18_or_after %}9095{% else %}9091{% endif %}/snapshots/proxies -q`, + { encoding: 'utf8' } + ); + secondSnapshot = JSON.parse(output2); + } catch (err) { + throw new Error('Failed to retrieve or parse the second snapshot: ' + err.message); + } + + const firstJson = JSON.stringify(firstSnapshot, null, 2); + const secondJson = JSON.stringify(secondSnapshot, null, 2); + + // Show only 2 lines of context around each change + const diffOutput = diff(firstJson, secondJson, { contextLines: 2, expand: false }); + + if (! diffOutput.includes("Compared values have no visual difference.")) { + console.error('Differences found between snapshots:\n' + diffOutput); + throw new Error('Snapshots differ after the delay.'); + } else { + console.log('No differences found. 
The snapshots are stable.'); + } + }); +}); + diff --git a/gloo-mesh/enterprise/2-5/mgmt-as-workload/default/README.md b/gloo-mesh/enterprise/2-5/mgmt-as-workload/default/README.md index 0c1555da29..4096dbacf6 100644 --- a/gloo-mesh/enterprise/2-5/mgmt-as-workload/default/README.md +++ b/gloo-mesh/enterprise/2-5/mgmt-as-workload/default/README.md @@ -15,7 +15,7 @@ source ./scripts/assert.sh ## Table of Contents * [Introduction](#introduction) -* [Lab 1 - Deploy KinD clusters](#lab-1---deploy-kind-clusters-) +* [Lab 1 - Deploy KinD Cluster(s)](#lab-1---deploy-kind-cluster(s)-) * [Lab 2 - Deploy and register Gloo Mesh](#lab-2---deploy-and-register-gloo-mesh-) * [Lab 3 - Deploy Istio using Gloo Mesh Lifecycle Manager](#lab-3---deploy-istio-using-gloo-mesh-lifecycle-manager-) * [Lab 4 - Deploy the Bookinfo demo app](#lab-4---deploy-the-bookinfo-demo-app-) @@ -67,7 +67,7 @@ You can find more information about Gloo Mesh Enterprise in the official documen -## Lab 1 - Deploy KinD clusters +## Lab 1 - Deploy KinD Cluster(s) Clone this repository and go to the directory where this `README.md` file is. @@ -80,13 +80,12 @@ export CLUSTER1=cluster1 export CLUSTER2=cluster2 ``` -Run the following commands to deploy two Kubernetes clusters using [Kind](https://kind.sigs.k8s.io/): +Deploy the KinD clusters: ```bash -./scripts/deploy-aws-with-calico.sh 1 cluster1 us-west us-west-1 -./scripts/deploy-aws-with-calico.sh 2 cluster2 us-west us-west-2 +bash ./data/steps/deploy-kind-clusters/deploy-cluster1.sh +bash ./data/steps/deploy-kind-clusters/deploy-cluster2.sh ``` - Then run the following commands to wait for all the Pods to be ready: ```bash @@ -96,45 +95,14 @@ Then run the following commands to wait for all the Pods to be ready: **Note:** If you run the `check.sh` script immediately after the `deploy.sh` script, you may see a jsonpath error. If that happens, simply wait a few seconds and try again. 
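The note above says the transient jsonpath error from `check.sh` clears on its own after a few seconds. That manual "wait and retry" advice can be automated with a small wrapper; a sketch assuming only POSIX-ish bash (`retry` is an illustrative helper, and `./scripts/check.sh` is the workshop script it would wrap):

```shell
#!/usr/bin/env bash
# Illustrative retry wrapper: rerun a command until it succeeds or the
# attempt budget is exhausted, backing off one extra second each round.
retry() {
  local attempts=$1; shift
  local i
  for ((i = 1; i <= attempts; i++)); do
    "$@" && return 0
    echo "attempt $i/${attempts} failed, retrying in ${i}s..." >&2
    sleep "$i"
  done
  return 1
}

# The real call would be: retry 5 ./scripts/check.sh cluster1
retry 3 true && echo "command eventually succeeded"
```

The wrapper returns the wrapped command's success as soon as one attempt passes, so it is safe to use around idempotent checks like `check.sh`.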
-Once the `check.sh` script completes, when you execute the `kubectl get pods -A` command, you should see the following: - -``` -NAMESPACE NAME READY STATUS RESTARTS AGE -kube-system calico-kube-controllers-59d85c5c84-sbk4k 1/1 Running 0 4h26m -kube-system calico-node-przxs 1/1 Running 0 4h26m -kube-system coredns-6955765f44-ln8f5 1/1 Running 0 4h26m -kube-system coredns-6955765f44-s7xxx 1/1 Running 0 4h26m -kube-system etcd-cluster1-control-plane 1/1 Running 0 4h27m -kube-system kube-apiserver-cluster1-control-plane 1/1 Running 0 4h27m -kube-system kube-controller-manager-cluster1-control-plane1/1 Running 0 4h27m -kube-system kube-proxy-ksvzw 1/1 Running 0 4h26m -kube-system kube-scheduler-cluster1-control-plane 1/1 Running 0 4h27m -local-path-storage local-path-provisioner-58f6947c7-lfmdx 1/1 Running 0 4h26m -metallb-system controller-5c9894b5cd-cn9x2 1/1 Running 0 4h26m -metallb-system speaker-d7jkp 1/1 Running 0 4h26m -``` - -**Note:** The CNI pods might be different, depending on which CNI you have deployed. - -You can see that your currently connected to this cluster by executing the `kubectl config get-contexts` command: - -``` -CURRENT NAME CLUSTER AUTHINFO NAMESPACE - cluster1 kind-cluster1 cluster1 -* cluster2 kind-cluster2 cluster2 -``` - -Run the following command to make `cluster1` the current cluster. - -```bash -kubectl config use-context ${MGMT} -``` +Once the `check.sh` script completes, execute the `kubectl get pods -A` command, and verify that all pods are in a running state. 
+ Run the following commands to deploy the Gloo Mesh management plane: ```bash @@ -434,6 +404,7 @@ timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || + ## Lab 3 - Deploy Istio using Gloo Mesh Lifecycle Manager [VIDEO LINK](https://youtu.be/f76-KOEjqHs "Video Link") diff --git a/gloo-mesh/enterprise/2-5/mgmt-as-workload/default/data/steps/deploy-kind-clusters/deploy-cluster1.sh b/gloo-mesh/enterprise/2-5/mgmt-as-workload/default/data/steps/deploy-kind-clusters/deploy-cluster1.sh new file mode 100644 index 0000000000..1c6e42eb5e --- /dev/null +++ b/gloo-mesh/enterprise/2-5/mgmt-as-workload/default/data/steps/deploy-kind-clusters/deploy-cluster1.sh @@ -0,0 +1,292 @@ +#!/usr/bin/env bash +set -o errexit + +number="1" +name="cluster1" +region="" +zone="" +twodigits=$(printf "%02d\n" $number) + +kindest_node=${KINDEST_NODE} + +if [ -z "$kindest_node" ]; then + export k8s_version="1.28.0" + + [[ ${k8s_version::1} != 'v' ]] && export k8s_version=v${k8s_version} + kindest_node_ver=$(curl --silent "https://registry.hub.docker.com/v2/repositories/kindest/node/tags?page_size=100" \ + | jq -r '.results | .[] | select(.name==env.k8s_version) | .name+"@"+.digest') + + if [ -z "$kindest_node_ver" ]; then + echo "Incorrect Kubernetes version provided: ${k8s_version}." + exit 1 + fi + kindest_node=kindest/node:${kindest_node_ver} +fi +echo "Using KinD image: ${kindest_node}" + +if [ -z "$3" ]; then + case $name in + cluster1) + region=us-west-1 + ;; + cluster2) + region=us-west-2 + ;; + *) + region=us-east-1 + ;; + esac +fi + +if [ -z "$4" ]; then + case $name in + cluster1) + zone=us-west-1a + ;; + cluster2) + zone=us-west-2a + ;; + *) + zone=us-east-1a + ;; + esac +fi + +if hostname -I 2>/dev/null; then + myip=$(hostname -I | awk '{ print $1 }') +else + myip=$(ipconfig getifaddr en0) +fi + +# Function to determine the next available cluster number +get_next_cluster_number() { + if ! 
kind get clusters 2>&1 | grep "^kind" > /dev/null; then + echo 1 + else + highest_num=$(kind get clusters | grep "^kind" | tail -1 | cut -c 5-) + echo $((highest_num + 1)) + fi +} + +if [ -f /.dockerenv ]; then +myip=$HOST_IP +container=$(docker inspect $(docker ps -q) | jq -r ".[] | select(.Config.Hostname == \"$HOSTNAME\") | .Name" | cut -d/ -f2) +docker network connect "kind" $container || true +number=$(get_next_cluster_number) +twodigits=$(printf "%02d\n" $number) +fi + +reg_name='kind-registry' +reg_port='5000' +docker start "${reg_name}" 2>/dev/null || \ +docker run -d --restart=always -p "0.0.0.0:${reg_port}:5000" --name "${reg_name}" registry:2 + +cache_port='5000' +cat > registries < ${HOME}/.${cache_name}-config.yml </dev/null || \ +docker run -d --restart=always ${DEPLOY_EXTRA_PARAMS} -v ${HOME}/.${cache_name}-config.yml:/etc/docker/registry/config.yml --name "${cache_name}" registry:2 +done +mkdir -p /tmp/oidc + +cat <<'EOF' >/tmp/oidc/sa-signer-pkcs8.pub +-----BEGIN PUBLIC KEY----- +MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA53YiBcrn7+ZK0Vb4odeA +1riYdvEb8To4H6/HtF+OKzuCIXFQ+bRy7yMrDGITYpfYPrTZOgfdeTLZqOiAj+cL +395nvxdly83SUrdh7ItfOPRluuuiPHnFn111wpyjBw5nut4Kx+M5MksNfA1hU0Zw +zIM9OviX8iEF8xHWUtz4BAMDG8N6+zpLo0pAzaei5hKuLZ9dZOzHBC8VOW82cQMm +5X5uOKsCHMtNSjqYUNB1DxN6xxM+odGWT/6xthPGk6YCxmO28YHPFZfiS2eAIpD8 +2p/16KQKU6TkZSrldkYxiHIPhu+5f9faZJG7dB9pLN1SfdTBio4PK5Mz9muLUCv9 +ywIDAQAB +-----END PUBLIC KEY----- +EOF + +cat <<'EOF' >/tmp/oidc/sa-signer.key +-----BEGIN RSA PRIVATE KEY----- +MIIEpAIBAAKCAQEA53YiBcrn7+ZK0Vb4odeA1riYdvEb8To4H6/HtF+OKzuCIXFQ ++bRy7yMrDGITYpfYPrTZOgfdeTLZqOiAj+cL395nvxdly83SUrdh7ItfOPRluuui +PHnFn111wpyjBw5nut4Kx+M5MksNfA1hU0ZwzIM9OviX8iEF8xHWUtz4BAMDG8N6 ++zpLo0pAzaei5hKuLZ9dZOzHBC8VOW82cQMm5X5uOKsCHMtNSjqYUNB1DxN6xxM+ +odGWT/6xthPGk6YCxmO28YHPFZfiS2eAIpD82p/16KQKU6TkZSrldkYxiHIPhu+5 +f9faZJG7dB9pLN1SfdTBio4PK5Mz9muLUCv9ywIDAQABAoIBAB8tro+RMYUDRHjG +el9ypAxIeWEsQVNRQFYkW4ZUiNYSAgl3Ni0svX6xAg989peFVL+9pLVIcfDthJxY 
+FVlNCjBxyQ/YmwHFC9vQkARJEd6eLUXsj8INtS0ubbp1VxCQRDDL0C/0z7OSoJJh +SwboqjEiTJExA2a+RArmEDTBRzdi3t+kT8G23JcqOivrITt17K6bQYyJXw7/vUdc +r/R+hfd5TqVq92VddzDT7RNJAxsbPPXjGnESlq1GALBDs+uBGYsP0fiEJb2nicSv +z9fBnBeERhut1gcE0C0iLRQZb+3r8TitBtxrZv+0BHgXrkKtXDwWTqGEKOwC4dBn +7nxkH2ECgYEA6+/DOTABGYOWOQftFkJMjcugzDrjoGpuXuVOTb65T+3FHAzU93zy +3bt3wQxrlugluyy9Sc/PL3ck2LgUsPHZ+s7zsdGvvGALBD6bOSSKATz9JgjwifO8 +PgqUz1kXRwez2CtKLOOCFFtcIzEdWIzsa1ubNqLzgN7rD+XBkUc2uEcCgYEA+yTy +72EDMQVoIZOygytHsDNdy0iS2RsBbdurT27wkYuFpFUVWdbNSL+8haE+wJHseHcw +BD4WIMpU+hnS4p4OO8+6V7PiXOS5E/se91EJigZAoixgDUiC8ihojWgK9PYEavUo +hULWbayO59SxYWeUI4Ze0GP8Jw8vdB86ib4ulF0CgYEAgyzRuLjk05+iZODwQyDn +WSquov3W0rh51s7cw0LX2wWSQm8r9NGGYhs5kJ5sLwGxAKj2MNSWF4jBdrCZ6Gr+ +y4BGY0X209/+IAUC3jlfdSLIiF4OBlT6AvB1HfclhvtUVUp0OhLfnpvQ1UwYScRI +KcRLvovIoIzP2g3emfwjAz8CgYEAxUHhOhm1mwRHJNBQTuxok0HVMrze8n1eov39 +0RcvBvJSVp+pdHXdqX1HwqHCmxhCZuAeq8ZkNP8WvZYY6HwCbAIdt5MHgbT4lXQR +f2l8F5gPnhFCpExG5ZLNg/urV3oAQE4stHap21zEpdyOMhZb6Yc5424U+EzaFdgN +b3EcPtUCgYAkKvUlSnBbgiJz1iaN6fuTqH0efavuFGMhjNmG7GtpNXdgyl1OWIuc +Yu+tZtHXtKYf3B99GwPrFzw/7yfDwae5YeWmi2/pFTH96wv3brJBqkAWY8G5Rsmd +qF50p34vIFqUBniNRwSArx8t2dq/CuAMgLAtSjh70Q6ZAnCF85PD8Q== +-----END RSA PRIVATE KEY----- +EOF + +echo Contents of kind${number}.yaml +cat << EOF | tee kind${number}.yaml +kind: Cluster +apiVersion: kind.x-k8s.io/v1alpha4 +nodes: +- role: control-plane + image: ${kindest_node} + extraPortMappings: + - containerPort: 6443 + hostPort: 70${twodigits} + extraMounts: + - containerPath: /etc/kubernetes/oidc + hostPath: /tmp/oidc + labels: + ingress-ready: true + topology.kubernetes.io/region: ${region} + topology.kubernetes.io/zone: ${zone} +networking: + disableDefaultCNI: true + serviceSubnet: "10.$(echo $twodigits | sed 's/^0*//').0.0/16" + podSubnet: "10.1${twodigits}.0.0/16" +kubeadmConfigPatches: +- | + kind: ClusterConfiguration + apiServer: + extraArgs: + service-account-key-file: /etc/kubernetes/pki/sa.pub + service-account-key-file: 
/etc/kubernetes/oidc/sa-signer-pkcs8.pub + service-account-signing-key-file: /etc/kubernetes/oidc/sa-signer.key + service-account-issuer: https://solo-workshop-oidc.s3.us-east-1.amazonaws.com + api-audiences: sts.amazonaws.com + extraVolumes: + - name: oidc + hostPath: /etc/kubernetes/oidc + mountPath: /etc/kubernetes/oidc + readOnly: true + metadata: + name: config +containerdConfigPatches: +- |- + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:${reg_port}"] + endpoint = ["http://${reg_name}:${reg_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"] + endpoint = ["http://docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-docker.pkg.dev"] + endpoint = ["http://us-docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-central1-docker.pkg.dev"] + endpoint = ["http://us-central1-docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."quay.io"] + endpoint = ["http://quay:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"] + endpoint = ["http://gcr:${cache_port}"] +EOF +echo ----------------------------------------------------- + +kind create cluster --name kind${number} --config kind${number}.yaml +ipkind=$(docker inspect kind${number}-control-plane | jq -r '.[0].NetworkSettings.Networks[].IPAddress') +networkkind=$(echo ${ipkind} | awk -F. 
'{ print $1"."$2 }') +kubectl config set-cluster kind-kind${number} --server=https://${myip}:70${twodigits} --insecure-skip-tls-verify=true + +# Preload images +cat << EOF >> images.txt +quay.io/metallb/controller:v0.13.12 +quay.io/metallb/speaker:v0.13.12 +EOF +cat images.txt | while read image; do + docker pull $image || true + kind load docker-image $image --name kind${number} || true +done + +docker network connect "kind" "${reg_name}" || true +docker network connect "kind" docker || true +docker network connect "kind" us-docker || true +docker network connect "kind" us-central1-docker || true +docker network connect "kind" quay || true +docker network connect "kind" gcr || true +# Calico for ipv4 +curl -sL https://raw.githubusercontent.com/projectcalico/calico/v3.28.1/manifests/calico.yaml | sed 's/250m/50m/g' | kubectl --context kind-kind${number} apply -f - + +for i in 1 2 3 4 5; do kubectl --context=kind-kind${number} apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml && break || sleep 15; done +kubectl --context=kind-kind${number} create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)" +kubectl --context=kind-kind${number} -n metallb-system rollout status deploy controller || true + +cat << EOF | tee metallb${number}.yaml +apiVersion: metallb.io/v1beta1 +kind: IPAddressPool +metadata: + name: first-pool + namespace: metallb-system +spec: + addresses: + - ${networkkind}.1${twodigits}.1-${networkkind}.1${twodigits}.254 +--- +apiVersion: metallb.io/v1beta1 +kind: L2Advertisement +metadata: + name: empty + namespace: metallb-system +EOF + +printf "Create IPAddressPool in kind-kind${number}\n" +for i in {1..10}; do +kubectl --context=kind-kind${number} apply -f metallb${number}.yaml && break +sleep 2 +done + +# connect the registry to the cluster network if not already connected +printf "Renaming context kind-kind${number} to ${name}\n" +for i in {1..100}; do 
+ (kubectl config get-contexts -oname | grep ${name}) && break + kubectl config rename-context kind-kind${number} ${name} && break + printf " $i"/100 + sleep 2 + [ $i -lt 100 ] || exit 1 +done + +# Document the local registry +# https://github.com/kubernetes/enhancements/tree/master/keps/sig-cluster-lifecycle/generic/1755-communicating-a-local-registry +cat </dev/null; then + myip=$(hostname -I | awk '{ print $1 }') +else + myip=$(ipconfig getifaddr en0) +fi + +# Function to determine the next available cluster number +get_next_cluster_number() { + if ! kind get clusters 2>&1 | grep "^kind" > /dev/null; then + echo 1 + else + highest_num=$(kind get clusters | grep "^kind" | tail -1 | cut -c 5-) + echo $((highest_num + 1)) + fi +} + +if [ -f /.dockerenv ]; then +myip=$HOST_IP +container=$(docker inspect $(docker ps -q) | jq -r ".[] | select(.Config.Hostname == \"$HOSTNAME\") | .Name" | cut -d/ -f2) +docker network connect "kind" $container || true +number=$(get_next_cluster_number) +twodigits=$(printf "%02d\n" $number) +fi + +reg_name='kind-registry' +reg_port='5000' +docker start "${reg_name}" 2>/dev/null || \ +docker run -d --restart=always -p "0.0.0.0:${reg_port}:5000" --name "${reg_name}" registry:2 + +cache_port='5000' +cat > registries < ${HOME}/.${cache_name}-config.yml </dev/null || \ +docker run -d --restart=always ${DEPLOY_EXTRA_PARAMS} -v ${HOME}/.${cache_name}-config.yml:/etc/docker/registry/config.yml --name "${cache_name}" registry:2 +done +mkdir -p /tmp/oidc + +cat <<'EOF' >/tmp/oidc/sa-signer-pkcs8.pub +-----BEGIN PUBLIC KEY----- +MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA53YiBcrn7+ZK0Vb4odeA +1riYdvEb8To4H6/HtF+OKzuCIXFQ+bRy7yMrDGITYpfYPrTZOgfdeTLZqOiAj+cL +395nvxdly83SUrdh7ItfOPRluuuiPHnFn111wpyjBw5nut4Kx+M5MksNfA1hU0Zw +zIM9OviX8iEF8xHWUtz4BAMDG8N6+zpLo0pAzaei5hKuLZ9dZOzHBC8VOW82cQMm +5X5uOKsCHMtNSjqYUNB1DxN6xxM+odGWT/6xthPGk6YCxmO28YHPFZfiS2eAIpD8 +2p/16KQKU6TkZSrldkYxiHIPhu+5f9faZJG7dB9pLN1SfdTBio4PK5Mz9muLUCv9 +ywIDAQAB +-----END PUBLIC 
KEY----- +EOF + +cat <<'EOF' >/tmp/oidc/sa-signer.key +-----BEGIN RSA PRIVATE KEY----- +MIIEpAIBAAKCAQEA53YiBcrn7+ZK0Vb4odeA1riYdvEb8To4H6/HtF+OKzuCIXFQ ++bRy7yMrDGITYpfYPrTZOgfdeTLZqOiAj+cL395nvxdly83SUrdh7ItfOPRluuui +PHnFn111wpyjBw5nut4Kx+M5MksNfA1hU0ZwzIM9OviX8iEF8xHWUtz4BAMDG8N6 ++zpLo0pAzaei5hKuLZ9dZOzHBC8VOW82cQMm5X5uOKsCHMtNSjqYUNB1DxN6xxM+ +odGWT/6xthPGk6YCxmO28YHPFZfiS2eAIpD82p/16KQKU6TkZSrldkYxiHIPhu+5 +f9faZJG7dB9pLN1SfdTBio4PK5Mz9muLUCv9ywIDAQABAoIBAB8tro+RMYUDRHjG +el9ypAxIeWEsQVNRQFYkW4ZUiNYSAgl3Ni0svX6xAg989peFVL+9pLVIcfDthJxY +FVlNCjBxyQ/YmwHFC9vQkARJEd6eLUXsj8INtS0ubbp1VxCQRDDL0C/0z7OSoJJh +SwboqjEiTJExA2a+RArmEDTBRzdi3t+kT8G23JcqOivrITt17K6bQYyJXw7/vUdc +r/R+hfd5TqVq92VddzDT7RNJAxsbPPXjGnESlq1GALBDs+uBGYsP0fiEJb2nicSv +z9fBnBeERhut1gcE0C0iLRQZb+3r8TitBtxrZv+0BHgXrkKtXDwWTqGEKOwC4dBn +7nxkH2ECgYEA6+/DOTABGYOWOQftFkJMjcugzDrjoGpuXuVOTb65T+3FHAzU93zy +3bt3wQxrlugluyy9Sc/PL3ck2LgUsPHZ+s7zsdGvvGALBD6bOSSKATz9JgjwifO8 +PgqUz1kXRwez2CtKLOOCFFtcIzEdWIzsa1ubNqLzgN7rD+XBkUc2uEcCgYEA+yTy +72EDMQVoIZOygytHsDNdy0iS2RsBbdurT27wkYuFpFUVWdbNSL+8haE+wJHseHcw +BD4WIMpU+hnS4p4OO8+6V7PiXOS5E/se91EJigZAoixgDUiC8ihojWgK9PYEavUo +hULWbayO59SxYWeUI4Ze0GP8Jw8vdB86ib4ulF0CgYEAgyzRuLjk05+iZODwQyDn +WSquov3W0rh51s7cw0LX2wWSQm8r9NGGYhs5kJ5sLwGxAKj2MNSWF4jBdrCZ6Gr+ +y4BGY0X209/+IAUC3jlfdSLIiF4OBlT6AvB1HfclhvtUVUp0OhLfnpvQ1UwYScRI +KcRLvovIoIzP2g3emfwjAz8CgYEAxUHhOhm1mwRHJNBQTuxok0HVMrze8n1eov39 +0RcvBvJSVp+pdHXdqX1HwqHCmxhCZuAeq8ZkNP8WvZYY6HwCbAIdt5MHgbT4lXQR +f2l8F5gPnhFCpExG5ZLNg/urV3oAQE4stHap21zEpdyOMhZb6Yc5424U+EzaFdgN +b3EcPtUCgYAkKvUlSnBbgiJz1iaN6fuTqH0efavuFGMhjNmG7GtpNXdgyl1OWIuc +Yu+tZtHXtKYf3B99GwPrFzw/7yfDwae5YeWmi2/pFTH96wv3brJBqkAWY8G5Rsmd +qF50p34vIFqUBniNRwSArx8t2dq/CuAMgLAtSjh70Q6ZAnCF85PD8Q== +-----END RSA PRIVATE KEY----- +EOF + +echo Contents of kind${number}.yaml +cat << EOF | tee kind${number}.yaml +kind: Cluster +apiVersion: kind.x-k8s.io/v1alpha4 +nodes: +- role: control-plane + image: ${kindest_node} + extraPortMappings: + - containerPort: 6443 + 
hostPort: 70${twodigits} + extraMounts: + - containerPath: /etc/kubernetes/oidc + hostPath: /tmp/oidc + labels: + ingress-ready: true + topology.kubernetes.io/region: ${region} + topology.kubernetes.io/zone: ${zone} +networking: + disableDefaultCNI: true + serviceSubnet: "10.$(echo $twodigits | sed 's/^0*//').0.0/16" + podSubnet: "10.1${twodigits}.0.0/16" +kubeadmConfigPatches: +- | + kind: ClusterConfiguration + apiServer: + extraArgs: + service-account-key-file: /etc/kubernetes/pki/sa.pub + service-account-key-file: /etc/kubernetes/oidc/sa-signer-pkcs8.pub + service-account-signing-key-file: /etc/kubernetes/oidc/sa-signer.key + service-account-issuer: https://solo-workshop-oidc.s3.us-east-1.amazonaws.com + api-audiences: sts.amazonaws.com + extraVolumes: + - name: oidc + hostPath: /etc/kubernetes/oidc + mountPath: /etc/kubernetes/oidc + readOnly: true + metadata: + name: config +containerdConfigPatches: +- |- + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:${reg_port}"] + endpoint = ["http://${reg_name}:${reg_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"] + endpoint = ["http://docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-docker.pkg.dev"] + endpoint = ["http://us-docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-central1-docker.pkg.dev"] + endpoint = ["http://us-central1-docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."quay.io"] + endpoint = ["http://quay:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"] + endpoint = ["http://gcr:${cache_port}"] +EOF +echo ----------------------------------------------------- + +kind create cluster --name kind${number} --config kind${number}.yaml +ipkind=$(docker inspect kind${number}-control-plane | jq -r '.[0].NetworkSettings.Networks[].IPAddress') +networkkind=$(echo ${ipkind} | awk -F. 
'{ print $1"."$2 }') +kubectl config set-cluster kind-kind${number} --server=https://${myip}:70${twodigits} --insecure-skip-tls-verify=true + +# Preload images +cat << EOF >> images.txt +quay.io/metallb/controller:v0.13.12 +quay.io/metallb/speaker:v0.13.12 +EOF +cat images.txt | while read image; do + docker pull $image || true + kind load docker-image $image --name kind${number} || true +done + +docker network connect "kind" "${reg_name}" || true +docker network connect "kind" docker || true +docker network connect "kind" us-docker || true +docker network connect "kind" us-central1-docker || true +docker network connect "kind" quay || true +docker network connect "kind" gcr || true +# Calico for ipv4 +curl -sL https://raw.githubusercontent.com/projectcalico/calico/v3.28.1/manifests/calico.yaml | sed 's/250m/50m/g' | kubectl --context kind-kind${number} apply -f - + +for i in 1 2 3 4 5; do kubectl --context=kind-kind${number} apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml && break || sleep 15; done +kubectl --context=kind-kind${number} create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)" +kubectl --context=kind-kind${number} -n metallb-system rollout status deploy controller || true + +cat << EOF | tee metallb${number}.yaml +apiVersion: metallb.io/v1beta1 +kind: IPAddressPool +metadata: + name: first-pool + namespace: metallb-system +spec: + addresses: + - ${networkkind}.1${twodigits}.1-${networkkind}.1${twodigits}.254 +--- +apiVersion: metallb.io/v1beta1 +kind: L2Advertisement +metadata: + name: empty + namespace: metallb-system +EOF + +printf "Create IPAddressPool in kind-kind${number}\n" +for i in {1..10}; do +kubectl --context=kind-kind${number} apply -f metallb${number}.yaml && break +sleep 2 +done + +# connect the registry to the cluster network if not already connected +printf "Renaming context kind-kind${number} to ${name}\n" +for i in {1..100}; do 
+ (kubectl config get-contexts -oname | grep ${name}) && break + kubectl config rename-context kind-kind${number} ${name} && break + printf " $i"/100 + sleep 2 + [ $i -lt 100 ] || exit 1 +done + +# Document the local registry +# https://github.com/kubernetes/enhancements/tree/master/keps/sig-cluster-lifecycle/generic/1755-communicating-a-local-registry +cat </dev/null || true +source ./scripts/assert.sh +export MGMT=cluster1 +export CLUSTER1=cluster1 +export CLUSTER2=cluster2 +bash ./data/steps/deploy-kind-clusters/deploy-cluster1.sh +bash ./data/steps/deploy-kind-clusters/deploy-cluster2.sh +./scripts/check.sh cluster1 +./scripts/check.sh cluster2 +cat <<'EOF' > ./test.js +const helpers = require('./tests/chai-exec'); + +describe("Clusters are healthy", () => { + const clusters = ["cluster1", "cluster2"]; + + clusters.forEach(cluster => { + it(`Cluster ${cluster} is healthy`, () => helpers.k8sObjectIsPresent({ context: cluster, namespace: "default", k8sType: "service", k8sObj: "kubernetes" })); + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-kind-clusters/tests/cluster-healthy.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +export GLOO_MESH_VERSION=v2.5.12 +curl -sL https://run.solo.io/meshctl/install | sh - +export PATH=$HOME/.gloo-mesh/bin:$PATH +cat <<'EOF' > ./test.js +var chai = require('chai'); +var expect = chai.expect; + +describe("Required environment variables should contain value", () => { + afterEach(function(done){ + if(this.currentTest.currentRetry() > 0){ + process.stdout.write("."); + setTimeout(done, 1000); + } else { + done(); + } + }); + + it("Context environment variables should not be empty", () => { + expect(process.env.MGMT).not.to.be.empty + expect(process.env.CLUSTER1).not.to.be.empty + expect(process.env.CLUSTER2).not.to.be.empty + }); + + it("Gloo Mesh licence environment variables 
should not be empty", () => { + expect(process.env.GLOO_MESH_LICENSE_KEY).not.to.be.empty + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-and-register-gloo-mesh/tests/environment-variables.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context ${MGMT} create ns gloo-mesh + +helm upgrade --install gloo-platform-crds gloo-platform-crds \ + --repo https://storage.googleapis.com/gloo-platform/helm-charts \ + --namespace gloo-mesh \ + --kube-context ${MGMT} \ + --version 2.5.12 + +helm upgrade --install gloo-platform-mgmt gloo-platform \ + --repo https://storage.googleapis.com/gloo-platform/helm-charts \ + --namespace gloo-mesh \ + --kube-context ${MGMT} \ + --version 2.5.12 \ + -f -< ./test.js + +const helpers = require('./tests/chai-exec'); + +describe("MGMT server is healthy", () => { + let cluster = process.env.MGMT; + let deployments = ["gloo-mesh-mgmt-server","gloo-mesh-redis","gloo-telemetry-gateway","prometheus-server"]; + deployments.forEach(deploy => { + it(deploy + ' pods are ready in ' + cluster, () => helpers.checkDeployment({ context: cluster, namespace: "gloo-mesh", k8sObj: deploy })); + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-and-register-gloo-mesh/tests/check-deployment.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +cat <<'EOF' > ./test.js +const chaiExec = require("@jsdevtools/chai-exec"); +var chai = require('chai'); +var expect = chai.expect; +chai.use(chaiExec); + +afterEach(function (done) { + if (this.currentTest.currentRetry() > 0) { + process.stdout.write("."); + setTimeout(done, 1000); + } else { + done(); + } +}); +EOF +echo "executing test 
dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-and-register-gloo-mesh/tests/get-gloo-mesh-mgmt-server-ip.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +export ENDPOINT_GLOO_MESH=$(kubectl --context ${MGMT} -n gloo-mesh get svc gloo-mesh-mgmt-server -o jsonpath='{.status.loadBalancer.ingress[0].*}'):9900 +export HOST_GLOO_MESH=$(echo ${ENDPOINT_GLOO_MESH%:*}) +export ENDPOINT_TELEMETRY_GATEWAY=$(kubectl --context ${MGMT} -n gloo-mesh get svc gloo-telemetry-gateway -o jsonpath='{.status.loadBalancer.ingress[0].*}'):4317 +export ENDPOINT_GLOO_MESH_UI=$(kubectl --context ${MGMT} -n gloo-mesh get svc gloo-mesh-ui -o jsonpath='{.status.loadBalancer.ingress[0].*}'):8090 +cat <<'EOF' > ./test.js +const dns = require('dns'); +const chaiHttp = require("chai-http"); +const chai = require("chai"); +const expect = chai.expect; +chai.use(chaiHttp); +const { waitOnFailedTest } = require('./tests/utils'); + +afterEach(function(done) { waitOnFailedTest(done, this.currentTest.currentRetry())}); + +describe("Address '" + process.env.HOST_GLOO_MESH + "' can be resolved in DNS", () => { + it(process.env.HOST_GLOO_MESH + ' can be resolved', (done) => { + return dns.lookup(process.env.HOST_GLOO_MESH, (err, address, family) => { + expect(address).to.be.an.ip; + done(); + }); + }); +}); +EOF +echo "executing test ./gloo-mesh-2-0/tests/can-resolve.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl apply --context ${MGMT} -f - < ca.crt +kubectl create secret generic relay-root-tls-secret -n gloo-mesh --context ${CLUSTER2} --from-file ca.crt=ca.crt +rm ca.crt + +kubectl get secret relay-identity-token-secret -n gloo-mesh --context ${MGMT} -o jsonpath='{.data.token}' | base64 -d > token +kubectl create secret generic relay-identity-token-secret -n gloo-mesh 
--context ${CLUSTER2} --from-file token=token +rm token + +helm upgrade --install gloo-platform-crds gloo-platform-crds \ + --repo https://storage.googleapis.com/gloo-platform/helm-charts \ + --namespace gloo-mesh \ + --kube-context ${CLUSTER2} \ + --version 2.5.12 + +helm upgrade --install gloo-platform-agent gloo-platform \ + --repo https://storage.googleapis.com/gloo-platform/helm-charts \ + --namespace gloo-mesh \ + --kube-context ${CLUSTER2} \ + --version 2.5.12 \ + -f -< ./test.js +var chai = require('chai'); +var expect = chai.expect; +const helpers = require('./tests/chai-exec'); +describe("Cluster registration", () => { + it("cluster1 is registered", () => { + podName = helpers.getOutputForCommand({ command: "kubectl -n gloo-mesh get pods -l app=gloo-mesh-mgmt-server -o jsonpath='{.items[0].metadata.name}' --context " + process.env.MGMT }).replaceAll("'", ""); + command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh debug -q -i " + podName + " --image=curlimages/curl -- curl -s http://localhost:9091/metrics" }).replaceAll("'", ""); + expect(command).to.contain("cluster1"); + }); + it("cluster2 is registered", () => { + podName = helpers.getOutputForCommand({ command: "kubectl -n gloo-mesh get pods -l app=gloo-mesh-mgmt-server -o jsonpath='{.items[0].metadata.name}' --context " + process.env.MGMT }).replaceAll("'", ""); + command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh debug -q -i " + podName + " --image=curlimages/curl -- curl -s http://localhost:9091/metrics" }).replaceAll("'", ""); + expect(command).to.contain("cluster2"); + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-and-register-gloo-mesh/tests/cluster-registration.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +curl -L 
https://istio.io/downloadIstio | sh - + +if [ -d "istio-"*/ ]; then + cd istio-*/ + export PATH=$PWD/bin:$PATH + cd .. +fi +cat <<'EOF' > ./test.js +const chaiExec = require("@jsdevtools/chai-exec"); +var chai = require('chai'); +var expect = chai.expect; +chai.use(chaiExec); + +afterEach(function (done) { + if (this.currentTest.currentRetry() > 0) { + process.stdout.write("."); + setTimeout(done, 1000); + } else { + done(); + } +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/istio-lifecycle-manager-install/tests/istio-version.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context ${CLUSTER1} create ns istio-gateways + +kubectl apply --context ${CLUSTER1} -f - < ./test.js + +const helpers = require('./tests/chai-exec'); + +const chaiExec = require("@jsdevtools/chai-exec"); +const helpersHttp = require('./tests/chai-http'); +const chai = require("chai"); +const expect = chai.expect; + +afterEach(function (done) { + if (this.currentTest.currentRetry() > 0) { + process.stdout.write("."); + setTimeout(done, 1000); + } else { + done(); + } +}); + +describe("Checking Istio installation", function() { + it('istiod pods are ready in cluster ' + process.env.CLUSTER1, () => helpers.checkDeploymentsWithLabels({ context: process.env.CLUSTER1, namespace: "istio-system", labels: "app=istiod", instances: 1 })); + it('gateway pods are ready in cluster ' + process.env.CLUSTER1, () => helpers.checkDeploymentsWithLabels({ context: process.env.CLUSTER1, namespace: "istio-gateways", labels: "app=istio-ingressgateway", instances: 2 })); + it('istiod pods are ready in cluster ' + process.env.CLUSTER2, () => helpers.checkDeploymentsWithLabels({ context: process.env.CLUSTER2, namespace: "istio-system", labels: "app=istiod", instances: 1 })); + it('gateway pods are ready in cluster ' + process.env.CLUSTER2, () => 
helpers.checkDeploymentsWithLabels({ context: process.env.CLUSTER2, namespace: "istio-gateways", labels: "app=istio-ingressgateway", instances: 2 })); + it("Gateways have an ip attached in cluster " + process.env.CLUSTER1, () => { + let cli = chaiExec("kubectl --context " + process.env.CLUSTER1 + " -n istio-gateways get svc -l app=istio-ingressgateway -o jsonpath='{.items}'"); + cli.stderr.should.be.empty; + let deployments = JSON.parse(cli.stdout.slice(1,-1)); + expect(deployments).to.have.lengthOf(2); + deployments.forEach((deployment) => { + expect(deployment.status.loadBalancer).to.have.property("ingress"); + }); + }); + it("Gateways have an ip attached in cluster " + process.env.CLUSTER2, () => { + let cli = chaiExec("kubectl --context " + process.env.CLUSTER2 + " -n istio-gateways get svc -l app=istio-ingressgateway -o jsonpath='{.items}'"); + cli.stderr.should.be.empty; + let deployments = JSON.parse(cli.stdout.slice(1,-1)); + expect(deployments).to.have.lengthOf(2); + deployments.forEach((deployment) => { + expect(deployment.status.loadBalancer).to.have.property("ingress"); + }); + }); +}); + +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/istio-lifecycle-manager-install/tests/istio-ready.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +timeout 2m bash -c "until [[ \$(kubectl --context ${CLUSTER1} -n istio-gateways get svc -l istio=ingressgateway -o json | jq '.items[0].status.loadBalancer | length') -gt 0 ]]; do + sleep 1 +done" +export HOST_GW_CLUSTER1="$(kubectl --context ${CLUSTER1} -n istio-gateways get svc -l istio=ingressgateway -o jsonpath='{.items[0].status.loadBalancer.ingress[0].*}')" +export HOST_GW_CLUSTER2="$(kubectl --context ${CLUSTER2} -n istio-gateways get svc -l istio=ingressgateway -o jsonpath='{.items[0].status.loadBalancer.ingress[0].*}')" +cat <<'EOF' > ./test.js +const dns = require('dns'); 
+const chaiHttp = require("chai-http"); +const chai = require("chai"); +const expect = chai.expect; +chai.use(chaiHttp); +const { waitOnFailedTest } = require('./tests/utils'); + +afterEach(function(done) { waitOnFailedTest(done, this.currentTest.currentRetry())}); + +describe("Address '" + process.env.HOST_GW_CLUSTER1 + "' can be resolved in DNS", () => { + it(process.env.HOST_GW_CLUSTER1 + ' can be resolved', (done) => { + return dns.lookup(process.env.HOST_GW_CLUSTER1, (err, address, family) => { + expect(address).to.be.an.ip; + done(); + }); + }); +}); +EOF +echo "executing test ./gloo-mesh-2-0/tests/can-resolve.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +cat <<'EOF' > ./test.js +const dns = require('dns'); +const chaiHttp = require("chai-http"); +const chai = require("chai"); +const expect = chai.expect; +chai.use(chaiHttp); +const { waitOnFailedTest } = require('./tests/utils'); + +afterEach(function(done) { waitOnFailedTest(done, this.currentTest.currentRetry())}); + +describe("Address '" + process.env.HOST_GW_CLUSTER2 + "' can be resolved in DNS", () => { + it(process.env.HOST_GW_CLUSTER2 + ' can be resolved', (done) => { + return dns.lookup(process.env.HOST_GW_CLUSTER2, (err, address, family) => { + expect(address).to.be.an.ip; + done(); + }); + }); +}); +EOF +echo "executing test ./gloo-mesh-2-0/tests/can-resolve.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context ${CLUSTER1} create ns bookinfo-frontends +kubectl --context ${CLUSTER1} create ns bookinfo-backends +kubectl --context ${CLUSTER1} label namespace bookinfo-frontends istio.io/rev=1-20 --overwrite +kubectl --context ${CLUSTER1} label namespace bookinfo-backends istio.io/rev=1-20 --overwrite + +# Deploy the frontend bookinfo service in the bookinfo-frontends 
namespace +kubectl --context ${CLUSTER1} -n bookinfo-frontends apply -f data/steps/deploy-bookinfo/productpage-v1.yaml + +# Deploy the backend bookinfo services in the bookinfo-backends namespace for all versions less than v3 +kubectl --context ${CLUSTER1} -n bookinfo-backends apply \ + -f data/steps/deploy-bookinfo/details-v1.yaml \ + -f data/steps/deploy-bookinfo/ratings-v1.yaml \ + -f data/steps/deploy-bookinfo/reviews-v1-v2.yaml + +# Update the reviews service to display where it is coming from +kubectl --context ${CLUSTER1} -n bookinfo-backends set env deploy/reviews-v1 CLUSTER_NAME=${CLUSTER1} +kubectl --context ${CLUSTER1} -n bookinfo-backends set env deploy/reviews-v2 CLUSTER_NAME=${CLUSTER1} +echo -n Waiting for bookinfo pods to be ready... +timeout -v 5m bash -c " +until [[ \$(kubectl --context ${CLUSTER1} -n bookinfo-frontends get deploy -o json | jq '[.items[].status.readyReplicas] | add') -eq 1 && \\ + \$(kubectl --context ${CLUSTER1} -n bookinfo-backends get deploy -o json | jq '[.items[].status.readyReplicas] | add') -eq 4 ]] 2>/dev/null +do + sleep 1 + echo -n . 
+done" +echo +kubectl --context ${CLUSTER2} create ns bookinfo-frontends +kubectl --context ${CLUSTER2} create ns bookinfo-backends +kubectl --context ${CLUSTER2} label namespace bookinfo-frontends istio.io/rev=1-20 --overwrite +kubectl --context ${CLUSTER2} label namespace bookinfo-backends istio.io/rev=1-20 --overwrite + +# Deploy the frontend bookinfo service in the bookinfo-frontends namespace +kubectl --context ${CLUSTER2} -n bookinfo-frontends apply -f data/steps/deploy-bookinfo/productpage-v1.yaml +# Deploy the backend bookinfo services in the bookinfo-backends namespace for all versions +kubectl --context ${CLUSTER2} -n bookinfo-backends apply \ + -f data/steps/deploy-bookinfo/details-v1.yaml \ + -f data/steps/deploy-bookinfo/ratings-v1.yaml \ + -f data/steps/deploy-bookinfo/reviews-v1-v2.yaml \ + -f data/steps/deploy-bookinfo/reviews-v3.yaml +# Update the reviews service to display where it is coming from +kubectl --context ${CLUSTER2} -n bookinfo-backends set env deploy/reviews-v1 CLUSTER_NAME=${CLUSTER2} +kubectl --context ${CLUSTER2} -n bookinfo-backends set env deploy/reviews-v2 CLUSTER_NAME=${CLUSTER2} +kubectl --context ${CLUSTER2} -n bookinfo-backends set env deploy/reviews-v3 CLUSTER_NAME=${CLUSTER2} + +echo -n Waiting for bookinfo pods to be ready... +timeout -v 5m bash -c " +until [[ \$(kubectl --context ${CLUSTER2} -n bookinfo-frontends get deploy -o json | jq '[.items[].status.readyReplicas] | add') -eq 1 && \\ + \$(kubectl --context ${CLUSTER2} -n bookinfo-backends get deploy -o json | jq '[.items[].status.readyReplicas] | add') -eq 5 ]] 2>/dev/null +do + sleep 1 + echo -n . 
+done" +echo +kubectl --context ${CLUSTER2} -n bookinfo-frontends get pods && kubectl --context ${CLUSTER2} -n bookinfo-backends get pods +cat <<'EOF' > ./test.js +const helpers = require('./tests/chai-exec'); + +describe("Bookinfo app", () => { + let cluster = process.env.CLUSTER1 + let deployments = ["productpage-v1"]; + deployments.forEach(deploy => { + it(deploy + ' pods are ready in ' + cluster, () => helpers.checkDeployment({ context: cluster, namespace: "bookinfo-frontends", k8sObj: deploy })); + }); + deployments = ["ratings-v1", "details-v1", "reviews-v1", "reviews-v2"]; + deployments.forEach(deploy => { + it(deploy + ' pods are ready in ' + cluster, () => helpers.checkDeployment({ context: cluster, namespace: "bookinfo-backends", k8sObj: deploy })); + }); + cluster = process.env.CLUSTER2 + deployments = ["productpage-v1"]; + deployments.forEach(deploy => { + it(deploy + ' pods are ready in ' + cluster, () => helpers.checkDeployment({ context: cluster, namespace: "bookinfo-frontends", k8sObj: deploy })); + }); + deployments = ["ratings-v1", "details-v1", "reviews-v1", "reviews-v2", "reviews-v3"]; + deployments.forEach(deploy => { + it(deploy + ' pods are ready in ' + cluster, () => helpers.checkDeployment({ context: cluster, namespace: "bookinfo-backends", k8sObj: deploy })); + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/deploy-bookinfo/tests/check-bookinfo.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context ${CLUSTER1} create ns httpbin +kubectl apply --context ${CLUSTER1} -f - </dev/null +do + sleep 1 + echo -n . 
+done" +echo +kubectl --context ${CLUSTER1} -n httpbin get pods +cat <<'EOF' > ./test.js +const helpers = require('./tests/chai-exec'); + +describe("httpbin app", () => { + let cluster = process.env.CLUSTER1 + + let deployments = ["not-in-mesh", "in-mesh"]; + + deployments.forEach(deploy => { + it(deploy + ' pods are ready in ' + cluster, () => helpers.checkDeployment({ context: cluster, namespace: "httpbin", k8sObj: deploy })); + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/httpbin/deploy-httpbin/tests/check-httpbin.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context ${CLUSTER1} create namespace gloo-mesh-addons +kubectl --context ${CLUSTER1} label namespace gloo-mesh-addons istio.io/rev=1-20 --overwrite +kubectl --context ${CLUSTER2} create namespace gloo-mesh-addons +kubectl --context ${CLUSTER2} label namespace gloo-mesh-addons istio.io/rev=1-20 --overwrite +helm upgrade --install gloo-platform gloo-platform \ + --repo https://storage.googleapis.com/gloo-platform/helm-charts \ + --namespace gloo-mesh-addons \ + --kube-context ${CLUSTER1} \ + --version 2.5.12 \ + -f -< ./test.js +const helpers = require('./tests/chai-exec'); + +describe("Gloo Platform add-ons cluster1 deployment", () => { + let cluster = process.env.CLUSTER1 + let deployments = ["ext-auth-service", "rate-limiter"]; + deployments.forEach(deploy => { + it(deploy + ' pods are ready in ' + cluster, () => helpers.checkDeployment({ context: cluster, namespace: "gloo-mesh-addons", k8sObj: deploy })); + }); +}); +describe("Gloo Platform add-ons cluster2 deployment", () => { + let cluster = process.env.CLUSTER2 + let deployments = ["ext-auth-service", "rate-limiter"]; + deployments.forEach(deploy => { + it(deploy + ' pods are ready in ' + cluster, () => helpers.checkDeployment({ context: cluster, namespace: "gloo-mesh-addons", k8sObj: 
deploy })); + }); +}); + +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-gloo-mesh-addons/tests/check-addons-deployments.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +cat <<'EOF' > ./test.js +const helpers = require('./tests/chai-exec'); + +describe("Gloo Platform add-ons cluster1 service", () => { + let cluster = process.env.CLUSTER1 + let services = ["ext-auth-service", "rate-limiter"]; + services.forEach(service => { + it(service + ' exists in ' + cluster, () => helpers.k8sObjectIsPresent({ context: cluster, namespace: "gloo-mesh-addons", k8sType: "service", k8sObj: service })); + }); +}); +describe("Gloo Platform add-ons cluster2 service", () => { + let cluster = process.env.CLUSTER2 + let services = ["ext-auth-service", "rate-limiter"]; + services.forEach(service => { + it(service + ' exists in ' + cluster, () => helpers.k8sObjectIsPresent({ context: cluster, namespace: "gloo-mesh-addons", k8sType: "service", k8sObj: service })); + }); +}); + +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-gloo-mesh-addons/tests/check-addons-services.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl apply --context ${MGMT} -f - < ./test.js +const helpers = require('./tests/chai-http'); + +describe("Productpage is available (HTTP)", () => { + it('/productpage is available in cluster1', () => helpers.checkURL({ host: `http://cluster1-bookinfo.example.com`, path: '/productpage', retCode: 200 })); +}) +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/gateway-expose/tests/productpage-available.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; 
exit 1; } +openssl req -x509 -nodes -days 365 -newkey rsa:2048 \ + -keyout tls.key -out tls.crt -subj "/CN=*" +kubectl --context ${CLUSTER1} -n istio-gateways create secret generic tls-secret \ + --from-file=tls.key=tls.key \ + --from-file=tls.crt=tls.crt + +kubectl --context ${CLUSTER2} -n istio-gateways create secret generic tls-secret \ + --from-file=tls.key=tls.key \ + --from-file=tls.crt=tls.crt +kubectl apply --context ${CLUSTER1} -f - < ./test.js +const helpers = require('./tests/chai-http'); + +describe("Productpage is available (HTTPS)", () => { + it('/productpage is available in cluster1', () => helpers.checkURL({ host: `https://cluster1-bookinfo.example.com`, path: '/productpage', retCode: 200 })); +}) +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/gateway-expose/tests/productpage-available-secure.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +cat <<'EOF' > ./test.js +var chai = require('chai'); +var expect = chai.expect; +const helpers = require('./tests/chai-exec'); + +describe("Otel metrics", () => { + it("cluster1 is sending metrics to telemetryGateway", () => { + podName = helpers.getOutputForCommand({ command: "kubectl -n gloo-mesh get pods -l app.kubernetes.io/name=prometheus -o jsonpath='{.items[0].metadata.name}' --context " + process.env.MGMT }).replaceAll("'", ""); + command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh debug -q -i " + podName + " --image=curlimages/curl -- curl -s http://localhost:9090/api/v1/query?query=istio_requests_total" }).replaceAll("'", ""); + expect(command).to.contain("cluster\":\"cluster1"); + }); +}); + + +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/gateway-expose/tests/otel-metrics.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 
--retries=150 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl apply --context ${CLUSTER1} -f - < ./test.js +var chai = require('chai'); +var expect = chai.expect; +const chaiHttp = require("chai-http"); +chai.use(chaiHttp); + +process.env.NODE_TLS_REJECT_UNAUTHORIZED = '0'; + +afterEach(function (done) { + if (this.currentTest.currentRetry() > 0) { + process.stdout.write("."); + setTimeout(done, 1000); + } else { + done(); + } +}); + +let searchTest="Sorry, product reviews are currently unavailable for this book."; + +describe("Reviews shouldn't be available", () => { + it("Checking text '" + searchTest + "' in cluster1", async () => { + await chai.request(`https://cluster1-bookinfo.example.com`) + .get('/productpage') + .send() + .then((res) => { + expect(res.text).to.contain(searchTest); + }); + }); + +}); + +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/traffic-policies/tests/traffic-policies-reviews-unavailable.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context ${CLUSTER1} -n bookinfo-frontends delete faultinjectionpolicy ratings-fault-injection +kubectl --context ${CLUSTER1} -n bookinfo-frontends delete routetable ratings +kubectl --context ${CLUSTER1} -n bookinfo-frontends delete retrytimeoutpolicy reviews-request-timeout +kubectl --context ${CLUSTER1} -n bookinfo-frontends delete routetable reviews +kubectl apply --context ${MGMT} -f - < ./test.js +const helpers = require('./tests/chai-exec'); + +describe("cacerts secrets have been created", () => { + const clusters = [process.env.CLUSTER1, process.env.CLUSTER2]; + clusters.forEach(cluster => { + it('Secret is present in ' + cluster, () => helpers.k8sObjectIsPresent({ context: cluster, namespace: "istio-system", k8sType: "secret", k8sObj: "cacerts" })); + }); +}); +EOF +echo "executing test 
dist/gloo-mesh-2-0-workshop/build/templates/steps/root-trust-policy/tests/cacert-secrets-created.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=150 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +printf "Waiting for all pods needed for the test..." +printf "\n" +kubectl --context ${CLUSTER1} get deploy -n bookinfo-backends -oname|xargs -I {} kubectl --context ${CLUSTER1} rollout status -n bookinfo-backends {} +kubectl --context ${CLUSTER2} get deploy -n bookinfo-backends -oname|xargs -I {} kubectl --context ${CLUSTER2} rollout status -n bookinfo-backends {} +printf "\n" +cat <<'EOF' > ./test.js +const chaiExec = require("@jsdevtools/chai-exec"); +var chai = require('chai'); +var expect = chai.expect; +chai.use(chaiExec); + +afterEach(function (done) { + if (this.currentTest.currentRetry() > 0) { + process.stdout.write("."); + setTimeout(done, 1000); + } else { + done(); + } +}); + +const testerPodName = "tester-root-trust-policy"; +before(function (done) { + chaiExec(`kubectl --context ${process.env.CLUSTER1} -n gloo-mesh run --image=alpine/openssl:3.3.1 ${testerPodName} --command --wait=false -- sleep infinity`); + chaiExec(`kubectl --context ${process.env.CLUSTER2} -n gloo-mesh run --image=alpine/openssl:3.3.1 ${testerPodName} --command --wait=false -- sleep infinity`); + done(); +}); +after(function (done) { + chaiExec(`kubectl --context ${process.env.CLUSTER1} -n gloo-mesh delete pod ${testerPodName} --wait=false`); + chaiExec(`kubectl --context ${process.env.CLUSTER2} -n gloo-mesh delete pod ${testerPodName} --wait=false`); + done(); +}); + +describe("Certificate issued by Gloo Mesh", () => { + var expectedOutput = "i:O=gloo-mesh"; + + it('Gloo mesh is the organization for ' + process.env.CLUSTER1 + ' certificate', () => { + let cli = chaiExec(`kubectl --context ${process.env.CLUSTER1} exec -t -n gloo-mesh ${testerPodName} -- openssl s_client -showcerts -connect ratings.bookinfo-backends:9080 -alpn 
istio`); + + expect(cli).stdout.to.contain(expectedOutput); + expect(cli).stderr.not.to.be.empty; + }); + + + it('Gloo mesh is the organization for ' + process.env.CLUSTER2 + ' certificate', () => { + let cli = chaiExec(`kubectl --context ${process.env.CLUSTER2} exec -t -n gloo-mesh ${testerPodName} -- openssl s_client -showcerts -connect ratings.bookinfo-backends:9080 -alpn istio`); + + expect(cli).stdout.to.contain(expectedOutput); + expect(cli).stderr.not.to.be.empty; + }); + +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/root-trust-policy/tests/certificate-issued-by-gloo-mesh.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl apply --context ${CLUSTER1} -f - < ./test.js +const helpers = require('./tests/chai-exec'); + +describe("The productpage service should get responses from cluster1", () => { + const podName = helpers.getOutputForCommand({ command: "kubectl -n bookinfo-frontends get pods -l app=productpage -o jsonpath='{.items[0].metadata.name}' --context " + process.env.CLUSTER1 }).replaceAll("'", ""); + const command = "kubectl -n bookinfo-frontends exec " + podName + " --context " + process.env.CLUSTER1 + " -- python -c \"import requests; r = requests.get('http://reviews.global:9080/reviews/0'); print(r.text)\""; + it('Got a response from cluster1', () => helpers.genericCommand({ command: command, responseContains: "cluster1" })); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/east-west-virtual-destination/tests/reviews-from-cluster1.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +cat <<'EOF' > ./test.js +const helpers = require('./tests/chai-exec'); + +describe("The productpage service should get responses from cluster2", () => { + const podName
= helpers.getOutputForCommand({ command: "kubectl -n bookinfo-frontends get pods -l app=productpage -o jsonpath='{.items[0].metadata.name}' --context " + process.env.CLUSTER1 }).replaceAll("'", ""); + const command = "kubectl -n bookinfo-frontends exec " + podName + " --context " + process.env.CLUSTER1 + " -- python -c \"import requests; r = requests.get('http://reviews.global:9080/reviews/0'); print(r.text)\""; + it('Got a response from cluster2', () => helpers.genericCommand({ command: command, responseContains: "cluster2" })); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/east-west-virtual-destination/tests/reviews-from-cluster2.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl apply --context ${CLUSTER1} -f - < ./test.js +const helpers = require('./tests/chai-exec'); + +describe("The productpage service should get responses from cluster1", () => { + const podName = helpers.getOutputForCommand({ command: "kubectl -n bookinfo-frontends get pods -l app=productpage -o jsonpath='{.items[0].metadata.name}' --context " + process.env.CLUSTER1 }).replaceAll("'", ""); + const command = "kubectl -n bookinfo-frontends exec " + podName + " --context " + process.env.CLUSTER1 + " -- python -c \"import requests; r = requests.get('http://reviews.global:9080/reviews/0'); print(r.text)\""; + it('Got a response from cluster1', () => helpers.genericCommand({ command: command, responseContains: "cluster1" })); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/east-west-virtual-destination/tests/reviews-from-cluster1.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context $CLUSTER1 -n bookinfo-frontends exec deploy/productpage-v1 -- python -c "import
requests; r = requests.get('http://reviews.global:9080/reviews/0'); print(r.text)" +kubectl --context ${CLUSTER1} -n bookinfo-backends scale deploy/reviews-v1 --replicas=0 +kubectl --context ${CLUSTER1} -n bookinfo-backends scale deploy/reviews-v2 --replicas=0 +kubectl --context ${CLUSTER1} -n bookinfo-backends wait --for=jsonpath='{.spec.replicas}'=0 deploy/reviews-v1 +kubectl --context ${CLUSTER1} -n bookinfo-backends wait --for=jsonpath='{.spec.replicas}'=0 deploy/reviews-v2 +cat <<'EOF' > ./test.js +const helpers = require('./tests/chai-exec'); + +describe("The productpage service should get responses from cluster2", () => { + const podName = helpers.getOutputForCommand({ command: "kubectl -n bookinfo-frontends get pods -l app=productpage -o jsonpath='{.items[0].metadata.name}' --context " + process.env.CLUSTER1 }).replaceAll("'", ""); + const command = "kubectl -n bookinfo-frontends exec " + podName + " --context " + process.env.CLUSTER1 + " -- python -c \"import requests; r = requests.get('http://reviews.global:9080/reviews/0'); print(r.text)\""; + it('Got a response from cluster2', () => helpers.genericCommand({ command: command, responseContains: "cluster2" })); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/east-west-virtual-destination/tests/reviews-from-cluster2.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context $CLUSTER1 -n bookinfo-frontends exec deploy/productpage-v1 -- python -c "import requests; r = requests.get('http://reviews.global:9080/reviews/0'); print(r.text)" +kubectl --context ${CLUSTER1} -n bookinfo-backends scale deploy/reviews-v1 --replicas=1 +kubectl --context ${CLUSTER1} -n bookinfo-backends scale deploy/reviews-v2 --replicas=1 +kubectl --context ${CLUSTER1} -n bookinfo-backends wait --for=jsonpath='{.status.readyReplicas}'=1 deploy/reviews-v1 +kubectl --context 
${CLUSTER1} -n bookinfo-backends wait --for=jsonpath='{.status.readyReplicas}'=1 deploy/reviews-v2 +kubectl --context ${CLUSTER1} -n bookinfo-backends patch deploy reviews-v1 --patch '{"spec": {"template": {"spec": {"containers": [{"name": "reviews","command": ["sleep", "20h"]}]}}}}' +kubectl --context ${CLUSTER1} -n bookinfo-backends patch deploy reviews-v2 --patch '{"spec": {"template": {"spec": {"containers": [{"name": "reviews","command": ["sleep", "20h"]}]}}}}' +kubectl --context ${CLUSTER1} -n bookinfo-backends rollout status deploy/reviews-v1 +kubectl --context ${CLUSTER1} -n bookinfo-backends rollout status deploy/reviews-v2 +cat <<'EOF' > ./test.js +const helpers = require('./tests/chai-exec'); + +describe("The productpage service should get responses from cluster2", () => { + const podName = helpers.getOutputForCommand({ command: "kubectl -n bookinfo-frontends get pods -l app=productpage -o jsonpath='{.items[0].metadata.name}' --context " + process.env.CLUSTER1 }).replaceAll("'", ""); + const command = "kubectl -n bookinfo-frontends exec " + podName + " --context " + process.env.CLUSTER1 + " -- python -c \"import requests; r = requests.get('http://reviews.global:9080/reviews/0'); print(r.text)\""; + it('Got a response from cluster2', () => helpers.genericCommand({ command: command, responseContains: "cluster2" })); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/east-west-virtual-destination/tests/reviews-from-cluster2.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context $CLUSTER1 -n bookinfo-frontends exec deploy/productpage-v1 -- python -c "import requests; r = requests.get('http://reviews.global:9080/reviews/0'); print(r.text)" +kubectl --context ${CLUSTER1} -n bookinfo-backends patch deployment reviews-v1 --type json -p '[{"op": "remove", "path": 
"/spec/template/spec/containers/0/command"}]' +kubectl --context ${CLUSTER1} -n bookinfo-backends patch deployment reviews-v2 --type json -p '[{"op": "remove", "path": "/spec/template/spec/containers/0/command"}]' +kubectl --context ${CLUSTER1} -n bookinfo-backends rollout status deploy/reviews-v1 +kubectl --context ${CLUSTER1} -n bookinfo-backends rollout status deploy/reviews-v2 +kubectl --context ${CLUSTER1} -n bookinfo-backends delete virtualdestination reviews +kubectl --context ${CLUSTER1} -n bookinfo-backends delete failoverpolicy failover +kubectl --context ${CLUSTER1} -n bookinfo-backends delete outlierdetectionpolicy outlier-detection +(timeout 2s kubectl --context ${CLUSTER1} -n httpbin rollout status deploy/in-mesh) || (kubectl --context ${CLUSTER1} -n httpbin rollout restart deploy/in-mesh && kubectl --context ${CLUSTER1} -n httpbin rollout status deploy/in-mesh) +cat <<'EOF' > ./test.js +var chai = require('chai'); +var expect = chai.expect; +const helpers = require('./tests/chai-exec'); +describe("Communication allowed", () => { + it("Response code should be 200", () => { + const podName = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n httpbin get pods -l app=not-in-mesh -o jsonpath='{.items[0].metadata.name}'" }).replaceAll("'", ""); + const command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n httpbin debug -i -q " + podName + " --image=curlimages/curl -- curl -s -o /dev/null -w \"%{http_code}\" http://reviews.bookinfo-backends:9080/reviews/0" }).replaceAll("'", ""); + expect(command).to.contain("200"); + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/httpbin/zero-trust/tests/not-in-mesh-to-in-mesh-allowed.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +cat <<'EOF' > ./test.js +var chai = require('chai'); 
+var expect = chai.expect; +const helpers = require('./tests/chai-exec'); +describe("Communication allowed", () => { + it("Response code should be 200", () => { + const podName = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n httpbin get pods -l app=in-mesh -o jsonpath='{.items[0].metadata.name}'" }).replaceAll("'", ""); + const command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n httpbin debug -i -q " + podName + " --image=curlimages/curl -- curl -s -o /dev/null -w \"%{http_code}\" http://reviews.bookinfo-backends:9080/reviews/0" }).replaceAll("'", ""); + expect(command).to.contain("200"); + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/httpbin/zero-trust/tests/in-mesh-to-in-mesh-allowed.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl apply --context ${CLUSTER1} -f - < ./test.js +var chai = require('chai'); +var expect = chai.expect; +const helpers = require('./tests/chai-exec'); +describe("Communication not allowed", () => { + it("Response code shouldn't be 200", () => { + const podName = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n httpbin get pods -l app=not-in-mesh -o jsonpath='{.items[0].metadata.name}'" }).replaceAll("'", ""); + const command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n httpbin debug -i -q " + podName + " --image=curlimages/curl -- curl -s -o /dev/null -w \"%{http_code}\" --max-time 3 http://reviews.bookinfo-backends:9080/reviews/0" }).replaceAll("'", ""); + expect(command).not.to.contain("200"); + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/httpbin/zero-trust/tests/not-in-mesh-to-in-mesh-not-allowed.test.js.liquid" +timeout --signal=INT 3m mocha 
./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +cat <<'EOF' > ./test.js +var chai = require('chai'); +var expect = chai.expect; +const helpers = require('./tests/chai-exec'); +describe("Communication not allowed", () => { + it("Response code shouldn't be 200", () => { + const podName = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n httpbin get pods -l app=in-mesh -o jsonpath='{.items[0].metadata.name}'" }).replaceAll("'", ""); + const command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n httpbin debug -i -q " + podName + " --image=curlimages/curl -- curl -s -o /dev/null -w \"%{http_code}\" --max-time 3 http://reviews.bookinfo-backends:9080/reviews/0" }).replaceAll("'", ""); + expect(command).not.to.contain("200"); + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/httpbin/zero-trust/tests/in-mesh-to-in-mesh-not-allowed.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl apply --context ${CLUSTER1} -f - < ./test.js +var chai = require('chai'); +var expect = chai.expect; +const helpers = require('./tests/chai-exec'); + +describe("Communication status", () => { + + it("Response code shouldn't be 200 accessing ratings", () => { + const command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n bookinfo-frontends exec deploy/productpage-v1 -- python -c \"import requests; r = requests.get('http://ratings.bookinfo-backends:9080/ratings/0', timeout=3); print(r.status_code)\"" }).replaceAll("'", ""); + expect(command).not.to.contain("200"); + }); + + it("Response code should be 200 accessing reviews with GET", () => { + const command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n 
bookinfo-frontends exec deploy/productpage-v1 -- python -c \"import requests; r = requests.get('http://reviews.bookinfo-backends:9080/reviews/0'); print(r.status_code)\"" }).replaceAll("'", "");
+    expect(command).to.contain("200");
+  });
+
+  it("Response code should be 403 accessing reviews with HEAD", () => {
+    const command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n bookinfo-frontends exec deploy/productpage-v1 -- python -c \"import requests; r = requests.head('http://reviews.bookinfo-backends:9080/reviews/0'); print(r.status_code)\"" }).replaceAll("'", "");
+    expect(command).to.contain("403");
+  });
+
+  it("Response code should be 200 accessing details", () => {
+    const command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n bookinfo-frontends exec deploy/productpage-v1 -- python -c \"import requests; r = requests.get('http://details.bookinfo-backends:9080/details/0'); print(r.status_code)\"" }).replaceAll("'", "");
+    expect(command).to.contain("200");
+  });
+});
+
+EOF
+echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/httpbin/zero-trust/tests/bookinfo-access.test.js.liquid"
+timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+kubectl apply --context ${CLUSTER1} -f - < ./test.js
+const helpers = require('./tests/chai-exec');
+
+describe("kube-prometheus-stack deployments are ready", () => {
+  it('kube-prometheus-stack-kube-state-metrics pods are ready', () => helpers.checkDeployment({ context: process.env.MGMT, namespace: "monitoring", k8sObj: "kube-prometheus-stack-kube-state-metrics" }));
+  it('kube-prometheus-stack-grafana pods are ready', () => helpers.checkDeployment({ context: process.env.MGMT, namespace: "monitoring", k8sObj: "kube-prometheus-stack-grafana" }));
+  it('kube-prometheus-stack-operator pods are ready', () => helpers.checkDeployment({ context: process.env.MGMT, namespace: "monitoring", k8sObj: "kube-prometheus-stack-operator" }));
+});
+
+describe("kube-prometheus-stack daemonset is ready", () => {
+  it('kube-prometheus-stack-prometheus-node-exporter pods are ready', () => helpers.checkDaemonSet({ context: process.env.MGMT, namespace: "monitoring", k8sObj: "kube-prometheus-stack-prometheus-node-exporter" }));
+});
+EOF
+echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/gloo-platform-observability/tests/grafana-installed.test.js.liquid"
+timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+PROD_PROMETHEUS_IP=$(kubectl get svc kube-prometheus-stack-prometheus -n monitoring -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
+helm upgrade --install gloo-platform-agent gloo-platform \
+  --repo https://storage.googleapis.com/gloo-platform/helm-charts \
+  --namespace gloo-mesh \
+  --kube-context ${CLUSTER1} \
+  --reuse-values \
+  --version 2.5.12 \
+  --values - < ./test.js
+var chai = require('chai');
+var expect = chai.expect;
+const helpers = require('./tests/chai-exec');
+
+describe("Communication status", () => {
+  it("Productpage can send requests to httpbin.org", () => {
+    const command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n bookinfo-frontends exec deploy/productpage-v1 -- python -c \"import requests; r = requests.get('http://httpbin.org/get'); print(r.status_code)\"" }).replaceAll("'", "");
+    expect(command).to.contain("200");
+  });
+});
+
+EOF
+echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/secure-egress/tests/productpage-to-httpbin-allowed.test.js.liquid"
+timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+kubectl apply --context ${MGMT} -f - < ./test.js
+var chai = require('chai');
+var expect = chai.expect;
+const helpers = require('./tests/chai-exec');
+
+describe("Communication not allowed", () => {
+  it("Productpage can NOT send requests to httpbin.org", () => {
+    const command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n bookinfo-frontends exec deploy/productpage-v1 -- python -c \"import requests; r = requests.get('http://httpbin.org/get', timeout=5); print(r.text)\"" }).replaceAll("'", "");
+    expect(command).not.to.contain("User-Agent");
+  });
+});
+
+EOF
+echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/secure-egress/tests/productpage-to-httpbin-not-allowed.test.js.liquid"
+timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+kubectl apply --context ${CLUSTER1} -f - < ./test.js
+var chai = require('chai');
+var expect = chai.expect;
+const helpers = require('./tests/chai-exec');
+
+describe("Communication status", () => {
+  it("Productpage can send requests to httpbin.org", () => {
+    const command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n bookinfo-frontends exec deploy/productpage-v1 -- python -c \"import requests; r = requests.get('http://httpbin.org/get'); print(r.status_code)\"" }).replaceAll("'", "");
+    expect(command).to.contain("200");
+  });
+});
+
+EOF
+echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/secure-egress/tests/productpage-to-httpbin-allowed.test.js.liquid"
+timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+kubectl apply --context ${CLUSTER1} -f - < ./test.js
+var chai = require('chai');
+var expect = chai.expect;
+const helpers = require('./tests/chai-exec');
+
+describe("Communication status", () => {
+  it("Productpage can send GET requests to httpbin.org", () => {
+    const command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n bookinfo-frontends exec deploy/productpage-v1 -- python -c \"import requests; r = requests.get('http://httpbin.org/get'); print(r.status_code)\"" }).replaceAll("'", "");
+    expect(command).to.contain("200");
+  });
+
+  it("Productpage can't send POST requests to httpbin.org", () => {
+    const command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n bookinfo-frontends exec deploy/productpage-v1 -- python -c \"import requests; r = requests.post('http://httpbin.org/post'); print(r.status_code)\"" }).replaceAll("'", "");
+    expect(command).to.contain("403");
+  });
+});
+
+EOF
+echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/secure-egress/tests/productpage-to-httpbin-only-get-allowed.test.js.liquid"
+timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+kubectl --context ${CLUSTER1} -n bookinfo-frontends delete networkpolicy restrict-egress
+kubectl --context ${CLUSTER1} -n bookinfo-frontends delete externalservice httpbin
+kubectl --context ${CLUSTER1} -n istio-gateways delete accesspolicy allow-get-httpbin
diff --git a/gloo-mesh/enterprise/2-5/mgmt-as-workload/default/scripts/configure-domain-rewrite.sh b/gloo-mesh/enterprise/2-5/mgmt-as-workload/default/scripts/configure-domain-rewrite.sh
index be6dbd6d8b..d6e684c9da 100755
--- a/gloo-mesh/enterprise/2-5/mgmt-as-workload/default/scripts/configure-domain-rewrite.sh
+++ b/gloo-mesh/enterprise/2-5/mgmt-as-workload/default/scripts/configure-domain-rewrite.sh
@@ -90,4 +90,4 @@ done
 
 # If the loop exits, it means the check failed consistently for 1 minute
 echo "DNS rewrite rule verification failed."
-exit 1
+exit 1
\ No newline at end of file
diff --git a/gloo-mesh/enterprise/2-5/mgmt-as-workload/default/scripts/register-domain.sh b/gloo-mesh/enterprise/2-5/mgmt-as-workload/default/scripts/register-domain.sh
index f9084487e8..1cb84cd86a 100755
--- a/gloo-mesh/enterprise/2-5/mgmt-as-workload/default/scripts/register-domain.sh
+++ b/gloo-mesh/enterprise/2-5/mgmt-as-workload/default/scripts/register-domain.sh
@@ -14,7 +14,9 @@ hosts_file="/etc/hosts"
 
 # Function to check if the input is a valid IP address
 is_ip() {
   if [[ $1 =~ ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
-    return 0 # 0 = true
+    return 0 # 0 = true - valid IPv4 address
+  elif [[ $1 =~ ^[0-9a-f]+[:]+[0-9a-f]*[:]*[0-9a-f]*[:]*[0-9a-f]*[:]*[0-9a-f]*[:]*[0-9a-f]*[:]*[0-9]*$ ]]; then
+    return 0 # 0 = true - valid IPv6 address
   else
     return 1 # 1 = false
   fi
@@ -38,14 +40,15 @@ else
 fi
 
 # Check if the entry already exists
-if grep -q "$hostname" "$hosts_file"; then
+if grep -q "$hostname\$" "$hosts_file"; then
   # Update the existing entry with the new IP
   tempfile=$(mktemp)
-  sed "s/^.*$hostname/$new_ip $hostname/" "$hosts_file" > "$tempfile"
+  sed "s/^.*$hostname\$/$new_ip $hostname/" "$hosts_file" > "$tempfile"
   sudo cp "$tempfile" "$hosts_file"
+  rm "$tempfile"
   echo "Updated $hostname in $hosts_file with new IP: $new_ip"
 else
   # Add a new entry if it doesn't exist
   echo "$new_ip $hostname" | sudo tee -a "$hosts_file" > /dev/null
   echo "Added $hostname to $hosts_file with IP: $new_ip"
-fi
\ No newline at end of file
+fi
diff --git a/gloo-mesh/enterprise/2-5/mgmt-as-workload/default/tests/chai-exec.js b/gloo-mesh/enterprise/2-5/mgmt-as-workload/default/tests/chai-exec.js
index 67ba62f095..020262437f 100644
--- a/gloo-mesh/enterprise/2-5/mgmt-as-workload/default/tests/chai-exec.js
+++ b/gloo-mesh/enterprise/2-5/mgmt-as-workload/default/tests/chai-exec.js
@@ -139,7 +139,11 @@ global = {
   },
 
   k8sObjectIsPresent: ({ context, namespace, k8sType, k8sObj }) => {
-    let command = "kubectl --context " + context + " -n " + namespace + " get " + k8sType + " " + k8sObj + " -o name";
+    // covers both namespace scoped and cluster scoped objects
+    let command = "kubectl --context " + context + " get " + k8sType + " " + k8sObj + " -o name";
+    if (namespace) {
+      command = "kubectl --context " + context + " -n " + namespace + " get " + k8sType + " " + k8sObj + " -o name";
+    }
     debugLog(`Executing command: ${command}`);
     let cli = chaiExec(command);
@@ -176,7 +180,6 @@ global = {
     debugLog(`Command output (stdout): ${cli.stdout}`);
     return cli.stdout;
   },
-
   curlInPod: ({ curlCommand, podName, namespace }) => {
     debugLog(`Executing curl command: ${curlCommand} on pod: ${podName} in namespace: ${namespace}`);
     const cli = chaiExec(curlCommand);
diff --git a/gloo-mesh/enterprise/2-5/mgmt-as-workload/default/tests/chai-http.js b/gloo-mesh/enterprise/2-5/mgmt-as-workload/default/tests/chai-http.js
index 67f43db003..92bf579690 100644
--- a/gloo-mesh/enterprise/2-5/mgmt-as-workload/default/tests/chai-http.js
+++ b/gloo-mesh/enterprise/2-5/mgmt-as-workload/default/tests/chai-http.js
@@ -25,7 +25,30 @@ global = {
       .send()
       .then(async function (res) {
         debugLog(`Response status code: ${res.status}`);
-        expect(res).to.have.status(retCode);
+        expect(res).to.have.property('status', retCode);
+      });
+  },
+
+  checkURLWithIP: ({ ip, host, protocol = "http", path = "", headers = [], certFile = '', keyFile = '', retCode }) => {
+    debugLog(`Checking URL with IP: ${ip}, Host: ${host}, Path: ${path} with expected return code: ${retCode}`);
+
+    let cert = certFile ? fs.readFileSync(certFile) : '';
+    let key = keyFile ? fs.readFileSync(keyFile) : '';
+
+    let url = `${protocol}://${ip}`;
+
+    // Use chai-http to make a request to the IP address, but set the Host header
+    let request = chai.request(url).head(path).redirects(0).cert(cert).key(key).set('Host', host);
+
+    debugLog(`Setting headers: ${JSON.stringify(headers)}`);
+    headers.forEach(header => request.set(header.key, header.value));
+
+    return request
+      .send()
+      .then(async function (res) {
+        debugLog(`Response status code: ${res.status}`);
+        debugLog(`Response ${JSON.stringify(res)}`);
+        expect(res).to.have.property('status', retCode);
       });
   },
@@ -124,7 +147,7 @@ global = {
       .send()
       .then(async function (res) {
         debugLog(`Response status code: ${res.status}`);
-        expect(res).to.have.status(retCode);
+        expect(res).to.have.property('status', retCode);
       });
   }
 };
diff --git a/gloo-mesh/enterprise/2-5/mgmt-as-workload/default/tests/proxies-changes.test.js.liquid b/gloo-mesh/enterprise/2-5/mgmt-as-workload/default/tests/proxies-changes.test.js.liquid
new file mode 100644
index 0000000000..1934ea13b6
--- /dev/null
+++ b/gloo-mesh/enterprise/2-5/mgmt-as-workload/default/tests/proxies-changes.test.js.liquid
@@ -0,0 +1,58 @@
+{%- assign version_1_18_or_after = "1.18.0" | minimumGlooGatewayVersion %}
+const { execSync } = require('child_process');
+const { expect } = require('chai');
+const { diff } = require('jest-diff');
+
+function delay(ms) {
+  return new Promise(resolve => setTimeout(resolve, ms));
+}
+
+describe('Gloo snapshot stability test', function() {
+  let contextName = process.env.{{ context | default: "CLUSTER1" }};
+  let delaySeconds = {{ delay | default: 5 }};
+
+  let firstSnapshot;
+
+  it('should retrieve initial snapshot', function() {
+    const output = execSync(
+      `kubectl --context ${contextName} -n gloo-system exec deploy/gloo -- wget -O - localhost:{% if version_1_18_or_after %}9095{% else %}9091{% endif %}/snapshots/proxies -q`,
+      { encoding: 'utf8' }
+    );
+
+    try {
+      firstSnapshot = JSON.parse(output);
+    } catch (err) {
+      throw new Error('Failed to parse JSON output from initial snapshot: ' + err.message);
+    }
+    expect(firstSnapshot).to.be.an('object');
+  });
+
+  it('should not change after the given delay', async function() {
+    await delay(delaySeconds * 1000);
+
+    let secondSnapshot;
+    try {
+      const output2 = execSync(
+        `kubectl --context ${contextName} -n gloo-system exec deploy/gloo -- wget -O - localhost:{% if version_1_18_or_after %}9095{% else %}9091{% endif %}/snapshots/proxies -q`,
+        { encoding: 'utf8' }
+      );
+      secondSnapshot = JSON.parse(output2);
+    } catch (err) {
+      throw new Error('Failed to retrieve or parse the second snapshot: ' + err.message);
+    }
+
+    const firstJson = JSON.stringify(firstSnapshot, null, 2);
+    const secondJson = JSON.stringify(secondSnapshot, null, 2);
+
+    // Show only 2 lines of context around each change
+    const diffOutput = diff(firstJson, secondJson, { contextLines: 2, expand: false });
+
+    if (! diffOutput.includes("Compared values have no visual difference.")) {
+      console.error('Differences found between snapshots:\n' + diffOutput);
+      throw new Error('Snapshots differ after the delay.');
+    } else {
+      console.log('No differences found. The snapshots are stable.');
+    }
+  });
+});
+
diff --git a/gloo-mesh/enterprise/2-5/openshift/default/README.md b/gloo-mesh/enterprise/2-5/openshift/default/README.md
index d122712076..8bccddde57 100644
--- a/gloo-mesh/enterprise/2-5/openshift/default/README.md
+++ b/gloo-mesh/enterprise/2-5/openshift/default/README.md
@@ -16,22 +16,21 @@ source ./scripts/assert.sh
 ## Table of Contents
 * [Introduction](#introduction)
 * [Lab 1 - Deploy the Kubernetes clusters manually](#lab-1---deploy-the-kubernetes-clusters-manually-)
-* [Lab 2 - Deploy KinD clusters](#lab-2---deploy-kind-clusters-)
-* [Lab 3 - Deploy and register Gloo Mesh](#lab-3---deploy-and-register-gloo-mesh-)
-* [Lab 4 - Deploy Istio using Gloo Mesh Lifecycle Manager](#lab-4---deploy-istio-using-gloo-mesh-lifecycle-manager-)
-* [Lab 5 - Deploy the Bookinfo demo app](#lab-5---deploy-the-bookinfo-demo-app-)
-* [Lab 6 - Deploy the httpbin demo app](#lab-6---deploy-the-httpbin-demo-app-)
-* [Lab 7 - Deploy Gloo Mesh Addons](#lab-7---deploy-gloo-mesh-addons-)
-* [Lab 8 - Create the gateways workspace](#lab-8---create-the-gateways-workspace-)
-* [Lab 9 - Create the bookinfo workspace](#lab-9---create-the-bookinfo-workspace-)
-* [Lab 10 - Expose the productpage through a gateway](#lab-10---expose-the-productpage-through-a-gateway-)
-* [Lab 11 - Traffic policies](#lab-11---traffic-policies-)
-* [Lab 12 - Create the Root Trust Policy](#lab-12---create-the-root-trust-policy-)
-* [Lab 13 - Leverage Virtual Destinations for east west communications](#lab-13---leverage-virtual-destinations-for-east-west-communications-)
-* [Lab 14 - Zero trust](#lab-14---zero-trust-)
-* [Lab 15 - See how Gloo Platform can help with observability](#lab-15---see-how-gloo-platform-can-help-with-observability-)
-* [Lab 16 - VM integration with Spire](#lab-16---vm-integration-with-spire-)
-* [Lab 17 - Securing the egress traffic](#lab-17---securing-the-egress-traffic-)
+* [Lab 2 - Deploy and register Gloo Mesh](#lab-2---deploy-and-register-gloo-mesh-)
+* [Lab 3 - Deploy Istio using Gloo Mesh Lifecycle Manager](#lab-3---deploy-istio-using-gloo-mesh-lifecycle-manager-)
+* [Lab 4 - Deploy the Bookinfo demo app](#lab-4---deploy-the-bookinfo-demo-app-)
+* [Lab 5 - Deploy the httpbin demo app](#lab-5---deploy-the-httpbin-demo-app-)
+* [Lab 6 - Deploy Gloo Mesh Addons](#lab-6---deploy-gloo-mesh-addons-)
+* [Lab 7 - Create the gateways workspace](#lab-7---create-the-gateways-workspace-)
+* [Lab 8 - Create the bookinfo workspace](#lab-8---create-the-bookinfo-workspace-)
+* [Lab 9 - Expose the productpage through a gateway](#lab-9---expose-the-productpage-through-a-gateway-)
+* [Lab 10 - Traffic policies](#lab-10---traffic-policies-)
+* [Lab 11 - Create the Root Trust Policy](#lab-11---create-the-root-trust-policy-)
+* [Lab 12 - Leverage Virtual Destinations for east west communications](#lab-12---leverage-virtual-destinations-for-east-west-communications-)
+* [Lab 13 - Zero trust](#lab-13---zero-trust-)
+* [Lab 14 - See how Gloo Platform can help with observability](#lab-14---see-how-gloo-platform-can-help-with-observability-)
+* [Lab 15 - VM integration with Spire](#lab-15---vm-integration-with-spire-)
+* [Lab 16 - Securing the egress traffic](#lab-16---securing-the-egress-traffic-)
@@ -105,89 +104,7 @@ kubectl config use-context ${MGMT}
-
-## Lab 2 - Deploy KinD clusters
-
-
-Clone this repository and go to the directory where this `README.md` file is.
-
-Set the context environment variables:
-
-```bash
-export MGMT=mgmt
-export CLUSTER1=cluster1
-export CLUSTER2=cluster2
-```
-
-Run the following commands to deploy three Kubernetes clusters using [Kind](https://kind.sigs.k8s.io/):
-
-```bash
-./scripts/deploy-aws-with-calico.sh 1 mgmt
-./scripts/deploy-aws-with-calico.sh 2 cluster1 us-west us-west-1
-./scripts/deploy-aws-with-calico.sh 3 cluster2 us-west us-west-2
-```
-
-Then run the following commands to wait for all the Pods to be ready:
-
-```bash
-./scripts/check.sh mgmt
-./scripts/check.sh cluster1
-./scripts/check.sh cluster2
-```
-
-**Note:** If you run the `check.sh` script immediately after the `deploy.sh` script, you may see a jsonpath error. If that happens, simply wait a few seconds and try again.
-
-Once the `check.sh` script completes, when you execute the `kubectl get pods -A` command, you should see the following:
-
-```
-NAMESPACE NAME READY STATUS RESTARTS AGE
-kube-system calico-kube-controllers-59d85c5c84-sbk4k 1/1 Running 0 4h26m
-kube-system calico-node-przxs 1/1 Running 0 4h26m
-kube-system coredns-6955765f44-ln8f5 1/1 Running 0 4h26m
-kube-system coredns-6955765f44-s7xxx 1/1 Running 0 4h26m
-kube-system etcd-cluster1-control-plane 1/1 Running 0 4h27m
-kube-system kube-apiserver-cluster1-control-plane 1/1 Running 0 4h27m
-kube-system kube-controller-manager-cluster1-control-plane1/1 Running 0 4h27m
-kube-system kube-proxy-ksvzw 1/1 Running 0 4h26m
-kube-system kube-scheduler-cluster1-control-plane 1/1 Running 0 4h27m
-local-path-storage local-path-provisioner-58f6947c7-lfmdx 1/1 Running 0 4h26m
-metallb-system controller-5c9894b5cd-cn9x2 1/1 Running 0 4h26m
-metallb-system speaker-d7jkp 1/1 Running 0 4h26m
-```
-
-**Note:** The CNI pods might be different, depending on which CNI you have deployed.
-
-You can see that your currently connected to this cluster by executing the `kubectl config get-contexts` command:
-
-```
-CURRENT NAME CLUSTER AUTHINFO NAMESPACE
- cluster1 kind-cluster1 cluster1
-* cluster2 kind-cluster2 cluster2
- mgmt kind-mgmt kind-mgmt
-```
-
-Run the following command to make `mgmt` the current cluster.
-
-```bash
-kubectl config use-context ${MGMT}
-```
-
-
-
-
-## Lab 3 - Deploy and register Gloo Mesh
+## Lab 2 - Deploy and register Gloo Mesh
 [VIDEO LINK](https://youtu.be/djfFiepK4GY "Video Link")
@@ -227,6 +144,7 @@ EOF
 echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-and-register-gloo-mesh/tests/environment-variables.test.js.liquid"
 timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
 -->
+
 Run the following commands to deploy the Gloo Mesh management plane:
 
 ```bash
@@ -536,7 +454,8 @@ timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail ||
-## Lab 4 - Deploy Istio using Gloo Mesh Lifecycle Manager
+
+## Lab 3 - Deploy Istio using Gloo Mesh Lifecycle Manager
 [VIDEO LINK](https://youtu.be/f76-KOEjqHs "Video Link")
 We are going to deploy Istio using Gloo Mesh Lifecycle Manager.
@@ -1167,7 +1086,7 @@ timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail ||
-## Lab 5 - Deploy the Bookinfo demo app
+## Lab 4 - Deploy the Bookinfo demo app
 [VIDEO LINK](https://youtu.be/nzYcrjalY5A "Video Link")
 We're going to deploy the bookinfo application to demonstrate several features of Gloo Mesh.
@@ -1327,7 +1246,7 @@ timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail ||
-## Lab 6 - Deploy the httpbin demo app
+## Lab 5 - Deploy the httpbin demo app
 [VIDEO LINK](https://youtu.be/w1xB-o_gHs0 "Video Link")
@@ -1517,7 +1436,7 @@ timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail ||
-## Lab 7 - Deploy Gloo Mesh Addons
+## Lab 6 - Deploy Gloo Mesh Addons
 [VIDEO LINK](https://youtu.be/_rorug_2bk8 "Video Link")
 To use the Gloo Mesh Gateway advanced features (external authentication, rate limiting, ...), you need to install the Gloo Mesh addons.
@@ -1697,7 +1616,7 @@ This is what the environment looks like now:
-## Lab 8 - Create the gateways workspace
+## Lab 7 - Create the gateways workspace
 [VIDEO LINK](https://youtu.be/QeVBH0eswWw "Video Link")
 We're going to create a workspace for the team in charge of the Gateways.
@@ -1760,7 +1679,7 @@ The Gateway team has decided to import the following from the workspaces that ha
-## Lab 9 - Create the bookinfo workspace
+## Lab 8 - Create the bookinfo workspace
 We're going to create a workspace for the team in charge of the Bookinfo application.
@@ -1835,7 +1754,7 @@ This is how the environment looks like with the workspaces:
-## Lab 10 - Expose the productpage through a gateway
+## Lab 9 - Expose the productpage through a gateway
 [VIDEO LINK](https://youtu.be/emyIu99AOOA "Video Link")
 In this step, we're going to expose the `productpage` service through the Ingress Gateway using Gloo Mesh.
@@ -2104,7 +2023,7 @@ This diagram shows the flow of the request (through the Istio Ingress Gateway):
-## Lab 11 - Traffic policies
+## Lab 10 - Traffic policies
 [VIDEO LINK](https://youtu.be/ZBdt8WA0U64 "Video Link")
 We're going to use Gloo Mesh policies to inject faults and configure timeouts.
@@ -2282,7 +2201,7 @@ kubectl --context ${CLUSTER1} -n bookinfo-frontends delete routetable reviews
-## Lab 12 - Create the Root Trust Policy
+## Lab 11 - Create the Root Trust Policy
 [VIDEO LINK](https://youtu.be/-A2U2fYYgrU "Video Link")
 To allow secured (end-to-end mTLS) cross cluster communications, we need to make sure the certificates issued by the Istio control plane on each cluster are signed with intermediate certificates which have a common root CA.
@@ -2418,7 +2337,7 @@ timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail ||
-## Lab 13 - Leverage Virtual Destinations for east west communications
+## Lab 12 - Leverage Virtual Destinations for east west communications
 We can create a Virtual Destination which will be composed of the `reviews` services running in both clusters.
@@ -2677,7 +2596,7 @@ kubectl --context ${CLUSTER1} -n bookinfo-backends delete outlierdetectionpolicy
-## Lab 14 - Zero trust
+## Lab 13 - Zero trust
 [VIDEO LINK](https://youtu.be/BiaBlUaplEs "Video Link")
 In the previous step, we federated multiple meshes and established a shared root CA for a shared identity domain.
@@ -3033,7 +2952,7 @@ kubectl --context ${CLUSTER1} delete accesspolicies -n bookinfo-frontends --all
-## Lab 15 - See how Gloo Platform can help with observability
+## Lab 14 - See how Gloo Platform can help with observability
 [VIDEO LINK](https://youtu.be/UhWsk4YnOy0 "Video Link")
 # Observability with Gloo Platform
@@ -3265,7 +3184,7 @@ kubectl --context ${MGMT} label -n monitoring cm istio-control-plane-dashboard g
-## Lab 16 - VM integration with Spire
+## Lab 15 - VM integration with Spire
 Let's see how we can configure a VM to be part of the Mesh.
@@ -3578,6 +3497,8 @@ ATTEMPTS=0
 while [ $ATTEMPTS -lt $MAX_ATTEMPTS ]; do
 kubectl --context ${CLUSTER1} -n gloo-mesh rollout restart deploy gloo-spire-server
 kubectl --context ${CLUSTER1} -n gloo-mesh rollout status deploy gloo-spire-server
+  sleep 30
+
 export JOIN_TOKEN=$(meshctl external-workload gen-token --kubecontext ${CLUSTER1} --trust-domain ${CLUSTER1} --ttl 3600 --ext-workload virtualmachines/${VM_APP} --plain=true | grep -ioE "${uuid_regex_partial}")
 timeout 1m docker exec vm1 meshctl ew onboard --install \
 --attestor token \
@@ -3798,7 +3719,7 @@ docker rm -f vm1
-## Lab 17 - Securing the egress traffic
+## Lab 16 - Securing the egress traffic
 [VIDEO LINK](https://youtu.be/tQermml1Ryo "Video Link")
 In this step, we're going to secure the egress traffic.
diff --git a/gloo-mesh/enterprise/2-5/openshift/default/package.json b/gloo-mesh/enterprise/2-5/openshift/default/package.json
new file mode 100644
index 0000000000..9097b6c279
--- /dev/null
+++ b/gloo-mesh/enterprise/2-5/openshift/default/package.json
@@ -0,0 +1,44 @@
+{
+  "name": "procgen",
+  "version": "0.0.1",
+  "description": "Solo Procedure Generator",
+  "main": "procgen.js",
+  "scripts": {
+    "cilium-intro-default": "node procgen.js -d cilium-intro -f cilium-intro/workshops/default.yaml -o dist/cilium-intro-default --overwrite",
+    "ebpf-default": "node procgen.js -d ebpf -f ebpf/workshops/default.yaml -o dist/ebpf-default --overwrite",
+    "developing-ebpf-apps": "node procgen.js -d developing-ebpf-apps -f developing-ebpf-apps/workshops/instruqt.yaml -o dist/developing-ebpf-apps --overwrite",
+    "gloo-edge-md": "node procgen.js -d gloo-edge -f gloo-edge/workshops/default.yaml -o dist/edge-default --overwrite",
+    "gloo-edge-default": "node procgen.js -d gloo-edge -f gloo-edge/workshops/default.yaml -o dist/edge-default --overwrite",
+    "gloo-edge-beta": "node procgen.js -d gloo-edge -f gloo-edge/workshops/beta.yaml -o dist/edge-beta --overwrite",
+    "test:patches": "mocha testing/deepMerge.spec.js"
+  },
+  "dependencies": {
+    "@jsdevtools/chai-exec": "^2.1.1",
+    "@kubernetes/client-node": "^0.22.1",
+    "ascii-table": "^0.0.9",
+    "chai": "^4.5.0",
+    "chai-http": "^4.4.0",
+    "deep-diff": "^1.0.2",
+    "deep-object-diff": "^1.1.9",
+    "fs-extra": "^11.2.0",
+    "glob": "^10.4.5",
+    "jest-diff": "^29.7.0",
+    "js-yaml": "^4.1.0",
+    "json-diff": "^1.0.6",
+    "liquidjs": "^10.18.0",
+    "lodash": "^4.17.21",
+    "markdown-it": "^13.0.2",
+    "mocha": "^10.7.3",
+    "prepend-file": "^2.0.1",
+    "puppeteer": "^22.15.0",
+    "puppeteer-extra": "^3.3.6",
+    "puppeteer-extra-plugin-user-preferences": "^2.4.1",
+    "semver": "^7.6.3",
+    "sharp": "^0.33.5",
+    "strip-ansi": "^7.1.0",
+    "tesseract.js": "^4.1.4",
+    "yaml": "^2.6.1",
+    "yargs": "^17.7.2",
+    "zod": "^3.23.8"
+  }
+}
diff --git a/gloo-mesh/enterprise/2-5/openshift/default/run.sh b/gloo-mesh/enterprise/2-5/openshift/default/run.sh
new file mode 100644
index 0000000000..a9edd4938b
--- /dev/null
+++ b/gloo-mesh/enterprise/2-5/openshift/default/run.sh
@@ -0,0 +1,2719 @@
+#!/usr/bin/env bash
+source /root/.env 2>/dev/null || true
+source ./scripts/assert.sh
+export MGMT=
+export CLUSTER1=
+export CLUSTER2=
+kubectl config use-context ${MGMT}
+export GLOO_MESH_VERSION=v2.5.12
+curl -sL https://run.solo.io/meshctl/install | sh -
+export PATH=$HOME/.gloo-mesh/bin:$PATH
+cat <<'EOF' > ./test.js
+var chai = require('chai');
+var expect = chai.expect;
+
+describe("Required environment variables should contain value", () => {
+  afterEach(function(done){
+    if(this.currentTest.currentRetry() > 0){
+      process.stdout.write(".");
+      setTimeout(done, 1000);
+    } else {
+      done();
+    }
+  });
+
+  it("Context environment variables should not be empty", () => {
+    expect(process.env.MGMT).not.to.be.empty
+    expect(process.env.CLUSTER1).not.to.be.empty
+    expect(process.env.CLUSTER2).not.to.be.empty
+  });
+
+  it("Gloo Mesh licence environment variables should not be empty", () => {
+    expect(process.env.GLOO_MESH_LICENSE_KEY).not.to.be.empty
+  });
+});
+EOF
+echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-and-register-gloo-mesh/tests/environment-variables.test.js.liquid"
+timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+kubectl --context ${MGMT} create ns gloo-mesh
+
+# To allow running the OTel collector as privileged on Openshift
+oc --context ${CLUSTER1} adm policy add-scc-to-user privileged -z gloo-telemetry-collector -n gloo-mesh
+helm upgrade --install gloo-platform-crds gloo-platform-crds \
+  --repo https://storage.googleapis.com/gloo-platform/helm-charts \
+  --namespace gloo-mesh \
+  --kube-context ${MGMT} \
+  --version 2.5.12
+
+helm upgrade --install gloo-platform gloo-platform \
+  --repo https://storage.googleapis.com/gloo-platform/helm-charts \
+  --namespace gloo-mesh \
+  --kube-context ${MGMT} \
+  --version 2.5.12 \
+  -f -< ./test.js
+
+const helpers = require('./tests/chai-exec');
+
+describe("MGMT server is healthy", () => {
+  let cluster = process.env.MGMT;
+  let deployments = ["gloo-mesh-mgmt-server","gloo-mesh-redis","gloo-telemetry-gateway","prometheus-server"];
+  deployments.forEach(deploy => {
+    it(deploy + ' pods are ready in ' + cluster, () => helpers.checkDeployment({ context: cluster, namespace: "gloo-mesh", k8sObj: deploy }));
+  });
+});
+EOF
+echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-and-register-gloo-mesh/tests/check-deployment.test.js.liquid"
+timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+cat <<'EOF' > ./test.js
+const chaiExec = require("@jsdevtools/chai-exec");
+var chai = require('chai');
+var expect = chai.expect;
+chai.use(chaiExec);
+
+afterEach(function (done) {
+  if (this.currentTest.currentRetry() > 0) {
+    process.stdout.write(".");
+    setTimeout(done, 1000);
+  } else {
+    done();
+  }
+});
+EOF
+echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-and-register-gloo-mesh/tests/get-gloo-mesh-mgmt-server-ip.test.js.liquid"
+timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+export ENDPOINT_GLOO_MESH=$(kubectl --context ${MGMT} -n gloo-mesh get svc gloo-mesh-mgmt-server -o jsonpath='{.status.loadBalancer.ingress[0].*}'):9900
+export HOST_GLOO_MESH=$(echo ${ENDPOINT_GLOO_MESH%:*})
+export ENDPOINT_TELEMETRY_GATEWAY=$(kubectl --context ${MGMT} -n gloo-mesh get svc gloo-telemetry-gateway -o jsonpath='{.status.loadBalancer.ingress[0].*}'):4317
+export ENDPOINT_GLOO_MESH_UI=$(kubectl --context ${MGMT} -n gloo-mesh get svc gloo-mesh-ui -o jsonpath='{.status.loadBalancer.ingress[0].*}'):8090
+cat <<'EOF' > ./test.js
+const dns = require('dns');
+const chaiHttp = require("chai-http");
+const chai = require("chai");
+const expect = chai.expect;
+chai.use(chaiHttp);
+const { waitOnFailedTest } = require('./tests/utils');
+
+afterEach(function(done) { waitOnFailedTest(done, this.currentTest.currentRetry())});
+
+describe("Address '" + process.env.HOST_GLOO_MESH + "' can be resolved in DNS", () => {
+  it(process.env.HOST_GLOO_MESH + ' can be resolved', (done) => {
+    return dns.lookup(process.env.HOST_GLOO_MESH, (err, address, family) => {
+      expect(address).to.be.an.ip;
+      done();
+    });
+  });
+});
+EOF
+echo "executing test ./gloo-mesh-2-0/tests/can-resolve.test.js.liquid"
+timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+kubectl apply --context ${MGMT} -f - < ca.crt
+kubectl create secret generic relay-root-tls-secret -n gloo-mesh --context ${CLUSTER1} --from-file ca.crt=ca.crt
+rm ca.crt
+
+kubectl get secret relay-identity-token-secret -n gloo-mesh --context ${MGMT} -o jsonpath='{.data.token}' | base64 -d > token
+kubectl create secret generic relay-identity-token-secret -n gloo-mesh --context ${CLUSTER1} --from-file token=token
+rm token
+
+helm upgrade --install gloo-platform-crds gloo-platform-crds \
+  --repo https://storage.googleapis.com/gloo-platform/helm-charts \
+  --namespace gloo-mesh \
+  --kube-context ${CLUSTER1} \
+  --version 2.5.12
+
+helm upgrade --install gloo-platform gloo-platform \
+  --repo https://storage.googleapis.com/gloo-platform/helm-charts \
+  --namespace gloo-mesh \
+  --kube-context ${CLUSTER1} \
+  --version 2.5.12 \
+  -f -< ca.crt
+kubectl create secret generic relay-root-tls-secret -n gloo-mesh --context ${CLUSTER2} --from-file ca.crt=ca.crt
+rm ca.crt
+
+kubectl get secret relay-identity-token-secret -n gloo-mesh --context ${MGMT} -o jsonpath='{.data.token}' | base64 -d > token
+kubectl create secret generic relay-identity-token-secret -n gloo-mesh --context ${CLUSTER2} --from-file token=token
+rm token
+
+helm upgrade --install gloo-platform-crds gloo-platform-crds \
+  --repo https://storage.googleapis.com/gloo-platform/helm-charts \
+  --namespace gloo-mesh \
+  --kube-context ${CLUSTER2} \
+  --version 2.5.12
+
+helm upgrade --install gloo-platform gloo-platform \
+  --repo https://storage.googleapis.com/gloo-platform/helm-charts \
+  --namespace gloo-mesh \
+  --kube-context ${CLUSTER2} \
+  --version 2.5.12 \
+  -f -< ./test.js
+var chai = require('chai');
+var expect = chai.expect;
+const helpers = require('./tests/chai-exec');
+describe("Cluster registration", () => {
+  it("cluster1 is registered", () => {
+    podName = helpers.getOutputForCommand({ command: "kubectl -n gloo-mesh get pods -l app=gloo-mesh-mgmt-server -o jsonpath='{.items[0].metadata.name}' --context " + process.env.MGMT }).replaceAll("'", "");
+    command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh debug -q -i " + podName + " --image=curlimages/curl -- curl -s http://localhost:9091/metrics" }).replaceAll("'", "");
+    expect(command).to.contain("cluster1");
+  });
+  it("cluster2 is registered", () => {
+    podName = helpers.getOutputForCommand({ command: "kubectl -n gloo-mesh get pods -l app=gloo-mesh-mgmt-server -o jsonpath='{.items[0].metadata.name}' --context " + process.env.MGMT }).replaceAll("'", "");
+    command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh debug -q -i " + podName + " --image=curlimages/curl -- curl -s http://localhost:9091/metrics" }).replaceAll("'", "");
+    expect(command).to.contain("cluster2");
+  });
+});
+EOF
+echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-and-register-gloo-mesh/tests/cluster-registration.test.js.liquid"
+timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+curl -L https://istio.io/downloadIstio | sh -
+
+if [ -d "istio-"*/ ]; then
+  cd istio-*/
+  export PATH=$PWD/bin:$PATH
+  cd ..
+fi
+cat <<'EOF' > ./test.js
+const chaiExec = require("@jsdevtools/chai-exec");
+var chai = require('chai');
+var expect = chai.expect;
+chai.use(chaiExec);
+
+afterEach(function (done) {
+  if (this.currentTest.currentRetry() > 0) {
+    process.stdout.write(".");
+    setTimeout(done, 1000);
+  } else {
+    done();
+  }
+});
+EOF
+echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/istio-lifecycle-manager-install/tests/istio-version.test.js.liquid"
+timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+kubectl --context ${CLUSTER1} create ns istio-gateways
+
+kubectl apply --context ${CLUSTER1} -f - < ./test.js
+
+const helpers = require('./tests/chai-exec');
+
+const chaiExec = require("@jsdevtools/chai-exec");
+const helpersHttp = require('./tests/chai-http');
+const chai = require("chai");
+const expect = chai.expect;
+
+afterEach(function (done) {
+  if (this.currentTest.currentRetry() > 0) {
+    process.stdout.write(".");
+    setTimeout(done, 1000);
+  } else {
+    done();
+  }
+});
+
+describe("Checking Istio installation", function() {
+  it('istiod pods are ready in cluster ' + process.env.CLUSTER1, () => helpers.checkDeploymentsWithLabels({ context: process.env.CLUSTER1, namespace: "istio-system", labels: "app=istiod", instances: 1 }));
+  it('gateway pods are ready in cluster ' + process.env.CLUSTER1, () => helpers.checkDeploymentsWithLabels({ context: process.env.CLUSTER1, namespace: "istio-gateways", labels: "app=istio-ingressgateway", instances: 2 }));
+  it('istiod pods are ready in cluster ' + process.env.CLUSTER2, () => helpers.checkDeploymentsWithLabels({ context: process.env.CLUSTER2, namespace: "istio-system", labels: "app=istiod", instances: 1 }));
+  it('gateway pods are ready in cluster ' + process.env.CLUSTER2, () => helpers.checkDeploymentsWithLabels({ context: process.env.CLUSTER2, namespace: "istio-gateways", labels: "app=istio-ingressgateway", instances: 2 }));
+  it("Gateways have an ip attached in cluster " + process.env.CLUSTER1, () => {
+    let cli = chaiExec("kubectl --context " + process.env.CLUSTER1 + " -n istio-gateways get svc -l app=istio-ingressgateway -o jsonpath='{.items}'");
+    cli.stderr.should.be.empty;
+    let deployments = JSON.parse(cli.stdout.slice(1,-1));
+    expect(deployments).to.have.lengthOf(2);
+    deployments.forEach((deployment) => {
+      expect(deployment.status.loadBalancer).to.have.property("ingress");
+    });
+  });
+  it("Gateways have an ip attached in cluster " + process.env.CLUSTER2, () => {
+    let cli = chaiExec("kubectl --context " + process.env.CLUSTER2 + " -n istio-gateways get svc -l app=istio-ingressgateway -o jsonpath='{.items}'");
+    cli.stderr.should.be.empty;
+    let deployments = JSON.parse(cli.stdout.slice(1,-1));
+    expect(deployments).to.have.lengthOf(2);
+    deployments.forEach((deployment) => {
+      expect(deployment.status.loadBalancer).to.have.property("ingress");
+    });
+  });
+});
+
+EOF
+echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/istio-lifecycle-manager-install/tests/istio-ready.test.js.liquid"
+timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+timeout 2m bash -c "until [[ \$(kubectl --context ${CLUSTER1} -n istio-gateways get svc -l istio=ingressgateway -o json | jq '.items[0].status.loadBalancer | length') -gt 0 ]]; do
+  sleep 1
+done"
+export HOST_GW_CLUSTER1="$(kubectl --context ${CLUSTER1} -n istio-gateways get svc -l istio=ingressgateway -o jsonpath='{.items[0].status.loadBalancer.ingress[0].*}')"
+export HOST_GW_CLUSTER2="$(kubectl --context ${CLUSTER2} -n istio-gateways get svc -l istio=ingressgateway -o jsonpath='{.items[0].status.loadBalancer.ingress[0].*}')"
+cat <<'EOF' > ./test.js
+const dns = require('dns');
+const chaiHttp = require("chai-http");
+const chai = require("chai");
+const expect = chai.expect;
+chai.use(chaiHttp);
+const { waitOnFailedTest } = require('./tests/utils');
+
+afterEach(function(done) { waitOnFailedTest(done, this.currentTest.currentRetry())});
+
+describe("Address '" + process.env.HOST_GW_CLUSTER1 + "' can be resolved in DNS", () => {
+  it(process.env.HOST_GW_CLUSTER1 + ' can be resolved', (done) => {
+    return dns.lookup(process.env.HOST_GW_CLUSTER1, (err, address, family) => {
+      expect(address).to.be.an.ip;
+      done();
+    });
+  });
+});
+EOF
+echo "executing test ./gloo-mesh-2-0/tests/can-resolve.test.js.liquid"
+timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+cat <<'EOF' > ./test.js
+const dns = require('dns');
+const chaiHttp = require("chai-http");
+const chai = require("chai");
+const expect = chai.expect;
+chai.use(chaiHttp);
+const { waitOnFailedTest } = require('./tests/utils');
+
+afterEach(function(done) { waitOnFailedTest(done, this.currentTest.currentRetry())});
+
+describe("Address '" +
process.env.HOST_GW_CLUSTER2 + "' can be resolved in DNS", () => { + it(process.env.HOST_GW_CLUSTER2 + ' can be resolved', (done) => { + return dns.lookup(process.env.HOST_GW_CLUSTER2, (err, address, family) => { + expect(address).to.be.an.ip; + done(); + }); + }); +}); +EOF +echo "executing test ./gloo-mesh-2-0/tests/can-resolve.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context ${CLUSTER1} create ns bookinfo-frontends +kubectl --context ${CLUSTER1} create ns bookinfo-backends +oc --context ${CLUSTER1} adm policy add-scc-to-group anyuid system:serviceaccounts:bookinfo-frontends +oc --context ${CLUSTER1} adm policy add-scc-to-group anyuid system:serviceaccounts:bookinfo-backends + +cat </dev/null +do + sleep 1 + echo -n . +done" +echo +kubectl --context ${CLUSTER2} create ns bookinfo-frontends +kubectl --context ${CLUSTER2} create ns bookinfo-backends +oc --context ${CLUSTER2} adm policy add-scc-to-group anyuid system:serviceaccounts:bookinfo-frontends +oc --context ${CLUSTER2} adm policy add-scc-to-group anyuid system:serviceaccounts:bookinfo-backends + +cat </dev/null +do + sleep 1 + echo -n . 
+done" +echo +kubectl --context ${CLUSTER2} -n bookinfo-frontends get pods && kubectl --context ${CLUSTER2} -n bookinfo-backends get pods +cat <<'EOF' > ./test.js +const helpers = require('./tests/chai-exec'); + +describe("Bookinfo app", () => { + let cluster = process.env.CLUSTER1 + let deployments = ["productpage-v1"]; + deployments.forEach(deploy => { + it(deploy + ' pods are ready in ' + cluster, () => helpers.checkDeployment({ context: cluster, namespace: "bookinfo-frontends", k8sObj: deploy })); + }); + deployments = ["ratings-v1", "details-v1", "reviews-v1", "reviews-v2"]; + deployments.forEach(deploy => { + it(deploy + ' pods are ready in ' + cluster, () => helpers.checkDeployment({ context: cluster, namespace: "bookinfo-backends", k8sObj: deploy })); + }); + cluster = process.env.CLUSTER2 + deployments = ["productpage-v1"]; + deployments.forEach(deploy => { + it(deploy + ' pods are ready in ' + cluster, () => helpers.checkDeployment({ context: cluster, namespace: "bookinfo-frontends", k8sObj: deploy })); + }); + deployments = ["ratings-v1", "details-v1", "reviews-v1", "reviews-v2", "reviews-v3"]; + deployments.forEach(deploy => { + it(deploy + ' pods are ready in ' + cluster, () => helpers.checkDeployment({ context: cluster, namespace: "bookinfo-backends", k8sObj: deploy })); + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/deploy-bookinfo/tests/check-bookinfo.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context ${CLUSTER1} create ns httpbin +oc --context ${CLUSTER1} adm policy add-scc-to-group anyuid system:serviceaccounts:httpbin + +cat </dev/null +do + sleep 1 + echo -n . 
+done" +echo +kubectl --context ${CLUSTER1} -n httpbin get pods +cat <<'EOF' > ./test.js +const helpers = require('./tests/chai-exec'); + +describe("httpbin app", () => { + let cluster = process.env.CLUSTER1 + + let deployments = ["not-in-mesh", "in-mesh"]; + + deployments.forEach(deploy => { + it(deploy + ' pods are ready in ' + cluster, () => helpers.checkDeployment({ context: cluster, namespace: "httpbin", k8sObj: deploy })); + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/httpbin/deploy-httpbin/tests/check-httpbin.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context ${CLUSTER1} create namespace gloo-mesh-addons +kubectl --context ${CLUSTER1} label namespace gloo-mesh-addons istio.io/rev=1-20 --overwrite +kubectl --context ${CLUSTER2} create namespace gloo-mesh-addons +kubectl --context ${CLUSTER2} label namespace gloo-mesh-addons istio.io/rev=1-20 --overwrite +oc --context ${CLUSTER1} adm policy add-scc-to-group anyuid system:serviceaccounts:gloo-mesh-addons + +cat < ./test.js +const helpers = require('./tests/chai-exec'); + +describe("Gloo Platform add-ons cluster1 deployment", () => { + let cluster = process.env.CLUSTER1 + let deployments = ["ext-auth-service", "rate-limiter"]; + deployments.forEach(deploy => { + it(deploy + ' pods are ready in ' + cluster, () => helpers.checkDeployment({ context: cluster, namespace: "gloo-mesh-addons", k8sObj: deploy })); + }); +}); +describe("Gloo Platform add-ons cluster2 deployment", () => { + let cluster = process.env.CLUSTER2 + let deployments = ["ext-auth-service", "rate-limiter"]; + deployments.forEach(deploy => { + it(deploy + ' pods are ready in ' + cluster, () => helpers.checkDeployment({ context: cluster, namespace: "gloo-mesh-addons", k8sObj: deploy })); + }); +}); + +EOF +echo "executing test 
dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-gloo-mesh-addons/tests/check-addons-deployments.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +cat <<'EOF' > ./test.js +const helpers = require('./tests/chai-exec'); + +describe("Gloo Platform add-ons cluster1 service", () => { + let cluster = process.env.CLUSTER1 + let services = ["ext-auth-service", "rate-limiter"]; + services.forEach(service => { + it(service + ' exists in ' + cluster, () => helpers.k8sObjectIsPresent({ context: cluster, namespace: "gloo-mesh-addons", k8sType: "service", k8sObj: service })); + }); +}); +describe("Gloo Platform add-ons cluster2 service", () => { + let cluster = process.env.CLUSTER2 + let services = ["ext-auth-service", "rate-limiter"]; + services.forEach(service => { + it(service + ' exists in ' + cluster, () => helpers.k8sObjectIsPresent({ context: cluster, namespace: "gloo-mesh-addons", k8sType: "service", k8sObj: service })); + }); +}); + +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-gloo-mesh-addons/tests/check-addons-services.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl apply --context ${MGMT} -f - < ./test.js +const helpers = require('./tests/chai-http'); + +describe("Productpage is available (HTTP)", () => { + it('/productpage is available in cluster1', () => helpers.checkURL({ host: `http://cluster1-bookinfo.example.com`, path: '/productpage', retCode: 200 })); +}) +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/gateway-expose/tests/productpage-available.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +openssl req -x509 -nodes -days 365 -newkey 
rsa:2048 \ + -keyout tls.key -out tls.crt -subj "/CN=*" +kubectl --context ${CLUSTER1} -n istio-gateways create secret generic tls-secret \ + --from-file=tls.key=tls.key \ + --from-file=tls.crt=tls.crt + +kubectl --context ${CLUSTER2} -n istio-gateways create secret generic tls-secret \ + --from-file=tls.key=tls.key \ + --from-file=tls.crt=tls.crt +kubectl apply --context ${CLUSTER1} -f - < ./test.js +const helpers = require('./tests/chai-http'); + +describe("Productpage is available (HTTPS)", () => { + it('/productpage is available in cluster1', () => helpers.checkURL({ host: `https://cluster1-bookinfo.example.com`, path: '/productpage', retCode: 200 })); +}) +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/gateway-expose/tests/productpage-available-secure.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +cat <<'EOF' > ./test.js +var chai = require('chai'); +var expect = chai.expect; +const helpers = require('./tests/chai-exec'); + +describe("Otel metrics", () => { + it("cluster1 is sending metrics to telemetryGateway", () => { + podName = helpers.getOutputForCommand({ command: "kubectl -n gloo-mesh get pods -l app.kubernetes.io/name=prometheus -o jsonpath='{.items[0].metadata.name}' --context " + process.env.MGMT }).replaceAll("'", ""); + command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh debug -q -i " + podName + " --image=curlimages/curl -- curl -s http://localhost:9090/api/v1/query?query=istio_requests_total" }).replaceAll("'", ""); + expect(command).to.contain("cluster\":\"cluster1"); + }); +}); + + +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/gateway-expose/tests/otel-metrics.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=150 --bail || { DEBUG_MODE=true mocha ./test.js 
--timeout 120000; exit 1; } +kubectl apply --context ${CLUSTER1} -f - < ./test.js +var chai = require('chai'); +var expect = chai.expect; +const chaiHttp = require("chai-http"); +chai.use(chaiHttp); + +process.env.NODE_TLS_REJECT_UNAUTHORIZED = '0'; + +afterEach(function (done) { + if (this.currentTest.currentRetry() > 0) { + process.stdout.write("."); + setTimeout(done, 1000); + } else { + done(); + } +}); + +let searchTest="Sorry, product reviews are currently unavailable for this book."; + +describe("Reviews shouldn't be available", () => { + it("Checking text '" + searchTest + "' in cluster1", async () => { + await chai.request(`https://cluster1-bookinfo.example.com`) + .get('/productpage') + .send() + .then((res) => { + expect(res.text).to.contain(searchTest); + }); + }); + +}); + +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/traffic-policies/tests/traffic-policies-reviews-unavailable.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context ${CLUSTER1} -n bookinfo-frontends delete faultinjectionpolicy ratings-fault-injection +kubectl --context ${CLUSTER1} -n bookinfo-frontends delete routetable ratings +kubectl --context ${CLUSTER1} -n bookinfo-frontends delete retrytimeoutpolicy reviews-request-timeout +kubectl --context ${CLUSTER1} -n bookinfo-frontends delete routetable reviews +kubectl apply --context ${MGMT} -f - < ./test.js +const helpers = require('./tests/chai-exec'); + +describe("cacerts secrets have been created", () => { + const clusters = [process.env.CLUSTER1, process.env.CLUSTER2]; + clusters.forEach(cluster => { + it('Secret is present in ' + cluster, () => helpers.k8sObjectIsPresent({ context: cluster, namespace: "istio-system", k8sType: "secret", k8sObj: "cacerts" })); + }); +}); +EOF +echo "executing test 
dist/gloo-mesh-2-0-workshop/build/templates/steps/root-trust-policy/tests/cacert-secrets-created.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=150 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +printf "Waiting for all pods needed for the test..." +printf "\n" +kubectl --context ${CLUSTER1} get deploy -n bookinfo-backends -oname|xargs -I {} kubectl --context ${CLUSTER1} rollout status -n bookinfo-backends {} +kubectl --context ${CLUSTER2} get deploy -n bookinfo-backends -oname|xargs -I {} kubectl --context ${CLUSTER2} rollout status -n bookinfo-backends {} +printf "\n" +cat <<'EOF' > ./test.js +const chaiExec = require("@jsdevtools/chai-exec"); +var chai = require('chai'); +var expect = chai.expect; +chai.use(chaiExec); + +afterEach(function (done) { + if (this.currentTest.currentRetry() > 0) { + process.stdout.write("."); + setTimeout(done, 1000); + } else { + done(); + } +}); + +const testerPodName = "tester-root-trust-policy"; +before(function (done) { + chaiExec(`kubectl --context ${process.env.CLUSTER1} -n gloo-mesh run --image=alpine/openssl:3.3.1 ${testerPodName} --command --wait=false -- sleep infinity`); + chaiExec(`kubectl --context ${process.env.CLUSTER2} -n gloo-mesh run --image=alpine/openssl:3.3.1 ${testerPodName} --command --wait=false -- sleep infinity`); + done(); +}); +after(function (done) { + chaiExec(`kubectl --context ${process.env.CLUSTER1} -n gloo-mesh delete pod ${testerPodName} --wait=false`); + chaiExec(`kubectl --context ${process.env.CLUSTER2} -n gloo-mesh delete pod ${testerPodName} --wait=false`); + done(); +}); + +describe("Certificate issued by Gloo Mesh", () => { + var expectedOutput = "i:O=gloo-mesh"; + + it('Gloo mesh is the organization for ' + process.env.CLUSTER1 + ' certificate', () => { + let cli = chaiExec(`kubectl --context ${process.env.CLUSTER1} exec -t -n gloo-mesh ${testerPodName} -- openssl s_client -showcerts -connect ratings.bookinfo-backends:9080 -alpn 
istio`); + + expect(cli).stdout.to.contain(expectedOutput); + expect(cli).stderr.not.to.be.empty; + }); + + + it('Gloo mesh is the organization for ' + process.env.CLUSTER2 + ' certificate', () => { + let cli = chaiExec(`kubectl --context ${process.env.CLUSTER2} exec -t -n gloo-mesh ${testerPodName} -- openssl s_client -showcerts -connect ratings.bookinfo-backends:9080 -alpn istio`); + + expect(cli).stdout.to.contain(expectedOutput); + expect(cli).stderr.not.to.be.empty; + }); + +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/root-trust-policy/tests/certificate-issued-by-gloo-mesh.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl apply --context ${CLUSTER1} -f - < ./test.js +const helpers = require('./tests/chai-exec'); + +describe("The productpage service should get responses from cluster1", () => { + const podName = helpers.getOutputForCommand({ command: "kubectl -n bookinfo-frontends get pods -l app=productpage -o jsonpath='{.items[0].metadata.name}' --context " + process.env.CLUSTER1 }).replaceAll("'", ""); + const command = "kubectl -n bookinfo-frontends exec " + podName + " --context " + process.env.CLUSTER1 + " -- python -c \"import requests; r = requests.get('http://reviews.global:9080/reviews/0'); print(r.text)\""; + it('Got a response from cluster1', () => helpers.genericCommand({ command: command, responseContains: "cluster1" })); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/east-west-virtual-destination/tests/reviews-from-cluster1.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +cat <<'EOF' > ./test.js +const helpers = require('./tests/chai-exec'); + +describe("The productpage service should get responses from cluster2", () => { + const podName 
= helpers.getOutputForCommand({ command: "kubectl -n bookinfo-frontends get pods -l app=productpage -o jsonpath='{.items[0].metadata.name}' --context " + process.env.CLUSTER1 }).replaceAll("'", ""); + const command = "kubectl -n bookinfo-frontends exec " + podName + " --context " + process.env.CLUSTER1 + " -- python -c \"import requests; r = requests.get('http://reviews.global:9080/reviews/0'); print(r.text)\""; + it('Got a response from cluster2', () => helpers.genericCommand({ command: command, responseContains: "cluster2" })); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/east-west-virtual-destination/tests/reviews-from-cluster2.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl apply --context ${CLUSTER1} -f - < ./test.js +const helpers = require('./tests/chai-exec'); + +describe("The productpage service should get responses from cluster1", () => { + const podName = helpers.getOutputForCommand({ command: "kubectl -n bookinfo-frontends get pods -l app=productpage -o jsonpath='{.items[0].metadata.name}' --context " + process.env.CLUSTER1 }).replaceAll("'", ""); + const command = "kubectl -n bookinfo-frontends exec " + podName + " --context " + process.env.CLUSTER1 + " -- python -c \"import requests; r = requests.get('http://reviews.global:9080/reviews/0'); print(r.text)\""; + it('Got a response from cluster1', () => helpers.genericCommand({ command: command, responseContains: "cluster1" })); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/east-west-virtual-destination/tests/reviews-from-cluster1.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context $CLUSTER1 -n bookinfo-frontends exec deploy/productpage-v1 -- python -c "import 
requests; r = requests.get('http://reviews.global:9080/reviews/0'); print(r.text)" +kubectl --context ${CLUSTER1} -n bookinfo-backends scale deploy/reviews-v1 --replicas=0 +kubectl --context ${CLUSTER1} -n bookinfo-backends scale deploy/reviews-v2 --replicas=0 +kubectl --context ${CLUSTER1} -n bookinfo-backends wait --for=jsonpath='{.spec.replicas}'=0 deploy/reviews-v1 +kubectl --context ${CLUSTER1} -n bookinfo-backends wait --for=jsonpath='{.spec.replicas}'=0 deploy/reviews-v2 +cat <<'EOF' > ./test.js +const helpers = require('./tests/chai-exec'); + +describe("The productpage service should get responses from cluster2", () => { + const podName = helpers.getOutputForCommand({ command: "kubectl -n bookinfo-frontends get pods -l app=productpage -o jsonpath='{.items[0].metadata.name}' --context " + process.env.CLUSTER1 }).replaceAll("'", ""); + const command = "kubectl -n bookinfo-frontends exec " + podName + " --context " + process.env.CLUSTER1 + " -- python -c \"import requests; r = requests.get('http://reviews.global:9080/reviews/0'); print(r.text)\""; + it('Got a response from cluster2', () => helpers.genericCommand({ command: command, responseContains: "cluster2" })); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/east-west-virtual-destination/tests/reviews-from-cluster2.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context $CLUSTER1 -n bookinfo-frontends exec deploy/productpage-v1 -- python -c "import requests; r = requests.get('http://reviews.global:9080/reviews/0'); print(r.text)" +kubectl --context ${CLUSTER1} -n bookinfo-backends scale deploy/reviews-v1 --replicas=1 +kubectl --context ${CLUSTER1} -n bookinfo-backends scale deploy/reviews-v2 --replicas=1 +kubectl --context ${CLUSTER1} -n bookinfo-backends wait --for=jsonpath='{.status.readyReplicas}'=1 deploy/reviews-v1 +kubectl --context 
${CLUSTER1} -n bookinfo-backends wait --for=jsonpath='{.status.readyReplicas}'=1 deploy/reviews-v2 +kubectl --context ${CLUSTER1} -n bookinfo-backends patch deploy reviews-v1 --patch '{"spec": {"template": {"spec": {"containers": [{"name": "reviews","command": ["sleep", "20h"]}]}}}}' +kubectl --context ${CLUSTER1} -n bookinfo-backends patch deploy reviews-v2 --patch '{"spec": {"template": {"spec": {"containers": [{"name": "reviews","command": ["sleep", "20h"]}]}}}}' +kubectl --context ${CLUSTER1} -n bookinfo-backends rollout status deploy/reviews-v1 +kubectl --context ${CLUSTER1} -n bookinfo-backends rollout status deploy/reviews-v2 +cat <<'EOF' > ./test.js +const helpers = require('./tests/chai-exec'); + +describe("The productpage service should get responses from cluster2", () => { + const podName = helpers.getOutputForCommand({ command: "kubectl -n bookinfo-frontends get pods -l app=productpage -o jsonpath='{.items[0].metadata.name}' --context " + process.env.CLUSTER1 }).replaceAll("'", ""); + const command = "kubectl -n bookinfo-frontends exec " + podName + " --context " + process.env.CLUSTER1 + " -- python -c \"import requests; r = requests.get('http://reviews.global:9080/reviews/0'); print(r.text)\""; + it('Got a response from cluster2', () => helpers.genericCommand({ command: command, responseContains: "cluster2" })); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/east-west-virtual-destination/tests/reviews-from-cluster2.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context $CLUSTER1 -n bookinfo-frontends exec deploy/productpage-v1 -- python -c "import requests; r = requests.get('http://reviews.global:9080/reviews/0'); print(r.text)" +kubectl --context ${CLUSTER1} -n bookinfo-backends patch deployment reviews-v1 --type json -p '[{"op": "remove", "path": 
"/spec/template/spec/containers/0/command"}]' +kubectl --context ${CLUSTER1} -n bookinfo-backends patch deployment reviews-v2 --type json -p '[{"op": "remove", "path": "/spec/template/spec/containers/0/command"}]' +kubectl --context ${CLUSTER1} -n bookinfo-backends rollout status deploy/reviews-v1 +kubectl --context ${CLUSTER1} -n bookinfo-backends rollout status deploy/reviews-v2 +kubectl --context ${CLUSTER1} -n bookinfo-backends delete virtualdestination reviews +kubectl --context ${CLUSTER1} -n bookinfo-backends delete failoverpolicy failover +kubectl --context ${CLUSTER1} -n bookinfo-backends delete outlierdetectionpolicy outlier-detection +(timeout 2s kubectl --context ${CLUSTER1} -n httpbin rollout status deploy/in-mesh) || (kubectl --context ${CLUSTER1} -n httpbin rollout restart deploy/in-mesh && kubectl --context ${CLUSTER1} -n httpbin rollout status deploy/in-mesh) +cat <<'EOF' > ./test.js +var chai = require('chai'); +var expect = chai.expect; +const helpers = require('./tests/chai-exec'); +describe("Communication allowed", () => { + it("Response code should be 200", () => { + const podName = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n httpbin get pods -l app=not-in-mesh -o jsonpath='{.items[0].metadata.name}'" }).replaceAll("'", ""); + const command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n httpbin debug -i -q " + podName + " --image=curlimages/curl -- curl -s -o /dev/null -w \"%{http_code}\" http://reviews.bookinfo-backends:9080/reviews/0" }).replaceAll("'", ""); + expect(command).to.contain("200"); + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/httpbin/zero-trust/tests/not-in-mesh-to-in-mesh-allowed.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +cat <<'EOF' > ./test.js +var chai = require('chai'); 
+var expect = chai.expect; +const helpers = require('./tests/chai-exec'); +describe("Communication allowed", () => { + it("Response code should be 200", () => { + const podName = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n httpbin get pods -l app=in-mesh -o jsonpath='{.items[0].metadata.name}'" }).replaceAll("'", ""); + const command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n httpbin debug -i -q " + podName + " --image=curlimages/curl -- curl -s -o /dev/null -w \"%{http_code}\" http://reviews.bookinfo-backends:9080/reviews/0" }).replaceAll("'", ""); + expect(command).to.contain("200"); + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/httpbin/zero-trust/tests/in-mesh-to-in-mesh-allowed.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl apply --context ${CLUSTER1} -f - < ./test.js +var chai = require('chai'); +var expect = chai.expect; +const helpers = require('./tests/chai-exec'); +describe("Communication not allowed", () => { + it("Response code shouldn't be 200", () => { + const podName = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n httpbin get pods -l app=not-in-mesh -o jsonpath='{.items[0].metadata.name}'" }).replaceAll("'", ""); + const command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n httpbin debug -i -q " + podName + " --image=curlimages/curl -- curl -s -o /dev/null -w \"%{http_code}\" --max-time 3 http://reviews.bookinfo-backends:9080/reviews/0" }).replaceAll("'", ""); + expect(command).not.to.contain("200"); + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/httpbin/zero-trust/tests/not-in-mesh-to-in-mesh-not-allowed.test.js.liquid" +timeout --signal=INT 3m mocha 
./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +cat <<'EOF' > ./test.js +var chai = require('chai'); +var expect = chai.expect; +const helpers = require('./tests/chai-exec'); +describe("Communication not allowed", () => { + it("Response code shouldn't be 200", () => { + const podName = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n httpbin get pods -l app=in-mesh -o jsonpath='{.items[0].metadata.name}'" }).replaceAll("'", ""); + const command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n httpbin debug -i -q " + podName + " --image=curlimages/curl -- curl -s -o /dev/null -w \"%{http_code}\" --max-time 3 http://reviews.bookinfo-backends:9080/reviews/0" }).replaceAll("'", ""); + expect(command).not.to.contain("200"); + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/httpbin/zero-trust/tests/in-mesh-to-in-mesh-not-allowed.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl apply --context ${CLUSTER1} -f - < ./test.js +var chai = require('chai'); +var expect = chai.expect; +const helpers = require('./tests/chai-exec'); + +describe("Communication status", () => { + + it("Response code shouldn't be 200 accessing ratings", () => { + const command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n bookinfo-frontends exec deploy/productpage-v1 -- python -c \"import requests; r = requests.get('http://ratings.bookinfo-backends:9080/ratings/0', timeout=3); print(r.status_code)\"" }).replaceAll("'", ""); + expect(command).not.to.contain("200"); + }); + + it("Response code should be 200 accessing reviews with GET", () => { + const command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n 
bookinfo-frontends exec deploy/productpage-v1 -- python -c \"import requests; r = requests.get('http://reviews.bookinfo-backends:9080/reviews/0'); print(r.status_code)\"" }).replaceAll("'", ""); + expect(command).to.contain("200"); + }); + + it("Response code should be 403 accessing reviews with HEAD", () => { + const command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n bookinfo-frontends exec deploy/productpage-v1 -- python -c \"import requests; r = requests.head('http://reviews.bookinfo-backends:9080/reviews/0'); print(r.status_code)\"" }).replaceAll("'", ""); + expect(command).to.contain("403"); + }); + + it("Response code should be 200 accessing details", () => { + const command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n bookinfo-frontends exec deploy/productpage-v1 -- python -c \"import requests; r = requests.get('http://details.bookinfo-backends:9080/details/0'); print(r.status_code)\"" }).replaceAll("'", ""); + expect(command).to.contain("200"); + }); +}); + +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/httpbin/zero-trust/tests/bookinfo-access.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl apply --context ${CLUSTER1} -f - < ./test.js +const helpers = require('./tests/chai-exec'); + +describe("kube-prometheus-stack deployments are ready", () => { + it('kube-prometheus-stack-kube-state-metrics pods are ready', () => helpers.checkDeployment({ context: process.env.MGMT, namespace: "monitoring", k8sObj: "kube-prometheus-stack-kube-state-metrics" })); + it('kube-prometheus-stack-grafana pods are ready', () => helpers.checkDeployment({ context: process.env.MGMT, namespace: "monitoring", k8sObj: "kube-prometheus-stack-grafana" })); + it('kube-prometheus-stack-operator pods are ready', () => helpers.checkDeployment({ 
context: process.env.MGMT, namespace: "monitoring", k8sObj: "kube-prometheus-stack-operator" })); +}); + +describe("kube-prometheus-stack daemonset is ready", () => { + it('kube-prometheus-stack-prometheus-node-exporter pods are ready', () => helpers.checkDaemonSet({ context: process.env.MGMT, namespace: "monitoring", k8sObj: "kube-prometheus-stack-prometheus-node-exporter" })); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/gloo-platform-observability/tests/grafana-installed.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +PROD_PROMETHEUS_IP=$(kubectl get svc kube-prometheus-stack-prometheus -n monitoring -o jsonpath='{.status.loadBalancer.ingress[0].ip}') +helm upgrade --install gloo-platform gloo-platform \ + --repo https://storage.googleapis.com/gloo-platform/helm-charts \ + --namespace gloo-mesh \ + --kube-context ${CLUSTER1} \ + --reuse-values \ + --version 2.5.12 \ + --values - < /vm/resolv.conf" +docker exec vm1 cp /vm/resolv.conf /etc/resolv.conf +docker exec vm1 apt update -y +docker exec vm1 apt-get install -y iputils-ping curl iproute2 iptables python3 sudo dnsutils +cluster1_cidr=$(kubectl --context ${CLUSTER1} -n kube-system get pod -l component=kube-controller-manager -o jsonpath='{.items[0].spec.containers[0].command}' | jq -r '.[] | select(. | startswith("--cluster-cidr="))' | cut -d= -f2) +cluster2_cidr=$(kubectl --context ${CLUSTER2} -n kube-system get pod -l component=kube-controller-manager -o jsonpath='{.items[0].spec.containers[0].command}' | jq -r '.[] | select(. 
| startswith("--cluster-cidr="))' | cut -d= -f2) + +docker exec vm1 $(kubectl --context ${CLUSTER1} get nodes -o=jsonpath='{range .items[*]}{"ip route add "}{"'${cluster1_cidr}' via "}{.status.addresses[?(@.type=="InternalIP")].address}{"\n"}{end}') +docker exec vm1 $(kubectl --context ${CLUSTER2} get nodes -o=jsonpath='{range .items[*]}{"ip route add "}{"'${cluster2_cidr}' via "}{.status.addresses[?(@.type=="InternalIP")].address}{"\n"}{end}') +docker cp $HOME/.gloo-mesh/bin/meshctl vm1:/usr/local/bin/ +kubectl apply --context ${CLUSTER1} -f - < ./test.js +const helpers = require('./tests/chai-exec'); + +describe("The VM should be able to access the productpage service", () => { + const command = 'docker exec vm1 curl -s -o /dev/null -w "%{http_code}" productpage.bookinfo-frontends.svc.cluster.local:9080/productpage'; + it("Got the expected status code 200", () => helpers.genericCommand({ command: command, responseContains: "200" })); +}) + +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/vm-integration-spire/tests/vm-access-productpage.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +docker exec -d vm1 python3 -m http.server 9999 +kubectl --context ${CLUSTER1} -n bookinfo-frontends exec $(kubectl --context ${CLUSTER1} -n bookinfo-frontends get pods -l app=productpage -o jsonpath='{.items[0].metadata.name}') -- python -c "import requests; r = requests.get('http://${VM_APP}.virtualmachines.ext.cluster.local:9999'); print(r.text)" +cat <<'EOF' > ./test.js +const helpers = require('./tests/chai-exec'); + +describe("The productpage service should be able to access the VM", () => { + const podName = helpers.getOutputForCommand({ command: "kubectl -n bookinfo-frontends get pods -l app=productpage -o jsonpath='{.items[0].metadata.name}' --context " + process.env.CLUSTER1 }).replaceAll("'", ""); + const command = "kubectl 
-n bookinfo-frontends exec " + podName + " --context " + process.env.CLUSTER1 + " -- python -c \"import requests; r = requests.get('http://" + process.env.VM_APP + ".virtualmachines.ext.cluster.local:9999'); print(r.status_code)\""; + it('Got the expected status code 200', () => helpers.genericCommand({ command: command, responseContains: "200" })); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/vm-integration-spire/tests/productpage-access-vm.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +docker exec vm1 apt-get update +docker exec vm1 apt-get install -y mariadb-server +docker exec vm1 sed -i '/bind-address/c\bind-address = 0.0.0.0' /etc/mysql/mariadb.conf.d/50-server.cnf +docker exec vm1 systemctl start mysql + +docker exec -i vm1 mysql < ./test.js +const helpers = require('./tests/chai-http'); + +describe("The ratings service should use the database running on the VM", () => { + it('Got reviews v2 with ratings in cluster1', () => helpers.checkBody({ host: `https://cluster1-bookinfo.example.com`, path: '/productpage', body: 'text-black', match: true })); +}) + +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/vm-integration-spire/tests/ratings-using-vm.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context ${CLUSTER1} -n "${VM_NAMESPACE}" delete externalworkload ${VM_APP} +kubectl --context ${CLUSTER1} delete namespace "${VM_NAMESPACE}" +kubectl --context ${CLUSTER1} -n bookinfo-backends delete -f https://raw.githubusercontent.com/istio/istio/master/samples/bookinfo/platform/kube/bookinfo-ratings-v2-mysql-vm.yaml +kubectl --context ${CLUSTER1} -n bookinfo-backends scale deploy/ratings-v1 --replicas=1 +kubectl apply --context ${MGMT} -f - < 
./test.js +var chai = require('chai'); +var expect = chai.expect; +const helpers = require('./tests/chai-exec'); + +describe("Communication status", () => { + it("Productpage can send requests to httpbin.org", () => { + const command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n bookinfo-frontends exec deploy/productpage-v1 -- python -c \"import requests; r = requests.get('http://httpbin.org/get'); print(r.status_code)\"" }).replaceAll("'", ""); + expect(command).to.contain("200"); + }); +}); + +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/secure-egress/tests/productpage-to-httpbin-allowed.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl apply --context ${MGMT} -f - < ./test.js +var chai = require('chai'); +var expect = chai.expect; +const helpers = require('./tests/chai-exec'); + +describe("Communication not allowed", () => { + it("Productpage can NOT send requests to httpbin.org", () => { + const command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n bookinfo-frontends exec deploy/productpage-v1 -- python -c \"import requests; r = requests.get('http://httpbin.org/get', timeout=5); print(r.text)\"" }).replaceAll("'", ""); + expect(command).not.to.contain("User-Agent"); + }); +}); + +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/secure-egress/tests/productpage-to-httpbin-not-allowed.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl apply --context ${CLUSTER1} -f - < ./test.js +var chai = require('chai'); +var expect = chai.expect; +const helpers = require('./tests/chai-exec'); + +describe("Communication status", () => { + it("Productpage can send requests 
to httpbin.org", () => { + const command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n bookinfo-frontends exec deploy/productpage-v1 -- python -c \"import requests; r = requests.get('http://httpbin.org/get'); print(r.status_code)\"" }).replaceAll("'", ""); + expect(command).to.contain("200"); + }); +}); + +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/secure-egress/tests/productpage-to-httpbin-allowed.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl apply --context ${CLUSTER1} -f - < ./test.js +var chai = require('chai'); +var expect = chai.expect; +const helpers = require('./tests/chai-exec'); + +describe("Communication status", () => { + it("Productpage can send GET requests to httpbin.org", () => { + const command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n bookinfo-frontends exec deploy/productpage-v1 -- python -c \"import requests; r = requests.get('http://httpbin.org/get'); print(r.status_code)\"" }).replaceAll("'", ""); + expect(command).to.contain("200"); + }); + + it("Productpage can't send POST requests to httpbin.org", () => { + const command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n bookinfo-frontends exec deploy/productpage-v1 -- python -c \"import requests; r = requests.post('http://httpbin.org/post'); print(r.status_code)\"" }).replaceAll("'", ""); + expect(command).to.contain("403"); + }); +}); + +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/secure-egress/tests/productpage-to-httpbin-only-get-allowed.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context ${CLUSTER1} -n 
bookinfo-frontends delete networkpolicy restrict-egress +kubectl --context ${CLUSTER1} -n bookinfo-frontends delete externalservice httpbin +kubectl --context ${CLUSTER1} -n istio-gateways delete accesspolicy allow-get-httpbin diff --git a/gloo-mesh/enterprise/2-5/openshift/default/scripts/configure-domain-rewrite.sh b/gloo-mesh/enterprise/2-5/openshift/default/scripts/configure-domain-rewrite.sh index be6dbd6d8b..d6e684c9da 100755 --- a/gloo-mesh/enterprise/2-5/openshift/default/scripts/configure-domain-rewrite.sh +++ b/gloo-mesh/enterprise/2-5/openshift/default/scripts/configure-domain-rewrite.sh @@ -90,4 +90,4 @@ done # If the loop exits, it means the check failed consistently for 1 minute echo "DNS rewrite rule verification failed." -exit 1 +exit 1 \ No newline at end of file diff --git a/gloo-mesh/enterprise/2-5/openshift/default/scripts/register-domain.sh b/gloo-mesh/enterprise/2-5/openshift/default/scripts/register-domain.sh index f9084487e8..1cb84cd86a 100755 --- a/gloo-mesh/enterprise/2-5/openshift/default/scripts/register-domain.sh +++ b/gloo-mesh/enterprise/2-5/openshift/default/scripts/register-domain.sh @@ -14,7 +14,9 @@ hosts_file="/etc/hosts" # Function to check if the input is a valid IP address is_ip() { if [[ $1 =~ ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$ ]]; then - return 0 # 0 = true + return 0 # 0 = true - valid IPv4 address + elif [[ $1 =~ ^[0-9a-f]+[:]+[0-9a-f]*[:]*[0-9a-f]*[:]*[0-9a-f]*[:]*[0-9a-f]*[:]*[0-9a-f]*[:]*[0-9]*$ ]]; then + return 0 # 0 = true - valid IPv6 address else return 1 # 1 = false fi @@ -38,14 +40,15 @@ else fi # Check if the entry already exists -if grep -q "$hostname" "$hosts_file"; then +if grep -q "$hostname\$" "$hosts_file"; then # Update the existing entry with the new IP tempfile=$(mktemp) - sed "s/^.*$hostname/$new_ip $hostname/" "$hosts_file" > "$tempfile" + sed "s/^.*$hostname\$/$new_ip $hostname/" "$hosts_file" > "$tempfile" sudo cp "$tempfile" "$hosts_file" + rm "$tempfile" echo "Updated $hostname in $hosts_file with 
new IP: $new_ip" else # Add a new entry if it doesn't exist echo "$new_ip $hostname" | sudo tee -a "$hosts_file" > /dev/null echo "Added $hostname to $hosts_file with IP: $new_ip" -fi \ No newline at end of file +fi diff --git a/gloo-mesh/enterprise/2-5/openshift/default/tests/chai-exec.js b/gloo-mesh/enterprise/2-5/openshift/default/tests/chai-exec.js index 67ba62f095..020262437f 100644 --- a/gloo-mesh/enterprise/2-5/openshift/default/tests/chai-exec.js +++ b/gloo-mesh/enterprise/2-5/openshift/default/tests/chai-exec.js @@ -139,7 +139,11 @@ global = { }, k8sObjectIsPresent: ({ context, namespace, k8sType, k8sObj }) => { - let command = "kubectl --context " + context + " -n " + namespace + " get " + k8sType + " " + k8sObj + " -o name"; + // covers both namespace scoped and cluster scoped objects + let command = "kubectl --context " + context + " get " + k8sType + " " + k8sObj + " -o name"; + if (namespace) { + command = "kubectl --context " + context + " -n " + namespace + " get " + k8sType + " " + k8sObj + " -o name"; + } debugLog(`Executing command: ${command}`); let cli = chaiExec(command); @@ -176,7 +180,6 @@ global = { debugLog(`Command output (stdout): ${cli.stdout}`); return cli.stdout; }, - curlInPod: ({ curlCommand, podName, namespace }) => { debugLog(`Executing curl command: ${curlCommand} on pod: ${podName} in namespace: ${namespace}`); const cli = chaiExec(curlCommand); diff --git a/gloo-mesh/enterprise/2-5/openshift/default/tests/chai-http.js b/gloo-mesh/enterprise/2-5/openshift/default/tests/chai-http.js index 67f43db003..92bf579690 100644 --- a/gloo-mesh/enterprise/2-5/openshift/default/tests/chai-http.js +++ b/gloo-mesh/enterprise/2-5/openshift/default/tests/chai-http.js @@ -25,7 +25,30 @@ global = { .send() .then(async function (res) { debugLog(`Response status code: ${res.status}`); - expect(res).to.have.status(retCode); + expect(res).to.have.property('status', retCode); + }); + }, + + checkURLWithIP: ({ ip, host, protocol = "http", path = "", 
headers = [], certFile = '', keyFile = '', retCode }) => { + debugLog(`Checking URL with IP: ${ip}, Host: ${host}, Path: ${path} with expected return code: ${retCode}`); + + let cert = certFile ? fs.readFileSync(certFile) : ''; + let key = keyFile ? fs.readFileSync(keyFile) : ''; + + let url = `${protocol}://${ip}`; + + // Use chai-http to make a request to the IP address, but set the Host header + let request = chai.request(url).head(path).redirects(0).cert(cert).key(key).set('Host', host); + + debugLog(`Setting headers: ${JSON.stringify(headers)}`); + headers.forEach(header => request.set(header.key, header.value)); + + return request + .send() + .then(async function (res) { + debugLog(`Response status code: ${res.status}`); + debugLog(`Response ${JSON.stringify(res)}`); + expect(res).to.have.property('status', retCode); }); }, @@ -124,7 +147,7 @@ global = { .send() .then(async function (res) { debugLog(`Response status code: ${res.status}`); - expect(res).to.have.status(retCode); + expect(res).to.have.property('status', retCode); }); } }; diff --git a/gloo-mesh/enterprise/2-5/openshift/default/tests/proxies-changes.test.js.liquid b/gloo-mesh/enterprise/2-5/openshift/default/tests/proxies-changes.test.js.liquid new file mode 100644 index 0000000000..1934ea13b6 --- /dev/null +++ b/gloo-mesh/enterprise/2-5/openshift/default/tests/proxies-changes.test.js.liquid @@ -0,0 +1,58 @@ +{%- assign version_1_18_or_after = "1.18.0" | minimumGlooGatewayVersion %} +const { execSync } = require('child_process'); +const { expect } = require('chai'); +const { diff } = require('jest-diff'); + +function delay(ms) { + return new Promise(resolve => setTimeout(resolve, ms)); +} + +describe('Gloo snapshot stability test', function() { + let contextName = process.env.{{ context | default: "CLUSTER1" }}; + let delaySeconds = {{ delay | default: 5 }}; + + let firstSnapshot; + + it('should retrieve initial snapshot', function() { + const output = execSync( + `kubectl --context 
${contextName} -n gloo-system exec deploy/gloo -- wget -O - localhost:{% if version_1_18_or_after %}9095{% else %}9091{% endif %}/snapshots/proxies -q`, + { encoding: 'utf8' } + ); + + try { + firstSnapshot = JSON.parse(output); + } catch (err) { + throw new Error('Failed to parse JSON output from initial snapshot: ' + err.message); + } + expect(firstSnapshot).to.be.an('object'); + }); + + it('should not change after the given delay', async function() { + await delay(delaySeconds * 1000); + + let secondSnapshot; + try { + const output2 = execSync( + `kubectl --context ${contextName} -n gloo-system exec deploy/gloo -- wget -O - localhost:{% if version_1_18_or_after %}9095{% else %}9091{% endif %}/snapshots/proxies -q`, + { encoding: 'utf8' } + ); + secondSnapshot = JSON.parse(output2); + } catch (err) { + throw new Error('Failed to retrieve or parse the second snapshot: ' + err.message); + } + + const firstJson = JSON.stringify(firstSnapshot, null, 2); + const secondJson = JSON.stringify(secondSnapshot, null, 2); + + // Show only 2 lines of context around each change + const diffOutput = diff(firstJson, secondJson, { contextLines: 2, expand: false }); + + if (! diffOutput.includes("Compared values have no visual difference.")) { + console.error('Differences found between snapshots:\n' + diffOutput); + throw new Error('Snapshots differ after the delay.'); + } else { + console.log('No differences found. The snapshots are stable.'); + } + }); +}); + diff --git a/gloo-mesh/enterprise/2-6/airgap/default/README.md b/gloo-mesh/enterprise/2-6/airgap/default/README.md index 1965382b0c..145dbbbb66 100644 --- a/gloo-mesh/enterprise/2-6/airgap/default/README.md +++ b/gloo-mesh/enterprise/2-6/airgap/default/README.md @@ -9,13 +9,13 @@ source ./scripts/assert.sh -#
Gloo Mesh Enterprise (2.6.6)
+# Gloo Mesh Enterprise (2.6.7)
## Table of Contents * [Introduction](#introduction) -* [Lab 1 - Deploy KinD clusters](#lab-1---deploy-kind-clusters-) +* [Lab 1 - Deploy KinD Cluster(s)](#lab-1---deploy-kind-cluster(s)-) * [Lab 2 - Prepare airgap environment](#lab-2---prepare-airgap-environment-) * [Lab 3 - Deploy and register Gloo Mesh](#lab-3---deploy-and-register-gloo-mesh-) * [Lab 4 - Deploy Istio using Gloo Mesh Lifecycle Manager](#lab-4---deploy-istio-using-gloo-mesh-lifecycle-manager-) @@ -69,7 +69,7 @@ You can find more information about Gloo Mesh Enterprise in the official documen -## Lab 1 - Deploy KinD clusters +## Lab 1 - Deploy KinD Cluster(s) Clone this repository and go to the directory where this `README.md` file is. @@ -82,14 +82,13 @@ export CLUSTER1=cluster1 export CLUSTER2=cluster2 ``` -Run the following commands to deploy three Kubernetes clusters using [Kind](https://kind.sigs.k8s.io/): +Deploy the KinD clusters: ```bash -./scripts/deploy-aws-with-calico.sh 1 mgmt -./scripts/deploy-aws-with-calico.sh 2 cluster1 us-west us-west-1 -./scripts/deploy-aws-with-calico.sh 3 cluster2 us-west us-west-2 +bash ./data/steps/deploy-kind-clusters/deploy-mgmt.sh +bash ./data/steps/deploy-kind-clusters/deploy-cluster1.sh +bash ./data/steps/deploy-kind-clusters/deploy-cluster2.sh ``` - Then run the following commands to wait for all the Pods to be ready: ```bash @@ -100,27 +99,8 @@ Then run the following commands to wait for all the Pods to be ready: **Note:** If you run the `check.sh` script immediately after the `deploy.sh` script, you may see a jsonpath error. If that happens, simply wait a few seconds and try again. 
-Once the `check.sh` script completes, when you execute the `kubectl get pods -A` command, you should see the following: - -``` -NAMESPACE NAME READY STATUS RESTARTS AGE -kube-system calico-kube-controllers-59d85c5c84-sbk4k 1/1 Running 0 4h26m -kube-system calico-node-przxs 1/1 Running 0 4h26m -kube-system coredns-6955765f44-ln8f5 1/1 Running 0 4h26m -kube-system coredns-6955765f44-s7xxx 1/1 Running 0 4h26m -kube-system etcd-cluster1-control-plane 1/1 Running 0 4h27m -kube-system kube-apiserver-cluster1-control-plane 1/1 Running 0 4h27m -kube-system kube-controller-manager-cluster1-control-plane1/1 Running 0 4h27m -kube-system kube-proxy-ksvzw 1/1 Running 0 4h26m -kube-system kube-scheduler-cluster1-control-plane 1/1 Running 0 4h27m -local-path-storage local-path-provisioner-58f6947c7-lfmdx 1/1 Running 0 4h26m -metallb-system controller-5c9894b5cd-cn9x2 1/1 Running 0 4h26m -metallb-system speaker-d7jkp 1/1 Running 0 4h26m -``` - -**Note:** The CNI pods might be different, depending on which CNI you have deployed. - -You can see that your currently connected to this cluster by executing the `kubectl config get-contexts` command: +Once the `check.sh` script completes, execute the `kubectl get pods -A` command, and verify that all pods are in a running state. 
+ You can see that you're currently connected to this cluster by executing the `kubectl config get-contexts` command: ``` CURRENT NAME CLUSTER AUTHINFO NAMESPACE @@ -139,7 +119,8 @@ cat <<'EOF' > ./test.js const helpers = require('./tests/chai-exec'); describe("Clusters are healthy", () => { - const clusters = [process.env.MGMT, process.env.CLUSTER1, process.env.CLUSTER2]; + const clusters = ["mgmt", "cluster1", "cluster2"]; + clusters.forEach(cluster => { it(`Cluster ${cluster} is healthy`, () => helpers.k8sObjectIsPresent({ context: cluster, namespace: "default", k8sType: "service", k8sObj: "kubernetes" })); }); @@ -151,6 +132,7 @@ timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || + ## Lab 2 - Prepare airgap environment Set the registry variable: @@ -175,14 +157,14 @@ docker.io/istio/examples-bookinfo-reviews-v1:1.20.2 docker.io/istio/examples-bookinfo-reviews-v2:1.20.2 docker.io/istio/examples-bookinfo-reviews-v3:1.20.2 docker.io/kennethreitz/httpbin -gcr.io/gloo-mesh/ext-auth-service:0.58.4 -gcr.io/gloo-mesh/gloo-mesh-agent:2.6.6 -gcr.io/gloo-mesh/gloo-mesh-apiserver:2.6.6 -gcr.io/gloo-mesh/gloo-mesh-envoy:2.6.6 -gcr.io/gloo-mesh/gloo-mesh-mgmt-server:2.6.6 -gcr.io/gloo-mesh/gloo-mesh-spire-controller:2.6.6 -gcr.io/gloo-mesh/gloo-mesh-ui:2.6.6 -gcr.io/gloo-mesh/gloo-otel-collector:2.6.6 +gcr.io/gloo-mesh/ext-auth-service:0.58.5 +gcr.io/gloo-mesh/gloo-mesh-agent:2.6.7 +gcr.io/gloo-mesh/gloo-mesh-apiserver:2.6.7 +gcr.io/gloo-mesh/gloo-mesh-envoy:2.6.7 +gcr.io/gloo-mesh/gloo-mesh-mgmt-server:2.6.7 +gcr.io/gloo-mesh/gloo-mesh-spire-controller:2.6.7 +gcr.io/gloo-mesh/gloo-mesh-ui:2.6.7 +gcr.io/gloo-mesh/gloo-otel-collector:2.6.7 gcr.io/gloo-mesh/kubectl:1.16.4 gcr.io/gloo-mesh/prometheus:v2.53.0 gcr.io/gloo-mesh/rate-limiter:0.12.2 @@ -220,6 +202,8 @@ cat images.txt | while read image; do docker tag $id ${registry}/$dst_dev docker push ${registry}/$dst_dev done + +export otel_collector_image=$(curl --silent -X GET 
http://${registry}/v2/_catalog | jq -er '.repositories[] | select ((.|contains("otel-collector")) and (.|startswith("gloo-mesh/")))') ``` @@ -231,7 +215,7 @@ done Before we get started, let's install the `meshctl` CLI: ```bash -export GLOO_MESH_VERSION=v2.6.6 +export GLOO_MESH_VERSION=v2.6.7 curl -sL https://run.solo.io/meshctl/install | sh - export PATH=$HOME/.gloo-mesh/bin:$PATH ``` @@ -264,6 +248,7 @@ EOF echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-and-register-gloo-mesh/tests/environment-variables.test.js.liquid" timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } --> + Run the following commands to deploy the Gloo Mesh management plane: ```bash @@ -273,13 +258,13 @@ helm upgrade --install gloo-platform-crds gloo-platform-crds \ --repo https://storage.googleapis.com/gloo-platform/helm-charts \ --namespace gloo-mesh \ --kube-context ${MGMT} \ - --version 2.6.6 + --version 2.6.7 helm upgrade --install gloo-platform gloo-platform \ --repo https://storage.googleapis.com/gloo-platform/helm-charts \ --namespace gloo-mesh \ --kube-context ${MGMT} \ - --version 2.6.6 \ + --version 2.6.7 \ -f -< [VIDEO LINK](https://youtu.be/f76-KOEjqHs "Video Link") @@ -1508,7 +1494,7 @@ helm upgrade --install gloo-platform gloo-platform \ --repo https://storage.googleapis.com/gloo-platform/helm-charts \ --namespace gloo-mesh-addons \ --kube-context ${CLUSTER1} \ - --version 2.6.6 \ + --version 2.6.7 \ -f -< ```shell -export GLOO_AGENT_URL=https://storage.googleapis.com/gloo-platform/vm/v2.6.6/gloo-workload-agent.deb +export GLOO_AGENT_URL=https://storage.googleapis.com/gloo-platform/vm/v2.6.7/gloo-workload-agent.deb export ISTIO_URL=https://storage.googleapis.com/solo-workshops/istio-binaries/1.23.1/istio-sidecar.deb docker exec vm1 meshctl ew onboard --install \ --attestor token \ @@ -3552,7 +3538,7 @@ docker exec vm1 meshctl ew onboard --install \ 
--ext-workload virtualmachines/${VM_APP} ``` + Run the following commands to deploy the Gloo Mesh management plane: ```bash @@ -199,13 +182,13 @@ helm upgrade --install gloo-platform-crds gloo-platform-crds \ --repo https://storage.googleapis.com/gloo-platform/helm-charts \ --namespace gloo-mesh \ --kube-context ${MGMT} \ - --version 2.6.6 + --version 2.6.7 helm upgrade --install gloo-platform gloo-platform \ --repo https://storage.googleapis.com/gloo-platform/helm-charts \ --namespace gloo-mesh \ --kube-context ${MGMT} \ - --version 2.6.6 \ + --version 2.6.7 \ -f -< [VIDEO LINK](https://youtu.be/f76-KOEjqHs "Video Link") @@ -1389,7 +1373,7 @@ helm upgrade --install gloo-platform gloo-platform \ --repo https://storage.googleapis.com/gloo-platform/helm-charts \ --namespace gloo-mesh-addons \ --kube-context ${CLUSTER1} \ - --version 2.6.6 \ + --version 2.6.7 \ -f -< ```shell -export GLOO_AGENT_URL=https://storage.googleapis.com/gloo-platform/vm/v2.6.6/gloo-workload-agent.deb +export GLOO_AGENT_URL=https://storage.googleapis.com/gloo-platform/vm/v2.6.7/gloo-workload-agent.deb export ISTIO_URL=https://storage.googleapis.com/solo-workshops/istio-binaries/1.23.1/istio-sidecar.deb docker exec vm1 meshctl ew onboard --install \ --attestor token \ @@ -3404,7 +3388,7 @@ docker exec vm1 meshctl ew onboard --install \ --ext-workload virtualmachines/${VM_APP} ``` ```shell -export GLOO_AGENT_URL=https://storage.googleapis.com/gloo-platform/vm/v2.6.6/gloo-workload-agent.deb +export GLOO_AGENT_URL=https://storage.googleapis.com/gloo-platform/vm/v2.6.7/gloo-workload-agent.deb export ISTIO_URL=https://storage.googleapis.com/solo-workshops/istio-binaries/1.23.1/istio-sidecar.deb docker exec vm1 meshctl ew onboard --install \ --attestor token \ @@ -5067,7 +5050,7 @@ docker exec vm1 meshctl ew onboard --install \ --ext-workload virtualmachines/${VM_APP} ``` + Run the following commands to deploy the Gloo Mesh management plane: ```bash @@ -195,13 +165,13 @@ helm upgrade --install 
gloo-platform-crds gloo-platform-crds \ --repo https://storage.googleapis.com/gloo-platform/helm-charts \ --namespace gloo-mesh \ --kube-context ${MGMT} \ - --version 2.6.6 + --version 2.6.7 helm upgrade --install gloo-platform-mgmt gloo-platform \ --repo https://storage.googleapis.com/gloo-platform/helm-charts \ --namespace gloo-mesh \ --kube-context ${MGMT} \ - --version 2.6.6 \ + --version 2.6.7 \ -f -< [VIDEO LINK](https://youtu.be/f76-KOEjqHs "Video Link") @@ -1338,7 +1309,7 @@ helm upgrade --install gloo-platform gloo-platform \ --repo https://storage.googleapis.com/gloo-platform/helm-charts \ --namespace gloo-mesh-addons \ --kube-context ${CLUSTER1} \ - --version 2.6.6 \ + --version 2.6.7 \ -f -</dev/null; then + myip=$(hostname -I | awk '{ print $1 }') +else + myip=$(ipconfig getifaddr en0) +fi + +# Function to determine the next available cluster number +get_next_cluster_number() { + if ! kind get clusters 2>&1 | grep "^kind" > /dev/null; then + echo 1 + else + highest_num=$(kind get clusters | grep "^kind" | tail -1 | cut -c 5-) + echo $((highest_num + 1)) + fi +} + +if [ -f /.dockerenv ]; then +myip=$HOST_IP +container=$(docker inspect $(docker ps -q) | jq -r ".[] | select(.Config.Hostname == \"$HOSTNAME\") | .Name" | cut -d/ -f2) +docker network connect "kind" $container || true +number=$(get_next_cluster_number) +twodigits=$(printf "%02d\n" $number) +fi + +reg_name='kind-registry' +reg_port='5000' +docker start "${reg_name}" 2>/dev/null || \ +docker run -d --restart=always -p "0.0.0.0:${reg_port}:5000" --name "${reg_name}" registry:2 + +cache_port='5000' +cat > registries < ${HOME}/.${cache_name}-config.yml </dev/null || \ +docker run -d --restart=always ${DEPLOY_EXTRA_PARAMS} -v ${HOME}/.${cache_name}-config.yml:/etc/docker/registry/config.yml --name "${cache_name}" registry:2 +done +mkdir -p /tmp/oidc + +cat <<'EOF' >/tmp/oidc/sa-signer-pkcs8.pub +-----BEGIN PUBLIC KEY----- +MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA53YiBcrn7+ZK0Vb4odeA 
+1riYdvEb8To4H6/HtF+OKzuCIXFQ+bRy7yMrDGITYpfYPrTZOgfdeTLZqOiAj+cL +395nvxdly83SUrdh7ItfOPRluuuiPHnFn111wpyjBw5nut4Kx+M5MksNfA1hU0Zw +zIM9OviX8iEF8xHWUtz4BAMDG8N6+zpLo0pAzaei5hKuLZ9dZOzHBC8VOW82cQMm +5X5uOKsCHMtNSjqYUNB1DxN6xxM+odGWT/6xthPGk6YCxmO28YHPFZfiS2eAIpD8 +2p/16KQKU6TkZSrldkYxiHIPhu+5f9faZJG7dB9pLN1SfdTBio4PK5Mz9muLUCv9 +ywIDAQAB +-----END PUBLIC KEY----- +EOF + +cat <<'EOF' >/tmp/oidc/sa-signer.key +-----BEGIN RSA PRIVATE KEY----- +MIIEpAIBAAKCAQEA53YiBcrn7+ZK0Vb4odeA1riYdvEb8To4H6/HtF+OKzuCIXFQ ++bRy7yMrDGITYpfYPrTZOgfdeTLZqOiAj+cL395nvxdly83SUrdh7ItfOPRluuui +PHnFn111wpyjBw5nut4Kx+M5MksNfA1hU0ZwzIM9OviX8iEF8xHWUtz4BAMDG8N6 ++zpLo0pAzaei5hKuLZ9dZOzHBC8VOW82cQMm5X5uOKsCHMtNSjqYUNB1DxN6xxM+ +odGWT/6xthPGk6YCxmO28YHPFZfiS2eAIpD82p/16KQKU6TkZSrldkYxiHIPhu+5 +f9faZJG7dB9pLN1SfdTBio4PK5Mz9muLUCv9ywIDAQABAoIBAB8tro+RMYUDRHjG +el9ypAxIeWEsQVNRQFYkW4ZUiNYSAgl3Ni0svX6xAg989peFVL+9pLVIcfDthJxY +FVlNCjBxyQ/YmwHFC9vQkARJEd6eLUXsj8INtS0ubbp1VxCQRDDL0C/0z7OSoJJh +SwboqjEiTJExA2a+RArmEDTBRzdi3t+kT8G23JcqOivrITt17K6bQYyJXw7/vUdc +r/R+hfd5TqVq92VddzDT7RNJAxsbPPXjGnESlq1GALBDs+uBGYsP0fiEJb2nicSv +z9fBnBeERhut1gcE0C0iLRQZb+3r8TitBtxrZv+0BHgXrkKtXDwWTqGEKOwC4dBn +7nxkH2ECgYEA6+/DOTABGYOWOQftFkJMjcugzDrjoGpuXuVOTb65T+3FHAzU93zy +3bt3wQxrlugluyy9Sc/PL3ck2LgUsPHZ+s7zsdGvvGALBD6bOSSKATz9JgjwifO8 +PgqUz1kXRwez2CtKLOOCFFtcIzEdWIzsa1ubNqLzgN7rD+XBkUc2uEcCgYEA+yTy +72EDMQVoIZOygytHsDNdy0iS2RsBbdurT27wkYuFpFUVWdbNSL+8haE+wJHseHcw +BD4WIMpU+hnS4p4OO8+6V7PiXOS5E/se91EJigZAoixgDUiC8ihojWgK9PYEavUo +hULWbayO59SxYWeUI4Ze0GP8Jw8vdB86ib4ulF0CgYEAgyzRuLjk05+iZODwQyDn +WSquov3W0rh51s7cw0LX2wWSQm8r9NGGYhs5kJ5sLwGxAKj2MNSWF4jBdrCZ6Gr+ +y4BGY0X209/+IAUC3jlfdSLIiF4OBlT6AvB1HfclhvtUVUp0OhLfnpvQ1UwYScRI +KcRLvovIoIzP2g3emfwjAz8CgYEAxUHhOhm1mwRHJNBQTuxok0HVMrze8n1eov39 +0RcvBvJSVp+pdHXdqX1HwqHCmxhCZuAeq8ZkNP8WvZYY6HwCbAIdt5MHgbT4lXQR +f2l8F5gPnhFCpExG5ZLNg/urV3oAQE4stHap21zEpdyOMhZb6Yc5424U+EzaFdgN +b3EcPtUCgYAkKvUlSnBbgiJz1iaN6fuTqH0efavuFGMhjNmG7GtpNXdgyl1OWIuc 
+Yu+tZtHXtKYf3B99GwPrFzw/7yfDwae5YeWmi2/pFTH96wv3brJBqkAWY8G5Rsmd +qF50p34vIFqUBniNRwSArx8t2dq/CuAMgLAtSjh70Q6ZAnCF85PD8Q== +-----END RSA PRIVATE KEY----- +EOF + +echo Contents of kind${number}.yaml +cat << EOF | tee kind${number}.yaml +kind: Cluster +apiVersion: kind.x-k8s.io/v1alpha4 +nodes: +- role: control-plane + image: ${kindest_node} + extraPortMappings: + - containerPort: 6443 + hostPort: 70${twodigits} + extraMounts: + - containerPath: /etc/kubernetes/oidc + hostPath: /tmp/oidc + labels: + ingress-ready: true + topology.kubernetes.io/region: ${region} + topology.kubernetes.io/zone: ${zone} +networking: + disableDefaultCNI: true + serviceSubnet: "10.$(echo $twodigits | sed 's/^0*//').0.0/16" + podSubnet: "10.1${twodigits}.0.0/16" +kubeadmConfigPatches: +- | + kind: ClusterConfiguration + apiServer: + extraArgs: + service-account-key-file: /etc/kubernetes/pki/sa.pub + service-account-key-file: /etc/kubernetes/oidc/sa-signer-pkcs8.pub + service-account-signing-key-file: /etc/kubernetes/oidc/sa-signer.key + service-account-issuer: https://solo-workshop-oidc.s3.us-east-1.amazonaws.com + api-audiences: sts.amazonaws.com + extraVolumes: + - name: oidc + hostPath: /etc/kubernetes/oidc + mountPath: /etc/kubernetes/oidc + readOnly: true + metadata: + name: config +containerdConfigPatches: +- |- + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:${reg_port}"] + endpoint = ["http://${reg_name}:${reg_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"] + endpoint = ["http://docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-docker.pkg.dev"] + endpoint = ["http://us-docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-central1-docker.pkg.dev"] + endpoint = ["http://us-central1-docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."quay.io"] + endpoint = ["http://quay:${cache_port}"] + 
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"] + endpoint = ["http://gcr:${cache_port}"] +EOF +echo ----------------------------------------------------- + +kind create cluster --name kind${number} --config kind${number}.yaml +ipkind=$(docker inspect kind${number}-control-plane | jq -r '.[0].NetworkSettings.Networks[].IPAddress') +networkkind=$(echo ${ipkind} | awk -F. '{ print $1"."$2 }') +kubectl config set-cluster kind-kind${number} --server=https://${myip}:70${twodigits} --insecure-skip-tls-verify=true + +# Preload images +cat << EOF >> images.txt +quay.io/metallb/controller:v0.13.12 +quay.io/metallb/speaker:v0.13.12 +EOF +cat images.txt | while read image; do + docker pull $image || true + kind load docker-image $image --name kind${number} || true +done + +docker network connect "kind" "${reg_name}" || true +docker network connect "kind" docker || true +docker network connect "kind" us-docker || true +docker network connect "kind" us-central1-docker || true +docker network connect "kind" quay || true +docker network connect "kind" gcr || true +# Calico for ipv4 +curl -sL https://raw.githubusercontent.com/projectcalico/calico/v3.28.1/manifests/calico.yaml | sed 's/250m/50m/g' | kubectl --context kind-kind${number} apply -f - + +for i in 1 2 3 4 5; do kubectl --context=kind-kind${number} apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml && break || sleep 15; done +kubectl --context=kind-kind${number} create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)" +kubectl --context=kind-kind${number} -n metallb-system rollout status deploy controller || true + +cat << EOF | tee metallb${number}.yaml +apiVersion: metallb.io/v1beta1 +kind: IPAddressPool +metadata: + name: first-pool + namespace: metallb-system +spec: + addresses: + - ${networkkind}.1${twodigits}.1-${networkkind}.1${twodigits}.254 +--- +apiVersion: metallb.io/v1beta1 +kind: 
L2Advertisement
+metadata:
+ name: empty
+ namespace: metallb-system
+EOF
+
+printf "Create IPAddressPool in kind-kind${number}\n"
+for i in {1..10}; do
+kubectl --context=kind-kind${number} apply -f metallb${number}.yaml && break
+sleep 2
+done
+
+# connect the registry to the cluster network if not already connected
+printf "Renaming context kind-kind${number} to ${name}\n"
+for i in {1..100}; do
+ (kubectl config get-contexts -oname | grep ${name}) && break
+ kubectl config rename-context kind-kind${number} ${name} && break
+ printf " $i/100"
+ sleep 2
+ [ $i -lt 100 ] || exit 1
+done
+
+# Document the local registry
+# https://github.com/kubernetes/enhancements/tree/master/keps/sig-cluster-lifecycle/generic/1755-communicating-a-local-registry
+cat <<EOF | kubectl --context=${name} apply -f -
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: local-registry-hosting
+ namespace: kube-public
+data:
+ localRegistryHosting.v1: |
+  host: "localhost:${reg_port}"
+  help: "https://kind.sigs.k8s.io/docs/user/local-registry/"
+EOF
+
+if hostname -I 2>/dev/null; then
+ myip=$(hostname -I | awk '{ print $1 }')
+else
+ myip=$(ipconfig getifaddr en0)
+fi
+
+# Function to determine the next available cluster number
+get_next_cluster_number() {
+ if ! kind get clusters 2>&1 | grep "^kind" > /dev/null; then
+ echo 1
+ else
+ highest_num=$(kind get clusters | grep "^kind" | tail -1 | cut -c 5-)
+ echo $((highest_num + 1))
+ fi
+}
+
+if [ -f /.dockerenv ]; then
+myip=$HOST_IP
+container=$(docker inspect $(docker ps -q) | jq -r ".[] | select(.Config.Hostname == \"$HOSTNAME\") | .Name" | cut -d/ -f2)
+docker network connect "kind" $container || true
+number=$(get_next_cluster_number)
+twodigits=$(printf "%02d\n" $number)
+fi
+
+reg_name='kind-registry'
+reg_port='5000'
+docker start "${reg_name}" 2>/dev/null || \
+docker run -d --restart=always -p "0.0.0.0:${reg_port}:5000" --name "${reg_name}" registry:2
+
+cache_port='5000'
+cat > registries <<EOF
+docker https://registry-1.docker.io
+us-docker https://us-docker.pkg.dev
+us-central1-docker https://us-central1-docker.pkg.dev
+quay https://quay.io
+gcr https://gcr.io
+EOF
+
+cat registries | while read cache_name cache_url; do
+cat > ${HOME}/.${cache_name}-config.yml <<EOF
+version: 0.1
+proxy:
+ remoteurl: ${cache_url}
+EOF
+
+docker start "${cache_name}" 2>/dev/null || \
+docker run -d --restart=always ${DEPLOY_EXTRA_PARAMS} -v ${HOME}/.${cache_name}-config.yml:/etc/docker/registry/config.yml --name "${cache_name}" registry:2
+done
+mkdir -p /tmp/oidc
+
+cat <<'EOF' >/tmp/oidc/sa-signer-pkcs8.pub
+-----BEGIN PUBLIC KEY-----
+MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA53YiBcrn7+ZK0Vb4odeA +1riYdvEb8To4H6/HtF+OKzuCIXFQ+bRy7yMrDGITYpfYPrTZOgfdeTLZqOiAj+cL +395nvxdly83SUrdh7ItfOPRluuuiPHnFn111wpyjBw5nut4Kx+M5MksNfA1hU0Zw +zIM9OviX8iEF8xHWUtz4BAMDG8N6+zpLo0pAzaei5hKuLZ9dZOzHBC8VOW82cQMm +5X5uOKsCHMtNSjqYUNB1DxN6xxM+odGWT/6xthPGk6YCxmO28YHPFZfiS2eAIpD8 +2p/16KQKU6TkZSrldkYxiHIPhu+5f9faZJG7dB9pLN1SfdTBio4PK5Mz9muLUCv9 +ywIDAQAB +-----END PUBLIC KEY----- +EOF + +cat <<'EOF' >/tmp/oidc/sa-signer.key +-----BEGIN RSA PRIVATE KEY----- +MIIEpAIBAAKCAQEA53YiBcrn7+ZK0Vb4odeA1riYdvEb8To4H6/HtF+OKzuCIXFQ ++bRy7yMrDGITYpfYPrTZOgfdeTLZqOiAj+cL395nvxdly83SUrdh7ItfOPRluuui +PHnFn111wpyjBw5nut4Kx+M5MksNfA1hU0ZwzIM9OviX8iEF8xHWUtz4BAMDG8N6 ++zpLo0pAzaei5hKuLZ9dZOzHBC8VOW82cQMm5X5uOKsCHMtNSjqYUNB1DxN6xxM+ +odGWT/6xthPGk6YCxmO28YHPFZfiS2eAIpD82p/16KQKU6TkZSrldkYxiHIPhu+5 +f9faZJG7dB9pLN1SfdTBio4PK5Mz9muLUCv9ywIDAQABAoIBAB8tro+RMYUDRHjG +el9ypAxIeWEsQVNRQFYkW4ZUiNYSAgl3Ni0svX6xAg989peFVL+9pLVIcfDthJxY +FVlNCjBxyQ/YmwHFC9vQkARJEd6eLUXsj8INtS0ubbp1VxCQRDDL0C/0z7OSoJJh +SwboqjEiTJExA2a+RArmEDTBRzdi3t+kT8G23JcqOivrITt17K6bQYyJXw7/vUdc +r/R+hfd5TqVq92VddzDT7RNJAxsbPPXjGnESlq1GALBDs+uBGYsP0fiEJb2nicSv +z9fBnBeERhut1gcE0C0iLRQZb+3r8TitBtxrZv+0BHgXrkKtXDwWTqGEKOwC4dBn +7nxkH2ECgYEA6+/DOTABGYOWOQftFkJMjcugzDrjoGpuXuVOTb65T+3FHAzU93zy +3bt3wQxrlugluyy9Sc/PL3ck2LgUsPHZ+s7zsdGvvGALBD6bOSSKATz9JgjwifO8 +PgqUz1kXRwez2CtKLOOCFFtcIzEdWIzsa1ubNqLzgN7rD+XBkUc2uEcCgYEA+yTy +72EDMQVoIZOygytHsDNdy0iS2RsBbdurT27wkYuFpFUVWdbNSL+8haE+wJHseHcw +BD4WIMpU+hnS4p4OO8+6V7PiXOS5E/se91EJigZAoixgDUiC8ihojWgK9PYEavUo +hULWbayO59SxYWeUI4Ze0GP8Jw8vdB86ib4ulF0CgYEAgyzRuLjk05+iZODwQyDn +WSquov3W0rh51s7cw0LX2wWSQm8r9NGGYhs5kJ5sLwGxAKj2MNSWF4jBdrCZ6Gr+ +y4BGY0X209/+IAUC3jlfdSLIiF4OBlT6AvB1HfclhvtUVUp0OhLfnpvQ1UwYScRI +KcRLvovIoIzP2g3emfwjAz8CgYEAxUHhOhm1mwRHJNBQTuxok0HVMrze8n1eov39 +0RcvBvJSVp+pdHXdqX1HwqHCmxhCZuAeq8ZkNP8WvZYY6HwCbAIdt5MHgbT4lXQR +f2l8F5gPnhFCpExG5ZLNg/urV3oAQE4stHap21zEpdyOMhZb6Yc5424U+EzaFdgN 
+b3EcPtUCgYAkKvUlSnBbgiJz1iaN6fuTqH0efavuFGMhjNmG7GtpNXdgyl1OWIuc +Yu+tZtHXtKYf3B99GwPrFzw/7yfDwae5YeWmi2/pFTH96wv3brJBqkAWY8G5Rsmd +qF50p34vIFqUBniNRwSArx8t2dq/CuAMgLAtSjh70Q6ZAnCF85PD8Q== +-----END RSA PRIVATE KEY----- +EOF + +echo Contents of kind${number}.yaml +cat << EOF | tee kind${number}.yaml +kind: Cluster +apiVersion: kind.x-k8s.io/v1alpha4 +nodes: +- role: control-plane + image: ${kindest_node} + extraPortMappings: + - containerPort: 6443 + hostPort: 70${twodigits} + extraMounts: + - containerPath: /etc/kubernetes/oidc + hostPath: /tmp/oidc + labels: + ingress-ready: true + topology.kubernetes.io/region: ${region} + topology.kubernetes.io/zone: ${zone} +networking: + disableDefaultCNI: true + serviceSubnet: "10.$(echo $twodigits | sed 's/^0*//').0.0/16" + podSubnet: "10.1${twodigits}.0.0/16" +kubeadmConfigPatches: +- | + kind: ClusterConfiguration + apiServer: + extraArgs: + service-account-key-file: /etc/kubernetes/pki/sa.pub + service-account-key-file: /etc/kubernetes/oidc/sa-signer-pkcs8.pub + service-account-signing-key-file: /etc/kubernetes/oidc/sa-signer.key + service-account-issuer: https://solo-workshop-oidc.s3.us-east-1.amazonaws.com + api-audiences: sts.amazonaws.com + extraVolumes: + - name: oidc + hostPath: /etc/kubernetes/oidc + mountPath: /etc/kubernetes/oidc + readOnly: true + metadata: + name: config +containerdConfigPatches: +- |- + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:${reg_port}"] + endpoint = ["http://${reg_name}:${reg_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"] + endpoint = ["http://docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-docker.pkg.dev"] + endpoint = ["http://us-docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-central1-docker.pkg.dev"] + endpoint = ["http://us-central1-docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."quay.io"] + endpoint = 
["http://quay:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"] + endpoint = ["http://gcr:${cache_port}"] +EOF +echo ----------------------------------------------------- + +kind create cluster --name kind${number} --config kind${number}.yaml +ipkind=$(docker inspect kind${number}-control-plane | jq -r '.[0].NetworkSettings.Networks[].IPAddress') +networkkind=$(echo ${ipkind} | awk -F. '{ print $1"."$2 }') +kubectl config set-cluster kind-kind${number} --server=https://${myip}:70${twodigits} --insecure-skip-tls-verify=true + +# Preload images +cat << EOF >> images.txt +quay.io/metallb/controller:v0.13.12 +quay.io/metallb/speaker:v0.13.12 +EOF +cat images.txt | while read image; do + docker pull $image || true + kind load docker-image $image --name kind${number} || true +done + +docker network connect "kind" "${reg_name}" || true +docker network connect "kind" docker || true +docker network connect "kind" us-docker || true +docker network connect "kind" us-central1-docker || true +docker network connect "kind" quay || true +docker network connect "kind" gcr || true +# Calico for ipv4 +curl -sL https://raw.githubusercontent.com/projectcalico/calico/v3.28.1/manifests/calico.yaml | sed 's/250m/50m/g' | kubectl --context kind-kind${number} apply -f - + +for i in 1 2 3 4 5; do kubectl --context=kind-kind${number} apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml && break || sleep 15; done +kubectl --context=kind-kind${number} create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)" +kubectl --context=kind-kind${number} -n metallb-system rollout status deploy controller || true + +cat << EOF | tee metallb${number}.yaml +apiVersion: metallb.io/v1beta1 +kind: IPAddressPool +metadata: + name: first-pool + namespace: metallb-system +spec: + addresses: + - ${networkkind}.1${twodigits}.1-${networkkind}.1${twodigits}.254 +--- +apiVersion: 
metallb.io/v1beta1
+kind: L2Advertisement
+metadata:
+ name: empty
+ namespace: metallb-system
+EOF
+
+printf "Create IPAddressPool in kind-kind${number}\n"
+for i in {1..10}; do
+kubectl --context=kind-kind${number} apply -f metallb${number}.yaml && break
+sleep 2
+done
+
+# connect the registry to the cluster network if not already connected
+printf "Renaming context kind-kind${number} to ${name}\n"
+for i in {1..100}; do
+ (kubectl config get-contexts -oname | grep ${name}) && break
+ kubectl config rename-context kind-kind${number} ${name} && break
+ printf " $i/100"
+ sleep 2
+ [ $i -lt 100 ] || exit 1
+done
+
+# Document the local registry
+# https://github.com/kubernetes/enhancements/tree/master/keps/sig-cluster-lifecycle/generic/1755-communicating-a-local-registry
+cat <<EOF | kubectl --context=${name} apply -f -
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: local-registry-hosting
+ namespace: kube-public
+data:
+ localRegistryHosting.v1: |
+  host: "localhost:${reg_port}"
+  help: "https://kind.sigs.k8s.io/docs/user/local-registry/"
+EOF
+source ./scripts/assert.sh
+export MGMT=cluster1
+export CLUSTER1=cluster1
+export CLUSTER2=cluster2
+bash ./data/steps/deploy-kind-clusters/deploy-cluster1.sh
+bash ./data/steps/deploy-kind-clusters/deploy-cluster2.sh
+./scripts/check.sh cluster1
+./scripts/check.sh cluster2
+cat <<'EOF' > ./test.js
+const helpers = require('./tests/chai-exec');
+
+describe("Clusters are healthy", () => {
+ const clusters = ["cluster1", "cluster2"];
+
+ clusters.forEach(cluster => {
+ it(`Cluster ${cluster} is healthy`, () => helpers.k8sObjectIsPresent({ context: cluster, namespace: "default", k8sType: "service", k8sObj: "kubernetes" }));
+ });
+});
+EOF
+echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-kind-clusters/tests/cluster-healthy.test.js.liquid"
+timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; }
+export GLOO_MESH_VERSION=v2.6.7
+curl -sL https://run.solo.io/meshctl/install | sh -
+export PATH=$HOME/.gloo-mesh/bin:$PATH
+cat <<'EOF' > ./test.js
+var chai = require('chai');
+var expect = chai.expect;
+
+describe("Required environment variables should contain 
value", () => { + afterEach(function(done){ + if(this.currentTest.currentRetry() > 0){ + process.stdout.write("."); + setTimeout(done, 1000); + } else { + done(); + } + }); + + it("Context environment variables should not be empty", () => { + expect(process.env.MGMT).not.to.be.empty + expect(process.env.CLUSTER1).not.to.be.empty + expect(process.env.CLUSTER2).not.to.be.empty + }); + + it("Gloo Mesh licence environment variables should not be empty", () => { + expect(process.env.GLOO_MESH_LICENSE_KEY).not.to.be.empty + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-and-register-gloo-mesh/tests/environment-variables.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context ${MGMT} create ns gloo-mesh + +helm upgrade --install gloo-platform-crds gloo-platform-crds \ + --repo https://storage.googleapis.com/gloo-platform/helm-charts \ + --namespace gloo-mesh \ + --kube-context ${MGMT} \ + --version 2.6.7 + +helm upgrade --install gloo-platform-mgmt gloo-platform \ + --repo https://storage.googleapis.com/gloo-platform/helm-charts \ + --namespace gloo-mesh \ + --kube-context ${MGMT} \ + --version 2.6.7 \ + -f -< ./test.js + +const helpers = require('./tests/chai-exec'); + +describe("MGMT server is healthy", () => { + let cluster = process.env.MGMT; + let deployments = ["gloo-mesh-mgmt-server","gloo-mesh-redis","gloo-telemetry-gateway","prometheus-server"]; + deployments.forEach(deploy => { + it(deploy + ' pods are ready in ' + cluster, () => helpers.checkDeployment({ context: cluster, namespace: "gloo-mesh", k8sObj: deploy })); + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-and-register-gloo-mesh/tests/check-deployment.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 
120000; exit 1; } +cat <<'EOF' > ./test.js +const chaiExec = require("@jsdevtools/chai-exec"); +var chai = require('chai'); +var expect = chai.expect; +chai.use(chaiExec); + +afterEach(function (done) { + if (this.currentTest.currentRetry() > 0) { + process.stdout.write("."); + setTimeout(done, 1000); + } else { + done(); + } +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-and-register-gloo-mesh/tests/get-gloo-mesh-mgmt-server-ip.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +export ENDPOINT_GLOO_MESH=$(kubectl --context ${MGMT} -n gloo-mesh get svc gloo-mesh-mgmt-server -o jsonpath='{.status.loadBalancer.ingress[0].*}'):9900 +export HOST_GLOO_MESH=$(echo ${ENDPOINT_GLOO_MESH%:*}) +export ENDPOINT_TELEMETRY_GATEWAY=$(kubectl --context ${MGMT} -n gloo-mesh get svc gloo-telemetry-gateway -o jsonpath='{.status.loadBalancer.ingress[0].*}'):4317 +export ENDPOINT_GLOO_MESH_UI=$(kubectl --context ${MGMT} -n gloo-mesh get svc gloo-mesh-ui -o jsonpath='{.status.loadBalancer.ingress[0].*}'):8090 +cat <<'EOF' > ./test.js +const dns = require('dns'); +const chaiHttp = require("chai-http"); +const chai = require("chai"); +const expect = chai.expect; +chai.use(chaiHttp); +const { waitOnFailedTest } = require('./tests/utils'); + +afterEach(function(done) { waitOnFailedTest(done, this.currentTest.currentRetry())}); + +describe("Address '" + process.env.HOST_GLOO_MESH + "' can be resolved in DNS", () => { + it(process.env.HOST_GLOO_MESH + ' can be resolved', (done) => { + return dns.lookup(process.env.HOST_GLOO_MESH, (err, address, family) => { + expect(address).to.be.an.ip; + done(); + }); + }); +}); +EOF +echo "executing test ./gloo-mesh-2-0/tests/can-resolve.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl 
apply --context ${MGMT} -f - <<EOF
+apiVersion: admin.gloo.solo.io/v2
+kind: KubernetesCluster
+metadata:
+ name: cluster1
+ namespace: gloo-mesh
+spec:
+ clusterDomain: cluster.local
+---
+apiVersion: admin.gloo.solo.io/v2
+kind: KubernetesCluster
+metadata:
+ name: cluster2
+ namespace: gloo-mesh
+spec:
+ clusterDomain: cluster.local
+EOF
+
+kubectl get secret relay-root-tls-secret -n gloo-mesh --context ${MGMT} -o jsonpath='{.data.ca.crt}' | base64 -d > ca.crt
+kubectl create secret generic relay-root-tls-secret -n gloo-mesh --context ${CLUSTER2} --from-file ca.crt=ca.crt
+rm ca.crt
+
+kubectl get secret relay-identity-token-secret -n gloo-mesh --context ${MGMT} -o jsonpath='{.data.token}' | base64 -d > token
+kubectl create secret generic relay-identity-token-secret -n gloo-mesh --context ${CLUSTER2} --from-file token=token
+rm token
+
+helm upgrade --install gloo-platform-crds gloo-platform-crds \
+ --repo https://storage.googleapis.com/gloo-platform/helm-charts \
+ --namespace gloo-mesh \
+ --kube-context ${CLUSTER2} \
+ --version 2.6.7
+
+helm upgrade --install gloo-platform-agent gloo-platform \
+ --repo https://storage.googleapis.com/gloo-platform/helm-charts \
+ --namespace gloo-mesh \
+ --kube-context ${CLUSTER2} \
+ --version 2.6.7 \
+ -f -< ./test.js
+var chai = require('chai');
+var expect = chai.expect;
+const helpers = require('./tests/chai-exec');
+describe("Cluster registration", () => {
+ it("cluster1 is registered", () => {
+ podName = helpers.getOutputForCommand({ command: "kubectl -n gloo-mesh get pods -l app=gloo-mesh-mgmt-server -o jsonpath='{.items[0].metadata.name}' --context " + process.env.MGMT }).replaceAll("'", "");
+ command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh debug -q -i " + podName + " --image=curlimages/curl -- curl -s http://localhost:9091/metrics" }).replaceAll("'", "");
+ expect(command).to.contain("cluster1");
+ });
+ it("cluster2 is registered", () => {
+ podName = helpers.getOutputForCommand({ command: "kubectl -n gloo-mesh get pods -l app=gloo-mesh-mgmt-server -o jsonpath='{.items[0].metadata.name}' --context " + process.env.MGMT }).replaceAll("'", "");
+ command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh debug -q -i " + podName + " --image=curlimages/curl -- curl -s http://localhost:9091/metrics" }).replaceAll("'", "");
+ 
expect(command).to.contain("cluster2"); + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-and-register-gloo-mesh/tests/cluster-registration.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +curl -L https://istio.io/downloadIstio | sh - + +if [ -d "istio-"*/ ]; then + cd istio-*/ + export PATH=$PWD/bin:$PATH + cd .. +fi +cat <<'EOF' > ./test.js +const chaiExec = require("@jsdevtools/chai-exec"); +var chai = require('chai'); +var expect = chai.expect; +chai.use(chaiExec); + +afterEach(function (done) { + if (this.currentTest.currentRetry() > 0) { + process.stdout.write("."); + setTimeout(done, 1000); + } else { + done(); + } +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/istio-lifecycle-manager-install/tests/istio-version.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context ${CLUSTER1} create ns istio-gateways + +kubectl apply --context ${CLUSTER1} -f - < ./test.js + +const helpers = require('./tests/chai-exec'); + +const chaiExec = require("@jsdevtools/chai-exec"); +const helpersHttp = require('./tests/chai-http'); +const chai = require("chai"); +const expect = chai.expect; + +afterEach(function (done) { + if (this.currentTest.currentRetry() > 0) { + process.stdout.write("."); + setTimeout(done, 1000); + } else { + done(); + } +}); + +describe("Checking Istio installation", function() { + it('istiod pods are ready in cluster ' + process.env.CLUSTER1, () => helpers.checkDeploymentsWithLabels({ context: process.env.CLUSTER1, namespace: "istio-system", labels: "app=istiod", instances: 1 })); + it('gateway pods are ready in cluster ' + process.env.CLUSTER1, () => helpers.checkDeploymentsWithLabels({ context: process.env.CLUSTER1, namespace: "istio-gateways", 
labels: "app=istio-ingressgateway", instances: 2 })); + it('istiod pods are ready in cluster ' + process.env.CLUSTER2, () => helpers.checkDeploymentsWithLabels({ context: process.env.CLUSTER2, namespace: "istio-system", labels: "app=istiod", instances: 1 })); + it('gateway pods are ready in cluster ' + process.env.CLUSTER2, () => helpers.checkDeploymentsWithLabels({ context: process.env.CLUSTER2, namespace: "istio-gateways", labels: "app=istio-ingressgateway", instances: 2 })); + it("Gateways have an ip attached in cluster " + process.env.CLUSTER1, () => { + let cli = chaiExec("kubectl --context " + process.env.CLUSTER1 + " -n istio-gateways get svc -l app=istio-ingressgateway -o jsonpath='{.items}'"); + cli.stderr.should.be.empty; + let deployments = JSON.parse(cli.stdout.slice(1,-1)); + expect(deployments).to.have.lengthOf(2); + deployments.forEach((deployment) => { + expect(deployment.status.loadBalancer).to.have.property("ingress"); + }); + }); + it("Gateways have an ip attached in cluster " + process.env.CLUSTER2, () => { + let cli = chaiExec("kubectl --context " + process.env.CLUSTER2 + " -n istio-gateways get svc -l app=istio-ingressgateway -o jsonpath='{.items}'"); + cli.stderr.should.be.empty; + let deployments = JSON.parse(cli.stdout.slice(1,-1)); + expect(deployments).to.have.lengthOf(2); + deployments.forEach((deployment) => { + expect(deployment.status.loadBalancer).to.have.property("ingress"); + }); + }); +}); + +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/istio-lifecycle-manager-install/tests/istio-ready.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +timeout 2m bash -c "until [[ \$(kubectl --context ${CLUSTER1} -n istio-gateways get svc -l istio=ingressgateway -o json | jq '.items[0].status.loadBalancer | length') -gt 0 ]]; do + sleep 1 +done" +export HOST_GW_CLUSTER1="$(kubectl --context ${CLUSTER1} -n 
istio-gateways get svc -l istio=ingressgateway -o jsonpath='{.items[0].status.loadBalancer.ingress[0].*}')" +export HOST_GW_CLUSTER2="$(kubectl --context ${CLUSTER2} -n istio-gateways get svc -l istio=ingressgateway -o jsonpath='{.items[0].status.loadBalancer.ingress[0].*}')" +cat <<'EOF' > ./test.js +const dns = require('dns'); +const chaiHttp = require("chai-http"); +const chai = require("chai"); +const expect = chai.expect; +chai.use(chaiHttp); +const { waitOnFailedTest } = require('./tests/utils'); + +afterEach(function(done) { waitOnFailedTest(done, this.currentTest.currentRetry())}); + +describe("Address '" + process.env.HOST_GW_CLUSTER1 + "' can be resolved in DNS", () => { + it(process.env.HOST_GW_CLUSTER1 + ' can be resolved', (done) => { + return dns.lookup(process.env.HOST_GW_CLUSTER1, (err, address, family) => { + expect(address).to.be.an.ip; + done(); + }); + }); +}); +EOF +echo "executing test ./gloo-mesh-2-0/tests/can-resolve.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +cat <<'EOF' > ./test.js +const dns = require('dns'); +const chaiHttp = require("chai-http"); +const chai = require("chai"); +const expect = chai.expect; +chai.use(chaiHttp); +const { waitOnFailedTest } = require('./tests/utils'); + +afterEach(function(done) { waitOnFailedTest(done, this.currentTest.currentRetry())}); + +describe("Address '" + process.env.HOST_GW_CLUSTER2 + "' can be resolved in DNS", () => { + it(process.env.HOST_GW_CLUSTER2 + ' can be resolved', (done) => { + return dns.lookup(process.env.HOST_GW_CLUSTER2, (err, address, family) => { + expect(address).to.be.an.ip; + done(); + }); + }); +}); +EOF +echo "executing test ./gloo-mesh-2-0/tests/can-resolve.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context ${CLUSTER1} create ns 
bookinfo-frontends +kubectl --context ${CLUSTER1} create ns bookinfo-backends +kubectl --context ${CLUSTER1} label namespace bookinfo-frontends istio.io/rev=1-23 --overwrite +kubectl --context ${CLUSTER1} label namespace bookinfo-backends istio.io/rev=1-23 --overwrite + +# Deploy the frontend bookinfo service in the bookinfo-frontends namespace +kubectl --context ${CLUSTER1} -n bookinfo-frontends apply -f data/steps/deploy-bookinfo/productpage-v1.yaml + +# Deploy the backend bookinfo services in the bookinfo-backends namespace for all versions less than v3 +kubectl --context ${CLUSTER1} -n bookinfo-backends apply \ + -f data/steps/deploy-bookinfo/details-v1.yaml \ + -f data/steps/deploy-bookinfo/ratings-v1.yaml \ + -f data/steps/deploy-bookinfo/reviews-v1-v2.yaml + +# Update the reviews service to display where it is coming from +kubectl --context ${CLUSTER1} -n bookinfo-backends set env deploy/reviews-v1 CLUSTER_NAME=${CLUSTER1} +kubectl --context ${CLUSTER1} -n bookinfo-backends set env deploy/reviews-v2 CLUSTER_NAME=${CLUSTER1} +echo -n Waiting for bookinfo pods to be ready... +timeout -v 5m bash -c " +until [[ \$(kubectl --context ${CLUSTER1} -n bookinfo-frontends get deploy -o json | jq '[.items[].status.readyReplicas] | add') -eq 1 && \\ + \$(kubectl --context ${CLUSTER1} -n bookinfo-backends get deploy -o json | jq '[.items[].status.readyReplicas] | add') -eq 4 ]] 2>/dev/null +do + sleep 1 + echo -n . 
+done" +echo +kubectl --context ${CLUSTER2} create ns bookinfo-frontends +kubectl --context ${CLUSTER2} create ns bookinfo-backends +kubectl --context ${CLUSTER2} label namespace bookinfo-frontends istio.io/rev=1-23 --overwrite +kubectl --context ${CLUSTER2} label namespace bookinfo-backends istio.io/rev=1-23 --overwrite + +# Deploy the frontend bookinfo service in the bookinfo-frontends namespace +kubectl --context ${CLUSTER2} -n bookinfo-frontends apply -f data/steps/deploy-bookinfo/productpage-v1.yaml +# Deploy the backend bookinfo services in the bookinfo-backends namespace for all versions +kubectl --context ${CLUSTER2} -n bookinfo-backends apply \ + -f data/steps/deploy-bookinfo/details-v1.yaml \ + -f data/steps/deploy-bookinfo/ratings-v1.yaml \ + -f data/steps/deploy-bookinfo/reviews-v1-v2.yaml \ + -f data/steps/deploy-bookinfo/reviews-v3.yaml +# Update the reviews service to display where it is coming from +kubectl --context ${CLUSTER2} -n bookinfo-backends set env deploy/reviews-v1 CLUSTER_NAME=${CLUSTER2} +kubectl --context ${CLUSTER2} -n bookinfo-backends set env deploy/reviews-v2 CLUSTER_NAME=${CLUSTER2} +kubectl --context ${CLUSTER2} -n bookinfo-backends set env deploy/reviews-v3 CLUSTER_NAME=${CLUSTER2} + +echo -n Waiting for bookinfo pods to be ready... +timeout -v 5m bash -c " +until [[ \$(kubectl --context ${CLUSTER2} -n bookinfo-frontends get deploy -o json | jq '[.items[].status.readyReplicas] | add') -eq 1 && \\ + \$(kubectl --context ${CLUSTER2} -n bookinfo-backends get deploy -o json | jq '[.items[].status.readyReplicas] | add') -eq 5 ]] 2>/dev/null +do + sleep 1 + echo -n . 
+done" +echo +kubectl --context ${CLUSTER2} -n bookinfo-frontends get pods && kubectl --context ${CLUSTER2} -n bookinfo-backends get pods +cat <<'EOF' > ./test.js +const helpers = require('./tests/chai-exec'); + +describe("Bookinfo app", () => { + let cluster = process.env.CLUSTER1 + let deployments = ["productpage-v1"]; + deployments.forEach(deploy => { + it(deploy + ' pods are ready in ' + cluster, () => helpers.checkDeployment({ context: cluster, namespace: "bookinfo-frontends", k8sObj: deploy })); + }); + deployments = ["ratings-v1", "details-v1", "reviews-v1", "reviews-v2"]; + deployments.forEach(deploy => { + it(deploy + ' pods are ready in ' + cluster, () => helpers.checkDeployment({ context: cluster, namespace: "bookinfo-backends", k8sObj: deploy })); + }); + cluster = process.env.CLUSTER2 + deployments = ["productpage-v1"]; + deployments.forEach(deploy => { + it(deploy + ' pods are ready in ' + cluster, () => helpers.checkDeployment({ context: cluster, namespace: "bookinfo-frontends", k8sObj: deploy })); + }); + deployments = ["ratings-v1", "details-v1", "reviews-v1", "reviews-v2", "reviews-v3"]; + deployments.forEach(deploy => { + it(deploy + ' pods are ready in ' + cluster, () => helpers.checkDeployment({ context: cluster, namespace: "bookinfo-backends", k8sObj: deploy })); + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/deploy-bookinfo/tests/check-bookinfo.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context ${CLUSTER1} create ns httpbin +kubectl apply --context ${CLUSTER1} -f - </dev/null +do + sleep 1 + echo -n . 
+done" +echo +kubectl --context ${CLUSTER1} -n httpbin get pods +cat <<'EOF' > ./test.js +const helpers = require('./tests/chai-exec'); + +describe("httpbin app", () => { + let cluster = process.env.CLUSTER1 + + let deployments = ["not-in-mesh", "in-mesh"]; + + deployments.forEach(deploy => { + it(deploy + ' pods are ready in ' + cluster, () => helpers.checkDeployment({ context: cluster, namespace: "httpbin", k8sObj: deploy })); + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/httpbin/deploy-httpbin/tests/check-httpbin.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context ${CLUSTER1} create namespace gloo-mesh-addons +kubectl --context ${CLUSTER1} label namespace gloo-mesh-addons istio.io/rev=1-23 --overwrite +kubectl --context ${CLUSTER2} create namespace gloo-mesh-addons +kubectl --context ${CLUSTER2} label namespace gloo-mesh-addons istio.io/rev=1-23 --overwrite +helm upgrade --install gloo-platform gloo-platform \ + --repo https://storage.googleapis.com/gloo-platform/helm-charts \ + --namespace gloo-mesh-addons \ + --kube-context ${CLUSTER1} \ + --version 2.6.7 \ + -f -< ./test.js +const helpers = require('./tests/chai-exec'); + +describe("Gloo Platform add-ons cluster1 deployment", () => { + let cluster = process.env.CLUSTER1 + let deployments = ["ext-auth-service", "rate-limiter"]; + deployments.forEach(deploy => { + it(deploy + ' pods are ready in ' + cluster, () => helpers.checkDeployment({ context: cluster, namespace: "gloo-mesh-addons", k8sObj: deploy })); + }); +}); +describe("Gloo Platform add-ons cluster2 deployment", () => { + let cluster = process.env.CLUSTER2 + let deployments = ["ext-auth-service", "rate-limiter"]; + deployments.forEach(deploy => { + it(deploy + ' pods are ready in ' + cluster, () => helpers.checkDeployment({ context: cluster, namespace: "gloo-mesh-addons", k8sObj: 
deploy })); + }); +}); + +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-gloo-mesh-addons/tests/check-addons-deployments.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +cat <<'EOF' > ./test.js +const helpers = require('./tests/chai-exec'); + +describe("Gloo Platform add-ons cluster1 service", () => { + let cluster = process.env.CLUSTER1 + let services = ["ext-auth-service", "rate-limiter"]; + services.forEach(service => { + it(service + ' exists in ' + cluster, () => helpers.k8sObjectIsPresent({ context: cluster, namespace: "gloo-mesh-addons", k8sType: "service", k8sObj: service })); + }); +}); +describe("Gloo Platform add-ons cluster2 service", () => { + let cluster = process.env.CLUSTER2 + let services = ["ext-auth-service", "rate-limiter"]; + services.forEach(service => { + it(service + ' exists in ' + cluster, () => helpers.k8sObjectIsPresent({ context: cluster, namespace: "gloo-mesh-addons", k8sType: "service", k8sObj: service })); + }); +}); + +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-gloo-mesh-addons/tests/check-addons-services.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl apply --context ${MGMT} -f - < ./test.js +const helpers = require('./tests/chai-http'); + +describe("Productpage is available (HTTP)", () => { + it('/productpage is available in cluster1', () => helpers.checkURL({ host: `http://cluster1-bookinfo.example.com`, path: '/productpage', retCode: 200 })); +}) +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/gateway-expose/tests/productpage-available.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; 
exit 1; } +openssl req -x509 -nodes -days 365 -newkey rsa:2048 \ + -keyout tls.key -out tls.crt -subj "/CN=*" +kubectl --context ${CLUSTER1} -n istio-gateways create secret generic tls-secret \ + --from-file=tls.key=tls.key \ + --from-file=tls.crt=tls.crt + +kubectl --context ${CLUSTER2} -n istio-gateways create secret generic tls-secret \ + --from-file=tls.key=tls.key \ + --from-file=tls.crt=tls.crt +kubectl apply --context ${CLUSTER1} -f - < ./test.js +const helpers = require('./tests/chai-http'); + +describe("Productpage is available (HTTPS)", () => { + it('/productpage is available in cluster1', () => helpers.checkURL({ host: `https://cluster1-bookinfo.example.com`, path: '/productpage', retCode: 200 })); +}) +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/gateway-expose/tests/productpage-available-secure.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +cat <<'EOF' > ./test.js +var chai = require('chai'); +var expect = chai.expect; +const helpers = require('./tests/chai-exec'); + +describe("Otel metrics", () => { + it("cluster1 is sending metrics to telemetryGateway", () => { + podName = helpers.getOutputForCommand({ command: "kubectl -n gloo-mesh get pods -l app.kubernetes.io/name=prometheus -o jsonpath='{.items[0].metadata.name}' --context " + process.env.MGMT }).replaceAll("'", ""); + command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh debug -q -i " + podName + " --image=curlimages/curl -- curl -s http://localhost:9090/api/v1/query?query=istio_requests_total" }).replaceAll("'", ""); + expect(command).to.contain("cluster\":\"cluster1"); + }); +}); + + +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/gateway-expose/tests/otel-metrics.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 
--retries=150 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl apply --context ${CLUSTER1} -f - < ./test.js +var chai = require('chai'); +var expect = chai.expect; +const chaiHttp = require("chai-http"); +chai.use(chaiHttp); + +process.env.NODE_TLS_REJECT_UNAUTHORIZED = '0'; + +afterEach(function (done) { + if (this.currentTest.currentRetry() > 0) { + process.stdout.write("."); + setTimeout(done, 1000); + } else { + done(); + } +}); + +let searchTest="Sorry, product reviews are currently unavailable for this book."; + +describe("Reviews shouldn't be available", () => { + it("Checking text '" + searchTest + "' in cluster1", async () => { + await chai.request(`https://cluster1-bookinfo.example.com`) + .get('/productpage') + .send() + .then((res) => { + expect(res.text).to.contain(searchTest); + }); + }); + +}); + +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/traffic-policies/tests/traffic-policies-reviews-unavailable.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context ${CLUSTER1} -n bookinfo-frontends delete faultinjectionpolicy ratings-fault-injection +kubectl --context ${CLUSTER1} -n bookinfo-frontends delete routetable ratings +kubectl --context ${CLUSTER1} -n bookinfo-frontends delete retrytimeoutpolicy reviews-request-timeout +kubectl --context ${CLUSTER1} -n bookinfo-frontends delete routetable reviews +kubectl apply --context ${MGMT} -f - < ./test.js +const helpers = require('./tests/chai-exec'); + +describe("cacerts secrets have been created", () => { + const clusters = [process.env.CLUSTER1, process.env.CLUSTER2]; + clusters.forEach(cluster => { + it('Secret is present in ' + cluster, () => helpers.k8sObjectIsPresent({ context: cluster, namespace: "istio-system", k8sType: "secret", k8sObj: "cacerts" })); + }); +}); +EOF +echo "executing test 
dist/gloo-mesh-2-0-workshop/build/templates/steps/root-trust-policy/tests/cacert-secrets-created.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=150 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +printf "Waiting for all pods needed for the test..." +printf "\n" +kubectl --context ${CLUSTER1} get deploy -n bookinfo-backends -oname|xargs -I {} kubectl --context ${CLUSTER1} rollout status -n bookinfo-backends {} +kubectl --context ${CLUSTER2} get deploy -n bookinfo-backends -oname|xargs -I {} kubectl --context ${CLUSTER2} rollout status -n bookinfo-backends {} +printf "\n" +cat <<'EOF' > ./test.js +const chaiExec = require("@jsdevtools/chai-exec"); +var chai = require('chai'); +var expect = chai.expect; +chai.use(chaiExec); + +afterEach(function (done) { + if (this.currentTest.currentRetry() > 0) { + process.stdout.write("."); + setTimeout(done, 1000); + } else { + done(); + } +}); + +const testerPodName = "tester-root-trust-policy"; +before(function (done) { + chaiExec(`kubectl --context ${process.env.CLUSTER1} -n gloo-mesh run --image=alpine/openssl:3.3.1 ${testerPodName} --command --wait=false -- sleep infinity`); + chaiExec(`kubectl --context ${process.env.CLUSTER2} -n gloo-mesh run --image=alpine/openssl:3.3.1 ${testerPodName} --command --wait=false -- sleep infinity`); + done(); +}); +after(function (done) { + chaiExec(`kubectl --context ${process.env.CLUSTER1} -n gloo-mesh delete pod ${testerPodName} --wait=false`); + chaiExec(`kubectl --context ${process.env.CLUSTER2} -n gloo-mesh delete pod ${testerPodName} --wait=false`); + done(); +}); + +describe("Certificate issued by Gloo Mesh", () => { + var expectedOutput = "i:O=gloo-mesh"; + + it('Gloo mesh is the organization for ' + process.env.CLUSTER1 + ' certificate', () => { + let cli = chaiExec(`kubectl --context ${process.env.CLUSTER1} exec -t -n gloo-mesh ${testerPodName} -- openssl s_client -showcerts -connect ratings.bookinfo-backends:9080 -alpn 
istio`); + + expect(cli).stdout.to.contain(expectedOutput); + expect(cli).stderr.not.to.be.empty; + }); + + + it('Gloo mesh is the organization for ' + process.env.CLUSTER2 + ' certificate', () => { + let cli = chaiExec(`kubectl --context ${process.env.CLUSTER2} exec -t -n gloo-mesh ${testerPodName} -- openssl s_client -showcerts -connect ratings.bookinfo-backends:9080 -alpn istio`); + + expect(cli).stdout.to.contain(expectedOutput); + expect(cli).stderr.not.to.be.empty; + }); + +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/root-trust-policy/tests/certificate-issued-by-gloo-mesh.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl apply --context ${CLUSTER1} -f - < ./test.js +const helpers = require('./tests/chai-exec'); + +describe("The productpage service should get responses from cluster1", () => { + const podName = helpers.getOutputForCommand({ command: "kubectl -n bookinfo-frontends get pods -l app=productpage -o jsonpath='{.items[0].metadata.name}' --context " + process.env.CLUSTER1 }).replaceAll("'", ""); + const command = "kubectl -n bookinfo-frontends exec " + podName + " --context " + process.env.CLUSTER1 + " -- python -c \"import requests; r = requests.get('http://reviews.global:9080/reviews/0'); print(r.text)\""; + it('Got a response from cluster1', () => helpers.genericCommand({ command: command, responseContains: "cluster1" })); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/east-west-virtual-destination/tests/reviews-from-cluster1.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +cat <<'EOF' > ./test.js +const helpers = require('./tests/chai-exec'); + +describe("The productpage service should get responses from cluster2", () => { + const podName
= helpers.getOutputForCommand({ command: "kubectl -n bookinfo-frontends get pods -l app=productpage -o jsonpath='{.items[0].metadata.name}' --context " + process.env.CLUSTER1 }).replaceAll("'", ""); + const command = "kubectl -n bookinfo-frontends exec " + podName + " --context " + process.env.CLUSTER1 + " -- python -c \"import requests; r = requests.get('http://reviews.global:9080/reviews/0'); print(r.text)\""; + it('Got a response from cluster2', () => helpers.genericCommand({ command: command, responseContains: "cluster2" })); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/east-west-virtual-destination/tests/reviews-from-cluster2.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl apply --context ${CLUSTER1} -f - < ./test.js +const helpers = require('./tests/chai-exec'); + +describe("The productpage service should get responses from cluster1", () => { + const podName = helpers.getOutputForCommand({ command: "kubectl -n bookinfo-frontends get pods -l app=productpage -o jsonpath='{.items[0].metadata.name}' --context " + process.env.CLUSTER1 }).replaceAll("'", ""); + const command = "kubectl -n bookinfo-frontends exec " + podName + " --context " + process.env.CLUSTER1 + " -- python -c \"import requests; r = requests.get('http://reviews.global:9080/reviews/0'); print(r.text)\""; + it('Got a response from cluster1', () => helpers.genericCommand({ command: command, responseContains: "cluster1" })); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/east-west-virtual-destination/tests/reviews-from-cluster1.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context $CLUSTER1 -n bookinfo-frontends exec deploy/productpage-v1 -- python -c "import
requests; r = requests.get('http://reviews.global:9080/reviews/0'); print(r.text)" +kubectl --context ${CLUSTER1} -n bookinfo-backends scale deploy/reviews-v1 --replicas=0 +kubectl --context ${CLUSTER1} -n bookinfo-backends scale deploy/reviews-v2 --replicas=0 +kubectl --context ${CLUSTER1} -n bookinfo-backends wait --for=jsonpath='{.spec.replicas}'=0 deploy/reviews-v1 +kubectl --context ${CLUSTER1} -n bookinfo-backends wait --for=jsonpath='{.spec.replicas}'=0 deploy/reviews-v2 +cat <<'EOF' > ./test.js +const helpers = require('./tests/chai-exec'); + +describe("The productpage service should get responses from cluster2", () => { + const podName = helpers.getOutputForCommand({ command: "kubectl -n bookinfo-frontends get pods -l app=productpage -o jsonpath='{.items[0].metadata.name}' --context " + process.env.CLUSTER1 }).replaceAll("'", ""); + const command = "kubectl -n bookinfo-frontends exec " + podName + " --context " + process.env.CLUSTER1 + " -- python -c \"import requests; r = requests.get('http://reviews.global:9080/reviews/0'); print(r.text)\""; + it('Got a response from cluster2', () => helpers.genericCommand({ command: command, responseContains: "cluster2" })); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/east-west-virtual-destination/tests/reviews-from-cluster2.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context $CLUSTER1 -n bookinfo-frontends exec deploy/productpage-v1 -- python -c "import requests; r = requests.get('http://reviews.global:9080/reviews/0'); print(r.text)" +kubectl --context ${CLUSTER1} -n bookinfo-backends scale deploy/reviews-v1 --replicas=1 +kubectl --context ${CLUSTER1} -n bookinfo-backends scale deploy/reviews-v2 --replicas=1 +kubectl --context ${CLUSTER1} -n bookinfo-backends wait --for=jsonpath='{.status.readyReplicas}'=1 deploy/reviews-v1 +kubectl --context 
${CLUSTER1} -n bookinfo-backends wait --for=jsonpath='{.status.readyReplicas}'=1 deploy/reviews-v2 +kubectl --context ${CLUSTER1} -n bookinfo-backends patch deploy reviews-v1 --patch '{"spec": {"template": {"spec": {"containers": [{"name": "reviews","command": ["sleep", "20h"]}]}}}}' +kubectl --context ${CLUSTER1} -n bookinfo-backends patch deploy reviews-v2 --patch '{"spec": {"template": {"spec": {"containers": [{"name": "reviews","command": ["sleep", "20h"]}]}}}}' +kubectl --context ${CLUSTER1} -n bookinfo-backends rollout status deploy/reviews-v1 +kubectl --context ${CLUSTER1} -n bookinfo-backends rollout status deploy/reviews-v2 +cat <<'EOF' > ./test.js +const helpers = require('./tests/chai-exec'); + +describe("The productpage service should get responses from cluster2", () => { + const podName = helpers.getOutputForCommand({ command: "kubectl -n bookinfo-frontends get pods -l app=productpage -o jsonpath='{.items[0].metadata.name}' --context " + process.env.CLUSTER1 }).replaceAll("'", ""); + const command = "kubectl -n bookinfo-frontends exec " + podName + " --context " + process.env.CLUSTER1 + " -- python -c \"import requests; r = requests.get('http://reviews.global:9080/reviews/0'); print(r.text)\""; + it('Got a response from cluster2', () => helpers.genericCommand({ command: command, responseContains: "cluster2" })); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/east-west-virtual-destination/tests/reviews-from-cluster2.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context $CLUSTER1 -n bookinfo-frontends exec deploy/productpage-v1 -- python -c "import requests; r = requests.get('http://reviews.global:9080/reviews/0'); print(r.text)" +kubectl --context ${CLUSTER1} -n bookinfo-backends patch deployment reviews-v1 --type json -p '[{"op": "remove", "path": 
"/spec/template/spec/containers/0/command"}]' +kubectl --context ${CLUSTER1} -n bookinfo-backends patch deployment reviews-v2 --type json -p '[{"op": "remove", "path": "/spec/template/spec/containers/0/command"}]' +kubectl --context ${CLUSTER1} -n bookinfo-backends rollout status deploy/reviews-v1 +kubectl --context ${CLUSTER1} -n bookinfo-backends rollout status deploy/reviews-v2 +kubectl --context ${CLUSTER1} -n bookinfo-backends delete virtualdestination reviews +kubectl --context ${CLUSTER1} -n bookinfo-backends delete failoverpolicy failover +kubectl --context ${CLUSTER1} -n bookinfo-backends delete outlierdetectionpolicy outlier-detection +(timeout 2s kubectl --context ${CLUSTER1} -n httpbin rollout status deploy/in-mesh) || (kubectl --context ${CLUSTER1} -n httpbin rollout restart deploy/in-mesh && kubectl --context ${CLUSTER1} -n httpbin rollout status deploy/in-mesh) +cat <<'EOF' > ./test.js +var chai = require('chai'); +var expect = chai.expect; +const helpers = require('./tests/chai-exec'); +describe("Communication allowed", () => { + it("Response code should be 200", () => { + const podName = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n httpbin get pods -l app=not-in-mesh -o jsonpath='{.items[0].metadata.name}'" }).replaceAll("'", ""); + const command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n httpbin debug -i -q " + podName + " --image=curlimages/curl -- curl -s -o /dev/null -w \"%{http_code}\" http://reviews.bookinfo-backends:9080/reviews/0" }).replaceAll("'", ""); + expect(command).to.contain("200"); + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/httpbin/zero-trust/tests/not-in-mesh-to-in-mesh-allowed.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +cat <<'EOF' > ./test.js +var chai = require('chai'); 
+var expect = chai.expect; +const helpers = require('./tests/chai-exec'); +describe("Communication allowed", () => { + it("Response code should be 200", () => { + const podName = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n httpbin get pods -l app=in-mesh -o jsonpath='{.items[0].metadata.name}'" }).replaceAll("'", ""); + const command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n httpbin debug -i -q " + podName + " --image=curlimages/curl -- curl -s -o /dev/null -w \"%{http_code}\" http://reviews.bookinfo-backends:9080/reviews/0" }).replaceAll("'", ""); + expect(command).to.contain("200"); + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/httpbin/zero-trust/tests/in-mesh-to-in-mesh-allowed.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl apply --context ${CLUSTER1} -f - < ./test.js +var chai = require('chai'); +var expect = chai.expect; +const helpers = require('./tests/chai-exec'); +describe("Communication not allowed", () => { + it("Response code shouldn't be 200", () => { + const podName = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n httpbin get pods -l app=not-in-mesh -o jsonpath='{.items[0].metadata.name}'" }).replaceAll("'", ""); + const command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n httpbin debug -i -q " + podName + " --image=curlimages/curl -- curl -s -o /dev/null -w \"%{http_code}\" --max-time 3 http://reviews.bookinfo-backends:9080/reviews/0" }).replaceAll("'", ""); + expect(command).not.to.contain("200"); + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/httpbin/zero-trust/tests/not-in-mesh-to-in-mesh-not-allowed.test.js.liquid" +timeout --signal=INT 3m mocha 
./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +cat <<'EOF' > ./test.js +var chai = require('chai'); +var expect = chai.expect; +const helpers = require('./tests/chai-exec'); +describe("Communication not allowed", () => { + it("Response code shouldn't be 200", () => { + const podName = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n httpbin get pods -l app=in-mesh -o jsonpath='{.items[0].metadata.name}'" }).replaceAll("'", ""); + const command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n httpbin debug -i -q " + podName + " --image=curlimages/curl -- curl -s -o /dev/null -w \"%{http_code}\" --max-time 3 http://reviews.bookinfo-backends:9080/reviews/0" }).replaceAll("'", ""); + expect(command).not.to.contain("200"); + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/httpbin/zero-trust/tests/in-mesh-to-in-mesh-not-allowed.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl apply --context ${CLUSTER1} -f - < ./test.js +var chai = require('chai'); +var expect = chai.expect; +const helpers = require('./tests/chai-exec'); + +describe("Communication status", () => { + + it("Response code shouldn't be 200 accessing ratings", () => { + const command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n bookinfo-frontends exec deploy/productpage-v1 -- python -c \"import requests; r = requests.get('http://ratings.bookinfo-backends:9080/ratings/0', timeout=3); print(r.status_code)\"" }).replaceAll("'", ""); + expect(command).not.to.contain("200"); + }); + + it("Response code should be 200 accessing reviews with GET", () => { + const command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n 
bookinfo-frontends exec deploy/productpage-v1 -- python -c \"import requests; r = requests.get('http://reviews.bookinfo-backends:9080/reviews/0'); print(r.status_code)\"" }).replaceAll("'", ""); + expect(command).to.contain("200"); + }); + + it("Response code should be 403 accessing reviews with HEAD", () => { + const command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n bookinfo-frontends exec deploy/productpage-v1 -- python -c \"import requests; r = requests.head('http://reviews.bookinfo-backends:9080/reviews/0'); print(r.status_code)\"" }).replaceAll("'", ""); + expect(command).to.contain("403"); + }); + + it("Response code should be 200 accessing details", () => { + const command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n bookinfo-frontends exec deploy/productpage-v1 -- python -c \"import requests; r = requests.get('http://details.bookinfo-backends:9080/details/0'); print(r.status_code)\"" }).replaceAll("'", ""); + expect(command).to.contain("200"); + }); +}); + +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/httpbin/zero-trust/tests/bookinfo-access.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl apply --context ${CLUSTER1} -f - < ./test.js +const helpers = require('./tests/chai-exec'); + +describe("kube-prometheus-stack deployments are ready", () => { + it('kube-prometheus-stack-kube-state-metrics pods are ready', () => helpers.checkDeployment({ context: process.env.MGMT, namespace: "monitoring", k8sObj: "kube-prometheus-stack-kube-state-metrics" })); + it('kube-prometheus-stack-grafana pods are ready', () => helpers.checkDeployment({ context: process.env.MGMT, namespace: "monitoring", k8sObj: "kube-prometheus-stack-grafana" })); + it('kube-prometheus-stack-operator pods are ready', () => helpers.checkDeployment({ 
context: process.env.MGMT, namespace: "monitoring", k8sObj: "kube-prometheus-stack-operator" })); +}); + +describe("kube-prometheus-stack daemonset is ready", () => { + it('kube-prometheus-stack-prometheus-node-exporter pods are ready', () => helpers.checkDaemonSet({ context: process.env.MGMT, namespace: "monitoring", k8sObj: "kube-prometheus-stack-prometheus-node-exporter" })); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/gloo-platform-observability/tests/grafana-installed.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +PROD_PROMETHEUS_IP=$(kubectl --context ${MGMT} get svc kube-prometheus-stack-prometheus -n monitoring -o jsonpath='{.status.loadBalancer.ingress[0].ip}') +helm upgrade --install gloo-platform-agent gloo-platform \ + --repo https://storage.googleapis.com/gloo-platform/helm-charts \ + --namespace gloo-mesh \ + --kube-context ${CLUSTER1} \ + --reuse-values \ + --version 2.6.7 \ + --values - < ./test.js +var chai = require('chai'); +var expect = chai.expect; +const helpers = require('./tests/chai-exec'); + +describe("Communication status", () => { + it("Productpage can send requests to httpbin.org", () => { + const command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n bookinfo-frontends exec deploy/productpage-v1 -- python -c \"import requests; r = requests.get('http://httpbin.org/get'); print(r.status_code)\"" }).replaceAll("'", ""); + expect(command).to.contain("200"); + }); +}); + +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/secure-egress/tests/productpage-to-httpbin-allowed.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl apply --context ${MGMT} -f - < ./test.js +var chai = require('chai'); +var expect =
chai.expect; +const helpers = require('./tests/chai-exec'); + +describe("Communication not allowed", () => { + it("Productpage can NOT send requests to httpbin.org", () => { + const command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n bookinfo-frontends exec deploy/productpage-v1 -- python -c \"import requests; r = requests.get('http://httpbin.org/get', timeout=5); print(r.text)\"" }).replaceAll("'", ""); + expect(command).not.to.contain("User-Agent"); + }); +}); + +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/secure-egress/tests/productpage-to-httpbin-not-allowed.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl apply --context ${CLUSTER1} -f - < ./test.js +var chai = require('chai'); +var expect = chai.expect; +const helpers = require('./tests/chai-exec'); + +describe("Communication status", () => { + it("Productpage can send requests to httpbin.org", () => { + const command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n bookinfo-frontends exec deploy/productpage-v1 -- python -c \"import requests; r = requests.get('http://httpbin.org/get'); print(r.status_code)\"" }).replaceAll("'", ""); + expect(command).to.contain("200"); + }); +}); + +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/secure-egress/tests/productpage-to-httpbin-allowed.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl apply --context ${CLUSTER1} -f - < ./test.js +var chai = require('chai'); +var expect = chai.expect; +const helpers = require('./tests/chai-exec'); + +describe("Communication status", () => { + it("Productpage can send GET requests to httpbin.org", () => { + const command = 
helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n bookinfo-frontends exec deploy/productpage-v1 -- python -c \"import requests; r = requests.get('http://httpbin.org/get'); print(r.status_code)\"" }).replaceAll("'", ""); + expect(command).to.contain("200"); + }); + + it("Productpage can't send POST requests to httpbin.org", () => { + const command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.CLUSTER1 + " -n bookinfo-frontends exec deploy/productpage-v1 -- python -c \"import requests; r = requests.post('http://httpbin.org/post'); print(r.status_code)\"" }).replaceAll("'", ""); + expect(command).to.contain("403"); + }); +}); + +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/apps/bookinfo/secure-egress/tests/productpage-to-httpbin-only-get-allowed.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +kubectl --context ${CLUSTER1} -n bookinfo-frontends delete networkpolicy restrict-egress +kubectl --context ${CLUSTER1} -n bookinfo-frontends delete externalservice httpbin +kubectl --context ${CLUSTER1} -n istio-gateways delete accesspolicy allow-get-httpbin diff --git a/gloo-mesh/enterprise/2-6/mgmt-as-workload/default/scripts/configure-domain-rewrite.sh b/gloo-mesh/enterprise/2-6/mgmt-as-workload/default/scripts/configure-domain-rewrite.sh index be6dbd6d8b..d6e684c9da 100755 --- a/gloo-mesh/enterprise/2-6/mgmt-as-workload/default/scripts/configure-domain-rewrite.sh +++ b/gloo-mesh/enterprise/2-6/mgmt-as-workload/default/scripts/configure-domain-rewrite.sh @@ -90,4 +90,4 @@ done # If the loop exits, it means the check failed consistently for 1 minute echo "DNS rewrite rule verification failed." 
-exit 1 +exit 1 \ No newline at end of file diff --git a/gloo-mesh/enterprise/2-6/mgmt-as-workload/default/scripts/register-domain.sh b/gloo-mesh/enterprise/2-6/mgmt-as-workload/default/scripts/register-domain.sh index f9084487e8..1cb84cd86a 100755 --- a/gloo-mesh/enterprise/2-6/mgmt-as-workload/default/scripts/register-domain.sh +++ b/gloo-mesh/enterprise/2-6/mgmt-as-workload/default/scripts/register-domain.sh @@ -14,7 +14,9 @@ hosts_file="/etc/hosts" # Function to check if the input is a valid IP address is_ip() { if [[ $1 =~ ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$ ]]; then - return 0 # 0 = true + return 0 # 0 = true - valid IPv4 address + elif [[ $1 =~ ^[0-9a-f]+[:]+[0-9a-f]*[:]*[0-9a-f]*[:]*[0-9a-f]*[:]*[0-9a-f]*[:]*[0-9a-f]*[:]*[0-9]*$ ]]; then + return 0 # 0 = true - valid IPv6 address else return 1 # 1 = false fi @@ -38,14 +40,15 @@ else fi # Check if the entry already exists -if grep -q "$hostname" "$hosts_file"; then +if grep -q "$hostname\$" "$hosts_file"; then # Update the existing entry with the new IP tempfile=$(mktemp) - sed "s/^.*$hostname/$new_ip $hostname/" "$hosts_file" > "$tempfile" + sed "s/^.*$hostname\$/$new_ip $hostname/" "$hosts_file" > "$tempfile" sudo cp "$tempfile" "$hosts_file" + rm "$tempfile" echo "Updated $hostname in $hosts_file with new IP: $new_ip" else # Add a new entry if it doesn't exist echo "$new_ip $hostname" | sudo tee -a "$hosts_file" > /dev/null echo "Added $hostname to $hosts_file with IP: $new_ip" -fi \ No newline at end of file +fi diff --git a/gloo-mesh/enterprise/2-6/mgmt-as-workload/default/tests/chai-exec.js b/gloo-mesh/enterprise/2-6/mgmt-as-workload/default/tests/chai-exec.js index 67ba62f095..020262437f 100644 --- a/gloo-mesh/enterprise/2-6/mgmt-as-workload/default/tests/chai-exec.js +++ b/gloo-mesh/enterprise/2-6/mgmt-as-workload/default/tests/chai-exec.js @@ -139,7 +139,11 @@ global = { }, k8sObjectIsPresent: ({ context, namespace, k8sType, k8sObj }) => { - let command = "kubectl --context " + context + " -n " + 
namespace + " get " + k8sType + " " + k8sObj + " -o name"; + // covers both namespace scoped and cluster scoped objects + let command = "kubectl --context " + context + " get " + k8sType + " " + k8sObj + " -o name"; + if (namespace) { + command = "kubectl --context " + context + " -n " + namespace + " get " + k8sType + " " + k8sObj + " -o name"; + } debugLog(`Executing command: ${command}`); let cli = chaiExec(command); @@ -176,7 +180,6 @@ global = { debugLog(`Command output (stdout): ${cli.stdout}`); return cli.stdout; }, - curlInPod: ({ curlCommand, podName, namespace }) => { debugLog(`Executing curl command: ${curlCommand} on pod: ${podName} in namespace: ${namespace}`); const cli = chaiExec(curlCommand); diff --git a/gloo-mesh/enterprise/2-6/mgmt-as-workload/default/tests/chai-http.js b/gloo-mesh/enterprise/2-6/mgmt-as-workload/default/tests/chai-http.js index 67f43db003..92bf579690 100644 --- a/gloo-mesh/enterprise/2-6/mgmt-as-workload/default/tests/chai-http.js +++ b/gloo-mesh/enterprise/2-6/mgmt-as-workload/default/tests/chai-http.js @@ -25,7 +25,30 @@ global = { .send() .then(async function (res) { debugLog(`Response status code: ${res.status}`); - expect(res).to.have.status(retCode); + expect(res).to.have.property('status', retCode); + }); + }, + + checkURLWithIP: ({ ip, host, protocol = "http", path = "", headers = [], certFile = '', keyFile = '', retCode }) => { + debugLog(`Checking URL with IP: ${ip}, Host: ${host}, Path: ${path} with expected return code: ${retCode}`); + + let cert = certFile ? fs.readFileSync(certFile) : ''; + let key = keyFile ? 
fs.readFileSync(keyFile) : ''; + + let url = `${protocol}://${ip}`; + + // Use chai-http to make a request to the IP address, but set the Host header + let request = chai.request(url).head(path).redirects(0).cert(cert).key(key).set('Host', host); + + debugLog(`Setting headers: ${JSON.stringify(headers)}`); + headers.forEach(header => request.set(header.key, header.value)); + + return request + .send() + .then(async function (res) { + debugLog(`Response status code: ${res.status}`); + debugLog(`Response ${JSON.stringify(res)}`); + expect(res).to.have.property('status', retCode); }); }, @@ -124,7 +147,7 @@ global = { .send() .then(async function (res) { debugLog(`Response status code: ${res.status}`); - expect(res).to.have.status(retCode); + expect(res).to.have.property('status', retCode); }); } }; diff --git a/gloo-mesh/enterprise/2-6/mgmt-as-workload/default/tests/proxies-changes.test.js.liquid b/gloo-mesh/enterprise/2-6/mgmt-as-workload/default/tests/proxies-changes.test.js.liquid new file mode 100644 index 0000000000..1934ea13b6 --- /dev/null +++ b/gloo-mesh/enterprise/2-6/mgmt-as-workload/default/tests/proxies-changes.test.js.liquid @@ -0,0 +1,58 @@ +{%- assign version_1_18_or_after = "1.18.0" | minimumGlooGatewayVersion %} +const { execSync } = require('child_process'); +const { expect } = require('chai'); +const { diff } = require('jest-diff'); + +function delay(ms) { + return new Promise(resolve => setTimeout(resolve, ms)); +} + +describe('Gloo snapshot stability test', function() { + let contextName = process.env.{{ context | default: "CLUSTER1" }}; + let delaySeconds = {{ delay | default: 5 }}; + + let firstSnapshot; + + it('should retrieve initial snapshot', function() { + const output = execSync( + `kubectl --context ${contextName} -n gloo-system exec deploy/gloo -- wget -O - localhost:{% if version_1_18_or_after %}9095{% else %}9091{% endif %}/snapshots/proxies -q`, + { encoding: 'utf8' } + ); + + try { + firstSnapshot = JSON.parse(output); + } catch 
(err) { + throw new Error('Failed to parse JSON output from initial snapshot: ' + err.message); + } + expect(firstSnapshot).to.be.an('object'); + }); + + it('should not change after the given delay', async function() { + await delay(delaySeconds * 1000); + + let secondSnapshot; + try { + const output2 = execSync( + `kubectl --context ${contextName} -n gloo-system exec deploy/gloo -- wget -O - localhost:{% if version_1_18_or_after %}9095{% else %}9091{% endif %}/snapshots/proxies -q`, + { encoding: 'utf8' } + ); + secondSnapshot = JSON.parse(output2); + } catch (err) { + throw new Error('Failed to retrieve or parse the second snapshot: ' + err.message); + } + + const firstJson = JSON.stringify(firstSnapshot, null, 2); + const secondJson = JSON.stringify(secondSnapshot, null, 2); + + // Show only 2 lines of context around each change + const diffOutput = diff(firstJson, secondJson, { contextLines: 2, expand: false }); + + if (! diffOutput.includes("Compared values have no visual difference.")) { + console.error('Differences found between snapshots:\n' + diffOutput); + throw new Error('Snapshots differ after the delay.'); + } else { + console.log('No differences found. The snapshots are stable.'); + } + }); +}); + diff --git a/gloo-mesh/enterprise/2-6/openshift/default/README.md b/gloo-mesh/enterprise/2-6/openshift/default/README.md index e36203cc17..2cfcd04be6 100644 --- a/gloo-mesh/enterprise/2-6/openshift/default/README.md +++ b/gloo-mesh/enterprise/2-6/openshift/default/README.md @@ -9,29 +9,28 @@ source ./scripts/assert.sh -#
Gloo Mesh Enterprise (2.6.6)
+#
Gloo Mesh Enterprise (2.6.7)
## Table of Contents * [Introduction](#introduction) * [Lab 1 - Deploy the Kubernetes clusters manually](#lab-1---deploy-the-kubernetes-clusters-manually-) -* [Lab 2 - Deploy KinD clusters](#lab-2---deploy-kind-clusters-) -* [Lab 3 - Deploy and register Gloo Mesh](#lab-3---deploy-and-register-gloo-mesh-) -* [Lab 4 - Deploy Istio using Gloo Mesh Lifecycle Manager](#lab-4---deploy-istio-using-gloo-mesh-lifecycle-manager-) -* [Lab 5 - Deploy the Bookinfo demo app](#lab-5---deploy-the-bookinfo-demo-app-) -* [Lab 6 - Deploy the httpbin demo app](#lab-6---deploy-the-httpbin-demo-app-) -* [Lab 7 - Deploy Gloo Mesh Addons](#lab-7---deploy-gloo-mesh-addons-) -* [Lab 8 - Create the gateways workspace](#lab-8---create-the-gateways-workspace-) -* [Lab 9 - Create the bookinfo workspace](#lab-9---create-the-bookinfo-workspace-) -* [Lab 10 - Expose the productpage through a gateway](#lab-10---expose-the-productpage-through-a-gateway-) -* [Lab 11 - Traffic policies](#lab-11---traffic-policies-) -* [Lab 12 - Create the Root Trust Policy](#lab-12---create-the-root-trust-policy-) -* [Lab 13 - Leverage Virtual Destinations for east west communications](#lab-13---leverage-virtual-destinations-for-east-west-communications-) -* [Lab 14 - Zero trust](#lab-14---zero-trust-) -* [Lab 15 - See how Gloo Platform can help with observability](#lab-15---see-how-gloo-platform-can-help-with-observability-) -* [Lab 16 - VM integration with Spire](#lab-16---vm-integration-with-spire-) -* [Lab 17 - Securing the egress traffic](#lab-17---securing-the-egress-traffic-) +* [Lab 2 - Deploy and register Gloo Mesh](#lab-2---deploy-and-register-gloo-mesh-) +* [Lab 3 - Deploy Istio using Gloo Mesh Lifecycle Manager](#lab-3---deploy-istio-using-gloo-mesh-lifecycle-manager-) +* [Lab 4 - Deploy the Bookinfo demo app](#lab-4---deploy-the-bookinfo-demo-app-) +* [Lab 5 - Deploy the httpbin demo app](#lab-5---deploy-the-httpbin-demo-app-) +* [Lab 6 - Deploy Gloo Mesh Addons](#lab-6---deploy-gloo-mesh-addons-) +* [Lab 
7 - Create the gateways workspace](#lab-7---create-the-gateways-workspace-) +* [Lab 8 - Create the bookinfo workspace](#lab-8---create-the-bookinfo-workspace-) +* [Lab 9 - Expose the productpage through a gateway](#lab-9---expose-the-productpage-through-a-gateway-) +* [Lab 10 - Traffic policies](#lab-10---traffic-policies-) +* [Lab 11 - Create the Root Trust Policy](#lab-11---create-the-root-trust-policy-) +* [Lab 12 - Leverage Virtual Destinations for east west communications](#lab-12---leverage-virtual-destinations-for-east-west-communications-) +* [Lab 13 - Zero trust](#lab-13---zero-trust-) +* [Lab 14 - See how Gloo Platform can help with observability](#lab-14---see-how-gloo-platform-can-help-with-observability-) +* [Lab 15 - VM integration with Spire](#lab-15---vm-integration-with-spire-) +* [Lab 16 - Securing the egress traffic](#lab-16---securing-the-egress-traffic-) @@ -105,96 +104,14 @@ kubectl config use-context ${MGMT} -## Lab 2 - Deploy KinD clusters - - -Clone this repository and go to the directory where this `README.md` file is. - -Set the context environment variables: - -```bash -export MGMT=mgmt -export CLUSTER1=cluster1 -export CLUSTER2=cluster2 -``` - -Run the following commands to deploy three Kubernetes clusters using [Kind](https://kind.sigs.k8s.io/): - -```bash -./scripts/deploy-aws-with-calico.sh 1 mgmt -./scripts/deploy-aws-with-calico.sh 2 cluster1 us-west us-west-1 -./scripts/deploy-aws-with-calico.sh 3 cluster2 us-west us-west-2 -``` - -Then run the following commands to wait for all the Pods to be ready: - -```bash -./scripts/check.sh mgmt -./scripts/check.sh cluster1 -./scripts/check.sh cluster2 -``` - -**Note:** If you run the `check.sh` script immediately after the `deploy.sh` script, you may see a jsonpath error. If that happens, simply wait a few seconds and try again. 
- -Once the `check.sh` script completes, when you execute the `kubectl get pods -A` command, you should see the following: - -``` -NAMESPACE NAME READY STATUS RESTARTS AGE -kube-system calico-kube-controllers-59d85c5c84-sbk4k 1/1 Running 0 4h26m -kube-system calico-node-przxs 1/1 Running 0 4h26m -kube-system coredns-6955765f44-ln8f5 1/1 Running 0 4h26m -kube-system coredns-6955765f44-s7xxx 1/1 Running 0 4h26m -kube-system etcd-cluster1-control-plane 1/1 Running 0 4h27m -kube-system kube-apiserver-cluster1-control-plane 1/1 Running 0 4h27m -kube-system kube-controller-manager-cluster1-control-plane1/1 Running 0 4h27m -kube-system kube-proxy-ksvzw 1/1 Running 0 4h26m -kube-system kube-scheduler-cluster1-control-plane 1/1 Running 0 4h27m -local-path-storage local-path-provisioner-58f6947c7-lfmdx 1/1 Running 0 4h26m -metallb-system controller-5c9894b5cd-cn9x2 1/1 Running 0 4h26m -metallb-system speaker-d7jkp 1/1 Running 0 4h26m -``` - -**Note:** The CNI pods might be different, depending on which CNI you have deployed. - -You can see that your currently connected to this cluster by executing the `kubectl config get-contexts` command: - -``` -CURRENT NAME CLUSTER AUTHINFO NAMESPACE - cluster1 kind-cluster1 cluster1 -* cluster2 kind-cluster2 cluster2 - mgmt kind-mgmt kind-mgmt -``` - -Run the following command to make `mgmt` the current cluster. 
- -```bash -kubectl config use-context ${MGMT} -``` - - - - -## Lab 3 - Deploy and register Gloo Mesh +## Lab 2 - Deploy and register Gloo Mesh [VIDEO LINK](https://youtu.be/djfFiepK4GY "Video Link") Before we get started, let's install the `meshctl` CLI: ```bash -export GLOO_MESH_VERSION=v2.6.6 +export GLOO_MESH_VERSION=v2.6.7 curl -sL https://run.solo.io/meshctl/install | sh - export PATH=$HOME/.gloo-mesh/bin:$PATH ``` @@ -227,6 +144,7 @@ EOF echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-and-register-gloo-mesh/tests/environment-variables.test.js.liquid" timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } --> + Run the following commands to deploy the Gloo Mesh management plane: ```bash @@ -238,13 +156,13 @@ helm upgrade --install gloo-platform-crds gloo-platform-crds \ --repo https://storage.googleapis.com/gloo-platform/helm-charts \ --namespace gloo-mesh \ --kube-context ${MGMT} \ - --version 2.6.6 + --version 2.6.7 helm upgrade --install gloo-platform gloo-platform \ --repo https://storage.googleapis.com/gloo-platform/helm-charts \ --namespace gloo-mesh \ --kube-context ${MGMT} \ - --version 2.6.6 \ + --version 2.6.7 \ -f -< + +## Lab 3 - Deploy Istio using Gloo Mesh Lifecycle Manager [VIDEO LINK](https://youtu.be/f76-KOEjqHs "Video Link") We are going to deploy Istio using Gloo Mesh Lifecycle Manager. @@ -1167,7 +1086,7 @@ timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || -## Lab 5 - Deploy the Bookinfo demo app +## Lab 4 - Deploy the Bookinfo demo app [VIDEO LINK](https://youtu.be/nzYcrjalY5A "Video Link") We're going to deploy the bookinfo application to demonstrate several features of Gloo Mesh. 
@@ -1327,7 +1246,7 @@ timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || -## Lab 6 - Deploy the httpbin demo app +## Lab 5 - Deploy the httpbin demo app [VIDEO LINK](https://youtu.be/w1xB-o_gHs0 "Video Link") @@ -1517,7 +1436,7 @@ timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || -## Lab 7 - Deploy Gloo Mesh Addons +## Lab 6 - Deploy Gloo Mesh Addons [VIDEO LINK](https://youtu.be/_rorug_2bk8 "Video Link") To use the Gloo Mesh Gateway advanced features (external authentication, rate limiting, ...), you need to install the Gloo Mesh addons. @@ -1554,7 +1473,7 @@ helm upgrade --install gloo-platform gloo-platform \ --repo https://storage.googleapis.com/gloo-platform/helm-charts \ --namespace gloo-mesh-addons \ --kube-context ${CLUSTER1} \ - --version 2.6.6 \ + --version 2.6.7 \ -f -< +## Lab 7 - Create the gateways workspace [VIDEO LINK](https://youtu.be/QeVBH0eswWw "Video Link") We're going to create a workspace for the team in charge of the Gateways. @@ -1760,7 +1679,7 @@ The Gateway team has decided to import the following from the workspaces that ha -## Lab 9 - Create the bookinfo workspace +## Lab 8 - Create the bookinfo workspace We're going to create a workspace for the team in charge of the Bookinfo application. @@ -1835,7 +1754,7 @@ This is how the environment looks like with the workspaces: -## Lab 10 - Expose the productpage through a gateway +## Lab 9 - Expose the productpage through a gateway [VIDEO LINK](https://youtu.be/emyIu99AOOA "Video Link") In this step, we're going to expose the `productpage` service through the Ingress Gateway using Gloo Mesh. @@ -2104,7 +2023,7 @@ This diagram shows the flow of the request (through the Istio Ingress Gateway): -## Lab 11 - Traffic policies +## Lab 10 - Traffic policies [VIDEO LINK](https://youtu.be/ZBdt8WA0U64 "Video Link") We're going to use Gloo Mesh policies to inject faults and configure timeouts. 
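Fault injection in Gloo Mesh is expressed as a policy resource that targets routes by label. As an illustration only, the sketch below shows the general shape of such a policy; the `apiVersion` and field names are written from memory of the Gloo Platform v2 API and may differ across releases, so check the API reference for your version:

```yaml
# Illustrative sketch, not taken from this workshop's manifests:
# delay 100% of requests on routes labeled fault_injection=true by 2 seconds
apiVersion: resilience.policy.gloo.solo.io/v2
kind: FaultInjectionPolicy
metadata:
  name: ratings-fault-injection
  namespace: bookinfo-frontends
spec:
  applyToRoutes:
  - route:
      labels:
        fault_injection: "true"   # matched against labels on RouteTable routes
  config:
    delay:
      fixedDelay: 2s
      percentage: 100
```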
@@ -2282,7 +2201,7 @@ kubectl --context ${CLUSTER1} -n bookinfo-frontends delete routetable reviews -## Lab 12 - Create the Root Trust Policy +## Lab 11 - Create the Root Trust Policy [VIDEO LINK](https://youtu.be/-A2U2fYYgrU "Video Link") To allow secured (end-to-end mTLS) cross cluster communications, we need to make sure the certificates issued by the Istio control plane on each cluster are signed with intermediate certificates which have a common root CA. @@ -2418,7 +2337,7 @@ timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || -## Lab 13 - Leverage Virtual Destinations for east west communications +## Lab 12 - Leverage Virtual Destinations for east west communications We can create a Virtual Destination which will be composed of the `reviews` services running in both clusters. @@ -2677,7 +2596,7 @@ kubectl --context ${CLUSTER1} -n bookinfo-backends delete outlierdetectionpolicy -## Lab 14 - Zero trust +## Lab 13 - Zero trust [VIDEO LINK](https://youtu.be/BiaBlUaplEs "Video Link") In the previous step, we federated multiple meshes and established a shared root CA for a shared identity domain. @@ -3033,7 +2952,7 @@ kubectl --context ${CLUSTER1} delete accesspolicies -n bookinfo-frontends --all -## Lab 15 - See how Gloo Platform can help with observability +## Lab 14 - See how Gloo Platform can help with observability [VIDEO LINK](https://youtu.be/UhWsk4YnOy0 "Video Link") # Observability with Gloo Platform @@ -3124,7 +3043,7 @@ helm upgrade --install gloo-platform gloo-platform \ --namespace gloo-mesh \ --kube-context ${CLUSTER1} \ --reuse-values \ - --version 2.6.6 \ + --version 2.6.7 \ --values - < +## Lab 15 - VM integration with Spire Let's see how we can configure a VM to be part of the Mesh. 
@@ -3280,7 +3199,7 @@ helm upgrade --install gloo-platform-crds gloo-platform-crds \ --namespace gloo-mesh \ --kube-context ${MGMT} \ --set featureGates.ExternalWorkloads=true \ - --version 2.6.6 \ + --version 2.6.7 \ --reuse-values \ -f -< ```shell -export GLOO_AGENT_URL=https://storage.googleapis.com/gloo-platform/vm/v2.6.6/gloo-workload-agent.deb +export GLOO_AGENT_URL=https://storage.googleapis.com/gloo-platform/vm/v2.6.7/gloo-workload-agent.deb export ISTIO_URL=https://storage.googleapis.com/solo-workshops/istio-binaries/1.23.1/istio-sidecar.deb docker exec vm1 meshctl ew onboard --install \ --attestor token \ @@ -3571,7 +3490,7 @@ docker exec vm1 meshctl ew onboard --install \ --ext-workload virtualmachines/${VM_APP} ``` + Run the following commands to deploy the Gloo Mesh management plane: ```bash @@ -310,7 +295,7 @@ redis: telemetryGateway: enabled: true image: - repository: ${registry}/gloo-mesh/gloo-otel-collector + repository: ${registry}/${otel_collector_image} service: type: LoadBalancer glooUi: @@ -327,7 +312,7 @@ glooUi: registry: ${registry}/gloo-mesh telemetryCollector: image: - repository: ${registry}/gloo-mesh/gloo-otel-collector + repository: ${registry}/${otel_collector_image} enabled: true config: exporters: @@ -467,7 +452,7 @@ glooAgent: registry: ${registry}/gloo-mesh telemetryCollector: image: - repository: ${registry}/gloo-mesh/gloo-otel-collector + repository: ${registry}/${otel_collector_image} enabled: true config: exporters: @@ -524,7 +509,7 @@ glooAgent: registry: ${registry}/gloo-mesh telemetryCollector: image: - repository: ${registry}/gloo-mesh/gloo-otel-collector + repository: ${registry}/${otel_collector_image} enabled: true config: exporters: @@ -591,6 +576,7 @@ timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || + ## Lab 4 - Deploy Istio using Gloo Mesh Lifecycle Manager [VIDEO LINK](https://youtu.be/f76-KOEjqHs "Video Link") @@ -826,7 +812,7 @@ spec: istioOperatorSpec: profile: minimal hub: 
${registry}/istio-workshops - tag: 1.23.1-solo + tag: 1.24.1-patch1-solo-distroless namespace: istio-system values: global: @@ -877,7 +863,7 @@ spec: istioOperatorSpec: profile: empty hub: ${registry}/istio-workshops - tag: 1.23.1-solo + tag: 1.24.1-patch1-solo-distroless values: gateways: istio-ingressgateway: @@ -904,7 +890,7 @@ spec: istioOperatorSpec: profile: empty hub: ${registry}/istio-workshops - tag: 1.23.1-solo + tag: 1.24.1-patch1-solo-distroless values: gateways: istio-ingressgateway: @@ -940,7 +926,7 @@ spec: istioOperatorSpec: profile: minimal hub: ${registry}/istio-workshops - tag: 1.23.1-solo + tag: 1.24.1-patch1-solo-distroless namespace: istio-system values: global: @@ -991,7 +977,7 @@ spec: istioOperatorSpec: profile: empty hub: ${registry}/istio-workshops - tag: 1.23.1-solo + tag: 1.24.1-patch1-solo-distroless values: gateways: istio-ingressgateway: @@ -1018,7 +1004,7 @@ spec: istioOperatorSpec: profile: empty hub: ${registry}/istio-workshops - tag: 1.23.1-solo + tag: 1.24.1-patch1-solo-distroless values: gateways: istio-ingressgateway: @@ -3536,7 +3522,7 @@ echo ```shell export GLOO_AGENT_URL=https://storage.googleapis.com/gloo-platform/vm/v2.7.0-beta1/gloo-workload-agent.deb -export ISTIO_URL=https://storage.googleapis.com/solo-workshops/istio-binaries/1.23.1/istio-sidecar.deb +export ISTIO_URL=https://storage.googleapis.com/solo-workshops/istio-binaries/1.24.1-patch1/istio-sidecar.deb docker exec vm1 meshctl ew onboard --install \ --attestor token \ --join-token ${JOIN_TOKEN} \ @@ -3553,13 +3539,15 @@ docker exec vm1 meshctl ew onboard --install \ ``` + Run the following commands to deploy the Gloo Mesh management plane: ```bash @@ -485,6 +468,7 @@ timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || + ## Lab 3 - Deploy Istio using Gloo Mesh Lifecycle Manager [VIDEO LINK](https://youtu.be/f76-KOEjqHs "Video Link") @@ -719,7 +703,7 @@ spec: istioOperatorSpec: profile: minimal hub: 
us-docker.pkg.dev/gloo-mesh/istio-workshops - tag: 1.23.1-solo + tag: 1.24.1-patch1-solo-distroless namespace: istio-system values: global: @@ -769,7 +753,7 @@ spec: istioOperatorSpec: profile: empty hub: us-docker.pkg.dev/gloo-mesh/istio-workshops - tag: 1.23.1-solo + tag: 1.24.1-patch1-solo-distroless values: gateways: istio-ingressgateway: @@ -796,7 +780,7 @@ spec: istioOperatorSpec: profile: empty hub: us-docker.pkg.dev/gloo-mesh/istio-workshops - tag: 1.23.1-solo + tag: 1.24.1-patch1-solo-distroless values: gateways: istio-ingressgateway: @@ -832,7 +816,7 @@ spec: istioOperatorSpec: profile: minimal hub: us-docker.pkg.dev/gloo-mesh/istio-workshops - tag: 1.23.1-solo + tag: 1.24.1-patch1-solo-distroless namespace: istio-system values: global: @@ -882,7 +866,7 @@ spec: istioOperatorSpec: profile: empty hub: us-docker.pkg.dev/gloo-mesh/istio-workshops - tag: 1.23.1-solo + tag: 1.24.1-patch1-solo-distroless values: gateways: istio-ingressgateway: @@ -909,7 +893,7 @@ spec: istioOperatorSpec: profile: empty hub: us-docker.pkg.dev/gloo-mesh/istio-workshops - tag: 1.23.1-solo + tag: 1.24.1-patch1-solo-distroless values: gateways: istio-ingressgateway: @@ -3388,7 +3372,7 @@ echo ```shell export GLOO_AGENT_URL=https://storage.googleapis.com/gloo-platform/vm/v2.7.0-beta1/gloo-workload-agent.deb -export ISTIO_URL=https://storage.googleapis.com/solo-workshops/istio-binaries/1.23.1/istio-sidecar.deb +export ISTIO_URL=https://storage.googleapis.com/solo-workshops/istio-binaries/1.24.1-patch1/istio-sidecar.deb docker exec vm1 meshctl ew onboard --install \ --attestor token \ --join-token ${JOIN_TOKEN} \ @@ -3405,13 +3389,15 @@ docker exec vm1 meshctl ew onboard --install \ ``` + + + +
+ +
+ +#
Gloo Mesh Enterprise (2.7.0-beta1-2024-11-18-gg-config-distribution-07bf4f3f85)
+ + + +## Table of Contents +* [Introduction](#introduction) +* [Lab 1 - Deploy KinD Cluster(s)](#lab-1---deploy-kind-clusters-) +* [Lab 2 - Deploy and register Gloo Mesh](#lab-2---deploy-and-register-gloo-mesh-) +* [Lab 3 - Deploy Httpbin to cluster1](#lab-3---deploy-httpbin-to-cluster1-) +* [Lab 4 - Deploy Httpbin to cluster2](#lab-4---deploy-httpbin-to-cluster2-) +* [Lab 5 - Deploy Gloo Gateway to cluster1](#lab-5---deploy-gloo-gateway-to-cluster1-) +* [Lab 6 - Deploy Gloo Gateway to cluster2](#lab-6---deploy-gloo-gateway-to-cluster2-) +* [Lab 7 - Distributed configs](#lab-7---distributed-configs-) + + + +## Introduction + +Gloo Mesh Enterprise is a distribution of the [Istio](https://istio.io/) service mesh that is hardened for production support across multicluster hybrid clusters and service meshes. +With Gloo Mesh Enterprise, you get an extensible, open-source based set of API tools to connect and manage your services across multiple clusters and service meshes. +It includes n-4 Istio version support with security patches to address Common Vulnerabilities and Exposures (CVEs), as well as special builds to meet regulatory standards such as Federal Information Processing Standards (FIPS). + +The Gloo Mesh API simplifies the complexity of your service mesh by installing custom resource definitions (CRDs) that you configure. +Then, Gloo Mesh translates these CRDs into Istio resources across your environment, and provides visibility across all of the resources and traffic. +Enterprise features include multitenancy, global failover and routing, observability, and east-west rate limiting and policy enforcement through authorization and authentication plug-ins.
+ +### Gloo Mesh Enterprise overview + +Gloo Mesh Enterprise provides many unique features, including: + +* Upstream-first approach to feature development +* Installation, upgrade, and management across clusters and service meshes +* Advanced features for security, traffic routing, transformations, observability, and more +* End-to-end Istio support and CVE security patching for n-4 versions +* Specialty builds for distroless and FIPS compliance +* 24x7 production support and one-hour Severity 1 SLA +* Portal modules to extend functionality +* Workspaces for simplified multi-tenancy +* Zero-trust architecture for both north-south ingress and east-west service traffic +* Single pane of glass for operational management of Istio, including global observability + +Gloo Mesh Enterprise graph + +### Want to learn more about Gloo Mesh Enterprise? + +You can find more information about Gloo Mesh Enterprise in the official documentation: + + + + +## Lab 1 - Deploy KinD Cluster(s) + + +Clone this repository and go to the directory where this `README.md` file is. + +Set the context environment variables: + +```bash +export MGMT=mgmt +export CLUSTER1=cluster1 +export CLUSTER2=cluster2 +``` + +Deploy the KinD clusters: + +```bash +bash ./data/steps/deploy-kind-clusters/deploy-mgmt.sh +bash ./data/steps/deploy-kind-clusters/deploy-cluster1.sh +bash ./data/steps/deploy-kind-clusters/deploy-cluster2.sh +``` +Then run the following commands to wait for all the Pods to be ready: + +```bash +./scripts/check.sh mgmt +./scripts/check.sh cluster1 +./scripts/check.sh cluster2 +``` + +**Note:** If you run the `check.sh` script immediately after the `deploy.sh` script, you may see a jsonpath error. If that happens, simply wait a few seconds and try again. + +Once the `check.sh` script completes, execute the `kubectl get pods -A` command, and verify that all pods are in a running state.
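The "all pods are in a running state" check can be scripted instead of eyeballed. A minimal sketch, using a made-up sample of `kubectl get pods -A --no-headers` output (on a real cluster you would pipe the actual command instead):

```shell
# Hypothetical sample standing in for `kubectl --context ${MGMT} get pods -A --no-headers`
sample='kube-system          coredns-6955765f44-ln8f5                 1/1   Running   0   4h26m
kube-system          etcd-cluster1-control-plane              1/1   Running   0   4h27m
local-path-storage   local-path-provisioner-58f6947c7-lfmdx   1/1   Running   0   4h26m'

# STATUS is the 4th column; flag anything that is neither Running nor Completed
not_ready=$(printf '%s\n' "$sample" | awk '$4 != "Running" && $4 != "Completed"' | wc -l)
echo "pods not ready: $not_ready"
```

If the count is non-zero, re-run the check after a short wait, as the note above suggests.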
+ + + + + +## Lab 2 - Deploy and register Gloo Mesh +[VIDEO LINK](https://youtu.be/djfFiepK4GY "Video Link") + + +Before we get started, let's install the `meshctl` CLI: + +```bash +export GLOO_MESH_VERSION=v2.7.0-beta1-2024-11-18-gg-config-distribution-07bf4f3f85 +mkdir -p $HOME/.gloo-mesh/bin +curl "https://storage.googleapis.com/gloo-platform-dev/meshctl/$GLOO_MESH_VERSION/meshctl-$(uname | tr '[:upper:]' '[:lower:]')-amd64" > $HOME/.gloo-mesh/bin/meshctl +chmod +x $HOME/.gloo-mesh/bin/meshctl +export PATH=$HOME/.gloo-mesh/bin:$PATH +``` + + +Install the Kubernetes Gateway and the Gloo CRDs in the management plane. + +```bash +kubectl --context ${MGMT} apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.2.0/standard-install.yaml + +helm upgrade -i -n gloo-system \ +--repo https://storage.googleapis.com/solo-public-helm \ + gloo-gateway gloo/gloo \ + --create-namespace \ + --version 1.17.16 \ + --kube-context $CLUSTER1 \ + -f -< + +Then, you need to set the environment variable to tell the Gloo Mesh agents how to communicate with the management plane: + + + +```bash +export ENDPOINT_GLOO_MESH=$(kubectl --context ${MGMT} -n gloo-mesh get svc gloo-mesh-mgmt-server -o jsonpath='{.status.loadBalancer.ingress[0].*}'):9900 +export HOST_GLOO_MESH=$(echo ${ENDPOINT_GLOO_MESH%:*}) +export ENDPOINT_TELEMETRY_GATEWAY=$(kubectl --context ${MGMT} -n gloo-mesh get svc gloo-telemetry-gateway -o jsonpath='{.status.loadBalancer.ingress[0].*}'):4317 +export ENDPOINT_GLOO_MESH_UI=$(kubectl --context ${MGMT} -n gloo-mesh get svc gloo-mesh-ui -o jsonpath='{.status.loadBalancer.ingress[0].*}'):8090 +``` + +Check that the variables have correct values: +``` +echo $HOST_GLOO_MESH +echo $ENDPOINT_GLOO_MESH +``` + + +Finally, you need to register the cluster(s). 
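The jsonpath lookups above return an address of the form `IP:PORT`, and the `${ENDPOINT_GLOO_MESH%:*}` expansion splits off the host. A quick local sanity check of that expansion, using a made-up address in place of a real LoadBalancer ingress:

```shell
# Made-up endpoint standing in for the jsonpath output above
ENDPOINT_GLOO_MESH=172.18.101.1:9900

# '%:*' strips the shortest ':...' suffix (the port); '##*:' strips up to the last ':' (the host)
HOST_GLOO_MESH=${ENDPOINT_GLOO_MESH%:*}
PORT_GLOO_MESH=${ENDPOINT_GLOO_MESH##*:}

echo "host=$HOST_GLOO_MESH port=$PORT_GLOO_MESH"   # → host=172.18.101.1 port=9900
```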
+ + +Here is how you register the first one: + +```bash +kubectl apply --context ${MGMT} -f - < ca.crt +kubectl create secret generic relay-root-tls-secret -n gloo-mesh --context ${CLUSTER1} --from-file ca.crt=ca.crt +rm ca.crt + +kubectl get secret relay-identity-token-secret -n gloo-mesh --context ${MGMT} -o jsonpath='{.data.token}' | base64 -d > token +kubectl create secret generic relay-identity-token-secret -n gloo-mesh --context ${CLUSTER1} --from-file token=token +rm token + +helm upgrade --install gloo-platform-crds gloo-platform-crds \ + --repo https://storage.googleapis.com/gloo-platform-dev/platform-charts/helm-charts \ + --namespace gloo-mesh \ + --kube-context ${CLUSTER1} \ + --version 2.7.0-beta1-2024-11-18-gg-config-distribution-07bf4f3f85 + +helm upgrade --install gloo-platform gloo-platform \ + --repo https://storage.googleapis.com/gloo-platform-dev/platform-charts/helm-charts \ + --namespace gloo-mesh \ + --kube-context ${CLUSTER1} \ + --version 2.7.0-beta1-2024-11-18-gg-config-distribution-07bf4f3f85 \ + -f -< ca.crt +kubectl create secret generic relay-root-tls-secret -n gloo-mesh --context ${CLUSTER2} --from-file ca.crt=ca.crt +rm ca.crt + +kubectl get secret relay-identity-token-secret -n gloo-mesh --context ${MGMT} -o jsonpath='{.data.token}' | base64 -d > token +kubectl create secret generic relay-identity-token-secret -n gloo-mesh --context ${CLUSTER2} --from-file token=token +rm token + +helm upgrade --install gloo-platform-crds gloo-platform-crds \ + --repo https://storage.googleapis.com/gloo-platform-dev/platform-charts/helm-charts \ + --namespace gloo-mesh \ + --kube-context ${CLUSTER2} \ + --version 2.7.0-beta1-2024-11-18-gg-config-distribution-07bf4f3f85 + +helm upgrade --install gloo-platform gloo-platform \ + --repo https://storage.googleapis.com/gloo-platform-dev/platform-charts/helm-charts \ + --namespace gloo-mesh \ + --kube-context ${CLUSTER2} \ + --version 2.7.0-beta1-2024-11-18-gg-config-distribution-07bf4f3f85 \ + -f -< 
./test.js +var chai = require('chai'); +var expect = chai.expect; +const helpers = require('./tests/chai-exec'); +describe("Cluster registration", () => { + it("cluster1 is registered", () => { + podName = helpers.getOutputForCommand({ command: "kubectl -n gloo-mesh get pods -l app=gloo-mesh-mgmt-server -o jsonpath='{.items[0].metadata.name}' --context " + process.env.MGMT }).replaceAll("'", ""); + command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh debug -q -i " + podName + " --image=curlimages/curl -- curl -s http://localhost:9091/metrics" }).replaceAll("'", ""); + expect(command).to.contain("cluster1"); + }); + it("cluster2 is registered", () => { + podName = helpers.getOutputForCommand({ command: "kubectl -n gloo-mesh get pods -l app=gloo-mesh-mgmt-server -o jsonpath='{.items[0].metadata.name}' --context " + process.env.MGMT }).replaceAll("'", ""); + command = helpers.getOutputForCommand({ command: "kubectl --context " + process.env.MGMT + " -n gloo-mesh debug -q -i " + podName + " --image=curlimages/curl -- curl -s http://localhost:9091/metrics" }).replaceAll("'", ""); + expect(command).to.contain("cluster2"); + }); +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/deploy-and-register-gloo-mesh/tests/cluster-registration.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +--> + + + + +## Lab 3 - Deploy Httpbin to cluster1 + + +We're going to deploy the httpbin application to demonstrate several features of Gloo Gateway. + +You can find more information about this application [here](http://httpbin.org/). + +Run the following commands to deploy the httpbin app twice (`httpbin1` and `httpbin2`). 
+ +```bash +kubectl --context ${CLUSTER1} create ns httpbin +kubectl apply --context ${CLUSTER1} -f - < +```shell +kubectl --context ${CLUSTER1} -n httpbin get pods +``` + +Here is the expected output when both Pods are ready: + +```,nocopy +NAME READY STATUS RESTARTS AGE +httpbin1-7fdbf6498-ms7qt 1/1 Running 0 94s +httpbin2-655777b846-6nrms 1/1 Running 0 93s +``` + + + + + + +## Lab 4 - Deploy Httpbin to cluster2 + + +We're going to deploy the httpbin application to demonstrate several features of Gloo Gateway. + +You can find more information about this application [here](http://httpbin.org/). + +Run the following commands to deploy the httpbin app twice (`httpbin1` and `httpbin2`). + +```bash +kubectl --context ${CLUSTER2} create ns httpbin +kubectl apply --context ${CLUSTER2} -f - < +```shell +kubectl --context ${CLUSTER2} -n httpbin get pods +``` + +Here is the expected output when both Pods are ready: + +```,nocopy +NAME READY STATUS RESTARTS AGE +httpbin1-7fdbf6498-ms7qt 1/1 Running 0 94s +httpbin2-655777b846-6nrms 1/1 Running 0 93s +``` + + + + + + +## Lab 5 - Deploy Gloo Gateway to cluster1 + +You can deploy Gloo Gateway with the `glooctl` CLI or declaratively using Helm. + +We're going to use the Helm option. + +Install the Kubernetes Gateway API CRDs as they do not come installed by default on most Kubernetes clusters. + +```bash +kubectl --context $CLUSTER1 apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.2.0/standard-install.yaml +``` + +Next, install Gloo Gateway. This command installs the Gloo Gateway control plane into the namespace `gloo-system`. 
+ +```bash +helm repo add solo-public-helm https://storage.googleapis.com/solo-public-helm + +helm repo update + +helm upgrade -i -n gloo-system \ + gloo-gateway solo-public-helm/gloo \ + --create-namespace \ + --version 1.17.16 \ + --kube-context $CLUSTER1 \ + -f -< +```bash +kubectl --context $CLUSTER1 -n gloo-system get pods +``` + +Here is the expected output: + +```,nocopy +NAME READY STATUS RESTARTS AGE +gateway-certgen-h5z9t 0/1 Completed 0 52s +gateway-proxy-7474c7bf9b-dsvtz 3/3 Running 0 47s +gloo-6b5575f9fc-8f2zs 1/1 Running 0 47s +gloo-resource-rollout-check-4bt5g 0/1 Completed 0 47s +gloo-resource-rollout-h5jf4 0/1 Completed 0 47s +``` + + + + + +## Lab 6 - Deploy Gloo Gateway to cluster2 + +You can deploy Gloo Gateway with the `glooctl` CLI or declaratively using Helm. + +We're going to use the Helm option. + +Install the Kubernetes Gateway API CRDs as they do not come installed by default on most Kubernetes clusters. + +```bash +kubectl --context $CLUSTER2 apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.2.0/standard-install.yaml +``` + +Next, install Gloo Gateway. This command installs the Gloo Gateway control plane into the namespace `gloo-system`. + +```bash +helm repo add solo-public-helm https://storage.googleapis.com/solo-public-helm + +helm repo update + +helm upgrade -i -n gloo-system \ + gloo-gateway solo-public-helm/gloo \ + --create-namespace \ + --version 1.17.16 \ + --kube-context $CLUSTER2 \ + -f -< +```bash +kubectl --context $CLUSTER2 -n gloo-system get pods +``` + +Here is the expected output: + +```,nocopy +NAME READY STATUS RESTARTS AGE +gateway-certgen-h5z9t 0/1 Completed 0 52s +gateway-proxy-7474c7bf9b-dsvtz 3/3 Running 0 47s +gloo-6b5575f9fc-8f2zs 1/1 Running 0 47s +gloo-resource-rollout-check-4bt5g 0/1 Completed 0 47s +gloo-resource-rollout-h5jf4 0/1 Completed 0 47s +``` + + + + + +## Lab 7 - Distributed configs + +In this lab, we will explore the concept of distributed configurations in Gloo Mesh. 
We will demonstrate how Gloo Mesh enables you to manage configurations centrally from the management cluster while distributing them to the Gateways deployed in registered clusters (cluster1 and cluster2 in this case). + +### Prepare Namespaces + +Before we start distributing configuration, let's create a namespace on all three clusters that will contain the centrally-managed gateway resources: + +```bash +kubectl --context $MGMT create ns gloo-gateway-config +kubectl --context $CLUSTER1 create ns gloo-gateway-config +kubectl --context $CLUSTER2 create ns gloo-gateway-config +``` + +Having a dedicated namespace for these resources on workload clusters allows RBAC to be applied to these resources if needed. + +### Deploy a Centrally Managed GatewayClass + +Next, we will deploy a `GatewayClass` named `centrally-managed` in the management cluster. This deployment will automatically create gateways in the workload clusters for any Gateways that use this class. + +```bash +kubectl apply --context ${MGMT} -f - < ./test.js +const helpers = require('./tests/chai-exec'); + +describe("Gateway", () => { + it('should be created in cluster1', () => { + helpers.checkDeploymentHasPod({ context: process.env.CLUSTER1, namespace: "gloo-gateway-config", deployment: "gloo-proxy-generic-gateway-gloo-gateway-config" }); + }) + + it('should be created in cluster2', () => { + helpers.checkDeploymentHasPod({ context: process.env.CLUSTER2, namespace: "gloo-gateway-config", deployment: "gloo-proxy-generic-gateway-gloo-gateway-config" }); + }) +}); +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/distributed-configs/tests/check-gateway.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +--> + +Next apply the HTTPRoute: + +```bash +kubectl apply --context ${MGMT} -f - < ./test.js +const helpers = require('./tests/chai-exec'); + +describe("HTTPRoute", () => { + 
it('should be propagated to cluster1', () => { + return helpers.genericCommand({ + command: `kubectl --context=${process.env.CLUSTER1} get httproutes.gateway.networking.k8s.io -n gloo-gateway-config`, + responseContains: 'httpbin' + }); + }) + + it('should be propagated to cluster2', () => { + return helpers.genericCommand({ + command: `kubectl --context=${process.env.CLUSTER2} get httproutes.gateway.networking.k8s.io -n gloo-gateway-config`, + responseContains: 'httpbin' + }); + }) +}); + +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/distributed-configs/tests/verify-routes-created-in-clusters.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +--> + + +### Deploy Child HTTPRoutes + +Now, let's deploy child `HTTPRoute` resources in the `httpbin` namespace on both `cluster1` and `cluster2`. These child routes will define the actual backend service (`httpbin1`) to which traffic will be routed by the parent route. 
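Before applying the child routes, it can help to see the shape of one in isolation. The sketch below writes a minimal child `HTTPRoute` to a local file so it can be inspected before being applied with `kubectl`; the backend name (`httpbin1`), port (`8000`), and the omission of `parentRefs` are assumptions based on this lab's delegation pattern, so adjust them to match your environment:

```shell
# Sketch only: a minimal child HTTPRoute that a parent route can delegate to.
# The service name (httpbin1), port (8000), and namespace are assumptions
# based on this lab; they are not authoritative.
cat <<'MANIFEST' > /tmp/child-route-sketch.yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: httpbin-child
  namespace: httpbin
spec:
  rules:
    - backendRefs:
        - name: httpbin1
          port: 8000
MANIFEST

# Quick local sanity check before applying it to a cluster:
grep -q 'backendRefs' /tmp/child-route-sketch.yaml && echo "child route sketch OK"
```

Note that the child route carries only the backend-selection logic; the parent route created in the management cluster decides which hostnames and gateways it is attached to.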
+ +```bash +kubectl apply --context ${CLUSTER1} -f - < ./test.js +const httpHelpers = require('./tests/chai-http'); +const execHelpers = require('./tests/chai-exec'); + +describe("httpbin is accessible", () => { + let cluster1 = process.env.CLUSTER1; + let cluster2 = process.env.CLUSTER2; + + let gateway_ip_cluster1 = execHelpers.getOutputForCommand({ command: `kubectl --context ${cluster1} -n gloo-gateway-config get svc gloo-proxy-generic-gateway-gloo-gateway-config -o jsonpath='{.status.loadBalancer.ingress[0].ip}'`}).replaceAll("'", ""); + + let gateway_ip_cluster2 = execHelpers.getOutputForCommand({ command: `kubectl --context ${cluster2} -n gloo-gateway-config get svc gloo-proxy-generic-gateway-gloo-gateway-config -o jsonpath='{.status.loadBalancer.ingress[0].ip}'`}).replaceAll("'", ""); + + it('httpbin is available in cluster1', () => httpHelpers.checkURLWithIP({ ip: gateway_ip_cluster1, host: `httpbin`, path: '/get', retCode: 200 })); + + it('httpbin is available in cluster2', () => httpHelpers.checkURLWithIP({ ip: gateway_ip_cluster2, host: `httpbin`, path: '/get', retCode: 200 })); +}) + +EOF +echo "executing test dist/gloo-mesh-2-0-workshop/build/templates/steps/distributed-configs/tests/check-connectivity-children.test.js.liquid" +timeout --signal=INT 3m mocha ./test.js --timeout 10000 --retries=120 --bail || { DEBUG_MODE=true mocha ./test.js --timeout 120000; exit 1; } +--> + + + + diff --git a/gloo-mesh/enterprise/2-7/distributed-configs/data/.gitkeep b/gloo-mesh/enterprise/2-7/distributed-configs/data/.gitkeep new file mode 100644 index 0000000000..e69de29bb2 diff --git a/gloo-mesh/enterprise/2-7/distributed-configs/data/steps/deploy-kind-clusters/deploy-cluster1.sh b/gloo-mesh/enterprise/2-7/distributed-configs/data/steps/deploy-kind-clusters/deploy-cluster1.sh new file mode 100644 index 0000000000..3fda068282 --- /dev/null +++ b/gloo-mesh/enterprise/2-7/distributed-configs/data/steps/deploy-kind-clusters/deploy-cluster1.sh @@ -0,0 +1,292 @@ 
+#!/usr/bin/env bash +set -o errexit + +number="2" +name="cluster1" +region="" +zone="" +twodigits=$(printf "%02d\n" $number) + +kindest_node=${KINDEST_NODE} + +if [ -z "$kindest_node" ]; then + export k8s_version="1.28.0" + + [[ ${k8s_version::1} != 'v' ]] && export k8s_version=v${k8s_version} + kindest_node_ver=$(curl --silent "https://registry.hub.docker.com/v2/repositories/kindest/node/tags?page_size=100" \ + | jq -r '.results | .[] | select(.name==env.k8s_version) | .name+"@"+.digest') + + if [ -z "$kindest_node_ver" ]; then + echo "Incorrect Kubernetes version provided: ${k8s_version}." + exit 1 + fi + kindest_node=kindest/node:${kindest_node_ver} +fi +echo "Using KinD image: ${kindest_node}" + +if [ -z "$3" ]; then + case $name in + cluster1) + region=us-west-1 + ;; + cluster2) + region=us-west-2 + ;; + *) + region=us-east-1 + ;; + esac +fi + +if [ -z "$4" ]; then + case $name in + cluster1) + zone=us-west-1a + ;; + cluster2) + zone=us-west-2a + ;; + *) + zone=us-east-1a + ;; + esac +fi + +if hostname -I 2>/dev/null; then + myip=$(hostname -I | awk '{ print $1 }') +else + myip=$(ipconfig getifaddr en0) +fi + +# Function to determine the next available cluster number +get_next_cluster_number() { + if ! 
kind get clusters 2>&1 | grep "^kind" > /dev/null; then + echo 1 + else + highest_num=$(kind get clusters | grep "^kind" | tail -1 | cut -c 5-) + echo $((highest_num + 1)) + fi +} + +if [ -f /.dockerenv ]; then +myip=$HOST_IP +container=$(docker inspect $(docker ps -q) | jq -r ".[] | select(.Config.Hostname == \"$HOSTNAME\") | .Name" | cut -d/ -f2) +docker network connect "kind" $container || true +number=$(get_next_cluster_number) +twodigits=$(printf "%02d\n" $number) +fi + +reg_name='kind-registry' +reg_port='5000' +docker start "${reg_name}" 2>/dev/null || \ +docker run -d --restart=always -p "0.0.0.0:${reg_port}:5000" --name "${reg_name}" registry:2 + +cache_port='5000' +cat > registries < ${HOME}/.${cache_name}-config.yml </dev/null || \ +docker run -d --restart=always ${DEPLOY_EXTRA_PARAMS} -v ${HOME}/.${cache_name}-config.yml:/etc/docker/registry/config.yml --name "${cache_name}" registry:2 +done +mkdir -p /tmp/oidc + +cat <<'EOF' >/tmp/oidc/sa-signer-pkcs8.pub +-----BEGIN PUBLIC KEY----- +MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA53YiBcrn7+ZK0Vb4odeA +1riYdvEb8To4H6/HtF+OKzuCIXFQ+bRy7yMrDGITYpfYPrTZOgfdeTLZqOiAj+cL +395nvxdly83SUrdh7ItfOPRluuuiPHnFn111wpyjBw5nut4Kx+M5MksNfA1hU0Zw +zIM9OviX8iEF8xHWUtz4BAMDG8N6+zpLo0pAzaei5hKuLZ9dZOzHBC8VOW82cQMm +5X5uOKsCHMtNSjqYUNB1DxN6xxM+odGWT/6xthPGk6YCxmO28YHPFZfiS2eAIpD8 +2p/16KQKU6TkZSrldkYxiHIPhu+5f9faZJG7dB9pLN1SfdTBio4PK5Mz9muLUCv9 +ywIDAQAB +-----END PUBLIC KEY----- +EOF + +cat <<'EOF' >/tmp/oidc/sa-signer.key +-----BEGIN RSA PRIVATE KEY----- +MIIEpAIBAAKCAQEA53YiBcrn7+ZK0Vb4odeA1riYdvEb8To4H6/HtF+OKzuCIXFQ ++bRy7yMrDGITYpfYPrTZOgfdeTLZqOiAj+cL395nvxdly83SUrdh7ItfOPRluuui +PHnFn111wpyjBw5nut4Kx+M5MksNfA1hU0ZwzIM9OviX8iEF8xHWUtz4BAMDG8N6 ++zpLo0pAzaei5hKuLZ9dZOzHBC8VOW82cQMm5X5uOKsCHMtNSjqYUNB1DxN6xxM+ +odGWT/6xthPGk6YCxmO28YHPFZfiS2eAIpD82p/16KQKU6TkZSrldkYxiHIPhu+5 +f9faZJG7dB9pLN1SfdTBio4PK5Mz9muLUCv9ywIDAQABAoIBAB8tro+RMYUDRHjG +el9ypAxIeWEsQVNRQFYkW4ZUiNYSAgl3Ni0svX6xAg989peFVL+9pLVIcfDthJxY 
+FVlNCjBxyQ/YmwHFC9vQkARJEd6eLUXsj8INtS0ubbp1VxCQRDDL0C/0z7OSoJJh +SwboqjEiTJExA2a+RArmEDTBRzdi3t+kT8G23JcqOivrITt17K6bQYyJXw7/vUdc +r/R+hfd5TqVq92VddzDT7RNJAxsbPPXjGnESlq1GALBDs+uBGYsP0fiEJb2nicSv +z9fBnBeERhut1gcE0C0iLRQZb+3r8TitBtxrZv+0BHgXrkKtXDwWTqGEKOwC4dBn +7nxkH2ECgYEA6+/DOTABGYOWOQftFkJMjcugzDrjoGpuXuVOTb65T+3FHAzU93zy +3bt3wQxrlugluyy9Sc/PL3ck2LgUsPHZ+s7zsdGvvGALBD6bOSSKATz9JgjwifO8 +PgqUz1kXRwez2CtKLOOCFFtcIzEdWIzsa1ubNqLzgN7rD+XBkUc2uEcCgYEA+yTy +72EDMQVoIZOygytHsDNdy0iS2RsBbdurT27wkYuFpFUVWdbNSL+8haE+wJHseHcw +BD4WIMpU+hnS4p4OO8+6V7PiXOS5E/se91EJigZAoixgDUiC8ihojWgK9PYEavUo +hULWbayO59SxYWeUI4Ze0GP8Jw8vdB86ib4ulF0CgYEAgyzRuLjk05+iZODwQyDn +WSquov3W0rh51s7cw0LX2wWSQm8r9NGGYhs5kJ5sLwGxAKj2MNSWF4jBdrCZ6Gr+ +y4BGY0X209/+IAUC3jlfdSLIiF4OBlT6AvB1HfclhvtUVUp0OhLfnpvQ1UwYScRI +KcRLvovIoIzP2g3emfwjAz8CgYEAxUHhOhm1mwRHJNBQTuxok0HVMrze8n1eov39 +0RcvBvJSVp+pdHXdqX1HwqHCmxhCZuAeq8ZkNP8WvZYY6HwCbAIdt5MHgbT4lXQR +f2l8F5gPnhFCpExG5ZLNg/urV3oAQE4stHap21zEpdyOMhZb6Yc5424U+EzaFdgN +b3EcPtUCgYAkKvUlSnBbgiJz1iaN6fuTqH0efavuFGMhjNmG7GtpNXdgyl1OWIuc +Yu+tZtHXtKYf3B99GwPrFzw/7yfDwae5YeWmi2/pFTH96wv3brJBqkAWY8G5Rsmd +qF50p34vIFqUBniNRwSArx8t2dq/CuAMgLAtSjh70Q6ZAnCF85PD8Q== +-----END RSA PRIVATE KEY----- +EOF + +echo Contents of kind${number}.yaml +cat << EOF | tee kind${number}.yaml +kind: Cluster +apiVersion: kind.x-k8s.io/v1alpha4 +nodes: +- role: control-plane + image: ${kindest_node} + extraPortMappings: + - containerPort: 6443 + hostPort: 70${twodigits} + extraMounts: + - containerPath: /etc/kubernetes/oidc + hostPath: /tmp/oidc + labels: + ingress-ready: true + topology.kubernetes.io/region: ${region} + topology.kubernetes.io/zone: ${zone} +networking: + disableDefaultCNI: true + serviceSubnet: "10.$(echo $twodigits | sed 's/^0*//').0.0/16" + podSubnet: "10.1${twodigits}.0.0/16" +kubeadmConfigPatches: +- | + kind: ClusterConfiguration + apiServer: + extraArgs: + service-account-key-file: /etc/kubernetes/pki/sa.pub + service-account-key-file: 
/etc/kubernetes/oidc/sa-signer-pkcs8.pub + service-account-signing-key-file: /etc/kubernetes/oidc/sa-signer.key + service-account-issuer: https://solo-workshop-oidc.s3.us-east-1.amazonaws.com + api-audiences: sts.amazonaws.com + extraVolumes: + - name: oidc + hostPath: /etc/kubernetes/oidc + mountPath: /etc/kubernetes/oidc + readOnly: true + metadata: + name: config +containerdConfigPatches: +- |- + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:${reg_port}"] + endpoint = ["http://${reg_name}:${reg_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"] + endpoint = ["http://docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-docker.pkg.dev"] + endpoint = ["http://us-docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-central1-docker.pkg.dev"] + endpoint = ["http://us-central1-docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."quay.io"] + endpoint = ["http://quay:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"] + endpoint = ["http://gcr:${cache_port}"] +EOF +echo ----------------------------------------------------- + +kind create cluster --name kind${number} --config kind${number}.yaml +ipkind=$(docker inspect kind${number}-control-plane | jq -r '.[0].NetworkSettings.Networks[].IPAddress') +networkkind=$(echo ${ipkind} | awk -F. 
'{ print $1"."$2 }') +kubectl config set-cluster kind-kind${number} --server=https://${myip}:70${twodigits} --insecure-skip-tls-verify=true + +# Preload images +cat << EOF >> images.txt +quay.io/metallb/controller:v0.13.12 +quay.io/metallb/speaker:v0.13.12 +EOF +cat images.txt | while read image; do + docker pull $image || true + kind load docker-image $image --name kind${number} || true +done + +docker network connect "kind" "${reg_name}" || true +docker network connect "kind" docker || true +docker network connect "kind" us-docker || true +docker network connect "kind" us-central1-docker || true +docker network connect "kind" quay || true +docker network connect "kind" gcr || true +# Calico for ipv4 +curl -sL https://raw.githubusercontent.com/projectcalico/calico/v3.28.1/manifests/calico.yaml | sed 's/250m/50m/g' | kubectl --context kind-kind${number} apply -f - + +for i in 1 2 3 4 5; do kubectl --context=kind-kind${number} apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml && break || sleep 15; done +kubectl --context=kind-kind${number} create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)" +kubectl --context=kind-kind${number} -n metallb-system rollout status deploy controller || true + +cat << EOF | tee metallb${number}.yaml +apiVersion: metallb.io/v1beta1 +kind: IPAddressPool +metadata: + name: first-pool + namespace: metallb-system +spec: + addresses: + - ${networkkind}.1${twodigits}.1-${networkkind}.1${twodigits}.254 +--- +apiVersion: metallb.io/v1beta1 +kind: L2Advertisement +metadata: + name: empty + namespace: metallb-system +EOF + +printf "Create IPAddressPool in kind-kind${number}\n" +for i in {1..10}; do +kubectl --context=kind-kind${number} apply -f metallb${number}.yaml && break +sleep 2 +done + +# connect the registry to the cluster network if not already connected +printf "Renaming context kind-kind${number} to ${name}\n" +for i in {1..100}; do 
+ (kubectl config get-contexts -oname | grep ${name}) && break + kubectl config rename-context kind-kind${number} ${name} && break + printf " $i"/100 + sleep 2 + [ $i -lt 100 ] || exit 1 +done + +# Document the local registry +# https://github.com/kubernetes/enhancements/tree/master/keps/sig-cluster-lifecycle/generic/1755-communicating-a-local-registry +cat </dev/null; then + myip=$(hostname -I | awk '{ print $1 }') +else + myip=$(ipconfig getifaddr en0) +fi + +# Function to determine the next available cluster number +get_next_cluster_number() { + if ! kind get clusters 2>&1 | grep "^kind" > /dev/null; then + echo 1 + else + highest_num=$(kind get clusters | grep "^kind" | tail -1 | cut -c 5-) + echo $((highest_num + 1)) + fi +} + +if [ -f /.dockerenv ]; then +myip=$HOST_IP +container=$(docker inspect $(docker ps -q) | jq -r ".[] | select(.Config.Hostname == \"$HOSTNAME\") | .Name" | cut -d/ -f2) +docker network connect "kind" $container || true +number=$(get_next_cluster_number) +twodigits=$(printf "%02d\n" $number) +fi + +reg_name='kind-registry' +reg_port='5000' +docker start "${reg_name}" 2>/dev/null || \ +docker run -d --restart=always -p "0.0.0.0:${reg_port}:5000" --name "${reg_name}" registry:2 + +cache_port='5000' +cat > registries < ${HOME}/.${cache_name}-config.yml </dev/null || \ +docker run -d --restart=always ${DEPLOY_EXTRA_PARAMS} -v ${HOME}/.${cache_name}-config.yml:/etc/docker/registry/config.yml --name "${cache_name}" registry:2 +done +mkdir -p /tmp/oidc + +cat <<'EOF' >/tmp/oidc/sa-signer-pkcs8.pub +-----BEGIN PUBLIC KEY----- +MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA53YiBcrn7+ZK0Vb4odeA +1riYdvEb8To4H6/HtF+OKzuCIXFQ+bRy7yMrDGITYpfYPrTZOgfdeTLZqOiAj+cL +395nvxdly83SUrdh7ItfOPRluuuiPHnFn111wpyjBw5nut4Kx+M5MksNfA1hU0Zw +zIM9OviX8iEF8xHWUtz4BAMDG8N6+zpLo0pAzaei5hKuLZ9dZOzHBC8VOW82cQMm +5X5uOKsCHMtNSjqYUNB1DxN6xxM+odGWT/6xthPGk6YCxmO28YHPFZfiS2eAIpD8 +2p/16KQKU6TkZSrldkYxiHIPhu+5f9faZJG7dB9pLN1SfdTBio4PK5Mz9muLUCv9 +ywIDAQAB +-----END PUBLIC 
KEY----- +EOF + +cat <<'EOF' >/tmp/oidc/sa-signer.key +-----BEGIN RSA PRIVATE KEY----- +MIIEpAIBAAKCAQEA53YiBcrn7+ZK0Vb4odeA1riYdvEb8To4H6/HtF+OKzuCIXFQ ++bRy7yMrDGITYpfYPrTZOgfdeTLZqOiAj+cL395nvxdly83SUrdh7ItfOPRluuui +PHnFn111wpyjBw5nut4Kx+M5MksNfA1hU0ZwzIM9OviX8iEF8xHWUtz4BAMDG8N6 ++zpLo0pAzaei5hKuLZ9dZOzHBC8VOW82cQMm5X5uOKsCHMtNSjqYUNB1DxN6xxM+ +odGWT/6xthPGk6YCxmO28YHPFZfiS2eAIpD82p/16KQKU6TkZSrldkYxiHIPhu+5 +f9faZJG7dB9pLN1SfdTBio4PK5Mz9muLUCv9ywIDAQABAoIBAB8tro+RMYUDRHjG +el9ypAxIeWEsQVNRQFYkW4ZUiNYSAgl3Ni0svX6xAg989peFVL+9pLVIcfDthJxY +FVlNCjBxyQ/YmwHFC9vQkARJEd6eLUXsj8INtS0ubbp1VxCQRDDL0C/0z7OSoJJh +SwboqjEiTJExA2a+RArmEDTBRzdi3t+kT8G23JcqOivrITt17K6bQYyJXw7/vUdc +r/R+hfd5TqVq92VddzDT7RNJAxsbPPXjGnESlq1GALBDs+uBGYsP0fiEJb2nicSv +z9fBnBeERhut1gcE0C0iLRQZb+3r8TitBtxrZv+0BHgXrkKtXDwWTqGEKOwC4dBn +7nxkH2ECgYEA6+/DOTABGYOWOQftFkJMjcugzDrjoGpuXuVOTb65T+3FHAzU93zy +3bt3wQxrlugluyy9Sc/PL3ck2LgUsPHZ+s7zsdGvvGALBD6bOSSKATz9JgjwifO8 +PgqUz1kXRwez2CtKLOOCFFtcIzEdWIzsa1ubNqLzgN7rD+XBkUc2uEcCgYEA+yTy +72EDMQVoIZOygytHsDNdy0iS2RsBbdurT27wkYuFpFUVWdbNSL+8haE+wJHseHcw +BD4WIMpU+hnS4p4OO8+6V7PiXOS5E/se91EJigZAoixgDUiC8ihojWgK9PYEavUo +hULWbayO59SxYWeUI4Ze0GP8Jw8vdB86ib4ulF0CgYEAgyzRuLjk05+iZODwQyDn +WSquov3W0rh51s7cw0LX2wWSQm8r9NGGYhs5kJ5sLwGxAKj2MNSWF4jBdrCZ6Gr+ +y4BGY0X209/+IAUC3jlfdSLIiF4OBlT6AvB1HfclhvtUVUp0OhLfnpvQ1UwYScRI +KcRLvovIoIzP2g3emfwjAz8CgYEAxUHhOhm1mwRHJNBQTuxok0HVMrze8n1eov39 +0RcvBvJSVp+pdHXdqX1HwqHCmxhCZuAeq8ZkNP8WvZYY6HwCbAIdt5MHgbT4lXQR +f2l8F5gPnhFCpExG5ZLNg/urV3oAQE4stHap21zEpdyOMhZb6Yc5424U+EzaFdgN +b3EcPtUCgYAkKvUlSnBbgiJz1iaN6fuTqH0efavuFGMhjNmG7GtpNXdgyl1OWIuc +Yu+tZtHXtKYf3B99GwPrFzw/7yfDwae5YeWmi2/pFTH96wv3brJBqkAWY8G5Rsmd +qF50p34vIFqUBniNRwSArx8t2dq/CuAMgLAtSjh70Q6ZAnCF85PD8Q== +-----END RSA PRIVATE KEY----- +EOF + +echo Contents of kind${number}.yaml +cat << EOF | tee kind${number}.yaml +kind: Cluster +apiVersion: kind.x-k8s.io/v1alpha4 +nodes: +- role: control-plane + image: ${kindest_node} + extraPortMappings: + - containerPort: 6443 + 
hostPort: 70${twodigits} + extraMounts: + - containerPath: /etc/kubernetes/oidc + hostPath: /tmp/oidc + labels: + ingress-ready: true + topology.kubernetes.io/region: ${region} + topology.kubernetes.io/zone: ${zone} +networking: + disableDefaultCNI: true + serviceSubnet: "10.$(echo $twodigits | sed 's/^0*//').0.0/16" + podSubnet: "10.1${twodigits}.0.0/16" +kubeadmConfigPatches: +- | + kind: ClusterConfiguration + apiServer: + extraArgs: + service-account-key-file: /etc/kubernetes/pki/sa.pub + service-account-key-file: /etc/kubernetes/oidc/sa-signer-pkcs8.pub + service-account-signing-key-file: /etc/kubernetes/oidc/sa-signer.key + service-account-issuer: https://solo-workshop-oidc.s3.us-east-1.amazonaws.com + api-audiences: sts.amazonaws.com + extraVolumes: + - name: oidc + hostPath: /etc/kubernetes/oidc + mountPath: /etc/kubernetes/oidc + readOnly: true + metadata: + name: config +containerdConfigPatches: +- |- + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:${reg_port}"] + endpoint = ["http://${reg_name}:${reg_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"] + endpoint = ["http://docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-docker.pkg.dev"] + endpoint = ["http://us-docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-central1-docker.pkg.dev"] + endpoint = ["http://us-central1-docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."quay.io"] + endpoint = ["http://quay:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"] + endpoint = ["http://gcr:${cache_port}"] +EOF +echo ----------------------------------------------------- + +kind create cluster --name kind${number} --config kind${number}.yaml +ipkind=$(docker inspect kind${number}-control-plane | jq -r '.[0].NetworkSettings.Networks[].IPAddress') +networkkind=$(echo ${ipkind} | awk -F. 
'{ print $1"."$2 }') +kubectl config set-cluster kind-kind${number} --server=https://${myip}:70${twodigits} --insecure-skip-tls-verify=true + +# Preload images +cat << EOF >> images.txt +quay.io/metallb/controller:v0.13.12 +quay.io/metallb/speaker:v0.13.12 +EOF +cat images.txt | while read image; do + docker pull $image || true + kind load docker-image $image --name kind${number} || true +done + +docker network connect "kind" "${reg_name}" || true +docker network connect "kind" docker || true +docker network connect "kind" us-docker || true +docker network connect "kind" us-central1-docker || true +docker network connect "kind" quay || true +docker network connect "kind" gcr || true +# Calico for ipv4 +curl -sL https://raw.githubusercontent.com/projectcalico/calico/v3.28.1/manifests/calico.yaml | sed 's/250m/50m/g' | kubectl --context kind-kind${number} apply -f - + +for i in 1 2 3 4 5; do kubectl --context=kind-kind${number} apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml && break || sleep 15; done +kubectl --context=kind-kind${number} create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)" +kubectl --context=kind-kind${number} -n metallb-system rollout status deploy controller || true + +cat << EOF | tee metallb${number}.yaml +apiVersion: metallb.io/v1beta1 +kind: IPAddressPool +metadata: + name: first-pool + namespace: metallb-system +spec: + addresses: + - ${networkkind}.1${twodigits}.1-${networkkind}.1${twodigits}.254 +--- +apiVersion: metallb.io/v1beta1 +kind: L2Advertisement +metadata: + name: empty + namespace: metallb-system +EOF + +printf "Create IPAddressPool in kind-kind${number}\n" +for i in {1..10}; do +kubectl --context=kind-kind${number} apply -f metallb${number}.yaml && break +sleep 2 +done + +# connect the registry to the cluster network if not already connected +printf "Renaming context kind-kind${number} to ${name}\n" +for i in {1..100}; do 
+ (kubectl config get-contexts -oname | grep ${name}) && break + kubectl config rename-context kind-kind${number} ${name} && break + printf " $i"/100 + sleep 2 + [ $i -lt 100 ] || exit 1 +done + +# Document the local registry +# https://github.com/kubernetes/enhancements/tree/master/keps/sig-cluster-lifecycle/generic/1755-communicating-a-local-registry +cat </dev/null; then + myip=$(hostname -I | awk '{ print $1 }') +else + myip=$(ipconfig getifaddr en0) +fi + +# Function to determine the next available cluster number +get_next_cluster_number() { + if ! kind get clusters 2>&1 | grep "^kind" > /dev/null; then + echo 1 + else + highest_num=$(kind get clusters | grep "^kind" | tail -1 | cut -c 5-) + echo $((highest_num + 1)) + fi +} + +if [ -f /.dockerenv ]; then +myip=$HOST_IP +container=$(docker inspect $(docker ps -q) | jq -r ".[] | select(.Config.Hostname == \"$HOSTNAME\") | .Name" | cut -d/ -f2) +docker network connect "kind" $container || true +number=$(get_next_cluster_number) +twodigits=$(printf "%02d\n" $number) +fi + +reg_name='kind-registry' +reg_port='5000' +docker start "${reg_name}" 2>/dev/null || \ +docker run -d --restart=always -p "0.0.0.0:${reg_port}:5000" --name "${reg_name}" registry:2 + +cache_port='5000' +cat > registries < ${HOME}/.${cache_name}-config.yml </dev/null || \ +docker run -d --restart=always ${DEPLOY_EXTRA_PARAMS} -v ${HOME}/.${cache_name}-config.yml:/etc/docker/registry/config.yml --name "${cache_name}" registry:2 +done +mkdir -p /tmp/oidc + +cat <<'EOF' >/tmp/oidc/sa-signer-pkcs8.pub +-----BEGIN PUBLIC KEY----- +MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA53YiBcrn7+ZK0Vb4odeA +1riYdvEb8To4H6/HtF+OKzuCIXFQ+bRy7yMrDGITYpfYPrTZOgfdeTLZqOiAj+cL +395nvxdly83SUrdh7ItfOPRluuuiPHnFn111wpyjBw5nut4Kx+M5MksNfA1hU0Zw +zIM9OviX8iEF8xHWUtz4BAMDG8N6+zpLo0pAzaei5hKuLZ9dZOzHBC8VOW82cQMm +5X5uOKsCHMtNSjqYUNB1DxN6xxM+odGWT/6xthPGk6YCxmO28YHPFZfiS2eAIpD8 +2p/16KQKU6TkZSrldkYxiHIPhu+5f9faZJG7dB9pLN1SfdTBio4PK5Mz9muLUCv9 +ywIDAQAB +-----END PUBLIC 
KEY----- +EOF + +cat <<'EOF' >/tmp/oidc/sa-signer.key +-----BEGIN RSA PRIVATE KEY----- +MIIEpAIBAAKCAQEA53YiBcrn7+ZK0Vb4odeA1riYdvEb8To4H6/HtF+OKzuCIXFQ ++bRy7yMrDGITYpfYPrTZOgfdeTLZqOiAj+cL395nvxdly83SUrdh7ItfOPRluuui +PHnFn111wpyjBw5nut4Kx+M5MksNfA1hU0ZwzIM9OviX8iEF8xHWUtz4BAMDG8N6 ++zpLo0pAzaei5hKuLZ9dZOzHBC8VOW82cQMm5X5uOKsCHMtNSjqYUNB1DxN6xxM+ +odGWT/6xthPGk6YCxmO28YHPFZfiS2eAIpD82p/16KQKU6TkZSrldkYxiHIPhu+5 +f9faZJG7dB9pLN1SfdTBio4PK5Mz9muLUCv9ywIDAQABAoIBAB8tro+RMYUDRHjG +el9ypAxIeWEsQVNRQFYkW4ZUiNYSAgl3Ni0svX6xAg989peFVL+9pLVIcfDthJxY +FVlNCjBxyQ/YmwHFC9vQkARJEd6eLUXsj8INtS0ubbp1VxCQRDDL0C/0z7OSoJJh +SwboqjEiTJExA2a+RArmEDTBRzdi3t+kT8G23JcqOivrITt17K6bQYyJXw7/vUdc +r/R+hfd5TqVq92VddzDT7RNJAxsbPPXjGnESlq1GALBDs+uBGYsP0fiEJb2nicSv +z9fBnBeERhut1gcE0C0iLRQZb+3r8TitBtxrZv+0BHgXrkKtXDwWTqGEKOwC4dBn +7nxkH2ECgYEA6+/DOTABGYOWOQftFkJMjcugzDrjoGpuXuVOTb65T+3FHAzU93zy +3bt3wQxrlugluyy9Sc/PL3ck2LgUsPHZ+s7zsdGvvGALBD6bOSSKATz9JgjwifO8 +PgqUz1kXRwez2CtKLOOCFFtcIzEdWIzsa1ubNqLzgN7rD+XBkUc2uEcCgYEA+yTy +72EDMQVoIZOygytHsDNdy0iS2RsBbdurT27wkYuFpFUVWdbNSL+8haE+wJHseHcw +BD4WIMpU+hnS4p4OO8+6V7PiXOS5E/se91EJigZAoixgDUiC8ihojWgK9PYEavUo +hULWbayO59SxYWeUI4Ze0GP8Jw8vdB86ib4ulF0CgYEAgyzRuLjk05+iZODwQyDn +WSquov3W0rh51s7cw0LX2wWSQm8r9NGGYhs5kJ5sLwGxAKj2MNSWF4jBdrCZ6Gr+ +y4BGY0X209/+IAUC3jlfdSLIiF4OBlT6AvB1HfclhvtUVUp0OhLfnpvQ1UwYScRI +KcRLvovIoIzP2g3emfwjAz8CgYEAxUHhOhm1mwRHJNBQTuxok0HVMrze8n1eov39 +0RcvBvJSVp+pdHXdqX1HwqHCmxhCZuAeq8ZkNP8WvZYY6HwCbAIdt5MHgbT4lXQR +f2l8F5gPnhFCpExG5ZLNg/urV3oAQE4stHap21zEpdyOMhZb6Yc5424U+EzaFdgN +b3EcPtUCgYAkKvUlSnBbgiJz1iaN6fuTqH0efavuFGMhjNmG7GtpNXdgyl1OWIuc +Yu+tZtHXtKYf3B99GwPrFzw/7yfDwae5YeWmi2/pFTH96wv3brJBqkAWY8G5Rsmd +qF50p34vIFqUBniNRwSArx8t2dq/CuAMgLAtSjh70Q6ZAnCF85PD8Q== +-----END RSA PRIVATE KEY----- +EOF + +echo Contents of kind${number}.yaml +cat << EOF | tee kind${number}.yaml +kind: Cluster +apiVersion: kind.x-k8s.io/v1alpha4 +nodes: +- role: control-plane + image: ${kindest_node} + extraPortMappings: + - containerPort: 6443 + 
hostPort: 70${twodigits} + extraMounts: + - containerPath: /etc/kubernetes/oidc + hostPath: /tmp/oidc + labels: + ingress-ready: true + topology.kubernetes.io/region: ${region} + topology.kubernetes.io/zone: ${zone} +networking: + disableDefaultCNI: true + serviceSubnet: "10.$(echo $twodigits | sed 's/^0*//').0.0/16" + podSubnet: "10.1${twodigits}.0.0/16" +kubeadmConfigPatches: +- | + kind: ClusterConfiguration + apiServer: + extraArgs: + service-account-key-file: /etc/kubernetes/pki/sa.pub + service-account-key-file: /etc/kubernetes/oidc/sa-signer-pkcs8.pub + service-account-signing-key-file: /etc/kubernetes/oidc/sa-signer.key + service-account-issuer: https://solo-workshop-oidc.s3.us-east-1.amazonaws.com + api-audiences: sts.amazonaws.com + extraVolumes: + - name: oidc + hostPath: /etc/kubernetes/oidc + mountPath: /etc/kubernetes/oidc + readOnly: true + metadata: + name: config +containerdConfigPatches: +- |- + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:${reg_port}"] + endpoint = ["http://${reg_name}:${reg_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"] + endpoint = ["http://docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-docker.pkg.dev"] + endpoint = ["http://us-docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."us-central1-docker.pkg.dev"] + endpoint = ["http://us-central1-docker:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."quay.io"] + endpoint = ["http://quay:${cache_port}"] + [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"] + endpoint = ["http://gcr:${cache_port}"] +EOF +echo ----------------------------------------------------- + +kind create cluster --name kind${number} --config kind${number}.yaml +ipkind=$(docker inspect kind${number}-control-plane | jq -r '.[0].NetworkSettings.Networks[].IPAddress') +networkkind=$(echo ${ipkind} | awk -F. 
'{ print $1"."$2 }') +kubectl config set-cluster kind-kind${number} --server=https://${myip}:70${twodigits} --insecure-skip-tls-verify=true + +# Preload images +cat << EOF >> images.txt +quay.io/metallb/controller:v0.13.12 +quay.io/metallb/speaker:v0.13.12 +EOF +cat images.txt | while read image; do + docker pull $image || true + kind load docker-image $image --name kind${number} || true +done + +docker network connect "kind" "${reg_name}" || true +docker network connect "kind" docker || true +docker network connect "kind" us-docker || true +docker network connect "kind" us-central1-docker || true +docker network connect "kind" quay || true +docker network connect "kind" gcr || true +# Calico for ipv4 +curl -sL https://raw.githubusercontent.com/projectcalico/calico/v3.28.1/manifests/calico.yaml | sed 's/250m/50m/g' | kubectl --context kind-kind${number} apply -f - + +for i in 1 2 3 4 5; do kubectl --context=kind-kind${number} apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml && break || sleep 15; done +kubectl --context=kind-kind${number} create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)" +kubectl --context=kind-kind${number} -n metallb-system rollout status deploy controller || true + +cat << EOF | tee metallb${number}.yaml +apiVersion: metallb.io/v1beta1 +kind: IPAddressPool +metadata: + name: first-pool + namespace: metallb-system +spec: + addresses: + - ${networkkind}.1${twodigits}.1-${networkkind}.1${twodigits}.254 +--- +apiVersion: metallb.io/v1beta1 +kind: L2Advertisement +metadata: + name: empty + namespace: metallb-system +EOF + +printf "Create IPAddressPool in kind-kind${number}\n" +for i in {1..10}; do +kubectl --context=kind-kind${number} apply -f metallb${number}.yaml && break +sleep 2 +done + +# connect the registry to the cluster network if not already connected +printf "Renaming context kind-kind${number} to ${name}\n" +for i in {1..100}; do 
+ (kubectl config get-contexts -oname | grep ${name}) && break + kubectl config rename-context kind-kind${number} ${name} && break + printf " $i"/100 + sleep 2 + [ $i -lt 100 ] || exit 1 +done + +# Document the local registry +# https://github.com/kubernetes/enhancements/tree/master/keps/sig-cluster-lifecycle/generic/1755-communicating-a-local-registry +cat <