% make deploy-wva-emulated-on-kind
>>> Deploying workload-variant-autoscaler (cluster args: -t mix -n 3 -g 2 , image: ghcr.io/llm-d/llm-d-workload-variant-autoscaler:latest)
KIND=kind KUBECTL=kubectl IMG=ghcr.io/llm-d/llm-d-workload-variant-autoscaler:latest DEPLOY_LLM_D=false ENVIRONMENT=kind-emulator CREATE_CLUSTER=false CLUSTER_GPU_TYPE=nvidia-mix CLUSTER_NODES=3 CLUSTER_GPUS=4 MULTI_MODEL_TESTING= NAMESPACE_SCOPED=false SCALER_BACKEND=prometheus-adapter \
deploy/install.sh
[INFO] Detected IMG environment variable: ghcr.io/llm-d/llm-d-workload-variant-autoscaler:latest
[INFO] Starting Workload-Variant-Autoscaler Deployment on kind-emulator
[INFO] ===========================================================
[INFO] Checking prerequisites...
[SUCCESS] All generic prerequisites tools met
[INFO] Setting TLS verification...
[INFO] Emulated environment detected - enabling TLS skip verification for self-signed certificates
[SUCCESS] Successfully set TLS verification to: true
[INFO] Setting WVA logging level...
[INFO] Development environment - using debug logging
[SUCCESS] WVA logging level set to: debug
[INFO] Loading environment-specific functions for kind-emulator...
[INFO] Checking prerequisites...
[SUCCESS] All generic prerequisites tools met
[INFO] Checking Kubernetes-specific prerequisites...
[INFO] Cluster creation skipped (CREATE_CLUSTER=false)
[SUCCESS] Using KIND cluster 'kind-wva-gpu-cluster'
[INFO] Loading WVA image 'ghcr.io/llm-d/llm-d-workload-variant-autoscaler:latest' into KIND cluster...
[INFO] Pulling single-platform image for KIND (platform=linux/amd64) to avoid load errors...
latest: Pulling from llm-d/llm-d-workload-variant-autoscaler
a570852fae1a: Pull complete
b4e6f1bfce0a: Pull complete
b4242723c53f: Pull complete
52630fc75a18: Pull complete
d6b1b89eccac: Pull complete
ebddc55facdc: Pull complete
2780920e5dbf: Pull complete
bdfd7f7e5bf6: Pull complete
3214acf345c0: Pull complete
7c12895b777b: Pull complete
dd64bf2dd177: Pull complete
fa8ae93e2b3a: Pull complete
c172f21841df: Pull complete
b839dfae01f6: Pull complete
0218703b6314: Download complete
Digest: sha256:5cae572f2acf18bd84c01f57956db10cb975e39452ebdfbd2ca3b98047a32fa5
Status: Downloaded newer image for ghcr.io/llm-d/llm-d-workload-variant-autoscaler:latest
ghcr.io/llm-d/llm-d-workload-variant-autoscaler:latest
[SUCCESS] Pulled image 'ghcr.io/llm-d/llm-d-workload-variant-autoscaler:latest' (platform=linux/amd64)
Image: "ghcr.io/llm-d/llm-d-workload-variant-autoscaler:latest" with ID "sha256:5cae572f2acf18bd84c01f57956db10cb975e39452ebdfbd2ca3b98047a32fa5" not yet present on node "kind-wva-gpu-cluster-control-plane", loading...
Image: "ghcr.io/llm-d/llm-d-workload-variant-autoscaler:latest" with ID "sha256:5cae572f2acf18bd84c01f57956db10cb975e39452ebdfbd2ca3b98047a32fa5" not yet present on node "kind-wva-gpu-cluster-worker", loading...
Image: "ghcr.io/llm-d/llm-d-workload-variant-autoscaler:latest" with ID "sha256:5cae572f2acf18bd84c01f57956db10cb975e39452ebdfbd2ca3b98047a32fa5" not yet present on node "kind-wva-gpu-cluster-worker2", loading...
ERROR: command "docker save -o /private/var/folders/fp/4p_qynps32sg30hs3m3bs5pc0000gn/T/images-tar2493314935/images.tar ghcr.io/llm-d/llm-d-workload-variant-autoscaler:latest" failed with error: exit status 1
Command Output: Error response from daemon: unable to create manifests file: NotFound: content digest sha256:5988b2a57fbf5f12f0294e6ce7ebc5b14424b89d6a1f6d7b8bd5849fc56339c1: not found
make: *** [deploy-wva-emulated-on-kind] Error 1
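
A note on the failure above: `kind load` shells out to `docker save`, and that step fails with `content digest … not found` when the local tag resolves to a multi-platform manifest list whose per-platform blobs are only partially present locally (note that the missing digest `sha256:5988b2…` is not the image ID `sha256:5cae57…` that was pulled). One possible workaround — a sketch, assuming Docker's containerd image store is the cause and using the cluster name `kind-wva-gpu-cluster` taken from the node names in the log — is to drop the tag, re-pull only the single target platform, and retry the load:

```shell
# Remove the partially-present multi-arch manifest referenced by the tag.
docker rmi ghcr.io/llm-d/llm-d-workload-variant-autoscaler:latest

# Re-pull only the platform the KIND nodes need, so docker save has
# every blob it will try to export.
docker pull --platform linux/amd64 \
  ghcr.io/llm-d/llm-d-workload-variant-autoscaler:latest

# Retry the load into the cluster (name inferred from the node names above).
kind load docker-image --name kind-wva-gpu-cluster \
  ghcr.io/llm-d/llm-d-workload-variant-autoscaler:latest
```

If the error persists, re-running `make deploy-wva-emulated-on-kind` after the manual pull should pick up the now-complete local image; whether `--platform` fully prunes the stale manifest depends on the Docker version and image store in use.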