Commit affba38 (parent: 3ba5600)

update wva image after repo graduate (llm-d#833)

Signed-off-by: Mohammed Abdi <mohammed.munir.abdi@ibm.com>

5 files changed: 7 additions, 7 deletions


Makefile (1 addition, 1 deletion)

```diff
@@ -1,7 +1,7 @@
 # Image URL to use all building/pushing image targets
 IMAGE_TAG_BASE ?= ghcr.io/llm-d
 IMG_TAG ?= latest
-IMG ?= $(IMAGE_TAG_BASE)/workload-variant-autoscaler:$(IMG_TAG)
+IMG ?= $(IMAGE_TAG_BASE)/llm-d-workload-variant-autoscaler:$(IMG_TAG)
 KIND_ARGS ?= -t mix -n 3 -g 2 # Default: 3 nodes, 2 GPUs per node, mixed vendors
 CLUSTER_GPU_TYPE ?= nvidia-mix
 CLUSTER_NODES ?= 3
```
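With the Makefile defaults above and no overrides, the renamed image reference expands as in this shell sketch (the variable names mirror the Makefile; this is illustrative, not part of the change):

```shell
# Sketch: how the Makefile defaults expand after the rename
# (assumes IMAGE_TAG_BASE and IMG_TAG are not overridden)
IMAGE_TAG_BASE=ghcr.io/llm-d
IMG_TAG=latest
IMG="${IMAGE_TAG_BASE}/llm-d-workload-variant-autoscaler:${IMG_TAG}"
echo "$IMG"
# → ghcr.io/llm-d/llm-d-workload-variant-autoscaler:latest
```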

deploy/install.sh (1 addition, 1 deletion)

```diff
@@ -246,7 +246,7 @@ Examples:
   $(basename "$0")

   # Deploy with custom WVA image
-  IMG=<your_registry>/workload-variant-autoscaler:tag $(basename "$0")
+  IMG=<your_registry>/llm-d-workload-variant-autoscaler:tag $(basename "$0")

   # Deploy with custom model and accelerator
   $(basename "$0") -m unsloth/Meta-Llama-3.1-8B -a A100
```

deploy/kind-emulator/README.md (1 addition, 1 deletion)

````diff
@@ -293,7 +293,7 @@ KIND_IMAGE_PLATFORM=linux/arm64 make deploy-wva-emulated-on-kind CREATE_CLUSTER=
 Alternatively, build the image locally and deploy with `IfNotPresent` so the script skips the registry pull and loads your local single-platform image:

 ```bash
-make docker-build IMG=ghcr.io/llm-d/workload-variant-autoscaler:latest
+make docker-build IMG=ghcr.io/llm-d/llm-d-workload-variant-autoscaler:latest
 WVA_IMAGE_PULL_POLICY=IfNotPresent make deploy-wva-emulated-on-kind CREATE_CLUSTER=true DEPLOY_LLM_D=true
 ```

````

deploy/kubernetes/README.md (2 additions, 2 deletions)

````diff
@@ -183,7 +183,7 @@ make deploy-wva-on-k8s

 ```bash
 export HF_TOKEN="hf_xxxxx"
-export IMG="ghcr.io/yourorg/workload-variant-autoscaler:latest"
+export IMG="ghcr.io/yourorg/llm-d-workload-variant-autoscaler:latest"
 make deploy-wva-on-k8s
 ```

@@ -657,7 +657,7 @@ kubectl set env deployment/workload-variant-autoscaler-controller-manager \
 ### Update WVA Image

 ```bash
-export WVA_IMAGE="ghcr.io/yourorg/workload-variant-autoscaler:custom-tag"
+export WVA_IMAGE="ghcr.io/yourorg/llm-d-workload-variant-autoscaler:custom-tag"
 export DEPLOY_LLM_D=false # Don't redeploy llm-d
 export DEPLOY_PROMETHEUS=false # Don't redeploy Prometheus
 make deploy-wva-on-k8s
````

docs/user-guide/installation.md (2 additions, 2 deletions)

````diff
@@ -28,7 +28,7 @@ Using kustomize for more control:
 make install

 # Deploy the controller
-make deploy IMG=quay.io/llm-d/workload-variant-autoscaler:latest
+make deploy IMG=quay.io/llm-d/llm-d-workload-variant-autoscaler:latest
 ```

 ### Option 3: Local Development (Kind Emulator):
@@ -44,7 +44,7 @@ Key configuration options:
 ```yaml
 # custom-values.yaml
 image:
-  repository: quay.io/llm-d/workload-variant-autoscaler
+  repository: quay.io/llm-d/llm-d-workload-variant-autoscaler
   tag: latest
   pullPolicy: IfNotPresent

````
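Since the rename touches image references across several files, a repo-wide search can confirm no stale references to the old name remain. A minimal sketch (assumes it runs from a checkout of the repository; the file globs are illustrative):

```shell
# Sketch: look for leftover references to the old image name.
# The pattern requires 'llm-d/' immediately followed by 'workload',
# so the new name (.../llm-d/llm-d-workload-variant-autoscaler) is not matched.
if grep -rn "llm-d/workload-variant-autoscaler" . \
    --include='Makefile' --include='*.sh' --include='*.md' --include='*.yaml'; then
  echo "stale references found"
else
  echo "no stale references"
fi
```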
