
workload-variant-autoscaler/workload-variant-autoscaler 0.5.0

Helm chart for Workload-Variant-Autoscaler (WVA) - GPU-aware autoscaler for LLM inference workloads

Install from the command line
$ docker pull ghcr.io/llm-d-incubation/workload-variant-autoscaler/workload-variant-autoscaler:0.5.0
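Since this package is a Helm chart published as an OCI artifact, it can likely also be installed with Helm directly rather than pulled with Docker. A minimal sketch, assuming the release name and namespace (neither is specified on this page):

```shell
# Install the WVA chart from GHCR as an OCI artifact.
# The release name "wva" and the namespace are placeholders, not
# values taken from the package page.
helm install wva \
  oci://ghcr.io/llm-d-incubation/workload-variant-autoscaler/workload-variant-autoscaler \
  --version 0.5.0 \
  --namespace workload-variant-autoscaler-system \
  --create-namespace
```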

Recent tagged image versions

  • Published about 2 months ago · Digest sha256:5676493d7336755f061ff1207aa5f1e2155b1c10409e6a6e038f340aa068f3c5 · 189 version downloads
  • Published 2 months ago · Digest sha256:55e54791d59697b923307718dde79bcb5217a8b47920267167540d3ec0fe23e7 · 8 version downloads
  • Published 3 months ago · Digest sha256:6e286b954fa2079e07865fa2f52e6e899d9ae2d887b9c2c49479366df4b4bdfc · 138 version downloads
  • Published 4 months ago · Digest sha256:ba0f9f96bcc5ab192c856a06cbaee7cd32705fff3eecea3a7c0d366fd8afd60a · 58 version downloads
  • Published 4 months ago · Digest sha256:02b2c64eb8f8fb5fe485187e46578a0581ed24d006decb4f4b2ed2f38bef930f · 91 version downloads
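To pin an exact build rather than a mutable tag, the image can be pulled by one of the digests listed above (a sketch using the most recently published digest from this page):

```shell
# Pull the image by content digest instead of tag, so the exact
# bytes are reproducible even if the 0.5.0 tag is later re-pushed.
docker pull ghcr.io/llm-d-incubation/workload-variant-autoscaler/workload-variant-autoscaler@sha256:5676493d7336755f061ff1207aa5f1e2155b1c10409e6a6e038f340aa068f3c5
```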


Last published: 2 months ago

Total downloads: 715