feat: vLLM latest (0.15.0) production metrics#348

Open
changminbark wants to merge 1 commit into kubernetes-sigs:main from changminbark:vllm-v1-prod-metrics

Conversation

@changminbark
Contributor

PR Template

What type of PR is this?

Uncomment only one /kind <> line, hit enter to put that in a new line, and remove leading whitespaces from that line:

/kind api-change
/kind bug
/kind cleanup
/kind design
/kind documentation
/kind failing-test

/kind feature

/kind flake

What this PR does / why we need it:
This PR updates the vLLM production metrics and introduces the new metrics from the latest release (v0.15.0).

Which issue(s) this PR fixes:

Fixes #323

Special notes for your reviewer:

Does this PR introduce a user-facing change?:

The vLLM production metrics were updated to reflect the latest metrics in vLLM v0.15.0.

Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:


Testing

Testing was done with the default config.yml.

Click to expand functional test output
pdm run python3 inference_perf/main.py -c config.yml
None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
2026-02-13 10:13:36,033 - inference_perf.config - INFO - Using configuration from: config.yml
2026-02-13 10:13:36,038 - inference_perf.config - INFO - Benchmarking with the following config:

api:
  type: completion
  streaming: true
  headers: null
data:
  type: shareGPT
  path: null
  input_distribution: null
  output_distribution: null
  shared_prefix: null
  trace: null
load:
  type: constant
  interval: 1.0
  stages:
  - !!python/object:inference_perf.config.StandardLoadStage
    __dict__:
      rate: 1.0
      duration: 30
      num_requests: null
      concurrency_level: null
    __pydantic_extra__: null
    __pydantic_fields_set__: !!set
      rate: null
      duration: null
    __pydantic_private__: null
  sweep: null
  num_workers: 16
  worker_max_concurrency: 100
  worker_max_tcp_connections: 2500
  trace: null
  circuit_breakers: []
  request_timeout: null
  lora_traffic_split: null
metrics:
  type: prometheus
  prometheus:
    url: http://localhost:9090
    scrape_interval: 15
report:
  request_lifecycle:
    summary: true
    per_stage: true
    per_request: false
    per_adapter: true
    per_adapter_stage: false
    percentiles:
    - 0.1
    - 1.0
    - 5.0
    - 10.0
    - 25.0
    - 50.0
    - 75.0
    - 90.0
    - 95.0
    - 99.0
    - 99.9
  prometheus:
    summary: true
    per_stage: false
storage:
  local_storage:
    path: reports-20260213-101334
    report_file_prefix: null
  google_cloud_storage: null
  simple_storage_service: null
server:
  type: vllm
  model_name: HuggingFaceTB/SmolLM2-135M-Instruct
  base_url: http://0.0.0.0:8000
  ignore_eos: true
tokenizer:
  pretrained_model_name_or_path: HuggingFaceTB/SmolLM2-135M-Instruct
circuit_breakers: null


2026-02-13 10:13:36,038 - inference_perf.client.filestorage.local - INFO - Report files will be stored at: reports-20260213-101334
2026-02-13 10:14:02,803 - inference_perf.loadgen.load_generator - INFO - Stage 0 - run started
Stage 0 progress: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1.0/1.0 [00:32<00:00, 32.03s/it]
2026-02-13 10:14:34,879 - inference_perf.loadgen.load_generator - INFO - Stage 0 - run completed
2026-02-13 10:14:35,882 - inference_perf.reportgen.base - INFO - Generating Reports...
2026-02-13 10:14:52,956 - inference_perf.client.metricsclient.prometheus_client.base - WARNING - Metric metadata is not present for metric: num_requests_swapped. Skipping this metric.
2026-02-13 10:14:53,040 - inference_perf.client.filestorage.local - INFO - Report saved to: reports-20260213-101334/summary_lifecycle_metrics.json
2026-02-13 10:14:53,041 - inference_perf.client.filestorage.local - INFO - Report saved to: reports-20260213-101334/stage_0_lifecycle_metrics.json
2026-02-13 10:14:53,041 - inference_perf.client.filestorage.local - INFO - Report saved to: reports-20260213-101334/summary_prometheus_metrics.json
2026-02-13 10:14:53,043 - inference_perf.client.filestorage.local - INFO - Report saved to: reports-20260213-101334/config.yaml

config.yaml
stage_0_lifecycle_metrics.json
summary_lifecycle_metrics.json
summary_prometheus_metrics.json
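The `num_requests_swapped` warning above comes from a metric that vLLM V1 no longer exports. Before benchmarking, it can be useful to check which metric names a server's `/metrics` endpoint actually exposes; a minimal sketch is below, run against a hypothetical sample payload (the metric names and label values shown are illustrative, not taken from this PR):

```python
# Minimal sketch: extract metric names from a Prometheus text-exposition
# payload, e.g. to check which vLLM metrics a server actually exports.
# SAMPLE is a hypothetical /metrics excerpt for illustration only.

SAMPLE = """\
# HELP vllm:num_requests_running Number of requests currently running.
# TYPE vllm:num_requests_running gauge
vllm:num_requests_running{model_name="HuggingFaceTB/SmolLM2-135M-Instruct"} 1.0
# HELP vllm:num_requests_waiting Number of requests waiting to be processed.
# TYPE vllm:num_requests_waiting gauge
vllm:num_requests_waiting{model_name="HuggingFaceTB/SmolLM2-135M-Instruct"} 0.0
"""

def exported_metric_names(exposition: str) -> set:
    """Return the set of metric names in a Prometheus text-format payload."""
    names = set()
    for line in exposition.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        # The metric name ends at '{' (labels) or at whitespace (value).
        names.add(line.split("{", 1)[0].split(None, 1)[0])
    return names

names = exported_metric_names(SAMPLE)
print(sorted(names))
# A V1 server should no longer export the removed swap metric:
print("vllm:num_requests_swapped" in names)  # prints False for this sample
```

If a metric named in the benchmark config is absent from this set, the metrics client will skip it, as in the warning above.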

Note: The failed inputs in the test are due to the inputs being too large for the model (vLLM was modified to use a smaller max model length).
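For reference, the `metrics.prometheus` section of the config above (`url: http://localhost:9090`, `scrape_interval: 15`) implies queries against the standard Prometheus HTTP API. A hedged sketch of how such a range query could be constructed is below; the helper name and timestamps are hypothetical, but the `/api/v1/query_range` endpoint and its `query`/`start`/`end`/`step` parameters are the standard Prometheus API:

```python
# Hypothetical sketch: build a Prometheus /api/v1/query_range URL for one
# vLLM gauge, matching the prometheus settings in the config above.
from urllib.parse import urlencode

def range_query_url(base_url: str, promql: str, start: float, end: float, step: int) -> str:
    """Assemble a Prometheus range-query URL (illustrative helper)."""
    params = urlencode({"query": promql, "start": start, "end": end, "step": step})
    return f"{base_url}/api/v1/query_range?{params}"

url = range_query_url(
    "http://localhost:9090",
    'vllm:num_requests_running{model_name="HuggingFaceTB/SmolLM2-135M-Instruct"}',
    start=1765600000, end=1765600030, step=15,
)
print(url)
```

Here `step` is set to the configured `scrape_interval` of 15 seconds so each resolution step aligns with one scrape.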

@k8s-ci-robot k8s-ci-robot added the kind/feature Categorizes issue or PR as related to a new feature. label Feb 13, 2026
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: changminbark
Once this PR has been reviewed and has the lgtm label, please assign arangogutierrez for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. label Feb 13, 2026
@k8s-ci-robot k8s-ci-robot added the size/XL Denotes a PR that changes 500-999 lines, ignoring generated files. label Feb 13, 2026

Development

Successfully merging this pull request may close these issues.

vllm V1 production metrics

2 participants