
Improvements to the launcher-based tests #319

Merged
MikeSpreitzer merged 1 commit into llm-d-incubation:main from waltforme:e2e
Mar 6, 2026

Conversation

@waltforme
Collaborator

This small PR handles some of the open comments of #264.

Signed-off-by: Jun Duan <jun.duan.phd@outlook.com>
Copilot AI review requested due to automatic review settings March 6, 2026 13:52
Contributor

Copilot AI left a comment


Pull request overview

This PR updates the launcher-based E2E tests (added in #264) to be less flaky and to align the test object definitions with current launcher/vLLM expectations.

Changes:

  • Increased readiness wait timeouts for requester pods in the “multiple instances” and “switch instances” launcher-based scenarios.
  • Removed per-pod assertions that the requester pod is using the expected dual-pods.llm-d.ai/inference-server-config annotation after patching the ReplicaSet.
  • Removed VLLM_USE_V1 from the InferenceServerConfig env vars created by the launcher-based test object generator.

Reviewed changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 5 comments.

File                              Description
test/e2e/run-launcher-based.sh    Adjusts waits and removes assertions around inference-server-config selection during the multi-instance and switch-instance scenarios.
test/e2e/mkobjs.sh                Updates the generated InferenceServerConfig objects by removing VLLM_USE_V1 from the env vars.

 # Wait for requester to be ready (launcher should already be ready)
 date
-kubectl wait --for condition=Ready pod/$reqlb3 --timeout=30s
+kubectl wait --for condition=Ready pod/$reqlb3 --timeout=120s

Copilot AI Mar 6, 2026


The test no longer verifies that the newly created requester pod is actually using the patched dual-pods.llm-d.ai/inference-server-config value (isc2). Without an assertion on the pod annotation (or an equivalent check proving a 2nd instance was created), this section can pass even if the ReplicaSet patch didn’t take effect, reducing the test’s ability to catch regressions.

Suggested change
 kubectl wait --for condition=Ready pod/$reqlb3 --timeout=120s
+# Verify requester is using the patched inference server config (isc2)
+expect '[ "$(kubectl get pod $reqlb3 -o jsonpath={.metadata.annotations.dual-pods\\.llm-d\\.ai/inference-server-config})" == "$isc2" ]'

Collaborator


Silly LLM, why didn't you suggest testing with an if statement?

I do not think that this script needs to check whether the ReplicaSet controller behaved properly.
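
For the record, an if-based variant of the check Copilot suggested might look roughly like the sketch below (reusing the script's $reqlb3 and $isc2 variables; as noted above, the script does not actually need it):

# Hypothetical alternative to the suggested `expect` assertion: fail the script
# directly if the requester pod is not annotated with the patched config (isc2).
got=$(kubectl get pod "$reqlb3" \
  -o jsonpath='{.metadata.annotations.dual-pods\.llm-d\.ai/inference-server-config}')
if [ "$got" != "$isc2" ]; then
  echo "requester $reqlb3 uses inference-server-config '$got', expected '$isc2'" >&2
  exit 1
fi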

Comment on lines 321 to 324
 # Wait for requester to be ready (launcher should already be ready)
 date
-kubectl wait --for condition=Ready pod/$reqlb4 --timeout=30s
+kubectl wait --for condition=Ready pod/$reqlb4 --timeout=120s
 kubectl wait --for condition=Ready pod/$launcherlb --timeout=5s

Copilot AI Mar 6, 2026


This scenario patches the ReplicaSet back to the original inference server config, but the test no longer asserts that the new requester pod actually has the expected dual-pods.llm-d.ai/inference-server-config annotation (or otherwise proves the first instance was re-selected). Without that, this can become a false-positive if the patch doesn’t apply or the wrong instance is used.

Comment on lines 18 to 21
 env_vars:
   VLLM_SERVER_DEV_MODE: "1"
-  VLLM_USE_V1: "1"
   VLLM_LOGGING_LEVEL: "DEBUG"
   VLLM_CPU_KVCACHE_SPACE: "1" # GiB

Copilot AI Mar 6, 2026


VLLM_USE_V1 was removed from the InferenceServerConfig env vars here, but it’s still used in other repo examples (e.g., docs/e2e-recipe.md and .github/workflows/ci-e2e-openshift.yaml). This makes the launcher-based E2E objects diverge from documented/CI configurations and can make failures harder to reproduce. Consider keeping this env var (or updating the docs/workflows and adding a short note explaining why it’s no longer needed).
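
If someone does follow up on this, a quick search from the repo root along the lines below (paths taken from the comment above) should surface the remaining references Copilot mentions:

# Hypothetical follow-up: list remaining mentions of the env var in docs and CI.
grep -rn "VLLM_USE_V1" docs/ .github/workflows/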

Comment on lines 38 to 41
 env_vars:
   VLLM_SERVER_DEV_MODE: "1"
-  VLLM_USE_V1: "1"
   VLLM_LOGGING_LEVEL: "DEBUG"
   VLLM_CPU_KVCACHE_SPACE: "1" # GiB

Copilot AI Mar 6, 2026


Same as above: removing VLLM_USE_V1 here diverges from the repo’s documented/CI InferenceServerConfig examples. If the intent is to standardize on a default vLLM mode, consider updating the docs/workflows accordingly so users and CI run the same configuration.

Comment on lines 58 to 61
 env_vars:
   VLLM_SERVER_DEV_MODE: "1"
-  VLLM_USE_V1: "1"
   VLLM_LOGGING_LEVEL: "DEBUG"
   VLLM_CPU_KVCACHE_SPACE: "1" # GiB

Copilot AI Mar 6, 2026


Same as above: VLLM_USE_V1 removal changes the runtime configuration for this test InferenceServerConfig compared to other repo examples. Please either keep it for consistency or update the other references so launcher-based tests match the expected vLLM configuration.

Collaborator

@MikeSpreitzer MikeSpreitzer left a comment


Looks good to me.

@MikeSpreitzer MikeSpreitzer merged commit 4ee72f6 into llm-d-incubation:main Mar 6, 2026
34 checks passed
@waltforme waltforme deleted the e2e branch March 6, 2026 17:55