## Summary
vLLM has released v0.15.1 with critical security fixes for two CVEs. The current pin `d7de043d55d1dd629554467e23874097e1c48993` is vulnerable.
## Security Issues Fixed
- CVE-2025-69223: Updated aiohttp dependency (#33621)
- CVE-2026-0994: Updated Protobuf dependency (#33619)
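As a quick sanity check before and after the upgrade, you can audit which versions of the patched dependencies are installed in an environment. This is an illustrative helper, not part of the vLLM release:

```python
# Illustrative audit of the dependencies patched in v0.15.1:
# aiohttp (CVE-2025-69223) and protobuf (CVE-2026-0994).
from importlib import metadata


def installed_version(pkg: str) -> str:
    """Return the installed version of pkg, or 'not installed'."""
    try:
        return metadata.version(pkg)
    except metadata.PackageNotFoundError:
        return "not installed"


for pkg in ("aiohttp", "protobuf"):
    print(f"{pkg}: {installed_version(pkg)}")
```

Run this inside the container image to confirm the patched versions actually land after the bump.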
## Additional Improvements in v0.15.1
### Hardware Support
- RTX Blackwell (SM120): Fixed NVFP4 MoE kernel support
- FP8 kernel selection: Fixed CUTLASS group GEMM fallback on SM120
### Performance
- torch.compile cold-start: Fixed regression (Llama3-70B: ~88s → ~22s)
- MoE forward pass: Optimized layer name computation caching
### Dependencies
- LMCache pinned to >= v0.3.9 (we are on v0.3.13, which is compatible)
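The compatibility claim above can be checked mechanically. A minimal sketch with naive dotted-version parsing (no pre-release handling; real tooling should use `packaging.version`):

```python
# Naive version comparison for the LMCache pin check above.
# Assumes plain "vX.Y.Z" strings with integer components.
def parse_version(tag: str) -> tuple[int, ...]:
    """Turn 'v0.3.13' into (0, 3, 13) for tuple comparison."""
    return tuple(int(part) for part in tag.lstrip("v").split("."))


required = parse_version("v0.3.9")   # vLLM v0.15.1's minimum pin
current = parse_version("v0.3.13")   # version we ship
print(current >= required)  # True: v0.3.13 satisfies >= v0.3.9
```

Tuple comparison handles the 13 > 9 case correctly, which naive string comparison ("0.3.13" < "0.3.9") would get wrong.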
## Impact
**CRITICAL** - Security vulnerabilities in dependencies (aiohttp, Protobuf) require immediate upgrade.
## Files Affected
According to docs/upstream-versions.md:
- `docker/Dockerfile.cuda`, line 68 (`VLLM_COMMIT_SHA`)
## Recommended Action
1. Update `docker/Dockerfile.cuda` line 68 from `ARG VLLM_COMMIT_SHA=d7de043d55d1dd629554467e23874097e1c48993` to `ARG VLLM_COMMIT_SHA=1892993bc18e243e2c05841314c5e9c06a80c70d` (SHA for the v0.15.1 tag)
2. Test the container build
3. Run E2E tests to ensure compatibility
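The pin bump can be scripted. A sketch, assuming GNU sed; the SHAs are taken from this issue and the `bump_pin` helper name is ours:

```shell
# Sketch of step 1: bump the pinned vLLM commit in docker/Dockerfile.cuda.
OLD_SHA=d7de043d55d1dd629554467e23874097e1c48993
NEW_SHA=1892993bc18e243e2c05841314c5e9c06a80c70d

bump_pin() {
  # $1 = Dockerfile path, $2 = old SHA, $3 = new SHA.
  # Rewrites the VLLM_COMMIT_SHA build arg in place (GNU sed;
  # on macOS/BSD use `sed -i ''`).
  sed -i "s/^ARG VLLM_COMMIT_SHA=${2}\$/ARG VLLM_COMMIT_SHA=${3}/" "$1"
}

# From the repo root:
# bump_pin docker/Dockerfile.cuda "$OLD_SHA" "$NEW_SHA"
```

Before applying, the new SHA can be cross-checked against the tag with `git ls-remote https://github.com/vllm-project/vllm refs/tags/v0.15.1`.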
## Upstream References
- Release: https://github.com/vllm-project/vllm/releases/tag/v0.15.1
- Full Changelog: vllm-project/vllm@v0.15.0...v0.15.1
- CVE fixes: #33621, #33619
Generated by Upstream Dependency Monitor