EAGLE3 automation triage chart for new models #1417

Open
yeyu-nvidia wants to merge 7 commits into main from yeyu/eagle3-launcher-examples-new-models

Conversation

Contributor

@yeyu-nvidia yeyu-nvidia commented May 8, 2026

Summary

This PR delivers the EAGLE3 automation triage work (OKR-30): testing the 4-step EAGLE3 offline pipeline against 10 new models to reveal failure modes, and providing a living triage document for ongoing tracking.

Contents

New script:

  • tools/launcher/common/eagle3/dump_offline_data_hf.sh — HF-based (device_map=auto) hidden-state extraction for models not supported by TRT-LLM (VLMs, custom-code models, architectures absent from TRT-LLM); a minimal sketch follows below
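
For orientation, here is a minimal sketch of what device_map=auto hidden-state extraction looks like with plain transformers. It is illustrative only (the function name and single-prompt flow are assumptions); the actual script additionally handles VLMs and custom-code models:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer


# Illustrative sketch, not the actual compute_hidden_states_hf.py:
# extract per-token hidden states from a plain causal LM, letting
# accelerate shard the model across all visible GPUs.
def dump_hidden_states(model_path: str, prompt: str):
    tok = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(
        model_path,
        device_map="auto",  # shard layers across available GPUs
        torch_dtype="auto",
        trust_remote_code=True,
    )
    inputs = tok(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        out = model(**inputs, output_hidden_states=True)
    # Tuple of (num_layers + 1) tensors, each [batch, seq_len, hidden_size].
    return out.hidden_states
```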

12 new hf_offline_eagle3.yaml launcher configs:

| Model | Size | Dump script |
| --- | --- | --- |
| DeepSeek/DeepSeek-V3.2 | 685B MoE, 2 nodes | TRT-LLM |
| GLM/GLM-5 | 744B MoE, 2 nodes bench | TRT-LLM |
| MiniMax/MiniMax-M2.5 | 230B MoE | HF |
| Mistral/Ministral-3-8B | 8B dense | HF |
| Mistral/Ministral-3-14B | 14B dense | HF |
| MoonshotAI/Kimi-K2.5 | 1T MoE, MLA | TRT-LLM |
| NVIDIA/Kimi-K2.5-NVFP4 | NVFP4 quant; tasks 1-2 use BF16 | TRT-LLM |
| OpenAI/GPT-OSS-20B | 20B dense | HF |
| Qwen/Qwen3.5-9B | 9B VLM | HF |
| Qwen/Qwen3.5-27B | 27B VLM | HF |
| Qwen/Qwen3.5-35B-A3B | 35B MoE | HF |
| StepFun/Step-3.5-Flash | 197B MoE, SWA | HF |

EAGLE3_TRIAGE.md — living triage document with:

  • Mermaid decision-tree covering all failure modes per pipeline step
  • Model test result table (7 models tested on OCI-HSG cluster, 3 pending)
  • 6 documented root-cause issues with fix status (1 already fixed)
  • Update instructions for adding new models as they are tested

Failure modes found (7 models on OCI-HSG, 2026-04-15)

| # | Issue | Affected | Status |
| --- | --- | --- | --- |
| 1 | dump_offline_data_vllm.sh missing — universal task_1 failure | All 7 | Fixed in this PR via _hf.sh |
| 2 | offline_training.sh HF Hub upload on local path | All 7 | Fixed in sandbox MR b838b171 |
| 3 | Task 0 time limit exceeded | 5/7 models | Partially addressed (afterany deps) |
| 4 | GPT-OSS-20B tiktoken cache not populated | GPT-OSS-20B | Open |
| 5 | trust_remote_code not forwarded to benchmark | MiniMax-M2.5 | Open |
| 6 | DeepSeek-V3.2 task_1 OOM (signal 15) | DeepSeek-V3.2 | Open |

Test plan

  • Dry-run configs: `uv run launch.py --yaml examples/<org>/<model>/hf_offline_eagle3.yaml --dryrun --yes -v`
  • Verify EAGLE3_TRIAGE.md Mermaid diagram renders in GitHub
  • Update triage table as Kimi-K2.5, GLM-5, Kimi-K2.5-NVFP4 are tested

🤖 Generated with Claude Code

Summary by CodeRabbit

  • New Features

    • Added many EAGLE3 offline speculative-decoding pipeline configs for multiple models.
    • Added launchers for dumping hidden states via HuggingFace and vLLM workflows.
    • Added a vLLM-based hidden-states extraction utility.
  • Documentation

    • Added an EAGLE3 triage guide with decision tree, known-issues catalog, and update templates.
  • Bug Fix / Behavior

    • Sandbox pipelines with allow_to_fail now let downstream tasks proceed when earlier steps fail.


yeyu-nvidia and others added 3 commits May 7, 2026 11:51
Add hf_offline_eagle3.yaml configs in tools/launcher/examples/ for:
- DeepSeek/DeepSeek-V3.2 (685B MoE, 2 nodes, TP=8)
- GLM/GLM-5 (744B MoE, 2 nodes bench, TP=4/EP=2)
- MiniMax/MiniMax-M2.5 (230B MoE, TP=4/EP=4)
- Mistral/Ministral-3-8B (8B dense, TP=4)
- Mistral/Ministral-3-14B (14B dense, TP=4)
- MoonshotAI/Kimi-K2.5 (1T MoE, TP=4/EP=1)
- NVIDIA/Kimi-K2.5-NVFP4 (NVFP4 quant; tasks 1-2 use BF16 base)
- OpenAI/GPT-OSS-20B (20B dense, TP=4)
- Qwen/Qwen3.5-27B (27B dense VLM, TP=4)
- Qwen/Qwen3.5-35B-A3B (35B MoE, TP=4/EP=4)
- Qwen/Qwen3.5-9B (9B dense VLM, TP=4)
- StepFun/Step-3.5-Flash (197B MoE with SWA, TP=4/EP=4)

Each config follows the standard 4-step EAGLE3 offline pipeline:
query → dump hidden states → train draft head → benchmark.
Uses public slurm_factory and common/ script paths.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Signed-off-by: Ye Yu <yeyu@nvidia.com>
Adds tools/launcher/examples/EAGLE3_TRIAGE.md, a living document for
tracking EAGLE3 pipeline failure modes across new models.

Content:
- Mermaid decision-tree diagram mapping each pipeline step (query →
  dump → train → benchmark) to known failure modes and root causes
- Model test result table with pass/fail/timeout status for 7 models
  tested on the OCI-HSG cluster (3 remaining to be tested)
- 6 documented issues with symptoms, affected models, root causes,
  and fix recommendations:
  1. dump_offline_data_vllm.sh missing (universal, all models)
  2. offline_training.sh HF Hub upload bug (universal, all models)
  3. Task 0 time limit exceeded (5 models)
  4. GPT-OSS-20B tiktoken cache missing (model-specific)
  5. trust_remote_code not passed to benchmark (MiniMax-M2.5)
  6. DeepSeek-V3.2 task_1 OOM (model-specific)
- Update instructions for adding new models or failure modes

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Signed-off-by: Ye Yu <yeyu@nvidia.com>
From the nmm-okr30 sandbox MR (b838b171):
- Add common/eagle3/dump_offline_data_hf.sh: HF-based (device_map=auto)
  hidden state extraction for models not supported by TRT-LLM. Handles
  VLMs, custom-code models, and architectures absent from TRT-LLM.
- Update task_1 for 8 models to use dump_offline_data_hf.sh:
  MiniMax-M2.5, Ministral-3-8B, Ministral-3-14B, GPT-OSS-20B,
  Qwen3.5-9B, Qwen3.5-27B, Qwen3.5-35B-A3B, Step-3.5-Flash.
  Models that retain TRT-LLM dump: DeepSeek-V3.2, GLM-5, Kimi-K2.5,
  Kimi-K2.5-NVFP4 (all pure-text MoE with TRT-LLM support).
- Update EAGLE3_TRIAGE.md with actual test results from 7 models run
  on OCI-HSG cluster on 2026-04-15, marking Issue 2 (HF upload bug)
  as FIXED and correcting Issue 1 status (hf script created as fix).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Signed-off-by: Ye Yu <yeyu@nvidia.com>
Contributor

coderabbitai Bot commented May 8, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough

Adds HF- and vLLM-based hidden-state dumping launchers and a vLLM Python extractor, many per-model HF pipeline YAMLs chaining data synthesis → hidden-state dump → EAGLE3 draft-head training → benchmark, triage/diagnostics docs, and a scheduler tweak for allow-to-fail pipelines.

Changes

EAGLE3 HuggingFace Offline Pipeline

| Layer / File(s) | Summary |
| --- | --- |
| HuggingFace Dump Script: tools/launcher/common/eagle3/dump_offline_data_hf.sh | New Bash launcher that wraps compute_hidden_states_hf.py, sources service_utils.sh, best-effort installs datasets, maps SLURM array task IDs to --dp-rank/--dp-world-size, enables --trust_remote_code, and forwards CLI args. |
| vLLM Hidden-State Extractor (Python): examples/speculative_decoding/collect_hidden_states/compute_hidden_states_vllm.py | New Python entrypoint that loads JSONL conversations, tokenizes (removing the </think> chat-template fragment), shards by DP, builds prompts, runs VllmHiddenStatesGenerator, and writes per-conversation .pt files containing input_ids, hidden_states, aux_hidden_states, and conversation_id. |
| vLLM Dump Launcher: tools/launcher/common/eagle3/dump_offline_data_vllm.sh | New Bash launcher that sources service_utils.sh, best-effort installs speculators/datasets, maps SLURM array task IDs to DP params, and invokes compute_hidden_states_vllm.py with HF_MODEL_CKPT, DP params, --trust_remote_code, and forwarded args. |
| Model Configurations: tools/launcher/examples/{DeepSeek,GLM,MiniMax,Mistral,MoonshotAI,NVIDIA,OpenAI,Qwen,StepFun}/.../hf_offline_eagle3.yaml | Many new per-model YAML pipeline definitions (DeepSeek-V3.2, GLM-5, MiniMax-M2.5, Ministral-3-{14B,8B}, Kimi-K2.5 variants, GPT-OSS-20B, Qwen3.5 variants, Step-3.5-Flash). Each declares a global hf_model and four tasks that share /scratchspace: task_0 (vLLM data synthesis) → task_1 (hidden-state dump) → task_2 (EAGLE3 draft-head training) → task_3 (vLLM speculative-decoding benchmark), with SLURM/container/resource wiring per task. |
| Triage & Diagnostics: tools/launcher/examples/EAGLE3_TRIAGE.md | New triage document describing the pipeline overview, a Mermaid decision tree for diagnosing failures, a per-model test matrix, a known-issues catalog, and maintenance/update instructions. |
| Scheduler tweak: tools/launcher/core.py | When a SandboxPipeline sets allow_to_fail=True, the generated task executor's dependency_type is set to "afterany" if supported, changing downstream task dependency behavior for pipelines that allow failures. |
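
To make the task_1 output contract concrete, here is a rough sketch of the DP sharding and per-conversation .pt record described above; the extraction call itself (VllmHiddenStatesGenerator) is elided, and the record layout is an assumption based on this summary, not the actual extractor code:

```python
from pathlib import Path

import torch


# Sketch only: shard conversations across data-parallel ranks (mapped from
# SLURM array task IDs) and save one .pt record per conversation. The field
# names follow the walkthrough; everything else is assumed for illustration.
def save_shard(conversations, dp_rank: int, dp_world_size: int, out_dir: str):
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    # Each rank takes every dp_world_size-th conversation.
    for conv in conversations[dp_rank::dp_world_size]:
        record = {
            "input_ids": conv["input_ids"],
            "hidden_states": conv["hidden_states"],
            "aux_hidden_states": conv["aux_hidden_states"],
            "conversation_id": conv["conversation_id"],
        }
        torch.save(record, out / f"{conv['conversation_id']}.pt")
```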

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes


Important

Pre-merge checks failed

Please resolve all errors before merging. Addressing warnings is optional.

❌ Failed checks (1 error, 1 warning)

| Check name | Status | Explanation | Resolution |
| --- | --- | --- | --- |
| Security Anti-Patterns | ❌ Error | Bash scripts hardcode the --trust_remote_code flag, violating the requirement that trust_remote_code be caller-configurable (not hardcoded). | Remove --trust_remote_code from the bash scripts or make it configurable via an environment variable, so callers opt in rather than having it force-enabled. |
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 25.00%, below the required threshold of 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold. |
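
For the Security Anti-Patterns finding, a minimal sketch of the requested opt-in pattern, written in Python for illustration (the TRUST_REMOTE_CODE variable name is an assumption, not an existing repo convention):

```python
import argparse
import os

# Hypothetical opt-in wiring: trust_remote_code defaults to off and is
# enabled only by an explicit CLI flag or TRUST_REMOTE_CODE=1 in the
# environment, rather than being hardcoded in the launcher scripts.
parser = argparse.ArgumentParser()
parser.add_argument(
    "--trust_remote_code",
    action="store_true",
    default=os.environ.get("TRUST_REMOTE_CODE", "0") == "1",
    help="Opt in to executing code shipped with the model repo (default: off).",
)
args = parser.parse_args()
print(f"trust_remote_code={args.trust_remote_code}")
```
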
✅ Passed checks (4 passed)
| Check name | Status | Explanation |
| --- | --- | --- |
| Description Check | ✅ Passed | Check skipped - CodeRabbit's high-level summary is enabled. |
| Title check | ✅ Passed | The PR title 'EAGLE3 automation triage chart for new models' directly and specifically describes the main contribution: delivering triage documentation and launcher configurations for testing the EAGLE3 pipeline on 10 new models. |
| Linked Issues check | ✅ Passed | Check skipped because no linked issues were found for this pull request. |
| Out of Scope Changes check | ✅ Passed | Check skipped because no linked issues were found for this pull request. |


Contributor

@coderabbitai coderabbitai Bot left a comment

Actionable comments posted: 20

🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

Inline comments:
In `@tools/launcher/common/eagle3/dump_offline_data_hf.sh`:
- Around line 50-52: Add an explicit guard that checks the HF_MODEL_CKPT
environment variable is set and non-empty before running the Python command: if
HF_MODEL_CKPT is missing or empty, print a clear error mentioning HF_MODEL_CKPT
and exit with a non-zero status so the subsequent call to
compute_hidden_states_hf.py (the python3 invocation that uses --model
${HF_MODEL_CKPT} and --dp-rank ${TASK_ID}) never runs; keep the check
immediately above the python3 invocation.
- Line 34: The pip installation currently suppresses errors using "pip install
datasets 2>/dev/null || true" which hides failures while the script imports
datasets (at the top) and later calls load_dataset("json", ...); remove the
error suppression so installation failures are visible by replacing that
invocation with a plain failing install (e.g., "pip install datasets") or ensure
the environment pre-installs the dependency; update the launcher script around
the pip install line and any documentation/bootstrap steps accordingly so
ImportError/debug info surfaces immediately.
- Around line 36-48: The script uses unsafe parameter expansion and masks
errors; update checks to use quoted defaults like [ -z
"${SLURM_ARRAY_TASK_ID:-}" ] and [ -z "${SLURM_ARRAY_TASK_COUNT:-}" ] and assign
TASK_ID and TASK_COUNT from the quoted values (e.g.,
TASK_ID="${SLURM_ARRAY_TASK_ID}" ) to avoid "unary operator expected" errors;
forward arguments using "$@" instead of ${@} to preserve multi-word args; stop
silencing pip failures by removing `2>/dev/null || true` from the `pip install
datasets` invocation so installs surface errors; and add a validation check for
HF_MODEL_CKPT (e.g., [ -n "${HF_MODEL_CKPT:-}" ] with an explanatory error and
exit) before it is used.

In `@tools/launcher/examples/DeepSeek/DeepSeek-V3.2/hf_offline_eagle3.yaml`:
- Around line 33-34: The YAML global_vars block that defines hf_model (hf_model:
/hf-local/deepseek-ai/DeepSeek-V3.2) is missing the required environment
variables; set MLM_MODEL_CFG to the HuggingFace repo ID for this model (e.g.,
deepseek-ai/DeepSeek-V3.2 or the canonical repo string) and set QUANT_CFG to the
chosen quantization profile (e.g., NVFP4_DEFAULT_CFG or INT8_DEFAULT_CFG)
alongside hf_model in the same global_vars section so the launcher picks up
MLM_MODEL_CFG and QUANT_CFG for this config (apply same change for the other
blocks noted around lines 46-47, 64-66, 104-106).

In `@tools/launcher/examples/EAGLE3_TRIAGE.md`:
- Around line 11-37: Several fenced code blocks in EAGLE3_TRIAGE.md are missing
language tags and required surrounding blank lines; update the fences around the
"Model checkpoint (HuggingFace)" block, the "ValueError: The repository ..."
block, and the "Per-model results template" block to use explicit language
markers (e.g., ```text or ```markdown) and ensure there is a blank line before
and after each fence so markdownlint stops warning; specifically edit the fences
surrounding the literal text "Model checkpoint (HuggingFace)", the error snippet
starting "ValueError: The repository ... contains custom code...", and the
"Per-model results template" section to add the appropriate ```text/```markdown
tags and blank lines.

In `@tools/launcher/examples/GLM/GLM-5/hf_offline_eagle3.yaml`:
- Around line 33-35: The YAML config for the new GLM-5 model sets hf_model but
omits required environment wiring for MLM_MODEL_CFG and QUANT_CFG; update the
global_vars block to include MLM_MODEL_CFG with the HuggingFace repo id
(matching hf_model value) and a default QUANT_CFG (e.g., NVFP4_DEFAULT_CFG or
INT8_DEFAULT_CFG), and ensure each task's environment entries (the task-level
environment maps referenced in this file) also set MLM_MODEL_CFG and QUANT_CFG
so downstream launcher logic receives consistent model metadata.
- Around line 94-103: The task's args block is missing the required
--trust-remote-code flag which prevents models that use custom model/tokenizer
classes from loading; update the args list (the args block shown with
--speculative_algorithm EAGLE3 and --mtbench ...) to include --trust-remote-code
true (or just --trust-remote-code if CLI accepts flag style) so task_3 forwards
trust to remote code when launching the model.

In `@tools/launcher/examples/MiniMax/MiniMax-M2.5/hf_offline_eagle3.yaml`:
- Around line 31-32: The model config under global_vars only sets hf_model; add
the required environment variables MLM_MODEL_CFG and QUANT_CFG alongside
hf_model: set MLM_MODEL_CFG to the HuggingFace repo ID for this model (e.g.,
"MiniMaxAI/MiniMax-M2.5") and set QUANT_CFG to the chosen quantization profile
(e.g., "NVFP4_DEFAULT_CFG" or "INT8_DEFAULT_CFG"); update each affected
global_vars block (the one containing hf_model and the other similar blocks
referenced) so every launcher job has both MLM_MODEL_CFG and QUANT_CFG defined.
- Around line 89-105: task_3 in the YAML (the specdec benchmark using script
common/specdec_bench/quick_check.sh) is missing the required --trust-remote-code
flag for the MiniMax custom architecture; update the task_3 args list to include
- --trust-remote-code alongside the other flags (e.g., with
--speculative_algorithm EAGLE3) so the benchmark runs with trust_remote_code
enabled.

In `@tools/launcher/examples/Mistral/Ministral-3-14B/hf_offline_eagle3.yaml`:
- Around line 28-30: The new model block using global_vars hf_model:
/hf-local/mistralai/Ministral-3-14B-Instruct-2512-BF16 is missing the required
environment metadata; add MLM_MODEL_CFG and QUANT_CFG entries alongside hf_model
so launcher jobs get consistent model-id and quant-config metadata — set
MLM_MODEL_CFG to the HuggingFace repo ID for Ministral-3-14B (the canonical repo
string) and set QUANT_CFG to the appropriate quantization constant (e.g.,
NVFP4_DEFAULT_CFG or INT8_DEFAULT_CFG) in the same global_vars section; apply
the same addition for the other mentioned blocks (lines ~41-43, 59-60, 99-101)
where hf_model entries exist.

In `@tools/launcher/examples/Mistral/Ministral-3-8B/hf_offline_eagle3.yaml`:
- Around line 27-29: The new model YAML sets global_vars.hf_model but is missing
the required environment vars MLM_MODEL_CFG and QUANT_CFG; update the
global_vars block in hf_offline_eagle3.yaml (and the other instances referenced
around lines 40-42, 58-60, 98-100) to add MLM_MODEL_CFG with the HuggingFace
repo ID for the model and QUANT_CFG with the appropriate quantization profile
(e.g., NVFP4_DEFAULT_CFG or INT8_DEFAULT_CFG) so both variables are present
alongside hf_model for the new model config.

In `@tools/launcher/examples/MoonshotAI/Kimi-K2.5/hf_offline_eagle3.yaml`:
- Around line 34-35: Add the missing required environment metadata by setting
MLM_MODEL_CFG to the HuggingFace repo ID and QUANT_CFG to the chosen
quantization profile for each model block; specifically update the global_vars
block containing hf_model (/hf-local/moonshotai/Kimi-K2.5) to include
MLM_MODEL_CFG: moonshotai/Kimi-K2.5 and QUANT_CFG: <choose e.g.,
NVFP4_DEFAULT_CFG or INT8_DEFAULT_CFG>, and make the same additions to the other
model blocks referenced (around the lines noted: 47-48, 65-66, 105-107) so every
model config includes both MLM_MODEL_CFG and QUANT_CFG keys.

In `@tools/launcher/examples/NVIDIA/Kimi-K2.5-NVFP4/hf_offline_eagle3.yaml`:
- Around line 40-43: global_vars block for this model only defines hf_model and
hf_model_bf16 but is missing the required environment variables; add
MLM_MODEL_CFG and QUANT_CFG entries (e.g., MLM_MODEL_CFG:
"nvidia/Kimi-K2.5-NVFP4" and QUANT_CFG: "NVFP4_DEFAULT_CFG" or the appropriate
quant config) to the model's task environment or shared vars so the launcher
conventions are satisfied; update the same pattern for the other model blocks
referenced (the additional hf_model/hf_model_bf16 groups) by wiring
MLM_MODEL_CFG and QUANT_CFG via task environments or shared/global vars with
interpolation.
- Around line 102-110: The benchmark args block for this Kimi model family must
include the trust flag to allow remote tokenizer code; update the args list (the
same args block used by task_3/benchmark step) to add "--trust-remote-code" (or
"--trust-remote-code true") so the benchmark initialization for the Kimi
tokenizer will not fail; locate the args sequence in the YAML (the block
containing --draft_model_dir, --draft_length, etc.) and append the
--trust-remote-code flag.

In `@tools/launcher/examples/OpenAI/GPT-OSS-20B/hf_offline_eagle3.yaml`:
- Around line 28-30: The YAML is missing required environment variables: set
MLM_MODEL_CFG to the HuggingFace repo ID and QUANT_CFG to the chosen
quantization config (e.g., NVFP4_DEFAULT_CFG or INT8_DEFAULT_CFG) alongside the
existing hf_model; add these keys under global_vars (or in the
shared/interpolated env for all tasks) and ensure each task environment block
that references hf_model also includes MLM_MODEL_CFG and QUANT_CFG so the
launcher can pick them up consistently.
- Around line 91-104: task_3's runtime invocation is missing the required
--trust-remote-code flag and the TIKTOKEN_RS_CACHE_DIR environment export;
update the args array (the block containing --speculative_algorithm EAGLE3,
--engine VLLM, etc.) to include --trust-remote-code, and add a
TIKTOKEN_RS_CACHE_DIR entry to the environment list (alongside HF_LOCAL and
HF_MODEL_CKPT) pointing to the intended cache dir (e.g., /tiktoken_rs_cache or
the project's cache path) so the model runs with the same runtime settings as
earlier tasks.

In `@tools/launcher/examples/Qwen/Qwen3.5-27B/hf_offline_eagle3.yaml`:
- Around line 24-26: The pipeline config's global_vars block defines hf_model
but is missing the required environment variables MLM_MODEL_CFG and QUANT_CFG;
update the global_vars section (where hf_model is set) to add MLM_MODEL_CFG with
the HuggingFace repo ID for this model and QUANT_CFG with the appropriate
quantization profile (e.g., NVFP4_DEFAULT_CFG or INT8_DEFAULT_CFG) so the entry
for Qwen3.5-27B conforms to launcher metadata conventions; apply the same
addition to the other occurrences noted (around lines 36-38, 54-56, 94-97) to
keep all model configs consistent.

In `@tools/launcher/examples/Qwen/Qwen3.5-35B-A3B/hf_offline_eagle3.yaml`:
- Around line 29-31: The YAML global_vars block defines hf_model but is missing
the required environment variables MLM_MODEL_CFG and QUANT_CFG; add
MLM_MODEL_CFG with the HuggingFace repo ID for this model and set QUANT_CFG to
the appropriate quant config (e.g., NVFP4_DEFAULT_CFG or INT8_DEFAULT_CFG) in
the same global_vars section and ensure any places referencing this model (the
blocks around the existing hf_model entries) propagate these two env vars so the
launcher receives both model metadata and quant config.

In `@tools/launcher/examples/Qwen/Qwen3.5-9B/hf_offline_eagle3.yaml`:
- Around line 24-25: Add the mandatory environment entries MLM_MODEL_CFG and
QUANT_CFG next to the existing global_vars hf_model definition (hf_model:
/hf-local/Qwen/Qwen3.5-9B): set MLM_MODEL_CFG to the HuggingFace repo ID for
this model and set QUANT_CFG to the chosen quantization profile (e.g.,
NVFP4_DEFAULT_CFG or INT8_DEFAULT_CFG). Repeat the same addition for the other
hf_model blocks present later in the file (the other occurrences of global_vars
with hf_model) so all model configs include both required env vars.

In `@tools/launcher/examples/StepFun/Step-3.5-Flash/hf_offline_eagle3.yaml`:
- Around line 34-35: The new model config only sets hf_model but omits the
required environment metadata; add MLM_MODEL_CFG with the HuggingFace repo ID
(e.g., the same value as hf_model or the canonical HF repo string) and set
QUANT_CFG to the appropriate quantization profile (e.g., NVFP4_DEFAULT_CFG or
INT8_DEFAULT_CFG) inside the same global_vars block so launcher code sees
standardized envs; apply the same pattern for the other model blocks in this
file that define hf_model (the other global_vars entries referenced in the
comment).

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Enterprise

Run ID: 76d008e0-290f-420a-9842-78a95a3f5988

📥 Commits

Reviewing files that changed from the base of the PR and between e2d29c8 and 7c1388a.

📒 Files selected for processing (14)
  • tools/launcher/common/eagle3/dump_offline_data_hf.sh
  • tools/launcher/examples/DeepSeek/DeepSeek-V3.2/hf_offline_eagle3.yaml
  • tools/launcher/examples/EAGLE3_TRIAGE.md
  • tools/launcher/examples/GLM/GLM-5/hf_offline_eagle3.yaml
  • tools/launcher/examples/MiniMax/MiniMax-M2.5/hf_offline_eagle3.yaml
  • tools/launcher/examples/Mistral/Ministral-3-14B/hf_offline_eagle3.yaml
  • tools/launcher/examples/Mistral/Ministral-3-8B/hf_offline_eagle3.yaml
  • tools/launcher/examples/MoonshotAI/Kimi-K2.5/hf_offline_eagle3.yaml
  • tools/launcher/examples/NVIDIA/Kimi-K2.5-NVFP4/hf_offline_eagle3.yaml
  • tools/launcher/examples/OpenAI/GPT-OSS-20B/hf_offline_eagle3.yaml
  • tools/launcher/examples/Qwen/Qwen3.5-27B/hf_offline_eagle3.yaml
  • tools/launcher/examples/Qwen/Qwen3.5-35B-A3B/hf_offline_eagle3.yaml
  • tools/launcher/examples/Qwen/Qwen3.5-9B/hf_offline_eagle3.yaml
  • tools/launcher/examples/StepFun/Step-3.5-Flash/hf_offline_eagle3.yaml

```bash
# --input-data, --output-dir, --max-seq-len, etc.
###################################################################################################

pip install datasets 2>/dev/null || true
```

⚠️ Potential issue | 🟠 Major | ⚡ Quick win

🧩 Analysis chain

🏁 Script executed:

cat -n tools/launcher/common/eagle3/dump_offline_data_hf.sh

Repository: NVIDIA/Model-Optimizer

Length of output: 2458


🏁 Script executed:

find . -name "compute_hidden_states_hf.py" -type f

Repository: NVIDIA/Model-Optimizer

Length of output: 148


🏁 Script executed:

cat -n ./examples/speculative_decoding/collect_hidden_states/compute_hidden_states_hf.py | head -100

Repository: NVIDIA/Model-Optimizer

Length of output: 3844


🏁 Script executed:

grep -n "load_dataset" ./examples/speculative_decoding/collect_hidden_states/compute_hidden_states_hf.py | head -5

Repository: NVIDIA/Model-Optimizer

Length of output: 230


Remove error suppression from pip install since datasets is a required dependency.

pip install datasets 2>/dev/null || true on line 34 masks installation failures completely. The Python script imports datasets at line 31 and uses it at lines 110+ (e.g., load_dataset("json", ...)). If installation fails silently, the job will fail later with an ImportError instead of revealing the actual installation error, making debugging difficult and behavior non-reproducible. Either ensure the dependency is pre-installed in the execution environment, or let the installation command fail visibly so errors are clear.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@tools/launcher/common/eagle3/dump_offline_data_hf.sh` at line 34, The pip
installation currently suppresses errors using "pip install datasets 2>/dev/null
|| true" which hides failures while the script imports datasets (at the top) and
later calls load_dataset("json", ...); remove the error suppression so
installation failures are visible by replacing that invocation with a plain
failing install (e.g., "pip install datasets") or ensure the environment
pre-installs the dependency; update the launcher script around the pip install
line and any documentation/bootstrap steps accordingly so ImportError/debug info
surfaces immediately.

Comment on lines +36 to +48
```bash
if [ -z ${SLURM_ARRAY_TASK_ID} ]; then
    TASK_ID=0
else
    echo "SLURM_ARRAY_TASK_ID ${SLURM_ARRAY_TASK_ID}"
    TASK_ID=${SLURM_ARRAY_TASK_ID}
fi

if [ -z ${SLURM_ARRAY_TASK_COUNT} ]; then
    TASK_COUNT=1
else
    echo "SLURM_ARRAY_TASK_COUNT ${SLURM_ARRAY_TASK_COUNT}"
    TASK_COUNT=${SLURM_ARRAY_TASK_COUNT}
fi
```

⚠️ Potential issue | 🔴 Critical | ⚡ Quick win

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
rg -n '\[ -z \${SLURM_ARRAY_TASK_(ID|COUNT)\} \]|\$\{@\}' tools/launcher/common/eagle3/dump_offline_data_hf.sh

Repository: NVIDIA/Model-Optimizer

Length of output: 207


🏁 Script executed:

cat -n tools/launcher/common/eagle3/dump_offline_data_hf.sh

Repository: NVIDIA/Model-Optimizer

Length of output: 2458


🏁 Script executed:

# Search for the patterns mentioned in the review with corrected regex
rg -n '\[ -z \$' tools/launcher/common/eagle3/dump_offline_data_hf.sh -A 2

Repository: NVIDIA/Model-Optimizer

Length of output: 209


🏁 Script executed:

# Also search for ${@} pattern
rg -n '\$\{@\}' tools/launcher/common/eagle3/dump_offline_data_hf.sh

Repository: NVIDIA/Model-Optimizer

Length of output: 78


SLURM array defaults and argument forwarding have unsafe parameter expansion that will cause bash errors.

Unquoted [ -z ${...} ] will fail with "unary operator expected" when variables are unset. Unquoted ${@} breaks multi-word argument handling. Use [ -z "${VAR:-}" ] and "$@" to make this deterministic and safe.

Also, pip install datasets 2>/dev/null || true masks installation failures, and ${HF_MODEL_CKPT} is used without validation.

Proposed fix

```diff
-if [ -z ${SLURM_ARRAY_TASK_ID} ]; then
-    TASK_ID=0
-else
-    echo "SLURM_ARRAY_TASK_ID ${SLURM_ARRAY_TASK_ID}"
-    TASK_ID=${SLURM_ARRAY_TASK_ID}
-fi
+if [ -z "${SLURM_ARRAY_TASK_ID:-}" ]; then
+    TASK_ID=0
+else
+    echo "SLURM_ARRAY_TASK_ID ${SLURM_ARRAY_TASK_ID}"
+    TASK_ID="${SLURM_ARRAY_TASK_ID}"
+fi

-if [ -z ${SLURM_ARRAY_TASK_COUNT} ]; then
-    TASK_COUNT=1
-else
-    echo "SLURM_ARRAY_TASK_COUNT ${SLURM_ARRAY_TASK_COUNT}"
-    TASK_COUNT=${SLURM_ARRAY_TASK_COUNT}
-fi
+if [ -z "${SLURM_ARRAY_TASK_COUNT:-}" ]; then
+    TASK_COUNT=1
+else
+    echo "SLURM_ARRAY_TASK_COUNT ${SLURM_ARRAY_TASK_COUNT}"
+    TASK_COUNT="${SLURM_ARRAY_TASK_COUNT}"
+fi

 python3 modules/Model-Optimizer/examples/speculative_decoding/collect_hidden_states/compute_hidden_states_hf.py \
@@
-    ${@}
+    "$@"
```
🧰 Tools
🪛 Shellcheck (0.11.0)

[info] 36-36: Double quote to prevent globbing and word splitting.

(SC2086)


[info] 43-43: Double quote to prevent globbing and word splitting.

(SC2086)

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@tools/launcher/common/eagle3/dump_offline_data_hf.sh` around lines 36 - 48,
The script uses unsafe parameter expansion and masks errors; update checks to
use quoted defaults like [ -z "${SLURM_ARRAY_TASK_ID:-}" ] and [ -z
"${SLURM_ARRAY_TASK_COUNT:-}" ] and assign TASK_ID and TASK_COUNT from the
quoted values (e.g., TASK_ID="${SLURM_ARRAY_TASK_ID}" ) to avoid "unary operator
expected" errors; forward arguments using "$@" instead of ${@} to preserve
multi-word args; stop silencing pip failures by removing `2>/dev/null || true`
from the `pip install datasets` invocation so installs surface errors; and add a
validation check for HF_MODEL_CKPT (e.g., [ -n "${HF_MODEL_CKPT:-}" ] with an
explanatory error and exit) before it is used.

Comment on lines +50 to +52
```bash
python3 modules/Model-Optimizer/examples/speculative_decoding/collect_hidden_states/compute_hidden_states_hf.py \
    --model ${HF_MODEL_CKPT} \
    --dp-rank ${TASK_ID} \
```

⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Validate HF_MODEL_CKPT before launching Python.

This script documents HF_MODEL_CKPT as required, but never checks it. Add an explicit guard so failures are immediate and actionable.

Proposed fix

```diff
+if [ -z "${HF_MODEL_CKPT:-}" ]; then
+    echo "HF_MODEL_CKPT is required"
+    exit 1
+fi
+
 python3 modules/Model-Optimizer/examples/speculative_decoding/collect_hidden_states/compute_hidden_states_hf.py \
```
🧰 Tools
🪛 Shellcheck (0.11.0)

[info] 51-51: Double quote to prevent globbing and word splitting.

(SC2086)


[info] 52-52: Double quote to prevent globbing and word splitting.

(SC2086)

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@tools/launcher/common/eagle3/dump_offline_data_hf.sh` around lines 50 - 52,
Add an explicit guard that checks the HF_MODEL_CKPT environment variable is set
and non-empty before running the Python command: if HF_MODEL_CKPT is missing or
empty, print a clear error mentioning HF_MODEL_CKPT and exit with a non-zero
status so the subsequent call to compute_hidden_states_hf.py (the python3
invocation that uses --model ${HF_MODEL_CKPT} and --dp-rank ${TASK_ID}) never
runs; keep the check immediately above the python3 invocation.

Comment on lines +33 to +34
```yaml
global_vars:
  hf_model: /hf-local/deepseek-ai/DeepSeek-V3.2
```

⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Include mandatory MLM_MODEL_CFG and QUANT_CFG env variables

This new model config does not set the required model repo-id and quant profile envs, which can break standardized launcher behavior across tasks.

As per coding guidelines, Set MLM_MODEL_CFG environment variable to the HuggingFace repo ID when adding a new model config and Set QUANT_CFG environment variable (e.g., NVFP4_DEFAULT_CFG, INT8_DEFAULT_CFG) when adding a new model config.

Also applies to: 46-47, 64-66, 104-106

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@tools/launcher/examples/DeepSeek/DeepSeek-V3.2/hf_offline_eagle3.yaml` around
lines 33 - 34, The YAML global_vars block that defines hf_model (hf_model:
/hf-local/deepseek-ai/DeepSeek-V3.2) is missing the required environment
variables; set MLM_MODEL_CFG to the HuggingFace repo ID for this model (e.g.,
deepseek-ai/DeepSeek-V3.2 or the canonical repo string) and set QUANT_CFG to the
chosen quantization profile (e.g., NVFP4_DEFAULT_CFG or INT8_DEFAULT_CFG)
alongside hf_model in the same global_vars section so the launcher picks up
MLM_MODEL_CFG and QUANT_CFG for this config (apply same change for the other
blocks noted around lines 46-47, 64-66, 104-106).

Comment on lines +91 to +104
```yaml
    args:
      - --draft_model_dir /scratchspace/export
      - --draft_length 3
      - --output_length 4096
      - --engine VLLM
      - --tp_size 4
      - --ep_size 1
      - --speculative_algorithm EAGLE3
      - --mtbench /hf-local/HuggingFaceH4/mt_bench_prompts/raw/question.jsonl
      - --concurrency 1
    environment:
      - HF_LOCAL: /hf-local
      - HF_MODEL_CKPT: <<global_vars.hf_model>>
    slurm_config:
```

⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Benchmark step omits required GPT-OSS runtime settings.

task_3 is missing --trust-remote-code and does not export TIKTOKEN_RS_CACHE_DIR, even though both are required for this model family in earlier steps/comments.

Proposed fix

```diff
   task_3:
     args:
       - --draft_model_dir /scratchspace/export
       - --draft_length 3
       - --output_length 4096
       - --engine VLLM
+      - --trust-remote-code
       - --tp_size 4
       - --ep_size 1
@@
     environment:
       - HF_LOCAL: /hf-local
       - HF_MODEL_CKPT: <<global_vars.hf_model>>
+      - TIKTOKEN_RS_CACHE_DIR: /hf-local/tiktoken_cache
```
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@tools/launcher/examples/OpenAI/GPT-OSS-20B/hf_offline_eagle3.yaml` around
lines 91 - 104, task_3's runtime invocation is missing the required
--trust-remote-code flag and the TIKTOKEN_RS_CACHE_DIR environment export;
update the args array (the block containing --speculative_algorithm EAGLE3,
--engine VLLM, etc.) to include --trust-remote-code, and add a
TIKTOKEN_RS_CACHE_DIR entry to the environment list (alongside HF_LOCAL and
HF_MODEL_CKPT) pointing to the intended cache dir (e.g., /tiktoken_rs_cache or
the project's cache path) so the model runs with the same runtime settings as
earlier tasks.

Comment on lines +24 to +26
```yaml
global_vars:
  hf_model: /hf-local/Qwen/Qwen3.5-27B
```

⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Required MLM_MODEL_CFG and QUANT_CFG are missing in this new model pipeline config.

Please add both env vars so this config matches launcher metadata conventions used by downstream tooling.

As per coding guidelines, Set MLM_MODEL_CFG environment variable to the HuggingFace repo ID when adding a new model config and Set QUANT_CFG environment variable (e.g., NVFP4_DEFAULT_CFG, INT8_DEFAULT_CFG) when adding a new model config.

Also applies to: 36-38, 54-56, 94-97

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@tools/launcher/examples/Qwen/Qwen3.5-27B/hf_offline_eagle3.yaml` around lines
24 - 26, The pipeline config's global_vars block defines hf_model but is missing
the required environment variables MLM_MODEL_CFG and QUANT_CFG; update the
global_vars section (where hf_model is set) to add MLM_MODEL_CFG with the
HuggingFace repo ID for this model and QUANT_CFG with the appropriate
quantization profile (e.g., NVFP4_DEFAULT_CFG or INT8_DEFAULT_CFG) so the entry
for Qwen3.5-27B conforms to launcher metadata conventions; apply the same
addition to the other occurrences noted (around lines 36-38, 54-56, 94-97) to
keep all model configs consistent.

Comment on lines +29 to +31
```yaml
global_vars:
  hf_model: /hf-local/Qwen/Qwen3.5-35B-A3B
```

⚠️ Potential issue | 🟠 Major | ⚡ Quick win

New model config is missing MLM_MODEL_CFG and QUANT_CFG.

Please define and propagate both env vars in this YAML so the launcher has required model and quant metadata.

As per coding guidelines, Set MLM_MODEL_CFG environment variable to the HuggingFace repo ID when adding a new model config and Set QUANT_CFG environment variable (e.g., NVFP4_DEFAULT_CFG, INT8_DEFAULT_CFG) when adding a new model config.

Also applies to: 42-44, 59-61, 99-102

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@tools/launcher/examples/Qwen/Qwen3.5-35B-A3B/hf_offline_eagle3.yaml` around
lines 29 - 31, The YAML global_vars block defines hf_model but is missing the
required environment variables MLM_MODEL_CFG and QUANT_CFG; add MLM_MODEL_CFG
with the HuggingFace repo ID for this model and set QUANT_CFG to the appropriate
quant config (e.g., NVFP4_DEFAULT_CFG or INT8_DEFAULT_CFG) in the same
global_vars section and ensure any places referencing this model (the blocks
around the existing hf_model entries) propagate these two env vars so the
launcher receives both model metadata and quant config.

Comment on lines +24 to +25
```yaml
global_vars:
  hf_model: /hf-local/Qwen/Qwen3.5-9B
```

⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Set mandatory MLM_MODEL_CFG and QUANT_CFG in this new config

Please add both required env vars so this model config conforms to launcher standards and avoids downstream metadata gaps.

As per coding guidelines, Set MLM_MODEL_CFG environment variable to the HuggingFace repo ID when adding a new model config and Set QUANT_CFG environment variable (e.g., NVFP4_DEFAULT_CFG, INT8_DEFAULT_CFG) when adding a new model config.

Also applies to: 36-37, 54-55, 94-96

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@tools/launcher/examples/Qwen/Qwen3.5-9B/hf_offline_eagle3.yaml` around lines
24 - 25, Add the mandatory environment entries MLM_MODEL_CFG and QUANT_CFG next
to the existing global_vars hf_model definition (hf_model:
/hf-local/Qwen/Qwen3.5-9B): set MLM_MODEL_CFG to the HuggingFace repo ID for
this model and set QUANT_CFG to the chosen quantization profile (e.g.,
NVFP4_DEFAULT_CFG or INT8_DEFAULT_CFG). Repeat the same addition for the other
hf_model blocks present later in the file (the other occurrences of global_vars
with hf_model) so all model configs include both required env vars.

Comment on lines +34 to +35
```yaml
global_vars:
  hf_model: /hf-local/stepfun-ai/Step-3.5-Flash
```

⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Add required MLM_MODEL_CFG and QUANT_CFG for this new model config

This config is missing the required model/quant env metadata, so downstream launcher logic that depends on standardized envs can break or become inconsistent.

Suggested patch shape

```diff
   global_vars:
     hf_model: /hf-local/stepfun-ai/Step-3.5-Flash
+    mlm_model_cfg: stepfun-ai/Step-3.5-Flash
+    quant_cfg: <SET_QUANT_CFG>

   task_0:
@@
     environment:
       - HF_LOCAL: /hf-local
+      - MLM_MODEL_CFG: <<global_vars.mlm_model_cfg>>
+      - QUANT_CFG: <<global_vars.quant_cfg>>

   task_1:
@@
     environment:
       - HF_MODEL_CKPT: <<global_vars.hf_model>>
+      - MLM_MODEL_CFG: <<global_vars.mlm_model_cfg>>
+      - QUANT_CFG: <<global_vars.quant_cfg>>

   task_3:
@@
     environment:
       - HF_LOCAL: /hf-local
       - HF_MODEL_CKPT: <<global_vars.hf_model>>
+      - MLM_MODEL_CFG: <<global_vars.mlm_model_cfg>>
+      - QUANT_CFG: <<global_vars.quant_cfg>>
```

As per coding guidelines, Set MLM_MODEL_CFG environment variable to the HuggingFace repo ID when adding a new model config and Set QUANT_CFG environment variable (e.g., NVFP4_DEFAULT_CFG, INT8_DEFAULT_CFG) when adding a new model config.

Also applies to: 47-48, 65-66, 105-107

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@tools/launcher/examples/StepFun/Step-3.5-Flash/hf_offline_eagle3.yaml` around
lines 34 - 35, The new model config only sets hf_model but omits the required
environment metadata; add MLM_MODEL_CFG with the HuggingFace repo ID (e.g., the
same value as hf_model or the canonical HF repo string) and set QUANT_CFG to the
appropriate quantization profile (e.g., NVFP4_DEFAULT_CFG or INT8_DEFAULT_CFG)
inside the same global_vars block so launcher code sees standardized envs; apply
the same pattern for the other model blocks in this file that define hf_model
(the other global_vars entries referenced in the comment).


codecov Bot commented May 8, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 77.05%. Comparing base (555be6c) to head (eb830bd).
⚠️ Report is 15 commits behind head on main.

Additional details and impacted files
```text
@@            Coverage Diff             @@
##             main    #1417      +/-   ##
==========================================
+ Coverage   76.74%   77.05%   +0.30%
==========================================
  Files         476      478       +2
  Lines       51307    52592    +1285
==========================================
+ Hits        39377    40525    +1148
- Misses      11930    12067     +137
```
| Flag | Coverage | Δ |
| --- | --- | --- |
| examples | 41.80% <ø> | +2.62% ⬆️ |
| regression | 15.20% <ø> | +0.07% ⬆️ |
| unit | 52.72% <ø> | +0.18% ⬆️ |

Flags with carried forward coverage won't be shown. Click here to find out more.

☔ View full report in Codecov by Sentry.

- Add compute_hidden_states_vllm.py: uses speculators.VllmHiddenStatesGenerator
  to extract hidden states via vLLM; output format identical to the HF variant
- Add dump_offline_data_vllm.sh: launcher wrapper for the vLLM backend;
  three backends now available for task_1 (TRT-LLM / HF / vLLM)
- Update EAGLE3_TRIAGE.md: mark Issue 1 as FIXED, update pipeline overview
  to show all three dump backends, update model matrix rows 1-7 to
  NEEDS RERUN (blocked by missing script in round 1, now resolved)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Signed-off-by: Ye Yu <yeyu@nvidia.com>
@yeyu-nvidia yeyu-nvidia requested a review from a team as a code owner May 11, 2026 17:40
Contributor

@coderabbitai coderabbitai Bot left a comment

Actionable comments posted: 3

🧹 Nitpick comments (1)
tools/launcher/common/eagle3/dump_offline_data_vllm.sh (1)

36-55: 💤 Low value

Quote shell variables to prevent word splitting.

Multiple variables lack quotes in test conditions and command arguments. While unlikely to cause issues in a typical SLURM environment, quoting is a shell best practice.

♻️ Shellcheck-compliant quoting

```diff
-if [ -z ${SLURM_ARRAY_TASK_ID} ]; then
+if [ -z "${SLURM_ARRAY_TASK_ID}" ]; then
     TASK_ID=0
 else
     echo "SLURM_ARRAY_TASK_ID ${SLURM_ARRAY_TASK_ID}"
     TASK_ID=${SLURM_ARRAY_TASK_ID}
 fi

-if [ -z ${SLURM_ARRAY_TASK_COUNT} ]; then
+if [ -z "${SLURM_ARRAY_TASK_COUNT}" ]; then
     TASK_COUNT=1
 else
     echo "SLURM_ARRAY_TASK_COUNT ${SLURM_ARRAY_TASK_COUNT}"
     TASK_COUNT=${SLURM_ARRAY_TASK_COUNT}
 fi

 python3 modules/Model-Optimizer/examples/speculative_decoding/collect_hidden_states/compute_hidden_states_vllm.py \
-    --model ${HF_MODEL_CKPT} \
-    --dp-rank ${TASK_ID} \
-    --dp-world-size ${TASK_COUNT} \
+    --model "${HF_MODEL_CKPT}" \
+    --dp-rank "${TASK_ID}" \
+    --dp-world-size "${TASK_COUNT}" \
     --trust_remote_code \
     "$@"
```

As per coding guidelines, shellcheck SC2086.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@tools/launcher/common/eagle3/dump_offline_data_vllm.sh` around lines 36 - 55,
Several shell variables are unquoted which can cause word-splitting (SC2086):
wrap parameter expansions like ${SLURM_ARRAY_TASK_ID},
${SLURM_ARRAY_TASK_COUNT}, ${TASK_ID}, ${TASK_COUNT}, ${HF_MODEL_CKPT} and the
positional expansion ${@} in double quotes; update the test conditions ([ -z
"${SLURM_ARRAY_TASK_ID}" ] etc.) and the python invocation (--model
"${HF_MODEL_CKPT}" --dp-rank "${TASK_ID}" --dp-world-size "${TASK_COUNT}"
"${@}") so all referenced symbols in this script are properly quoted.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

Inline comments:
In `@tools/launcher/common/eagle3/dump_offline_data_vllm.sh`:
- Around line 50-51: The invocation of compute_hidden_states_vllm.py uses the
environment variable HF_MODEL_CKPT without validation; add a guard before
running the python command that checks HF_MODEL_CKPT is non-empty, prints a
clear error (e.g., "HF_MODEL_CKPT is not set"), and exits with a non-zero status
if missing so the script never calls python with an empty --model argument;
update the shell block that builds the python3 command to perform this check and
fail fast.
- Line 34: The pip install command currently hides failures via "2>/dev/null ||
true"; change it to run pip install speculators datasets and then verify success
by checking the exit code and attempting to import the packages (e.g. run python
-c "import speculators, datasets" && echo "ok" || { echo "pip install or import
failed" >&2; exit 1; }) instead of swallowing errors; replace the suppressed
invocation "pip install speculators datasets 2>/dev/null || true" with an
install+import verification sequence and emit a clear error and non-zero exit
when installation/import fails.
- Line 55: The script currently forwards arguments using ${@}, which splits
arguments with spaces; change the forwarding to use "$@" so argument boundaries
are preserved when invoking the Python script (replace ${@} with "$@" wherever
arguments are forwarded). Ensure any exec or python call that used ${@} is
updated to use the quoted form to satisfy shellcheck SC2068 and avoid breaking
multi-word arguments.

---

Nitpick comments:
In `@tools/launcher/common/eagle3/dump_offline_data_vllm.sh`:
- Around line 36-55: Several shell variables are unquoted which can cause
word-splitting (SC2086): wrap parameter expansions like ${SLURM_ARRAY_TASK_ID},
${SLURM_ARRAY_TASK_COUNT}, ${TASK_ID}, ${TASK_COUNT}, ${HF_MODEL_CKPT} and the
positional expansion ${@} in double quotes; update the test conditions ([ -z
"${SLURM_ARRAY_TASK_ID}" ] etc.) and the python invocation (--model
"${HF_MODEL_CKPT}" --dp-rank "${TASK_ID}" --dp-world-size "${TASK_COUNT}"
"${@}") so all referenced symbols in this script are properly quoted.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Enterprise

Run ID: 8966f1a1-9e30-4747-94d9-e8de1499db9a

📥 Commits

Reviewing files that changed from the base of the PR and between 7c1388a and 642da1f.

📒 Files selected for processing (3)
  • examples/speculative_decoding/collect_hidden_states/compute_hidden_states_vllm.py
  • tools/launcher/common/eagle3/dump_offline_data_vllm.sh
  • tools/launcher/examples/EAGLE3_TRIAGE.md
✅ Files skipped from review due to trivial changes (1)
  • examples/speculative_decoding/collect_hidden_states/compute_hidden_states_vllm.py

```bash
# --input-data, --output-dir, --max-seq-len, etc.
###################################################################################################

pip install speculators datasets 2>/dev/null || true
```

⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Verify pip installation success instead of suppressing errors.

Silently suppressing all pip errors with 2>/dev/null || true can hide real installation failures (network issues, missing dependencies, version conflicts). Consider verifying that the packages are importable after installation.

🛡️ Proposed fix to verify installation

```diff
-pip install speculators datasets 2>/dev/null || true
+pip install speculators datasets || {
+    echo "Warning: Failed to install speculators/datasets. Continuing anyway..."
+}
+
+python3 -c "import speculators, datasets" || {
+    echo "ERROR: Required packages not available. Exiting."
+    exit 1
+}
```
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@tools/launcher/common/eagle3/dump_offline_data_vllm.sh` at line 34, The pip
install command currently hides failures via "2>/dev/null || true"; change it to
run pip install speculators datasets and then verify success by checking the
exit code and attempting to import the packages (e.g. run python -c "import
speculators, datasets" && echo "ok" || { echo "pip install or import failed"
>&2; exit 1; }) instead of swallowing errors; replace the suppressed invocation
"pip install speculators datasets 2>/dev/null || true" with an install+import
verification sequence and emit a clear error and non-zero exit when
installation/import fails.

Comment on lines +50 to +51
```bash
python3 modules/Model-Optimizer/examples/speculative_decoding/collect_hidden_states/compute_hidden_states_vllm.py \
    --model ${HF_MODEL_CKPT} \
```

⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Validate that HF_MODEL_CKPT is set before use.

The script assumes HF_MODEL_CKPT is set but does not validate it. If unset, the Python script will receive --model with an empty value, causing a confusing downstream failure.

🛡️ Proposed validation check

```diff
+if [ -z "${HF_MODEL_CKPT}" ]; then
+    echo "ERROR: HF_MODEL_CKPT environment variable is not set"
+    exit 1
+fi
+
 python3 modules/Model-Optimizer/examples/speculative_decoding/collect_hidden_states/compute_hidden_states_vllm.py \
     --model ${HF_MODEL_CKPT} \
```
🧰 Tools
🪛 Shellcheck (0.11.0)

[info] 51-51: Double quote to prevent globbing and word splitting.

(SC2086)

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@tools/launcher/common/eagle3/dump_offline_data_vllm.sh` around lines 50 - 51,
The invocation of compute_hidden_states_vllm.py uses the environment variable
HF_MODEL_CKPT without validation; add a guard before running the python command
that checks HF_MODEL_CKPT is non-empty, prints a clear error (e.g.,
"HF_MODEL_CKPT is not set"), and exits with a non-zero status if missing so the
script never calls python with an empty --model argument; update the shell block
that builds the python3 command to perform this check and fail fast.
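An equivalent one-line guard is POSIX parameter expansion, if a separate if-block feels heavy (a sketch reusing the error text from the proposed check):

    # aborts the script with the given message when HF_MODEL_CKPT is unset or empty
    : "${HF_MODEL_CKPT:?HF_MODEL_CKPT environment variable is not set}"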

--dp-rank ${TASK_ID} \
--dp-world-size ${TASK_COUNT} \
--trust_remote_code \
${@}
⚠️ Potential issue | 🔴 Critical | ⚡ Quick win

Fix argument forwarding to preserve boundaries.

${@} will break arguments containing spaces. Use "$@" instead to preserve argument boundaries when forwarding CLI arguments to the Python script.

🐛 Proposed fix
     --dp-rank ${TASK_ID} \
     --dp-world-size ${TASK_COUNT} \
     --trust_remote_code \
-    ${@}
+    "$@"

As per coding guidelines, shellcheck SC2068.
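The splitting behavior is easy to demonstrate (a standalone sketch; demo is a hypothetical function, not part of the script under review):

    demo() {
        # unquoted ${@} re-splits on whitespace: "two words" arrives as two arguments
        printf 'unquoted: %s\n' ${@}
        # quoted "$@" preserves boundaries: "two words" stays one argument
        printf 'quoted: %s\n' "$@"
    }
    demo "two words"   # unquoted prints two lines; quoted prints one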

📝 Committable suggestion


Suggested change
-    ${@}
+    "$@"
🧰 Tools
🪛 Shellcheck (0.11.0)

[error] 55-55: Double quote array expansions to avoid re-splitting elements.

(SC2068)

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@tools/launcher/common/eagle3/dump_offline_data_vllm.sh` at line 55, The
script currently forwards arguments using ${@}, which splits arguments with
spaces; change the forwarding to use "$@" so argument boundaries are preserved
when invoking the Python script (replace ${@} with "$@" wherever arguments are
forwarded). Ensure any exec or python call that used ${@} is updated to use the
quoted form to satisfy shellcheck SC2068 and avoid breaking multi-word
arguments.

nemo-run's SlurmExecutor defaults to dependency_type="afterok", which
cancels all downstream tasks when a predecessor times out or fails.
For pipelines with allow_to_fail=True, use "afterany" so subsequent
tasks run regardless of predecessor exit status.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Signed-off-by: Ye Yu <yeyu@nvidia.com>
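For reference, the same distinction at the raw sbatch level (a minimal sketch; the script names are hypothetical, and nemo-run emits the equivalent dependency flags on the user's behalf):

    jid=$(sbatch --parsable task_0.sh)
    # afterok: task_1 is cancelled if task_0 fails or times out
    # afterany: task_1 starts once task_0 finishes, whatever its exit status
    sbatch --dependency=afterany:"${jid}" task_1.sh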
…ript

- Add language specifiers to fenced code blocks in EAGLE3_TRIAGE.md
  (MD040: two ``` blocks needed a `text` language label)
- Apply ruff-format reformatting to compute_hidden_states_vllm.py
  (line-length adjustments to match project style)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Signed-off-by: Ye Yu <yeyu@nvidia.com>

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

Inline comments:
In `@tools/launcher/examples/EAGLE3_TRIAGE.md`:
- Around line 101-107: In the EAGLE3_TRIAGE table, normalize the rerun wording
and fix the typo: replace every occurrence of the string "🔁 NEEDS RERUN" with
"🔁 RERUN REQUIRED" (affecting rows like the Ministral, GPT-OSS, MiniMax,
Qwen3.5, Step-3.5, DeepSeek entries) and change the footnote/remark text "or_hf"
to "or _hf" (seen in the Step-3.5-Flash row remark). Make these textual edits in
the table cells to maintain consistent phrasing and spacing.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Enterprise

Run ID: 92740be9-b940-44d8-a42c-23633272cbc3

📥 Commits

Reviewing files that changed from the base of the PR and between 4abca8b and d0ad01b.

📒 Files selected for processing (2)
  • examples/speculative_decoding/collect_hidden_states/compute_hidden_states_vllm.py
  • tools/launcher/examples/EAGLE3_TRIAGE.md
🚧 Files skipped from review as they are similar to previous changes (1)
  • examples/speculative_decoding/collect_hidden_states/compute_hidden_states_vllm.py

Comment on lines +101 to +107
| 1 | Ministral-3-8B | Dense | 8B | ⏱ TIMEOUT (3277/3295) | 🔁 NEEDS RERUN (_vllm) | 🔲 | 🔲 | task_0 nearly complete (99%); t1 re-run needed |
| 2 | Ministral-3-14B | Dense | 14B | ⏱ TIMEOUT | 🔁 NEEDS RERUN (_vllm) | 🔲 | 🔲 | — |
| 3 | GPT-OSS-20B | Dense | 20B | ❌ TOKENIZER | 🔁 NEEDS RERUN (_vllm) | 🔲 | 🔲 | Fix: populate TIKTOKEN_RS_CACHE_DIR first |
| 4 | MiniMax-M2.5 | MoE | 230B/10B | ⏱ TIMEOUT | 🔁 NEEDS RERUN (_vllm) | 🔲 | ❌ TRUST_REMOTE_CODE | trust_remote_code needed at bench |
| 5 | Qwen3.5-35B-A3B | MoE | 35B/3B | ⏱ TIMEOUT | 🔁 NEEDS RERUN (_vllm) | 🔲 | 🔲 | — |
| 6 | Step-3.5-Flash | MoE/SWA | 197B/11B | ⏱ TIMEOUT | 🔁 NEEDS RERUN (_vllm) | 🔲 | 🔲 | SWA: use _vllm or_hf script |
| 7 | DeepSeek-V3.2 | MoE/MLA | 685B/37B | 🔍 (tarball only) | 🔁 NEEDS RERUN (_vllm, 2-node) | 🔲 | 🔲 | 2-node; previous t1 OOM-killed |

⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Clean up matrix wording for consistency/readability.

Please normalize 🔁 NEEDS RERUN to something like 🔁 RERUN REQUIRED (Lines 101–107), and fix the typo or_hf → or _hf on Line 106.

🧰 Tools
🪛 LanguageTool

[style] ~101-~107: The double modal “NEEDS RERUN” is nonstandard (only accepted in certain dialects). Consider “to be RERUN”. The same finding is flagged on each of the seven table rows (lines 101–107).

(NEEDS_FIXED)

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@tools/launcher/examples/EAGLE3_TRIAGE.md` around lines 101 - 107, In the
EAGLE3_TRIAGE table, normalize the rerun wording and fix the typo: replace every
occurrence of the string "🔁 NEEDS RERUN" with "🔁 RERUN REQUIRED" (affecting
rows like the Ministral, GPT-OSS, MiniMax, Qwen3.5, Step-3.5, DeepSeek entries)
and change the footnote/remark text "or_hf" to "or _hf" (seen in the
Step-3.5-Flash row remark). Make these textual edits in the table cells to
maintain consistent phrasing and spacing.
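Both edits are mechanical, so a single pass over the file works (a sketch; assumes GNU sed's in-place -i, and that "or_hf" only occurs as the typo):

    sed -i 's/🔁 NEEDS RERUN/🔁 RERUN REQUIRED/g; s/or_hf/or _hf/g' \
        tools/launcher/examples/EAGLE3_TRIAGE.md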

- dump_offline_data_vllm.sh: pin speculators<0.5.0 to fix ImportError
  (VllmHiddenStatesGenerator removed in speculators 0.5.0)
- EAGLE3_TRIAGE.md: add Issues 6 (speculators API break) and 7
  (query.py shard auto-downgrade causing empty data on timeout);
  mark both FIXED; update Ministral-3-8B row status

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Signed-off-by: Ye Yu <yeyu@nvidia.com>
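A pre-flight version of the pin plus verification might look like this (a sketch; the top-level import path for VllmHiddenStatesGenerator is an assumption based on the commit message above):

    python3 -m pip install "speculators<0.5.0" datasets || exit 1
    # fail fast if the class removed in speculators 0.5.0 is not importable
    python3 -c "from speculators import VllmHiddenStatesGenerator" || {
        echo "ERROR: VllmHiddenStatesGenerator unavailable; check the speculators pin" >&2
        exit 1
    }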

@coderabbitai coderabbitai Bot left a comment


♻️ Duplicate comments (4)
tools/launcher/common/eagle3/dump_offline_data_vllm.sh (2)

34-34: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Do not suppress dependency install failures in the launcher path.

Line 34 hides install errors (2>/dev/null || true), so the job can continue and fail later in a harder-to-debug way.

Proposed fix
-pip install "speculators<0.5.0" datasets 2>/dev/null || true
+if ! python3 -m pip install "speculators<0.5.0" datasets; then
+    echo "ERROR: failed to install required packages: speculators, datasets" >&2
+    exit 1
+fi
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@tools/launcher/common/eagle3/dump_offline_data_vllm.sh` at line 34, The pip
install command in dump_offline_data_vllm.sh currently suppresses failures ("pip
install \"speculators<0.5.0\" datasets 2>/dev/null || true"), which hides
install errors; update that invocation so install failures are not
silenced—remove the stderr redirection and the "|| true" so the script fails
fast (or explicitly exit non-zero on failure) when "pip install" for
speculators/datasets fails, ensuring the error surfaces during launcher runs.

50-55: ⚠️ Potential issue | 🔴 Critical | ⚡ Quick win

Harden Python arg construction: validate model env and preserve argv boundaries.

Line 51 should fail fast if HF_MODEL_CKPT is unset, and Line 55 must use "$@" (not ${@}) to preserve argument boundaries.

Proposed fix
+if [ -z "${HF_MODEL_CKPT:-}" ]; then
+    echo "ERROR: HF_MODEL_CKPT is not set" >&2
+    exit 1
+fi
+
 python3 modules/Model-Optimizer/examples/speculative_decoding/collect_hidden_states/compute_hidden_states_vllm.py \
-    --model ${HF_MODEL_CKPT} \
-    --dp-rank ${TASK_ID} \
-    --dp-world-size ${TASK_COUNT} \
+    --model "${HF_MODEL_CKPT}" \
+    --dp-rank "${TASK_ID}" \
+    --dp-world-size "${TASK_COUNT}" \
     --trust_remote_code \
-    ${@}
+    "$@"
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@tools/launcher/common/eagle3/dump_offline_data_vllm.sh` around lines 50 - 55,
Ensure the script fails fast if HF_MODEL_CKPT is unset and preserve argument
boundaries when forwarding CLI args: add a check for the environment variable
HF_MODEL_CKPT (exit non‑zero with an error log if empty) before invoking
compute_hidden_states_vllm.py, and change the final argument expansion from ${@}
to "$@" when calling python3
modules/Model-Optimizer/examples/speculative_decoding/collect_hidden_states/compute_hidden_states_vllm.py
that uses --model ${HF_MODEL_CKPT} --dp-rank ${TASK_ID} --dp-world-size
${TASK_COUNT} so TASK_ID/TASK_COUNT remain unchanged.
tools/launcher/examples/EAGLE3_TRIAGE.md (2)

106-106: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Fix typo in script reference.

The text "or_hf" should be "or _hf" (space before underscore) to match the script naming pattern _vllm or _hf.

📝 Suggested fix
-| 6 | Step-3.5-Flash | MoE/SWA | 197B/11B | ⏱ TIMEOUT | 🔁 NEEDS RERUN (_vllm) | 🔲 | 🔲 | SWA: use _vllm or_hf script |
+| 6 | Step-3.5-Flash | MoE/SWA | 197B/11B | ⏱ TIMEOUT | 🔁 NEEDS RERUN (_vllm) | 🔲 | 🔲 | SWA: use _vllm or _hf script |
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@tools/launcher/examples/EAGLE3_TRIAGE.md` at line 106, The table cell
contains a typo "or_hf" and should be changed to "or _hf" to match the script
naming pattern; update the text in the row containing "Step-3.5-Flash" so the
note reads "SWA: use _vllm or _hf" (referencing the script names _vllm and _hf)
to ensure consistent spacing and naming.

102-107: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Normalize status wording for grammar and consistency.

The phrase "🔁 NEEDS RERUN" is grammatically nonstandard. Consider:

  • "🔁 RERUN NEEDED" or "🔁 RERUN REQUIRED" to maintain the semantic distinction from "🔁 RERUNNING" (which indicates active re-execution)

This preserves the meaningful status distinction while improving readability.

📝 Suggested fix
-| 2 | Ministral-3-14B | Dense | 14B | ⏱ TIMEOUT | 🔁 NEEDS RERUN (_vllm) | 🔲 | 🔲 | — |
-| 3 | GPT-OSS-20B | Dense | 20B | ❌ TOKENIZER | 🔁 NEEDS RERUN (_vllm) | 🔲 | 🔲 | Fix: populate TIKTOKEN_RS_CACHE_DIR first |
-| 4 | MiniMax-M2.5 | MoE | 230B/10B | ⏱ TIMEOUT | 🔁 NEEDS RERUN (_vllm) | 🔲 | ❌ TRUST_REMOTE_CODE | trust_remote_code needed at bench |
-| 5 | Qwen3.5-35B-A3B | MoE | 35B/3B | ⏱ TIMEOUT | 🔁 NEEDS RERUN (_vllm) | 🔲 | 🔲 | — |
-| 6 | Step-3.5-Flash | MoE/SWA | 197B/11B | ⏱ TIMEOUT | 🔁 NEEDS RERUN (_vllm) | 🔲 | 🔲 | SWA: use _vllm or_hf script |
-| 7 | DeepSeek-V3.2 | MoE/MLA | 685B/37B | 🔍 (tarball only) | 🔁 NEEDS RERUN (_vllm, 2-node) | 🔲 | 🔲 | 2-node; previous t1 OOM-killed |
+| 2 | Ministral-3-14B | Dense | 14B | ⏱ TIMEOUT | 🔁 RERUN NEEDED (_vllm) | 🔲 | 🔲 | — |
+| 3 | GPT-OSS-20B | Dense | 20B | ❌ TOKENIZER | 🔁 RERUN NEEDED (_vllm) | 🔲 | 🔲 | Fix: populate TIKTOKEN_RS_CACHE_DIR first |
+| 4 | MiniMax-M2.5 | MoE | 230B/10B | ⏱ TIMEOUT | 🔁 RERUN NEEDED (_vllm) | 🔲 | ❌ TRUST_REMOTE_CODE | trust_remote_code needed at bench |
+| 5 | Qwen3.5-35B-A3B | MoE | 35B/3B | ⏱ TIMEOUT | 🔁 RERUN NEEDED (_vllm) | 🔲 | 🔲 | — |
+| 6 | Step-3.5-Flash | MoE/SWA | 197B/11B | ⏱ TIMEOUT | 🔁 RERUN NEEDED (_vllm) | 🔲 | 🔲 | SWA: use _vllm or _hf script |
+| 7 | DeepSeek-V3.2 | MoE/MLA | 685B/37B | 🔍 (tarball only) | 🔁 RERUN NEEDED (_vllm, 2-node) | 🔲 | 🔲 | 2-node; previous t1 OOM-killed |
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@tools/launcher/examples/EAGLE3_TRIAGE.md` around lines 102 - 107, Replace the
nonstandard status string "🔁 NEEDS RERUN" throughout the table with a
grammatically consistent alternative such as "🔁 RERUN NEEDED" (or "🔁 RERUN
REQUIRED") so the status column reads uniformly; update every occurrence of the
literal token "🔁 NEEDS RERUN" (e.g., the cell "🔁 NEEDS RERUN (_vllm)") to the
chosen wording and ensure any adjacent parenthetical notes like "(_vllm)" remain
unchanged.
🧹 Nitpick comments (1)
tools/launcher/common/eagle3/dump_offline_data_vllm.sh (1)

36-48: ⚡ Quick win

Use defaulted, quoted SLURM env checks to avoid brittle test behavior.

Line 36 and Line 43 should use ${VAR:-} with quotes; current form is fragile for unset/nounset contexts and non-idiomatic.

Proposed fix
-if [ -z ${SLURM_ARRAY_TASK_ID} ]; then
+if [ -z "${SLURM_ARRAY_TASK_ID:-}" ]; then
@@
-if [ -z ${SLURM_ARRAY_TASK_COUNT} ]; then
+if [ -z "${SLURM_ARRAY_TASK_COUNT:-}" ]; then
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@tools/launcher/common/eagle3/dump_offline_data_vllm.sh` around lines 36 - 48,
The checks and assignments for SLURM_ARRAY_TASK_ID and SLURM_ARRAY_TASK_COUNT
are fragile; update the conditional tests and assignments to use quoted
defaulted parameter expansion (e.g. use "${SLURM_ARRAY_TASK_ID:-}" and
"${SLURM_ARRAY_TASK_COUNT:-}") so the [ -z ... ] checks are safe when variables
are unset or set to empty and the TASK_ID/TASK_COUNT assignments use the same
quoted expansions; modify the if-test expressions and the TASK_ID/TASK_COUNT
assignment expressions accordingly to reference "${SLURM_ARRAY_TASK_ID:-}" and
"${SLURM_ARRAY_TASK_COUNT:-}".
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

Duplicate comments:
In `@tools/launcher/common/eagle3/dump_offline_data_vllm.sh`:
- Line 34: The pip install command in dump_offline_data_vllm.sh currently
suppresses failures ("pip install \"speculators<0.5.0\" datasets 2>/dev/null ||
true"), which hides install errors; update that invocation so install failures
are not silenced—remove the stderr redirection and the "|| true" so the script
fails fast (or explicitly exit non-zero on failure) when "pip install" for
speculators/datasets fails, ensuring the error surfaces during launcher runs.
- Around line 50-55: Ensure the script fails fast if HF_MODEL_CKPT is unset and
preserve argument boundaries when forwarding CLI args: add a check for the
environment variable HF_MODEL_CKPT (exit non‑zero with an error log if empty)
before invoking compute_hidden_states_vllm.py, and change the final argument
expansion from ${@} to "$@" when calling python3
modules/Model-Optimizer/examples/speculative_decoding/collect_hidden_states/compute_hidden_states_vllm.py
that uses --model ${HF_MODEL_CKPT} --dp-rank ${TASK_ID} --dp-world-size
${TASK_COUNT} so TASK_ID/TASK_COUNT remain unchanged.

In `@tools/launcher/examples/EAGLE3_TRIAGE.md`:
- Line 106: The table cell contains a typo "or_hf" and should be changed to "or
_hf" to match the script naming pattern; update the text in the row containing
"Step-3.5-Flash" so the note reads "SWA: use _vllm or _hf" (referencing the
script names _vllm and _hf) to ensure consistent spacing and naming.
- Around line 102-107: Replace the nonstandard status string "🔁 NEEDS RERUN"
throughout the table with a grammatically consistent alternative such as "🔁
RERUN NEEDED" (or "🔁 RERUN REQUIRED") so the status column reads uniformly;
update every occurrence of the literal token "🔁 NEEDS RERUN" (e.g., the cell
"🔁 NEEDS RERUN (_vllm)") to the chosen wording and ensure any adjacent
parenthetical notes like "(_vllm)" remain unchanged.

---

Nitpick comments:
In `@tools/launcher/common/eagle3/dump_offline_data_vllm.sh`:
- Around line 36-48: The checks and assignments for SLURM_ARRAY_TASK_ID and
SLURM_ARRAY_TASK_COUNT are fragile; update the conditional tests and assignments
to use quoted defaulted parameter expansion (e.g. use "${SLURM_ARRAY_TASK_ID:-}"
and "${SLURM_ARRAY_TASK_COUNT:-}") so the [ -z ... ] checks are safe when
variables are unset or set to empty and the TASK_ID/TASK_COUNT assignments use
the same quoted expansions; modify the if-test expressions and the
TASK_ID/TASK_COUNT assignment expressions accordingly to reference
"${SLURM_ARRAY_TASK_ID:-}" and "${SLURM_ARRAY_TASK_COUNT:-}".

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Enterprise

Run ID: 3d9ab791-8b27-4bda-bfce-ac5d649d71dc

📥 Commits

Reviewing files that changed from the base of the PR and between d0ad01b and eb830bd.

📒 Files selected for processing (2)
  • tools/launcher/common/eagle3/dump_offline_data_vllm.sh
  • tools/launcher/examples/EAGLE3_TRIAGE.md
