Skip to content

[OMNIML-4672] training_support #1477

Closed
ChenhanYu wants to merge 1 commit into main from pensieve-intern/OMNIML-4666/training-support

Conversation

ChenhanYu (Collaborator) commented May 13, 2026

Draft PR opened by pensieve-intern for OMNIML-4672.

Stage training_support of Epic OMNIML-4666. The agent ran from the SPEC on the ticket description; review every change before marking ready.

Always-draft is enforced — the bot never auto-merges.

Summary by CodeRabbit

  • Chores
    • Added a new training workflow with an EAGLE3-compatible recipe, configurable model path, offline hidden-state data support, adjustable sequence length, and validation/TQDM toggles.
    • Includes runtime settings for cluster/container execution with GPU allocation and container image specification.

Review Change Stack

copy-pr-bot (Bot) commented May 13, 2026

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

coderabbitai (Bot, Contributor) commented May 13, 2026

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Enterprise

Run ID: 1343230f-7735-4a9b-9365-d1f31d297c52

📥 Commits

Reviewing files that changed from the base of the PR and between 341c8fa and 4300a1d.

📒 Files selected for processing (1)
  • tools/launcher/examples/Qwen/Qwen3-8B/step3_train.yaml
🚧 Files skipped from review as they are similar to previous changes (1)
  • tools/launcher/examples/Qwen/Qwen3-8B/step3_train.yaml

📝 Walkthrough

Adds a new YAML pipeline at tools/launcher/examples/Qwen/Qwen3-8B/step3_train.yaml defining a Qwen3-8B_EAGLE3_train job with a global hf_model path, one training task invoking common/eagle3/train_eagle.sh with offline EAGLE3 args, and SLURM/container runtime settings.

Changes

Qwen3-8B EAGLE3 Training Configuration

  • Layer / File(s): Qwen3-8B EAGLE3 training pipeline configuration (tools/launcher/examples/Qwen/Qwen3-8B/step3_train.yaml)
  • Summary: Pipeline YAML defines a global hf_model path variable and a single training task (task_0) that invokes common/eagle3/train_eagle.sh with an EAGLE3 recipe config, offline training data path, output directory, sequence length, and validation/TQDM flags, plus SLURM settings for 1 node, 1 task/node, and 8 GPUs/node, using container nvcr.io/nvidia/tensorrt-llm/release:1.2.0.
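
For orientation, here is a minimal sketch of the shape such a pipeline YAML plausibly takes, assembled only from the walkthrough above and the task block quoted later in this review. The global_vars key and the slurm_config field names (nodes, ntasks_per_node, gpus_per_node, container_image) are assumptions for illustration, not the committed file.

```yaml
# Sketch only: reconstructed from the review walkthrough, not the committed file.
# The global_vars key and all slurm_config field names below are assumed.
global_vars:
  hf_model: /path/to/Qwen3-8B   # placeholder; the actual path is not shown in this PR

task_0:
  script: common/eagle3/train_eagle.sh
  args:
    - --config modules/Model-Optimizer/modelopt_recipes/general/speculative_decoding/eagle3.yaml
    - model.model_name_or_path=<<global_vars.hf_model>>
    - data.offline_data_path=/scratchspace/offline_hidden_states
    - training.output_dir=/scratchspace/eagle3
    - training.training_seq_len=4096
    - training.disable_tqdm=true
    - training.ar_validate_steps=500000

slurm_config:
  nodes: 1                  # assumed key name; "1 node" per the walkthrough
  ntasks_per_node: 1        # assumed key name; "1 task/node"
  gpus_per_node: 8          # assumed key name; "8 GPUs/node"
  container_image: nvcr.io/nvidia/tensorrt-llm/release:1.2.0   # assumed key name
```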

🎯 1 (Trivial) | ⏱️ ~3 minutes

🚥 Pre-merge checks | ✅ 5 | ❌ 1

❌ Failed checks (1 inconclusive)

  • Title check: ❓ Inconclusive. The title '[OMNIML-4672] training_support' is related to the changeset, which adds a training configuration file for Qwen3-8B, but is vague and lacks specificity about the actual change. Resolution: consider making the title more specific about what was added, such as 'Add Qwen3-8B EAGLE3 training pipeline configuration', to clearly convey the primary change.

✅ Passed checks (5 passed)

  • Description Check: ✅ Passed. Check skipped because CodeRabbit's high-level summary is enabled.
  • Docstring Coverage: ✅ Passed. No functions found in the changed files to evaluate docstring coverage; skipping docstring coverage check.
  • Linked Issues check: ✅ Passed. Check skipped because no linked issues were found for this pull request.
  • Out of Scope Changes check: ✅ Passed. Check skipped because no linked issues were found for this pull request.
  • Security Anti-Patterns: ✅ Passed. PR adds only a YAML configuration file (step3_train.yaml) with no Python code or dependency changes. The security anti-patterns check applies only to Python changes, making it not applicable here.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


ChenhanYu marked this pull request as ready for review May 13, 2026 17:42

coderabbitai (Bot, Contributor) left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

Inline comments:
In `@tools/launcher/examples/Qwen/Qwen3-8B/step3_train.yaml`:
- Around line 11-21: Add an environment block to task_0 that sets the required
model env vars: include MLM_MODEL_CFG with the HuggingFace repo ID for this
model and QUANT_CFG with the chosen quantization config (e.g., NVFP4_DEFAULT_CFG
or INT8_DEFAULT_CFG); ensure the environment uses the project-required
list-of-single-key-dicts format (each env var as its own single-key mapping) so
tools/launcher parsing and downstream scripts like common/eagle3/train_eagle.sh
can read MLM_MODEL_CFG and QUANT_CFG.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Enterprise

Run ID: 2dac94de-3121-43ac-acaa-ec0b95ccc86e

📥 Commits

Reviewing files that changed from the base of the PR and between 62401e1 and 82d963b.

📒 Files selected for processing (1)
  • tools/launcher/examples/Qwen/Qwen3-8B/step3_train.yaml

Comment on lines +11 to +21
  task_0:
    script: common/eagle3/train_eagle.sh
    args:
      - --config modules/Model-Optimizer/modelopt_recipes/general/speculative_decoding/eagle3.yaml
      - model.model_name_or_path=<<global_vars.hf_model>>
      - data.offline_data_path=/scratchspace/offline_hidden_states
      - training.output_dir=/scratchspace/eagle3
      - training.training_seq_len=4096
      - training.disable_tqdm=true
      - training.ar_validate_steps=500000
  slurm_config:

⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Add required model environment variables for this new config.

task_0 is missing the required environment block with MLM_MODEL_CFG (HF repo ID) and QUANT_CFG, and the env format should be list-of-single-key-dicts.

Suggested patch
   task_0:
     script: common/eagle3/train_eagle.sh
+    environment:
+      - MLM_MODEL_CFG: Qwen/Qwen3-8B
+      - QUANT_CFG: NVFP4_DEFAULT_CFG
     args:
       - --config modules/Model-Optimizer/modelopt_recipes/general/speculative_decoding/eagle3.yaml
       - model.model_name_or_path=<<global_vars.hf_model>>

As per coding guidelines, tools/launcher/**/*.yaml requires “environment as list-of-single-key-dicts”, “Set MLM_MODEL_CFG environment variable to the HuggingFace repo ID when adding a new model config”, and “Set QUANT_CFG environment variable (e.g., NVFP4_DEFAULT_CFG, INT8_DEFAULT_CFG) when adding a new model config”.
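
For clarity, a minimal sketch of the "list-of-single-key-dicts" format the guideline asks for, next to the plain-mapping form it rules out. The values mirror the suggested patch above; the commented-out alternative is shown only for contrast.

```yaml
# Required format: each env var is its own single-key mapping inside a list.
environment:
  - MLM_MODEL_CFG: Qwen/Qwen3-8B      # HuggingFace repo ID, per the guideline
  - QUANT_CFG: NVFP4_DEFAULT_CFG      # or e.g. INT8_DEFAULT_CFG

# Not this plain-mapping form, which the guideline's wording rules out:
# environment:
#   MLM_MODEL_CFG: Qwen/Qwen3-8B
#   QUANT_CFG: NVFP4_DEFAULT_CFG
```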

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change (resulting task block):

  task_0:
    script: common/eagle3/train_eagle.sh
    environment:
      - MLM_MODEL_CFG: Qwen/Qwen3-8B
      - QUANT_CFG: NVFP4_DEFAULT_CFG
    args:
      - --config modules/Model-Optimizer/modelopt_recipes/general/speculative_decoding/eagle3.yaml
      - model.model_name_or_path=<<global_vars.hf_model>>
      - data.offline_data_path=/scratchspace/offline_hidden_states
      - training.output_dir=/scratchspace/eagle3
      - training.training_seq_len=4096
      - training.disable_tqdm=true
      - training.ar_validate_steps=500000
  slurm_config:
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@tools/launcher/examples/Qwen/Qwen3-8B/step3_train.yaml` around lines 11 - 21,
Add an environment block to task_0 that sets the required model env vars:
include MLM_MODEL_CFG with the HuggingFace repo ID for this model and QUANT_CFG
with the chosen quantization config (e.g., NVFP4_DEFAULT_CFG or
INT8_DEFAULT_CFG); ensure the environment uses the project-required
list-of-single-key-dicts format (each env var as its own single-key mapping) so
tools/launcher parsing and downstream scripts like common/eagle3/train_eagle.sh
can read MLM_MODEL_CFG and QUANT_CFG.

Commit message:

Agent-authored via pensieve-intern's training_support stage on Epic OMNIML-4666. Faithful extraction of task_2 (EAGLE3 draft-head training) from hf_offline_eagle3.yaml's monolithic pipeline, renamed task_0 for the standalone step convention.

Signed-off-by: Chenhan D. Yu <chenhany@nvidia.com>
ChenhanYu force-pushed the pensieve-intern/OMNIML-4666/training-support branch from 341c8fa to 4300a1d on May 13, 2026 17:49
codecov (Bot) commented May 13, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 76.78%. Comparing base (62401e1) to head (4300a1d).

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #1477      +/-   ##
==========================================
- Coverage   76.78%   76.78%   -0.01%     
==========================================
  Files         473      473              
  Lines       51413    51413              
==========================================
- Hits        39476    39475       -1     
- Misses      11937    11938       +1     
Flag Coverage Δ
unit 52.55% <ø> (+<0.01%) ⬆️

Flags with carried forward coverage won't be shown. Click here to find out more.

☔ View full report in Codecov by Sentry.
ChenhanYu (Collaborator, Author) commented

Closing — wrong artifact shape.

Each pensieve-intern support stage was authoring a separate step<N>_<name>.yaml (extracted from the monolithic 4-task hf_offline_eagle3.yaml). On reflection, the right design is:

  • One PR per model release, against the single tools/launcher/examples/<Family>/<model>/hf_offline_eagle3.yaml
  • Each support stage's Done = its task block within that one file works for the model
  • For models that already have hf_offline_eagle3.yaml on main (like Qwen3-8B), support stages are mostly no-op smoke checks

Pensieve-intern v0.33.33 will land this redesign — workflow.yaml switches to the monolithic YAML + per-task slurm.py invocation, and the support-stage SPECs become verify-task-X instead of extract-task-X. Re-materializing the Qwen3-8B Epic after that lands will produce the cleaner single-PR shape.
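
For illustration, a hypothetical sketch of the single-file shape this redesign implies. Only task_2 is grounded in this PR (the commit message above says the training block was extracted from task_2 of hf_offline_eagle3.yaml); every other task name and script, and the global_vars key, are placeholders, not the real pipeline.

```yaml
# Hypothetical monolithic hf_offline_eagle3.yaml shape after the v0.33.33 redesign.
# Only task_2 is grounded in this PR; all other scripts here are placeholders.
global_vars:
  hf_model: /path/to/Qwen3-8B                  # placeholder

task_0:
  script: placeholder_prepare.sh               # hypothetical earlier pipeline step
task_1:
  script: placeholder_dump_hidden_states.sh    # hypothetical; would produce the offline data
task_2:                                        # the EAGLE3 draft-head training this PR had split out
  script: common/eagle3/train_eagle.sh
  args:
    - --config modules/Model-Optimizer/modelopt_recipes/general/speculative_decoding/eagle3.yaml
    - model.model_name_or_path=<<global_vars.hf_model>>
    - data.offline_data_path=/scratchspace/offline_hidden_states
task_3:
  script: placeholder_export_eval.sh           # hypothetical final step
```

Under this shape, each support stage's Done maps to one task_N block working for the model, matching the bullet list above.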

ChenhanYu closed this May 13, 2026
ChenhanYu deleted the pensieve-intern/OMNIML-4666/training-support branch May 13, 2026 19:09
github-actions (Bot, Contributor) commented

PR Preview Action v1.8.1
Preview removed because the pull request was closed.
2026-05-13 19:09 UTC
