
ci: add HF doc contract test for pretraining instantiation#1802

Draft
adil-a wants to merge 10 commits into main from adil-a/hf-doc-contract-tests

Conversation

@adil-a (Collaborator) commented Apr 13, 2026

Summary

  • Adds a functional test that mirrors the exact Python code from the HuggingFace pretraining integration doc, verifying setup_distributed + NeMoAutoModelForCausalLM.from_pretrained with EP=8 on 8 GPUs
  • If this test breaks, either the Automodel code or the HF doc needs updating
  • The finetuning doc is already covered by the nightly CI test for nemotron_nano_v3_hellaswag_peft.yaml
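
A minimal sketch of the call shape the new test mirrors. Only `setup_distributed`, `NeMoAutoModelForCausalLM.from_pretrained`, and EP=8 come from the PR description; the import paths and the expert-parallel keyword name below are assumptions:

```python
# Hypothetical sketch of the documented pretraining instantiation.
# Import paths and the ep_size keyword are assumptions; imports are
# deferred into the function so the sketch reads without GPUs or the
# nemo_automodel package installed.
def instantiate_pretrain_model(model_id: str, ep_size: int = 8):
    from nemo_automodel import NeMoAutoModelForCausalLM  # assumed export
    from nemo_automodel.distributed import setup_distributed  # assumed path

    setup_distributed()  # one rank per GPU, launched via torchrun
    return NeMoAutoModelForCausalLM.from_pretrained(model_id, ep_size=ep_size)
```

Under torchrun with 8 ranks, EP=8 would shard the experts across all GPUs; the contract test only needs instantiation to succeed, not a training step.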

HF doc verification notes

  • Finetuning doc YAML snippets match the actual nemotron_nano_v3_hellaswag_peft.yaml config ✅
  • Pretraining doc Python API (setup_distributed, from_pretrained kwargs) matches the codebase ✅
  • ⚠️ The finetuning doc has an en-dash typo in the torchrun command (-– instead of --) — should be fixed in a separate HF docs PR
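
This class of typo is easy to catch mechanically. A minimal, illustrative lint (the function name and doc snippet are hypothetical) that flags non-ASCII dashes in CLI snippets:

```python
import re

# Non-ASCII dashes that commonly sneak into copy-pasted CLI snippets:
# en dash (\u2013), em dash (\u2014), minus sign (\u2212).
BAD_DASH = re.compile("[\u2013\u2014\u2212]")

def find_dash_typos(snippet: str):
    """Return (line_number, line) pairs containing a non-ASCII dash."""
    return [
        (i, line)
        for i, line in enumerate(snippet.splitlines(), start=1)
        if BAD_DASH.search(line)
    ]

# The typo class noted above: "-\u2013" (hyphen + en dash) instead of "--".
hits = find_dash_typos("torchrun -\u2013nproc-per-node=8 pretrain.py")
```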

Test plan

  • Verified with bash L2_HF_Doc_Pretrain_Instantiation.sh on 8x H100 — passed
  • pytest --collect-only confirms the new test is collected
  • ruff check and ruff format pass

🤖 Generated with Claude Code

adil-a and others added 10 commits April 7, 2026 14:12
Add 13 recipe YAMLs from the NMP customizer service's
compile_automodel_config() output. These serve as contract tests —
if any stop working with finetune.py, it means a breaking change
was introduced that affects the customizer integration.

Configs cover 4 model families across SFT, PEFT, chat template,
and sequence packing axes:
- GPT-OSS 20B (MoE): full_sft, chat, peft, peft+packing
- Llama 3.1 8B: full_sft with TP=2
- Llama 3.2 1B: full_sft, chat, peft, peft+packing
- Nemotron Nano V3 (MoE): full_sft, chat, peft, peft+packing

Sample datasets will be placed on the CI cluster; data paths
overridden via CLI args at runtime.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Signed-off-by: adil-a <adil.asif2000@hotmail.com>
- Add ci.checkpoint_robustness sections to all 13 customizer YAMLs
  with model-specific KL thresholds matching existing configs
- Update finetune_launcher.sh to detect customizer/ configs and
  override dataset paths for both finetune and robustness phases
- Register dataset.path_or_dataset_id in conftest.py so pytest
  accepts the CLI override without aborting collection

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Signed-off-by: adil-a <adil.asif2000@hotmail.com>
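
The conftest.py registration described above might look like the following minimal sketch; the option name comes from the commit message, while the action, default, and help text are assumptions:

```python
# conftest.py (sketch): register the dataset override up front so pytest
# does not abort collection with "unrecognized arguments" when the CI
# launcher passes --dataset.path_or_dataset_id on the command line.
def pytest_addoption(parser):
    parser.addoption(
        "--dataset.path_or_dataset_id",
        action="store",
        default=None,
        help="Override dataset path for customizer contract-test configs.",
    )
```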
The base config already sets tp_size: 2 in the distributed section,
so the checkpoint_robustness override was redundant.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Signed-off-by: adil-a <adil.asif2000@hotmail.com>
NEMO_CI_PATH is the correct env var on eos CI
(/lustre/fsw/coreai_dlalgo_ci/automodel_ci), not TEST_DATA_DIR.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Signed-off-by: adil-a <adil.asif2000@hotmail.com>
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Signed-off-by: adil-a <adil.asif2000@hotmail.com>
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Signed-off-by: adil-a <adil.asif2000@hotmail.com>
Resolve conflict in finetune_launcher.sh: keep global_batch_size 32
from main (multi-node compat fix) and ${CUSTOMIZER_DATASET_ARGS:-}
from this branch (customizer contract test support).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Signed-off-by: adil-a <adil.asif2000@hotmail.com>
Move customizer configs from flat examples/llm_finetune/customizer/
into their respective model-family directories with customizer_ prefix,
matching the established llm_finetune directory pattern.

- gpt_oss: 4 configs
- llama3_1: 1 config
- llama3_2: 4 configs
- nemotron: 4 configs

Update nightly_recipes.yml to integrate customizer entries into existing
model sections. Update finetune_launcher.sh glob from *customizer/* to
*customizer_* for filename-based detection.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Signed-off-by: adil-a <adil.asif2000@hotmail.com>
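
The filename-based detection described above can be sketched as a bash case glob; the function name is an assumption, while the `*customizer_*` pattern is from the commit message:

```shell
# Illustrative sketch: decide whether a config belongs to the customizer
# contract tests by filename, using the *customizer_* glob.
is_customizer_config() {
  case "$1" in
    *customizer_*) return 0 ;;  # e.g. gpt_oss/customizer_full_sft.yaml
    *)             return 1 ;;
  esac
}
```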
All example YAMLs require a `recipe:` key for the unit test
`test_example_config_has_recipe_target`. The customizer configs were
missing it.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Signed-off-by: adil-a <adil.asif2000@hotmail.com>
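
For illustration, the required key might look like the fragment below; the `recipe:` requirement and the unit-test name are from the commit message, while the target path shown is a hypothetical placeholder:

```yaml
# test_example_config_has_recipe_target checks that this key exists
# in every example YAML.
recipe:
  _target_: your_pkg.recipes.finetune.main  # hypothetical placeholder
```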
Adds a functional test that mirrors the exact Python code from the
HuggingFace pretraining integration doc, verifying that the documented
API (setup_distributed + NeMoAutoModelForCausalLM.from_pretrained with
EP=8) works end-to-end on 8 GPUs. If this test breaks, either the code
or the HF doc needs updating.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Signed-off-by: adil-a <adil.asif2000@hotmail.com>
copy-pr-bot bot commented Apr 13, 2026

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

@adil-a added the r0.4.0 label (Auto-cherrypick to release branch. Apply before merge; cherrypick happens after merge.) on Apr 14, 2026