ci: add HF doc contract test for pretraining instantiation#1802
Draft
Conversation
Add 13 recipe YAMLs from the NMP customizer service's compile_automodel_config() output. These serve as contract tests — if any stop working with finetune.py, it means a breaking change was introduced that affects the customizer integration.

Configs cover 4 model families across SFT, PEFT, chat template, and sequence packing axes:
- GPT-OSS 20B (MoE): full_sft, chat, peft, peft+packing
- Llama 3.1 8B: full_sft with TP=2
- Llama 3.2 1B: full_sft, chat, peft, peft+packing
- Nemotron Nano V3 (MoE): full_sft, chat, peft, peft+packing

Sample datasets will be placed on the CI cluster; data paths are overridden via CLI args at runtime.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Signed-off-by: adil-a <adil.asif2000@hotmail.com>
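As a hedged illustration of what one such contract-test recipe might contain — the keys shown are the ones discussed in this PR (`recipe:`, `dataset.path_or_dataset_id`, `ci.checkpoint_robustness`), but every value below is an illustrative placeholder, not taken from the actual compile_automodel_config() output:

```yaml
# Hypothetical sketch of a peft+packing customizer recipe; structure and
# values are assumptions for illustration only.
recipe:
  _target_: nemo_automodel.recipes.llm.finetune   # placeholder recipe target
model:
  pretrained_model_name_or_path: meta-llama/Llama-3.2-1B  # one of the 4 families above
dataset:
  path_or_dataset_id: /placeholder/overridden-at-runtime  # overridden via CLI on CI
peft:
  enabled: true
packing:
  enabled: true
ci:
  checkpoint_robustness:
    kl_threshold: 0.05  # model-specific in the real configs
```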
- Add ci.checkpoint_robustness sections to all 13 customizer YAMLs with model-specific KL thresholds matching existing configs
- Update finetune_launcher.sh to detect customizer/ configs and override dataset paths for both the finetune and robustness phases
- Register dataset.path_or_dataset_id in conftest.py so pytest accepts the CLI override without aborting collection

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Signed-off-by: adil-a <adil.asif2000@hotmail.com>
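The conftest.py registration mentioned above might look roughly like the sketch below — a minimal guess at the hook, assuming the option is exposed via pytest's `pytest_addoption`; the real conftest.py may register it differently. The `_FakeParser` class exists only to make the sketch runnable outside pytest:

```python
# Hedged sketch: register a custom CLI option so that
# `pytest --dataset.path_or_dataset_id=...` does not abort collection
# with "unrecognized arguments".
def pytest_addoption(parser):
    parser.addoption(
        "--dataset.path_or_dataset_id",
        action="store",
        default=None,
        help="Override the dataset path for customizer contract tests",
    )

# Stand-in for pytest's parser, purely to demonstrate the hook locally.
class _FakeParser:
    def __init__(self):
        self.registered = []

    def addoption(self, name, **kwargs):
        self.registered.append(name)

parser = _FakeParser()
pytest_addoption(parser)
print(parser.registered)
```

Without such a registration, pytest rejects unknown dotted options at collection time, which is why the override had to be declared explicitly.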
The base config already sets tp_size: 2 in the distributed section, so the checkpoint_robustness override was redundant.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Signed-off-by: adil-a <adil.asif2000@hotmail.com>
NEMO_CI_PATH is the correct env var on eos CI (/lustre/fsw/coreai_dlalgo_ci/automodel_ci), not TEST_DATA_DIR.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Signed-off-by: adil-a <adil.asif2000@hotmail.com>
Resolve conflict in finetune_launcher.sh: keep global_batch_size 32
from main (multi-node compat fix) and ${CUSTOMIZER_DATASET_ARGS:-}
from this branch (customizer contract test support).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Signed-off-by: adil-a <adil.asif2000@hotmail.com>
Move customizer configs from the flat examples/llm_finetune/customizer/ directory into their respective model-family directories with a customizer_ prefix, matching the established llm_finetune directory pattern:
- gpt_oss: 4 configs
- llama3_1: 1 config
- llama3_2: 4 configs
- nemotron: 4 configs

Update nightly_recipes.yml to integrate customizer entries into the existing model sections. Update the finetune_launcher.sh glob from *customizer/* to *customizer_* for filename-based detection.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Signed-off-by: adil-a <adil.asif2000@hotmail.com>
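Python's `fnmatch` follows shell glob semantics closely enough (`*` matches `/` too in both) to illustrate why the launcher's detection pattern had to change with this move. The paths below are hypothetical examples of the old and new layouts:

```python
from fnmatch import fnmatch

# Old layout: a flat customizer/ directory.
# New layout: model-family directories with a customizer_ filename prefix.
old_style = "examples/llm_finetune/customizer/gpt_oss_full_sft.yaml"
new_style = "examples/llm_finetune/gpt_oss/customizer_gpt_oss_full_sft.yaml"

assert fnmatch(old_style, "*customizer/*")       # old glob matched the flat dir
assert not fnmatch(new_style, "*customizer/*")   # ...but misses the new layout
assert fnmatch(new_style, "*customizer_*")       # filename-based glob matches
print("glob detection ok")
```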
All example YAMLs require a `recipe:` key for the unit test `test_example_config_has_recipe_target`. The customizer configs were missing it.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Signed-off-by: adil-a <adil.asif2000@hotmail.com>
Adds a functional test that mirrors the exact Python code from the HuggingFace pretraining integration doc, verifying that the documented API (setup_distributed + NeMoAutoModelForCausalLM.from_pretrained with EP=8) works end-to-end on 8 GPUs. If this test breaks, either the code or the HF doc needs updating.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Signed-off-by: adil-a <adil.asif2000@hotmail.com>
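The documented flow the test exercises can be sketched as below. `setup_distributed` and `NeMoAutoModelForCausalLM.from_pretrained` are named in this PR; the import path for `setup_distributed`, the `ep_size` kwarg name, and the model id are assumptions for illustration and may differ from the actual HF doc. The import guard keeps the sketch runnable on a machine without the library or GPUs:

```python
def instantiate_from_doc():
    """Hedged sketch of the doc's pattern: distributed setup, then EP=8 load."""
    try:
        from nemo_automodel import NeMoAutoModelForCausalLM
        # Assumed import path for setup_distributed; the real doc may differ.
        from nemo_automodel.components.distributed.init_utils import setup_distributed
    except ImportError:
        print("nemo_automodel not installed; skipping instantiation")
        return None

    setup_distributed()  # brings up torch.distributed across the 8 ranks
    return NeMoAutoModelForCausalLM.from_pretrained(
        "nvidia/Nemotron-Nano-v3",  # placeholder model id
        ep_size=8,  # assumed kwarg name for expert parallelism
    )

model = instantiate_from_doc()
```

Because the CI test copies the doc's code verbatim rather than a paraphrase like this, any drift between the doc and the codebase surfaces as a test failure instead of a silent doc rot.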
Summary

- Adds a functional test mirroring the HF pretraining integration doc: `setup_distributed` + `NeMoAutoModelForCausalLM.from_pretrained` with EP=8 on 8 GPUs
- Based on the `nemotron_nano_v3_hellaswag_peft.yaml` config

HF doc verification notes

- `nemotron_nano_v3_hellaswag_peft.yaml` config ✅
- Documented API (`setup_distributed`, `from_pretrained` kwargs) matches the codebase ✅
- Doc has a typo (`-–` instead of `--`) — should be fixed in a separate HF docs PR

Test plan

- `bash L2_HF_Doc_Pretrain_Instantiation.sh` on 8x H100 — passed
- `pytest --collect-only` confirms the new test is collected
- `ruff check` and `ruff format` pass

🤖 Generated with Claude Code