
ModelOpt config system documentation #1472

Draft

shengliangxu wants to merge 16 commits into main from shengliangx/modelopt-config

Conversation

@shengliangxu
Collaborator

What does this PR do?

Type of change: documentation

Adds a new Sphinx guide for the ModelOpt config system. The guide focuses on the general config contract and semantics, including ModeloptBaseConfig schemas, validation boundaries, YAML loading, YAML persistence, checkpoint persistence, schema evolution, and composable YAML with imports / $import.

It also documents why ModelOpt uses a small YAML DSL for composition: config files stay self-describing, reusable fragments can be authored as YAML, and resolved values are still validated against Python schemas. Recipes are presented as one of the main applications of the shared config system.
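To make the $import composition concrete, here is a minimal, self-contained sketch of how such a directive could be resolved. The resolver below (resolve_imports, the FRAGMENTS registry, and the sibling-override merge rule) is purely illustrative; ModelOpt's actual loader works on YAML files on disk and validates the resolved values against the Pydantic schemas.

```python
# Hypothetical sketch of "$import" resolution over plain dicts.
# FRAGMENTS stands in for reusable YAML fragments on disk.
FRAGMENTS = {
    "presets/fp8_base": {"quant_cfg": {"*weight_quantizer": {"num_bits": 8}}},
}


def resolve_imports(node):
    """Recursively replace {"$import": key, ...overrides} with the named
    fragment, letting sibling keys override imported values."""
    if isinstance(node, dict):
        if "$import" in node:
            base = resolve_imports(FRAGMENTS[node["$import"]])
            overrides = {
                k: resolve_imports(v) for k, v in node.items() if k != "$import"
            }
            merged = dict(base)  # imported values first...
            merged.update(overrides)  # ...then local keys win
            return merged
        return {k: resolve_imports(v) for k, v in node.items()}
    if isinstance(node, list):
        return [resolve_imports(v) for v in node]
    return node


config = {"$import": "presets/fp8_base", "algorithm": "max"}
resolved = resolve_imports(config)
# resolved contains both the imported quant_cfg and the local override
```

Sibling keys override imported values, so a config can specialize a preset in place without copying the fragment; the resolved dict would then be handed to the schema for validation.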

Usage

from modelopt.recipe import load_config
from modelopt.torch.quantization.config import QuantizeConfig

# Load the YAML preset and normalize it against the QuantizeConfig schema.
data = load_config("configs/ptq/presets/model/fp8", schema_type=QuantizeConfig)
cfg = QuantizeConfig.model_validate(data)
resolved = cfg.model_dump()  # plain dict of resolved, validated values

Testing

  • Ran git diff --cached --check
  • Verified 11_config_system.rst has no stale TypedDict references after rebase
  • Did not run a full Sphinx build because Sphinx is unavailable in the local environment

Before your PR is "Ready for review"

  • Is this change backward compatible?: ✅
  • If you copied code from any other sources or added a new PIP dependency, did you follow guidance in CONTRIBUTING.md: N/A
  • Did you write any new necessary tests?: N/A
  • Did you update Changelog?: N/A
  • Did you get Claude approval on this PR?: N/A

Additional Information

N/A

Have load_config return Pydantic-normalized values when schema_type or modelopt-schema is present, including typed recipe metadata and quantization config entries.

Update recipe loading, docs, and unit tests for typed config objects and normalized quant_cfg handling.

Signed-off-by: Shengliang Xu <shengliangx@nvidia.com>
Convert QuantizerCfgEntry into a ModeloptBaseConfig-backed Pydantic model with validation while preserving dict-style access for callers.

Normalize schema-loaded quant_cfg snippets through model_dump, simplify quantizer cfg handling, and cover both dict and QuantizeConfig need_calibration inputs.

Signed-off-by: Shengliang Xu <shengliangx@nvidia.com>
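The QuantizerCfgEntry change above can be illustrated with a dependency-free sketch. The real class is a ModeloptBaseConfig-backed Pydantic model; this stand-in uses a dataclass only to show how construction-time validation and dict-style access for legacy callers can coexist on one object.

```python
# Illustrative stand-in for a validated config entry that still supports
# dict-style reads; not the actual QuantizerCfgEntry implementation.
from dataclasses import dataclass


@dataclass
class QuantizerCfgEntrySketch:
    num_bits: int = 8
    enable: bool = True

    def __post_init__(self):
        # Validation boundary: reject bad values at construction time.
        if self.num_bits not in (4, 8, 16):
            raise ValueError(f"unsupported num_bits: {self.num_bits}")

    def __getitem__(self, key):
        # Dict-style read access preserved for existing callers.
        return getattr(self, key)

    def get(self, key, default=None):
        return getattr(self, key, default)


entry = QuantizerCfgEntrySketch(num_bits=4)
```

Callers written against the old dict format keep working (entry["num_bits"], entry.get("enable")), while invalid values now fail fast instead of propagating.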
Update normalize_quant_cfg_list to accept raw dict entries (including legacy dict formats) and already-typed entries while returning QuantizerCfgEntry objects.

Preserve already parsed entries, handle implicit enable values in consumers, and cover mixed typed/dict inputs in tests.

Signed-off-by: Shengliang Xu <shengliangx@nvidia.com>
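A hypothetical sketch of the normalization contract described in the commit above: accept both raw dicts and already-typed entries, return only typed objects, and preserve entries that were parsed earlier. The Entry class and this normalize_quant_cfg_list signature are illustrative stand-ins, not ModelOpt's actual definitions.

```python
# Illustrative normalization of mixed typed/dict quantizer config entries.
from dataclasses import dataclass


@dataclass
class Entry:
    num_bits: int = 8
    enable: bool = True


def normalize_quant_cfg_list(entries):
    out = []
    for e in entries:
        if isinstance(e, Entry):
            out.append(e)  # preserve already-parsed entries as-is
        elif isinstance(e, dict):
            out.append(Entry(**e))  # legacy dict format -> typed entry
        else:
            raise TypeError(f"unsupported entry type: {type(e)!r}")
    return out


mixed = [Entry(num_bits=4), {"num_bits": 8, "enable": False}]
normalized = normalize_quant_cfg_list(mixed)
```

Downstream code can then rely on a single typed shape instead of branching on dict-vs-object at every use site.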
Make ModeloptBaseConfig a MutableMapping and use Mapping/MutableMapping protocol checks for typed quantizer config entries and attributes.

Convert predefined quantization recipes to QuantizeConfig objects while preserving dict-style callers and compatibility paths.

Signed-off-by: Shengliang Xu <shengliangx@nvidia.com>
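The MutableMapping change above can be sketched as follows. ModeloptBaseConfig is a Pydantic model, so this plain-class stand-in only demonstrates why implementing the abstract methods makes isinstance(cfg, MutableMapping) protocol checks pass for dict-style callers.

```python
# Minimal stand-in showing a config object that satisfies the
# MutableMapping protocol; not the real ModeloptBaseConfig.
from collections.abc import MutableMapping


class ConfigSketch(MutableMapping):
    def __init__(self, **fields):
        self._data = dict(fields)

    def __getitem__(self, key):
        return self._data[key]

    def __setitem__(self, key, value):
        self._data[key] = value

    def __delitem__(self, key):
        del self._data[key]

    def __iter__(self):
        return iter(self._data)

    def __len__(self):
        return len(self._data)


cfg = ConfigSketch(algorithm="max", num_bits=8)
cfg["num_bits"] = 4  # dict-style mutation still works
```

With the five abstract methods implemented, the ABC supplies keys(), items(), get(), update(), and friends for free, so dict-style callers and Mapping/MutableMapping isinstance checks both keep working.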
Cover normalization after mutating raw dict quantizer entries and schema-backed ModeloptBaseConfig entries.

Signed-off-by: Shengliang Xu <shengliangx@nvidia.com>
Signed-off-by: Shengliang Xu <shengliangx@nvidia.com>
@copy-pr-bot

copy-pr-bot Bot commented May 12, 2026

Auto-sync is disabled for draft pull requests in this repository. Workflows must be run manually.

Contributors can view more details about this message here.

@coderabbitai
Contributor

coderabbitai Bot commented May 12, 2026

Important

Review skipped

Draft detected.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Enterprise

Run ID: a448bb18-8927-4d05-99fa-1909e7c190d0

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.


@github-actions
Contributor

PR Preview Action v1.8.1


🚀 View preview at https://NVIDIA.github.io/Model-Optimizer/pr-preview/pr-1472/

Built to branch gh-pages at 2026-05-12 16:48 UTC.
Preview will be ready when the GitHub Pages deployment is complete.

@codecov

codecov Bot commented May 12, 2026

Codecov Report

❌ Patch coverage is 92.22973% with 23 lines in your changes missing coverage. Please review.
✅ Project coverage is 76.76%. Comparing base (a098759) to head (a5e5062).
⚠️ Report is 7 commits behind head on main.

Files with missing lines                                Patch %   Lines
modelopt/torch/quantization/config.py                   93.12%    11 Missing ⚠️
modelopt/torch/quantization/algorithms.py               80.00%    4 Missing ⚠️
...torch/quantization/backends/fp8_per_tensor_gemm.py   82.35%    3 Missing ⚠️
modelopt/torch/opt/config.py                            93.33%    2 Missing ⚠️
...delopt/onnx/llm_export_utils/quantization_utils.py   0.00%     1 Missing ⚠️
modelopt/torch/opt/config_loader.py                     91.66%    1 Missing ⚠️
modelopt/torch/quantization/backends/nvfp4_gemm.py      92.30%    1 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #1472      +/-   ##
==========================================
- Coverage   76.91%   76.76%   -0.16%     
==========================================
  Files         478      478              
  Lines       51434    52171     +737     
==========================================
+ Hits        39563    40047     +484     
- Misses      11871    12124     +253     
Flag Coverage Δ
unit 52.78% <81.75%> (+0.18%) ⬆️
