
[TRTLLM-10673][chore] refine some LLM API args #12135

Open
QiJune wants to merge 1 commit into NVIDIA:main from QiJune:refine_llm

Conversation

QiJune (Collaborator) commented Mar 12, 2026

Summary by CodeRabbit

  • New Features

    • Added attention distribution parallel configuration option for expanded parallelization flexibility.
  • Bug Fixes

    • Added validation warnings for incompatible configuration combinations to prevent unintended behavior.
  • Deprecations

    • Marked several configuration options as deprecated; removed RING context parallelization type support.
  • Documentation

    • Expanded configuration parameter descriptions and metadata for improved clarity.

Description

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update the tava architecture diagram if there is a significant design change in this PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

To see a list of available CI bot commands, please comment /bot help.

Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>
QiJune requested review from a team as code owners March 12, 2026 02:05
QiJune requested review from lucaslie and syuoni March 12, 2026 02:05
QiJune changed the title from [TRTLLM-10673] refine some LLM API args to [TRTLLM-10673][chore] refine some LLM API args on Mar 12, 2026
QiJune (Collaborator, Author) commented Mar 12, 2026

/bot run

coderabbitai bot (Contributor) commented Mar 12, 2026

📝 Walkthrough

This pull request introduces deprecation warnings for legacy configuration fields across the LLM API layer, removes an unused enum member, and adds validation logic for MTP speculative decoding configuration. Status metadata is updated in API stability references to reflect new field statuses.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| Model Configuration Validation<br>tensorrt_llm/_torch/models/modeling_utils.py | Adds conditional logic to compute the MTP flag from the speculative decoding configuration and emits a warning when enable_lm_head_tp_in_adp is enabled without MTP decoding support. |
| Deprecation Warnings & API Metadata<br>tensorrt_llm/llmapi/llm_args.py | Introduces warn_deprecated_fields validators in the TrtLlmArgs and TorchLlmArgs classes. Updates field status metadata, marks several fields as deprecated (moe_cluster_parallel_size, fail_fast_on_attention_window_too_large), expands field documentation, and moves iteration stats and other fields from prototype to beta status. |
| Enum Simplification<br>tensorrt_llm/mapping.py | Removes the RING member from the CpType enumeration, reducing the supported context-parallelism types to ULYSSES and HELIX. |
| API Stability References<br>tests/unittest/api_stability/references/llm.yaml, tests/unittest/api_stability/references_committed/llm.yaml | Updates the API stability reference files to reflect the deprecation of moe_cluster_parallel_size and enable_attention_dp, moves several fields from prototype to beta status, and adds the enable_attention_dp parameter to the committed API surface. |

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

🚥 Pre-merge checks | ✅ 1 passed | ❌ 2 failed

❌ Failed checks (2 warnings)

| Check name | Status | Explanation | Resolution |
| --- | --- | --- | --- |
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 25.00%, which is below the required threshold of 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold. |
| Description check | ⚠️ Warning | The pull request description is almost entirely empty, containing only the template structure without any substantive details about the changes, their rationale, or test coverage. | Fill in the Description section explaining what LLM API args were refined and why, document relevant tests in the Test Coverage section, and provide actual implementation details and rationale. |

✅ Passed checks (1 passed)

| Check name | Status | Explanation |
| --- | --- | --- |
| Title check | ✅ Passed | The title clearly and specifically describes the main change: refining LLM API arguments, which aligns with the actual modifications across llm_args.py, modeling_utils.py, and mapping.py. |

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Comment @coderabbitai help to get the list of available commands and usage tips.

coderabbitai bot (Contributor) left a comment

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
tensorrt_llm/llmapi/llm_args.py (1)

2645-2677: ⚠️ Potential issue | 🟠 Major

Validate enable_lm_head_tp_in_adp against backend/MTP, not just ADP.

This validator only warns on missing enable_attention_dp, but the field description now says it also requires MTP speculative decoding. TrtLlmArgs.validate_speculative_config() never accepts MTP on the TensorRT path, so TrtLlmArgs(enable_lm_head_tp_in_adp=True, enable_attention_dp=True) is currently accepted even though its documented precondition can never be satisfied.

Suggested follow-up:

```diff
     @model_validator(mode="after")
     def validate_parallel_config(self):
-        if self.enable_lm_head_tp_in_adp and not self.enable_attention_dp:
-            logger.warning(
-                "enable_lm_head_tp_in_adp has no effect without enable_attention_dp=True."
-            )
+        if self.enable_lm_head_tp_in_adp:
+            if not self.enable_attention_dp:
+                logger.warning(
+                    "enable_lm_head_tp_in_adp has no effect without enable_attention_dp=True."
+                )
+
+            is_mtp = isinstance(self.speculative_config, MTPDecodingConfig)
+            if self.backend != "pytorch" or not is_mtp:
+                logger.warning(
+                    "enable_lm_head_tp_in_adp is only effective on the PyTorch backend with MTP speculative decoding."
+                )

         if "moe_cluster_parallel_size" in self.model_fields_set:
             logger.warning(
                 "moe_cluster_parallel_size is deprecated and will be removed in a future release."
             )
```
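
For context, a minimal repro of the gap described above, assuming the class and field names shown in this review (the model path and any other required constructor arguments are illustrative):

```python
from tensorrt_llm.llmapi.llm_args import TrtLlmArgs

# Per the review comment, this construction is accepted today even though
# the TensorRT path never enables MTP speculative decoding, so the
# documented precondition for enable_lm_head_tp_in_adp can never hold.
args = TrtLlmArgs(
    model="/path/to/model",  # illustrative placeholder
    enable_lm_head_tp_in_adp=True,
    enable_attention_dp=True,
)
```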
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `tensorrt_llm/llmapi/llm_args.py` around lines 2645-2677, the validator in
validate_parallel_config currently only checks enable_attention_dp for
enable_lm_head_tp_in_adp; update it to also validate that the runtime/backend
and MTP support are available (so the flag is only allowed when the backend can
enable MTP/speculative decoding), e.g. add a check using the same conditions
used by TrtLlmArgs.validate_speculative_config (or call that validation) to
reject or warn when enable_lm_head_tp_in_adp=True but the TensorRT path/backend
does not support MTP, and ensure the warning/error message references
enable_lm_head_tp_in_adp, enable_attention_dp, and the backend/MTP requirement
so the behavior matches the documented precondition.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: f436d395-1508-4884-8d07-55350ce3a458

📥 Commits

Reviewing files that changed from the base of the PR and between 2578637 and 3e8e455.

📒 Files selected for processing (5)
  • tensorrt_llm/_torch/models/modeling_utils.py
  • tensorrt_llm/llmapi/llm_args.py
  • tensorrt_llm/mapping.py
  • tests/unittest/api_stability/references/llm.yaml
  • tests/unittest/api_stability/references_committed/llm.yaml
💤 Files with no reviewable changes (1)
  • tensorrt_llm/mapping.py

tensorrt-cicd (Collaborator)

PR_Github #38652 [ run ] triggered by Bot. Commit: 3e8e455
