[TRTLLM-10673][chore] refine some LLM API args #12135
QiJune wants to merge 1 commit into NVIDIA:main from
Conversation
Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>
/bot run
📝 Walkthrough

This pull request introduces deprecation warnings for legacy configuration fields across the LLM API layer, removes an unused enum member, and adds validation logic for MTP speculative decoding configuration. Status metadata is updated in API stability references to reflect new field statuses.
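For context, the deprecation-warning pattern described above typically looks like the sketch below in a Pydantic v2 model. This is illustrative only; `ExampleArgs`, `legacy_field`, and `new_field` are hypothetical names, not the PR's actual code.

```python
import logging
from typing import Optional

from pydantic import BaseModel, model_validator

logger = logging.getLogger(__name__)


class ExampleArgs(BaseModel):
    """Stand-in for an LLM API args class; both field names are hypothetical."""
    legacy_field: Optional[int] = None
    new_field: Optional[int] = None

    @model_validator(mode="after")
    def _warn_on_deprecated_fields(self):
        # model_fields_set holds only the fields the caller passed explicitly,
        # so relying on the default never triggers the warning.
        if "legacy_field" in self.model_fields_set:
            logger.warning(
                "legacy_field is deprecated and will be removed in a future "
                "release; use new_field instead.")
        return self


ExampleArgs(new_field=1)     # silent
ExampleArgs(legacy_field=1)  # logs the deprecation warning once, at construction
```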
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes

🚥 Pre-merge checks: ✅ 1 passed | ❌ 2 failed
❌ Failed checks (2 warnings)
✅ Passed checks (1 passed)
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
tensorrt_llm/llmapi/llm_args.py (1)
Lines 2645-2677: ⚠️ Potential issue | 🟠 Major

Validate `enable_lm_head_tp_in_adp` against backend/MTP, not just ADP.

This validator only warns on a missing `enable_attention_dp`, but the field description now says it also requires MTP speculative decoding. `TrtLlmArgs.validate_speculative_config()` never accepts MTP on the TensorRT path, so `TrtLlmArgs(enable_lm_head_tp_in_adp=True, enable_attention_dp=True)` is currently accepted even though its documented precondition can never be satisfied.

Suggested follow-up:
```diff
 @model_validator(mode="after")
 def validate_parallel_config(self):
-    if self.enable_lm_head_tp_in_adp and not self.enable_attention_dp:
-        logger.warning(
-            "enable_lm_head_tp_in_adp has no effect without enable_attention_dp=True."
-        )
+    if self.enable_lm_head_tp_in_adp:
+        if not self.enable_attention_dp:
+            logger.warning(
+                "enable_lm_head_tp_in_adp has no effect without enable_attention_dp=True."
+            )
+
+        is_mtp = isinstance(self.speculative_config, MTPDecodingConfig)
+        if self.backend != "pytorch" or not is_mtp:
+            logger.warning(
+                "enable_lm_head_tp_in_adp is only effective on the PyTorch backend with MTP speculative decoding."
+            )
     if "moe_cluster_parallel_size" in self.model_fields_set:
         logger.warning(
             "moe_cluster_parallel_size is deprecated and will be removed in a future release."
         )
```
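To make the gap concrete, a hypothetical repro of the accepted-but-unsatisfiable configuration might look like this. The import path and constructor kwargs are assumptions, not verified against the codebase:

```python
# Hypothetical repro sketch; import path and accepted kwargs are assumptions.
from tensorrt_llm.llmapi.llm_args import TrtLlmArgs

# Per the comment, this is accepted today: the validator only warns about a
# missing enable_attention_dp and never checks the backend/MTP precondition,
# which validate_speculative_config() can never satisfy on the TensorRT path.
args = TrtLlmArgs(
    model="/path/to/engine",  # placeholder; the required fields may differ
    enable_lm_head_tp_in_adp=True,
    enable_attention_dp=True,
)
```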
🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@tensorrt_llm/llmapi/llm_args.py` around lines 2645-2677: the validator in `validate_parallel_config` currently only checks `enable_attention_dp` for `enable_lm_head_tp_in_adp`; update it to also validate that the backend and MTP support are available (so the flag is only allowed when the backend can enable MTP speculative decoding), e.g. add a check using the same conditions as `TrtLlmArgs.validate_speculative_config` (or call that validation) to reject or warn when `enable_lm_head_tp_in_adp=True` but the TensorRT path/backend does not support MTP, and ensure the warning or error message references `enable_lm_head_tp_in_adp`, `enable_attention_dp`, and the backend/MTP requirement so the behavior matches the documented precondition.
ℹ️ Review info
⚙️ Run configuration
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: f436d395-1508-4884-8d07-55350ce3a458
📒 Files selected for processing (5)
- tensorrt_llm/_torch/models/modeling_utils.py
- tensorrt_llm/llmapi/llm_args.py
- tensorrt_llm/mapping.py
- tests/unittest/api_stability/references/llm.yaml
- tests/unittest/api_stability/references_committed/llm.yaml
💤 Files with no reviewable changes (1)
- tensorrt_llm/mapping.py
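If the new deprecation warnings need a behavioral test to go with the api_stability reference updates, a minimal pytest sketch could look like the following. Everything here is an assumption: the kwargs, the import path, and that the warning flows through a stdlib-compatible logger (tensorrt_llm wraps its own logger, so `caplog` capture may need adjusting). The field name `moe_cluster_parallel_size` comes from the suggested diff above.

```python
# Hypothetical test sketch; kwargs, import path, and logger behavior are assumptions.
import logging


def test_moe_cluster_parallel_size_warns(caplog):
    from tensorrt_llm.llmapi.llm_args import TrtLlmArgs

    with caplog.at_level(logging.WARNING):
        TrtLlmArgs(
            model="/path/to/engine",  # placeholder
            moe_cluster_parallel_size=1,
        )

    assert any("moe_cluster_parallel_size is deprecated" in rec.getMessage()
               for rec in caplog.records)
```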
PR_Github #38652 [ run ] triggered by Bot. Commit:
Summary by CodeRabbit

- New Features: validation of MTP speculative decoding configuration.
- Deprecations: legacy LLM API configuration fields now emit deprecation warnings and will be removed in a future release.
- Documentation: API stability references updated to reflect new field statuses.
Description
Test Coverage
PR Checklist
Please review the following before submitting your PR:
PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.
PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.
Test cases are provided for new code paths (see test instructions)
Any new dependencies have been scanned for license and vulnerabilities
CODEOWNERS updated if ownership changes
Documentation updated as needed
Update the tava architecture diagram if there is a significant design change in the PR.
The reviewers assigned automatically/manually are appropriate for the PR.
Please check this after reviewing the above items as appropriate for this PR.
GitHub Bot Help
To see a list of available CI bot commands, please comment `/bot help`.