
Pearl #13551 (Draft)

zhaoyangwang-nvidia wants to merge 2 commits into NVIDIA:main from zhaoyangwang-nvidia:pearl

Conversation

@zhaoyangwang-nvidia (Collaborator)

@coderabbitai summary

Description

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR follows the TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update the tava architecture diagram if there is a significant design change in the PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

To see a list of available CI bot commands, please comment /bot help.

Signed-off-by: ZhaoyangWang <zhaoyangw@nvidia.com>
  Implement CPU-initiated libibverbs RDMA transport to offload the draft
  model to a separate GPU, replacing local draft inference with a remote
  RDMA peer. The target model (TRT-LLM) writes accumulated output tokens
  to the draft server via RDMA Write; the draft server returns speculative
  tokens via RDMA Write back.
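
The change list below pins the buffer shape: a MAGIC-checked 32 B header plus a 256 B token region inside a 4096 B buffer. The exact field layout isn't spelled out in the PR, so the following is a minimal sketch in plain Python `struct`, with hypothetical field names, a placeholder magic value, and int32 token ids assumed:

```python
import struct

# Hypothetical layout matching the stated sizes: 32 B header + 256 B token
# region inside a 4096 B RDMA buffer. Field names and the magic value are
# illustrative only; the PR does not publish the real ones.
MAGIC = 0x5045524C                 # placeholder ("PERL")
BUF_SIZE = 4096
TOKEN_REGION = 256                 # room for up to 64 int32 token ids
HEADER_FMT = "<IIIIQQ"             # magic, seq, num_tokens, flags, 2x reserved
HEADER_SIZE = struct.calcsize(HEADER_FMT)
assert HEADER_SIZE == 32

def pack_request(seq: int, tokens: list[int]) -> bytes:
    """Build one 4096 B request buffer to RDMA-Write to the draft server."""
    assert 4 * len(tokens) <= TOKEN_REGION
    header = struct.pack(HEADER_FMT, MAGIC, seq, len(tokens), 0, 0, 0)
    body = struct.pack(f"<{len(tokens)}i", *tokens).ljust(TOKEN_REGION, b"\0")
    return (header + body).ljust(BUF_SIZE, b"\0")

def unpack_response(buf: bytes) -> tuple[int, list[int]]:
    """Parse the draft server's response buffer; reject on a bad magic."""
    magic, seq, n, _flags, _r0, _r1 = struct.unpack_from(HEADER_FMT, buf, 0)
    if magic != MAGIC:
        raise ValueError("bad MAGIC in RDMA buffer")
    return seq, list(struct.unpack_from(f"<{n}i", buf, HEADER_SIZE))
```

Because an RDMA Write completes without involving the receiver's CPU, a magic plus a per-round sequence number is presumably what lets each side poll its registered buffer and detect when a complete payload has landed.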

  Key changes:
  - rdma_draft_offload.py: ibverbs RC QP client with GPUDirect MR registration,
    QP state machine (RESET->INIT->RTR->RTS), and per-round request/response
    (see the QP bring-up sketch below)
  - rdma_draft_protocol.py: fixed-size binary protocol (32B header + 256B tokens,
    MAGIC-checked, 4096B total) for target<->draft RDMA buffers (see the
    buffer-layout sketch above)
  - draft_target.py: RDMA offload path in DraftTargetOneModelWorker.forward(),
    output token history accumulation, warmup pre-connection
  - llm_args.py: DraftTargetDecodingConfig RDMA fields; allow speculative_model=None
    when draft_offload_enabled=True (see the configuration example below)
  - model_loader.py: skip draft weight loading when draft_offload_enabled
  - modeling_speculative.py: skip draft model instantiation; thread is_warmup
  - _util.py: skip separate draft KV cache when draft_offload_enabled
  - model_engine.py: pass is_warmup flag through to model forward inputs
  - .gitignore: ignore CMake-created symlinks (deep_ep/deep_gemm/flash_mla)

Signed-off-by: ZhaoyangWang <zhaoyangw@nvidia.com>
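
The rdma_draft_offload.py bullet above names the standard ibverbs RC bring-up sequence. A rough sketch of RESET->INIT->RTR->RTS using the rdma-core pyverbs bindings (an assumption; the PR may wrap libibverbs differently), with the device name, port, GID index, and the peer's QPN/GID as placeholders exchanged out of band (e.g. over TCP):

```python
from pyverbs.addr import AHAttr, GlobalRoute
from pyverbs.cq import CQ
from pyverbs.device import Context
from pyverbs.pd import PD
from pyverbs.qp import QP, QPAttr, QPCap, QPInitAttr
import pyverbs.enums as e


def bring_up_rc_qp(dev_name, remote_qpn, remote_gid, port=1, gid_index=0):
    ctx = Context(name=dev_name)
    pd = PD(ctx)                  # MRs (incl. GPUDirect buffers) register on pd
    cq = CQ(ctx, 16)
    qp = QP(pd, QPInitAttr(qp_type=e.IBV_QPT_RC, scq=cq, rcq=cq,
                           cap=QPCap(max_send_wr=16, max_recv_wr=16)))  # RESET

    # RESET -> INIT: bind to a port, allow the peer to RDMA-Write into our MRs.
    attr = QPAttr(qp_state=e.IBV_QPS_INIT)
    attr.pkey_index = 0
    attr.port_num = port
    attr.qp_access_flags = e.IBV_ACCESS_LOCAL_WRITE | e.IBV_ACCESS_REMOTE_WRITE
    qp.modify(attr, e.IBV_QP_STATE | e.IBV_QP_PKEY_INDEX | e.IBV_QP_PORT |
              e.IBV_QP_ACCESS_FLAGS)

    # INIT -> RTR: point the receive side at the remote QP.
    attr = QPAttr(qp_state=e.IBV_QPS_RTR)
    attr.path_mtu = e.IBV_MTU_1024
    attr.dest_qp_num = remote_qpn
    attr.rq_psn = 0
    attr.max_dest_rd_atomic = 1
    attr.min_rnr_timer = 12
    attr.ah_attr = AHAttr(port_num=port, is_global=1,
                          gr=GlobalRoute(dgid=remote_gid, sgid_index=gid_index))
    qp.modify(attr, e.IBV_QP_STATE | e.IBV_QP_AV | e.IBV_QP_PATH_MTU |
              e.IBV_QP_DEST_QPN | e.IBV_QP_RQ_PSN |
              e.IBV_QP_MAX_DEST_RD_ATOMIC | e.IBV_QP_MIN_RNR_TIMER)

    # RTR -> RTS: send-side timers; RDMA Writes can be posted after this.
    attr = QPAttr(qp_state=e.IBV_QPS_RTS)
    attr.timeout = 14
    attr.retry_cnt = 7
    attr.rnr_retry = 7
    attr.sq_psn = 0
    attr.max_rd_atomic = 1
    qp.modify(attr, e.IBV_QP_STATE | e.IBV_QP_TIMEOUT | e.IBV_QP_RETRY_CNT |
              e.IBV_QP_RNR_RETRY | e.IBV_QP_SQ_PSN | e.IBV_QP_MAX_QP_RD_ATOMIC)
    return ctx, pd, cq, qp        # keep all four alive for the QP's lifetime
```

The draft server side would mirror this bring-up with IBV_ACCESS_REMOTE_WRITE on its own buffers, so each peer can deposit its half of the round directly into the other's registered memory.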
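
For completeness, enabling the offload from the LLM API would look roughly like the following. `DraftTargetDecodingConfig`, `speculative_model=None`, and `draft_offload_enabled=True` are taken from the PR description; the RDMA endpoint fields are hypothetical, since the PR text doesn't name them:

```python
# Hypothetical sketch only -- field names for the RDMA endpoint are invented.
from tensorrt_llm import LLM
from tensorrt_llm.llmapi import DraftTargetDecodingConfig

spec_config = DraftTargetDecodingConfig(
    max_draft_len=4,
    speculative_model=None,       # no local draft weights are loaded
    draft_offload_enabled=True,   # use the remote RDMA draft server instead
    # draft_rdma_device="mlx5_0", draft_rdma_peer="10.0.0.2:18515",  # hypothetical
)
llm = LLM(model="/path/to/target-model", speculative_config=spec_config)
```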
