
[rl] Refactor rollout decoding across vLLM and Levanter#5035

Open
taivu1998 wants to merge 6 commits into marin-community:main from taivu1998:tdv/rl-pr1-shared-decoding

Conversation

@taivu1998

Unify RL rollout decoding around a shared configuration, record applied decoding on rollouts, and keep curriculum-owned rollout policy separate from backend fallbacks. Expand the vLLM surface, tighten Levanter validation, and add real Levanter top_p support through the native inference engine. Validated with the non-slow RL suite and focused Levanter inference tests.
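
As a rough illustration of the shared-configuration idea (the field set here is an assumption, not the PR's actual schema), the unified decode surface could be a single dataclass that both backends consume and that each rollout records:

```python
from dataclasses import dataclass, field


@dataclass(frozen=True)
class DecodingConfig:
    """Hypothetical shared decode surface consumed by both the vLLM and Levanter backends."""

    temperature: float = 1.0
    top_p: float = 1.0            # nucleus sampling; 1.0 disables the filter
    top_k: int = -1               # -1 disables top-k truncation
    max_tokens: int = 512
    stop_tokens: tuple[int, ...] = ()
    n_generations: int = 1


@dataclass
class Rollout:
    """Sketch of a rollout that records the decoding that was actually applied."""

    prompt_tokens: list[int]
    response_tokens: list[int]
    applied_decoding: DecodingConfig = field(default_factory=DecodingConfig)
```

Recording the applied decoding on the rollout itself is what keeps generation behavior explicit and reproducible after the fact, independent of whichever config happened to be live at training time.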

@taivu1998 taivu1998 marked this pull request as ready for review April 22, 2026 09:01

@chatgpt-codex-connector Bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: fd0345d2ea

Comment thread on lib/levanter/src/levanter/layers/sampler.py (Outdated)

Unify RL rollout decoding behind a shared DecodingConfig and record applied decoding on rollouts so generation behavior stays explicit and reproducible. Expand the vLLM path to honor the richer decode surface, and make the Levanter wrapper fail loudly when a requested knob is not actually supported.
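
A minimal sketch of what honoring the richer decode surface could look like on the vLLM side, assuming the hypothetical DecodingConfig above (SamplingParams and its fields are vLLM's real API; the mapping function is illustrative):

```python
from vllm import SamplingParams


def to_vllm_sampling_params(cfg: DecodingConfig) -> SamplingParams:
    """Translate the shared decode surface into vLLM's native sampling parameters."""
    return SamplingParams(
        n=cfg.n_generations,
        temperature=cfg.temperature,
        top_p=cfg.top_p,
        top_k=cfg.top_k,
        max_tokens=cfg.max_tokens,
        stop_token_ids=list(cfg.stop_tokens) or None,
    )
```
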
Make the Levanter RL inference wrapper explicit about the decoding fields it actually honors today. This keeps the shared decoding contract honest while preserving stop-token fallback behavior through focused regression coverage.
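
One way to read "explicit about the decoding fields it actually honors": a guard that rejects unsupported knobs instead of silently dropping them. A sketch, under the assumption that top_k is a knob the Levanter wrapper still lacks:

```python
def validate_decoding(cfg: DecodingConfig) -> None:
    """Fail loudly on decode knobs the Levanter wrapper does not honor (illustrative)."""
    if cfg.top_k != -1:
        raise ValueError(
            "top_k is not supported by the Levanter backend; "
            "drop it from the decoding config or route the lesson to vLLM."
        )
```
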
Move train decoding defaults into curriculum construction and leave the vLLM sampling config as backend fallback only. This keeps lesson config as the clear runtime source of truth and avoids implying that builder fields control live rollout counts or top-k behavior.
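
The resolution order implied here might look like the following, with lesson-owned decoding as the source of truth and the builder-level config reduced to a fallback (names are hypothetical):

```python
def resolve_decoding(
    lesson_decoding: DecodingConfig | None,
    backend_fallback: DecodingConfig,
) -> DecodingConfig:
    """Lesson config wins; the backend default applies only when the lesson is silent."""
    return lesson_decoding if lesson_decoding is not None else backend_fallback
```
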
Separate vLLM runtime settings from fallback sampling defaults so rollout policy stays distinct from engine tuning. This keeps future vLLM performance knobs from re-mixing with decode semantics after the PR1-PR4 cleanup.
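
The separation might be expressed as two distinct config objects, so engine tuning can grow without touching decode semantics (the engine fields shown are standard vLLM engine arguments; the grouping itself is an assumption):

```python
from dataclasses import dataclass, field


@dataclass(frozen=True)
class VllmEngineConfig:
    """Runtime/engine tuning only -- no decode semantics."""

    tensor_parallel_size: int = 1
    gpu_memory_utilization: float = 0.9


@dataclass(frozen=True)
class VllmBackendConfig:
    """Engine tuning plus a fallback decode surface, kept as separate fields."""

    engine: VllmEngineConfig = field(default_factory=VllmEngineConfig)
    fallback_decoding: DecodingConfig = field(default_factory=DecodingConfig)
```
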
Carry top_p through the Levanter inference request path, decode state, and sampler so Marin RL can use nucleus sampling on the native backend. This keeps the Levanter wrapper honest while closing the highest-value remaining decode-surface gap after the earlier decoding cleanup.
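
The top_p filter itself is standard nucleus sampling; here is a self-contained JAX sketch of what a sampler can apply to the logits before categorical sampling (this is not Levanter's actual sampler code):

```python
import jax
import jax.numpy as jnp


def apply_top_p(logits: jnp.ndarray, top_p: float) -> jnp.ndarray:
    """Mask logits outside the smallest token set whose probability mass reaches top_p."""
    sorted_logits = jnp.sort(logits, axis=-1)[..., ::-1]  # descending order
    probs = jax.nn.softmax(sorted_logits, axis=-1)
    cumulative = jnp.cumsum(probs, axis=-1)
    # A token survives if the mass strictly before it is < top_p, so the top token always survives.
    keep = (cumulative - probs) < top_p
    threshold = jnp.min(jnp.where(keep, sorted_logits, jnp.inf), axis=-1, keepdims=True)
    return jnp.where(logits >= threshold, logits, -jnp.inf)


# Usage: filter first, then sample as usual.
key = jax.random.PRNGKey(0)
filtered = apply_top_p(jnp.array([2.0, 1.0, 0.5, -1.0]), top_p=0.9)
token = jax.random.categorical(key, filtered)
```
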
@taivu1998 taivu1998 force-pushed the tdv/rl-pr1-shared-decoding branch from 9824578 to 50fdbe2 on April 25, 2026 01:46
