
[BugFix][DataProcessor] Force top_k=1 for greedy decoding when temperature=0 #6748

Open

gongweibao wants to merge 2 commits into PaddlePaddle:develop from gongweibao:pr/greedy-topk-fix

Conversation


@gongweibao gongweibao commented Mar 10, 2026

Motivation

When temperature=0 (greedy decoding), the current code only sets temperature to a small epsilon (1e-06). However, this is insufficient — the sampling kernel may still pick non-top-1 tokens due to floating-point noise in the softmax output. This causes non-deterministic outputs for greedy decoding requests.

This PR explicitly sets top_k=1 alongside the temperature epsilon to guarantee argmax behavior.

Modifications

  • fastdeploy/engine/engine.py: Add sampling_params.top_k = 1 when temperature ≈ 0.
  • fastdeploy/input/text_processor.py: Add top_k=1 in both process_request and _process_request paths.
  • fastdeploy/input/v1/text_processor.py: Same as above for v1 processor.
  • fastdeploy/input/ernie4_5_processor.py: Same as above for Ernie4.5 processor.
  • tests/input/test_text_processor.py: Add top_k=1 assertion in existing greedy decoding tests.
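As a rough sketch of the processor-side change (illustrative only; the function name and the dict-based `sampling_params` below are assumptions based on the PR description, not the actual FastDeploy request objects):

```python
# Hypothetical sketch of the greedy-decoding override described above.
# `sampling_params` is modeled as a plain dict for illustration; the
# real FastDeploy processors operate on their own request/params types.
EPSILON = 1e-6


def apply_greedy_override(sampling_params):
    """When temperature is (near) zero, clamp it to a small epsilon to
    avoid division by zero, and force top_k=1 so the sampler reduces
    to argmax regardless of floating-point noise in the softmax."""
    if sampling_params.get("temperature", 1.0) <= EPSILON:
        sampling_params["temperature"] = EPSILON  # keep softmax well-defined
        sampling_params["top_k"] = 1              # guarantee top-1 selection
    return sampling_params
```

With this override, a request that arrives with `temperature=0` leaves the processor as `temperature=1e-6, top_k=1`, while non-greedy requests pass through untouched.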

Usage or Command

No new usage. Existing requests with temperature=0 will now automatically use top_k=1 for true greedy decoding.

Accuracy Tests

This is a sampling parameter fix, not a kernel/model change. The fix ensures greedy decoding always selects the top-1 token, which is the expected behavior when temperature=0.

Checklist

  • Add at least one tag in the PR title: [BugFix], [Engine], [DataProcessor]
  • Format your code; run pre-commit before committing.
  • Add unit tests: tests/input/test_text_processor.py updated.
  • Provide accuracy results: N/A (sampling parameter fix, not model output change).
  • If the current PR is submitting to the release branch, make sure the PR has been submitted to the develop branch. (N/A — targeting develop)

@CLAassistant

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.


gongweibao seems not to be a GitHub user. You need a GitHub account to be able to sign the CLA. If you already have a GitHub account, please add the email address used for this commit to your account.

@paddle-bot

paddle-bot bot commented Mar 10, 2026

Thanks for your contribution!

When temperature is set to 0 (greedy decoding), only setting temperature
to a small epsilon is insufficient — the sampling kernel may still pick
non-top-1 tokens. Explicitly set top_k=1 in all processors to guarantee
argmax behavior.

Additionally, add argmax fast-path in top_k_top_p_sampling() under
FD_DETERMINISTIC_MODE to handle non-rejection sampling backends that
ignore top_k parameter.
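A minimal sketch of such an argmax fast-path (the function name matches the file touched by this PR, but the signature and the backend dispatch below are simplified assumptions, not the real implementation):

```python
import numpy as np


def top_k_top_p_sampling(logits, top_k):
    """Illustrative all-greedy fast-path: if every request in the batch
    has top_k == 1, bypass the sampling backend entirely and take a
    plain argmax, which stays deterministic even on backends that
    ignore the top_k parameter."""
    top_k = np.asarray(top_k)
    if np.all(top_k == 1):
        # Pure argmax: no sampling randomness at all.
        return np.argmax(logits, axis=-1)
    # Non-greedy (or mixed) batches would fall through to the regular
    # top-k/top-p sampling backend, elided in this sketch.
    raise NotImplementedError("sampling path elided in this sketch")
```

A mixed batch (some greedy, some sampled rows) would need the per-row override mentioned in the later commit rather than this batch-wide shortcut.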
@gongweibao gongweibao changed the title [BugFix][Engine][DataProcessor] Force top_k=1 for greedy decoding when temperature=0 [BugFix][DataProcessor] Force top_k=1 for greedy decoding when temperature=0 Mar 10, 2026
@codecov-commenter

Codecov Report

❌ Patch coverage is 82.60870% with 4 lines in your changes missing coverage. Please review.
⚠️ Please upload report for BASE (develop@28f7727). Learn more about missing BASE report.

Files with missing lines Patch % Lines
...executor/layers/sample/ops/top_k_top_p_sampling.py 80.00% 1 Missing and 2 partials ⚠️
fastdeploy/input/v1/text_processor.py 50.00% 1 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             develop    #6748   +/-   ##
==========================================
  Coverage           ?   71.94%           
==========================================
  Files              ?      392           
  Lines              ?    53903           
  Branches           ?     8476           
==========================================
  Hits               ?    38779           
  Misses             ?    12348           
  Partials           ?     2776           
Flag Coverage Δ
GPU 71.94% <82.60%> (?)


@gongweibao gongweibao requested review from gongshaotian and jiangjiajun and removed request for gongshaotian March 10, 2026 14:27
@gongshaotian gongshaotian self-assigned this Mar 11, 2026
top_k=1 → argmax is a correctness optimization, not deterministic-specific.
Remove the FD_DETERMINISTIC_MODE guard so all-greedy fast-path and
mixed-batch override work unconditionally.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>