
fix: correct TOKENIZERS_PARALLELISM_ENV constant value#2596

Closed
kuishou68 wants to merge 1 commit into vllm-project:main from kuishou68:fix/issue-2595-tokenizers-parallelism-env

Conversation

@kuishou68

Closes #2595

Problem

The constant TOKENIZERS_PARALLELISM_ENV in src/llmcompressor/entrypoints/oneshot.py had a corrupted/truncated value:

TOKENIZERS_PARALLELISM_ENV="TOKENI...LISM"

This caused Oneshot.__init__ to set the wrong environment variable (TOKENI...LISM instead of TOKENIZERS_PARALLELISM), meaning the HuggingFace tokenizer parallelism warning was never actually suppressed — defeating the original fix from #2183.

Fix

Changed the constant to the correct string:

TOKENIZERS_PARALLELISM_ENV="TOKENIZERS_PARALLELISM"
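Why the exact string matters: the constant is used as the environment-variable name, and the HuggingFace tokenizers library reads only the exact name TOKENIZERS_PARALLELISM. A minimal sketch of the pattern (the class body here is a hypothetical simplification, not the actual llmcompressor implementation):

```python
# Hypothetical sketch; only the constant value mirrors the PR, the
# rest is a simplified stand-in for Oneshot.__init__.
import os

TOKENIZERS_PARALLELISM_ENV = "TOKENIZERS_PARALLELISM"

class Oneshot:
    def __init__(self):
        # The name must be exactly "TOKENIZERS_PARALLELISM": the
        # tokenizers library only reads that variable, so a truncated
        # name is silently ignored and the warning is never suppressed.
        os.environ[TOKENIZERS_PARALLELISM_ENV] = "false"

Oneshot()
```

With the truncated constant, this assignment would have created an unrelated variable and left TOKENIZERS_PARALLELISM unset.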

@github-actions

👋 Hi! Thank you for contributing to llm-compressor. Please add the ready label when the PR is ready for review.

Note: this label is required to run the full testing suite; please add it only once the PR is code complete and local testing has been performed.

@coderabbitai
Contributor

coderabbitai Bot commented Apr 10, 2026

Important

Review skipped

Auto incremental reviews are disabled on this repository.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 325a999c-1f10-49d1-b502-fbf7aa6e0e07

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.


Walkthrough

A single-line bug fix in src/llmcompressor/entrypoints/oneshot.py corrects the truncated value of the TOKENIZERS_PARALLELISM_ENV constant from a corrupted string to the correct full constant name "TOKENIZERS_PARALLELISM".

Changes

Cohort / File(s): Bug Fix, src/llmcompressor/entrypoints/oneshot.py
Summary: Restored the corrupted TOKENIZERS_PARALLELISM_ENV constant from its truncated form to the correct full string "TOKENIZERS_PARALLELISM".

Estimated code review effort

🎯 1 (Trivial) | ⏱️ ~2 minutes

🚥 Pre-merge checks | ✅ 5
✅ Passed checks (5 passed)
  • Title check ✅ Passed: The title accurately summarizes the main change (fixing the incorrect TOKENIZERS_PARALLELISM_ENV constant value), which is the primary objective of the PR.
  • Description check ✅ Passed: The description is directly related to the changeset, clearly explaining the problem, fix, and impact with relevant code examples.
  • Linked Issues check ✅ Passed: The PR addresses all coding requirements from issue #2595: restoring TOKENIZERS_PARALLELISM_ENV to the correct value 'TOKENIZERS_PARALLELISM'.
  • Out of Scope Changes check ✅ Passed: The PR contains only the focused change specified in the linked issue, with no extraneous modifications beyond correcting the constant value.
  • Docstring Coverage ✅ Passed: No functions found in the changed files; docstring coverage check skipped.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
🧪 Generate unit tests (beta)
  • Create PR with unit tests

Comment @coderabbitai help to get the list of available commands and usage tips.

Contributor

@gemini-code-assist gemini-code-assist Bot left a comment


Code Review

This pull request contains a change to the formatting of the TOKENIZERS_PARALLELISM_ENV constant in src/llmcompressor/entrypoints/oneshot.py. The review identifies that the removal of spaces around the assignment operator violates PEP 8 style guidelines and suggests reverting to the original spacing for better readability.



- TOKENIZERS_PARALLELISM_ENV = "TOKENIZERS_PARALLELISM"
+ TOKENIZERS_PARALLELISM_ENV="TOKENIZERS_PARALLELISM"
Contributor


Severity: medium

The assignment operator should be surrounded by spaces to adhere to PEP 8 style guidelines. Additionally, the string value in the proposed change is identical to the existing one in the provided diff, so the only effect is a style regression.

Suggested change:
- TOKENIZERS_PARALLELISM_ENV="TOKENIZERS_PARALLELISM"
+ TOKENIZERS_PARALLELISM_ENV = "TOKENIZERS_PARALLELISM"
References
  1. PEP 8 recommends surrounding binary operators, including the assignment operator (=), with a single space on either side for better readability. (link)

@kuishou68
Author

The DCO sign-off has been added in the latest commit (c06d1d8). The commit message now includes:

Signed-off-by: cocoon <1569339843@qq.com>

Could a maintainer add the ready label when this is ready for review? I don't have label permissions on this repo.

@dsikka dsikka added the ready When a PR is ready for review label Apr 10, 2026
@kuishou68 kuishou68 force-pushed the fix/issue-2595-tokenizers-parallelism-env branch from f15126b to 4d97dbc Compare April 10, 2026 11:08
…roject#2595)

The constant was set to 'tokenizers_parallelism' (lowercase) but the
actual environment variable name used by the tokenizers library is
'TOKENIZERS_PARALLELISM' (uppercase). Fix the constant value and
correct PEP8 spacing around the assignment.

Signed-off-by: Cocoon-Break <54054995+kuishou68@users.noreply.github.com>
@kuishou68 kuishou68 force-pushed the fix/issue-2595-tokenizers-parallelism-env branch from 4d97dbc to 87a6d0e Compare April 10, 2026 11:15
Collaborator

@brian-dellabetta brian-dellabetta left a comment


Hi @kuishou68, thanks for raising this, but your PR changes seem unrelated to the summary. Was there a bad merge resolution?

@mergify mergify Bot added the two-reviews When a PR requires two reviews label Apr 22, 2026
@mergify
Contributor

mergify Bot commented Apr 22, 2026

Merge Protections

Your pull request matches the following merge protections and will not be merged until they are valid.

🔴 Require two reviews

Waiting for:

  • #approved-reviews-by >= 2 (this rule is failing)

PRs labelled "two-reviews" must have at least two approving reviews before merging:

  • #approved-reviews-by >= 2
  • #changes-requested-reviews-by = 0

@mergify
Contributor

mergify Bot commented Apr 22, 2026

This pull request has merge conflicts that must be resolved before it can be
merged. Please rebase the PR, @kuishou68.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

@mergify mergify Bot added the needs-rebase label Apr 22, 2026
@kuishou68
Author

After rebasing, the cherry-pick of this fix results in an empty commit — the constant value is already correct in upstream/main (set to TOKENIZERS_PARALLELISM uppercase via #2183). The merge conflict also touched files that have since been significantly refactored. If this PR is still needed, it may need to be closed and re-opened against the current main.

@brian-dellabetta
Copy link
Copy Markdown
Collaborator

brian-dellabetta commented Apr 24, 2026

Hi @kuishou68, thanks for the reply. Yes, I believe this is resolved by #2183.

I will close this off, thanks


Labels

needs-rebase, ready (When a PR is ready for review), two-reviews (When a PR requires two reviews)

Projects

None yet

Development

Successfully merging this pull request may close these issues.

Bug: TOKENIZERS_PARALLELISM_ENV constant has wrong/truncated value in oneshot.py

3 participants