Checklist
- 1. I have searched related issues but cannot get the expected help.
- 2. The bug has not been fixed in the latest version.
- 3. Please note that if the bug-related issue you submitted lacks corresponding environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve the issue, reducing the likelihood of receiving feedback.
- 4. If the issue you raised is not a bug but a question, please raise a discussion at https://github.com/sgl-project/sglang/discussions/new/choose. Otherwise, it will be closed.
- 5. Please use English; otherwise, it will be closed.
Describe the bug
I’m running QwQ-32B on my sglang server without any modifications. However, I don’t want the model to perform function calling (i.e., output the `<tool_call>` and `</tool_call>` tokens). When I try to use `logit_bias` with the OpenAI API to prevent this, it doesn’t produce the expected results.
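For context, here’s a minimal sketch of what I’m trying to do, assuming `<tool_call>` and `</tool_call>` are single added tokens in the QwQ-32B tokenizer (the `Qwen/QwQ-32B` path here is just an example; I use a local checkpoint):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/QwQ-32B")

# Map the tool-calling marker tokens to a -100 bias so the dict can be
# passed directly as the logit_bias of a chat.completions request.
tool_call_bias = {
    str(tokenizer.convert_tokens_to_ids(tok)): -100
    for tok in ("<tool_call>", "</tool_call>")
}
print(tool_call_bias)
```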
Reproduction
Here’s a minimal test case to illustrate the issue:
```python
from transformers import AutoTokenizer
from openai import OpenAI

model_path = "QwQ-32B"  # local path of the checkpoint served by sglang

tokenizer = AutoTokenizer.from_pretrained(model_path)
print(tokenizer("name"))
# -> {'input_ids': [606], 'attention_mask': [1]}, so 606 is the token id for "name"

prompt = "What is your name?"
client = OpenAI(api_key="EMPTY", base_url="http://localhost:9999/v1/")  # sglang server on port 9999

completion = client.chat.completions.create(
    model="QwQ-32B",
    messages=[{"role": "user", "content": prompt}],
    logit_bias={"606": -100},  # -100 should effectively ban token 606 ("name")
)
print(completion)
```
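A quick way to make the failure more visible would be to request logprobs on the same call (assuming the server supports the OpenAI `logprobs`/`top_logprobs` fields):

```python
completion = client.chat.completions.create(
    model="QwQ-32B",
    messages=[{"role": "user", "content": prompt}],
    logit_bias={"606": -100},
    logprobs=True,
    top_logprobs=5,
)

# If the -100 bias were honored, token 606 should never be sampled and
# should sit far below the returned top-5 alternatives at every position.
for tok in completion.choices[0].logprobs.content:
    print(repr(tok.token), tok.logprob)
```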
However, despite applying a strong logit bias to suppress the word "name", the model still outputs:
```
ChatCompletion(id='c6b662dac8bd41b29856533763097729', choices=[Choice(finish_reason='stop', index=0, logprobs=None, message=ChatCompletionMessage(content='Okay, the user is asking, "What is your name?" Let me start by recalling my own introduction. My name is Qwen. I should make sure to state that clearly. But maybe they want a bit more detail. Let me check the guidelines. I should mention that I\'m a large language model developed by Alibaba Cloud. That\'s important for context. Also, perhaps I should keep the response friendly and concise. Let me put that together.\n\nWait, do they need any additional information? The question is straightforward, so maybe just the name and the developer. I shouldn\'t overcomplicate it. Let me confirm if there\'s anything else. No, I think that\'s sufficient. Alright, I\'ll respond with my name and the company. That should answer their question properly.\n</think>\n\nMy name is Qwen. I am a large language model developed by Alibaba Cloud. How can I assist you today?', refusal=None, role='assistant', annotations=None, audio=None, function_call=None, tool_calls=None, reasoning_content=None), matched_stop=151645)], created=1746873290, model='QwQ-32B', object='chat.completion', service_tier=None, system_fingerprint=None, usage=CompletionUsage(completion_tokens=185, prompt_tokens=15, total_tokens=200, completion_tokens_details=None, prompt_tokens_details=None))
```
This suggests that `logit_bias` may not be respected by the sglang server, or by the way it handles OpenAI-compatible requests.
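For reference, what I expect is the standard OpenAI semantics: each `logit_bias` value is added to the corresponding token’s logit before sampling, and -100 effectively bans the token. A minimal sketch of those semantics as a Hugging Face `LogitsProcessor` (my own illustration of the expected behavior, not sglang’s implementation):

```python
import torch
from transformers import LogitsProcessor

class LogitBiasProcessor(LogitsProcessor):
    """Add a fixed bias to selected token logits, OpenAI logit_bias style."""

    def __init__(self, bias: dict[int, float]):
        self.bias = bias

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
        for token_id, value in self.bias.items():
            scores[:, token_id] += value  # -100 drives the token's probability to ~0
        return scores

# e.g. model.generate(..., logits_processor=LogitsProcessorList([LogitBiasProcessor({606: -100.0})]))
```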
Environment
```
Python: 3.10.16 (main, Dec 11 2024, 16:24:50) [GCC 11.2.0]
CUDA available: True
GPU 0,1,2,3,4,5,6,7: NVIDIA A800-SXM4-80GB
GPU 0,1,2,3,4,5,6,7 Compute Capability: 8.0
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 12.1, V12.1.105
CUDA Driver Version: 535.54.03
PyTorch: 2.5.1+cu124
sglang: 0.4.4.post2
sgl_kernel: 0.0.5.post3
flashinfer: Module Not Found
triton: 3.1.0
transformers: 4.50.0
torchao: 0.9.0
numpy: 2.2.4
aiohttp: 3.11.14
fastapi: 0.115.12
hf_transfer: 0.1.9
huggingface_hub: 0.29.3
interegular: 0.3.3
modelscope: 1.24.0
orjson: 3.10.16
outlines: 0.1.11
packaging: 24.2
psutil: 7.0.0
pydantic: 2.11.0
multipart: Module Not Found
zmq: Module Not Found
uvicorn: 0.34.0
uvloop: 0.21.0
vllm: Module Not Found
xgrammar: 0.1.16
openai: 1.69.0
tiktoken: 0.9.0
anthropic: 0.49.0
litellm: 1.64.1
decord: 0.6.0

NVIDIA Topology:
      GPU0  GPU1  GPU2  GPU3  GPU4  GPU5  GPU6  GPU7  NIC0  NIC1  NIC2  NIC3  CPU Affinity  NUMA Affinity  GPU NUMA ID
GPU0   X    NV8   NV8   NV8   NV8   NV8   NV8   NV8   PHB   PHB   PHB   PHB   0-47          N/A            N/A
GPU1  NV8    X    NV8   NV8   NV8   NV8   NV8   NV8   PHB   PHB   PHB   PHB   0-47          N/A            N/A
GPU2  NV8   NV8    X    NV8   NV8   NV8   NV8   NV8   PHB   PHB   PHB   PHB   0-47          N/A            N/A
GPU3  NV8   NV8   NV8    X    NV8   NV8   NV8   NV8   PHB   PHB   PHB   PHB   0-47          N/A            N/A
GPU4  NV8   NV8   NV8   NV8    X    NV8   NV8   NV8   PHB   PHB   PHB   PHB   0-47          N/A            N/A
GPU5  NV8   NV8   NV8   NV8   NV8    X    NV8   NV8   PHB   PHB   PHB   PHB   0-47          N/A            N/A
GPU6  NV8   NV8   NV8   NV8   NV8   NV8    X    NV8   PHB   PHB   PHB   PHB   0-47          N/A            N/A
GPU7  NV8   NV8   NV8   NV8   NV8   NV8   NV8    X    PHB   PHB   PHB   PHB   0-47          N/A            N/A
NIC0  PHB   PHB   PHB   PHB   PHB   PHB   PHB   PHB    X    PHB   PHB   PHB
NIC1  PHB   PHB   PHB   PHB   PHB   PHB   PHB   PHB   PHB    X    PHB   PHB
NIC2  PHB   PHB   PHB   PHB   PHB   PHB   PHB   PHB   PHB   PHB    X    PHB
NIC3  PHB   PHB   PHB   PHB   PHB   PHB   PHB   PHB   PHB   PHB   PHB    X

Legend:
  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

NIC Legend:
  NIC0: mlx5_0
  NIC1: mlx5_1
  NIC2: mlx5_2
  NIC3: mlx5_3

ulimit soft: 1048576
```