
feat: enable quantization support for vLLM backend#374

Open
yashasviyadav30 wants to merge 1 commit into kubeedge:main from yashasviyadav30:fix/enable-quantization-vllm-clean

Conversation

@yashasviyadav30

/kind feature

What this PR does / why we need it:

The quantization parameter in vllm_llm.py was commented out with a TODO (# TODO need to align with vllm API). Tracing the issue showed that BaseLLM._parse_kwargs() wasn't parsing quantization from kwargs, so self.quantization stayed undefined.

Fixed both sides:

  • Parse quantization in BaseLLM._parse_kwargs() (defaults to None)
  • Conditionally pass it to vLLM's LLM() only when set, using a dict-based approach instead of hardcoded args

Backward compatible: existing configs without quantization work exactly as before. Users who want quantized inference can now add quantization: bitsandbytes (or awq, gptq) to their config and it just works.

Cross-checked with vLLM docs and the PIPL example which already uses quantization in its configs.
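In outline, the two-sided fix described above can be sketched as follows. This is a minimal sketch with hypothetical free-standing helpers, not the exact diff; the real change lives in BaseLLM._parse_kwargs and the vLLM wrapper's constructor, and the final LLM(**llm_kwargs) call is elided so the sketch runs without vLLM installed:

```python
# Sketch of the fix (hypothetical helper names; the real code is in
# base_llm.py / vllm_llm.py of the query-routing example).

def parse_kwargs(**kwargs):
    """Mimics the BaseLLM._parse_kwargs change: quantization defaults to None."""
    return {"quantization": kwargs.get("quantization", None)}

def build_llm_kwargs(model, quantization=None):
    """Build the kwargs dict that would be passed to vLLM's LLM() constructor."""
    llm_kwargs = {
        "model": model,
        "trust_remote_code": True,
        "dtype": "float16",
        "max_model_len": 8192,
    }
    # Only forward `quantization` when explicitly set, so existing
    # configs without it behave exactly as before.
    if quantization is not None:
        llm_kwargs["quantization"] = quantization  # e.g. "awq", "gptq"
    return llm_kwargs

# Old-style config: no `quantization` key reaches LLM()
assert "quantization" not in build_llm_kwargs("some/model")
# New-style config: the value is forwarded
assert build_llm_kwargs("some/model", quantization="awq")["quantization"] == "awq"
assert parse_kwargs()["quantization"] is None
```

The dict-based approach keeps the constructor call in one place and makes adding further optional parameters a one-line change.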

Which issue(s) this PR fixes:

Fixes #372

Signed-off-by: YASHASVIYADAV30 <yashasviydv30@gmail.com>
@kubeedge-bot kubeedge-bot added the kind/feature Categorizes issue or PR as related to a new feature. label Mar 1, 2026
@kubeedge-bot
Collaborator

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: yashasviyadav30
To complete the pull request process, please assign moorezheng after the PR has been reviewed.
You can assign the PR to them by writing /assign @moorezheng in a comment when ready.

The full list of commands accepted by this bot can be found here.

Details Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@gemini-code-assist

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces support for model quantization within the vLLM backend, addressing a previously commented-out feature. It ensures that the quantization parameter is properly recognized and passed to the vLLM library, allowing users to leverage quantized models for improved performance and reduced memory footprint without breaking existing configurations.

Highlights

  • Quantization Support for vLLM: Enabled the ability to specify quantization methods (e.g., 'bitsandbytes', 'awq', 'gptq') for vLLM models, allowing for more efficient inference.
  • Parameter Parsing Fix: Resolved an issue where the quantization parameter was not being correctly parsed and stored within the BaseLLM class.
  • Conditional vLLM Initialization: Modified the vLLM LLM constructor call to conditionally include the quantization parameter only when it is explicitly set, ensuring backward compatibility.


Changelog
  • examples/cloud-edge-collaborative-inference-for-llm/testalgorithms/query-routing/models/base_llm.py
    • Added quantization to the _parse_kwargs method's documentation.
    • Initialized self.quantization from kwargs within the _parse_kwargs method.
  • examples/cloud-edge-collaborative-inference-for-llm/testalgorithms/query-routing/models/vllm_llm.py
    • Refactored the LLM constructor call to use a dictionary for parameters.
    • Added a conditional check to include the quantization parameter in the LLM constructor arguments if it is defined.
Activity
  • The author identified and resolved a TODO comment related to aligning the quantization parameter with the vLLM API.
  • The author fixed an issue where BaseLLM._parse_kwargs() was not correctly parsing the quantization parameter.
  • The author ensured backward compatibility for existing configurations that do not specify quantization.

@kubeedge-bot kubeedge-bot added the size/S Denotes a PR that changes 10-29 lines, ignoring generated files. label Mar 1, 2026

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request successfully enables quantization support for the vLLM backend. The changes are well-implemented, parsing the quantization parameter in the base class and conditionally applying it when initializing the vLLM model. This is a great feature addition. My review includes a few suggestions to improve flexibility and security by making some previously hardcoded parameters in the vLLM initialization configurable. These are not regressions introduced by this PR but rather opportunities for improvement in the code touched by your refactoring.


llm_kwargs = {
"model": model,
"trust_remote_code": True,

Severity: high

Hardcoding trust_remote_code=True can pose a security risk, as it allows arbitrary code execution from the model's repository. It's highly recommended to make this a configurable parameter that defaults to False. Users should explicitly enable it only when they trust the source of the model.
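The reviewer's suggestion could be implemented roughly as below. This is a hypothetical sketch (the helper name and string coercion are assumptions, not code from the PR); it parses trust_remote_code in the base class with a safe default of False, also accepting "true"/"false" strings as YAML configs sometimes produce:

```python
# Hypothetical sketch of the suggestion: make trust_remote_code
# configurable, defaulting to False for safety.
def parse_trust_remote_code(**kwargs):
    value = kwargs.get("trust_remote_code", False)
    # Tolerate string values coming from YAML/JSON configs.
    if isinstance(value, str):
        value = value.strip().lower() == "true"
    return bool(value)

assert parse_trust_remote_code() is False                      # safe default
assert parse_trust_remote_code(trust_remote_code=True) is True
assert parse_trust_remote_code(trust_remote_code="True") is True
```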

llm_kwargs = {
"model": model,
"trust_remote_code": True,
"dtype": "float16",

Severity: medium

The dtype is hardcoded to "float16". While this is a common default, some models perform better with "bfloat16" (if supported by the hardware), and certain quantization methods might have specific dtype requirements. To improve flexibility, consider making this a configurable parameter, which could be parsed in BaseLLM with a default of "auto" or "float16".
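A configurable dtype along the lines suggested could look like this sketch (hypothetical helper; the set of accepted values mirrors what vLLM documents for its dtype argument, with "auto" meaning "follow the model's config"):

```python
# Hypothetical sketch: configurable dtype with an "auto" default,
# validated against the values vLLM accepts for its `dtype` argument.
VALID_DTYPES = {"auto", "half", "float16", "bfloat16", "float", "float32"}

def parse_dtype(**kwargs):
    dtype = kwargs.get("dtype", "auto")
    if dtype not in VALID_DTYPES:
        raise ValueError(f"unsupported dtype: {dtype!r}")
    return dtype

assert parse_dtype() == "auto"
assert parse_dtype(dtype="bfloat16") == "bfloat16"
```

Validating early gives a clear config error instead of a failure deep inside model initialization.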

"dtype": "float16",
"tensor_parallel_size": self.tensor_parallel_size,
"gpu_memory_utilization": self.gpu_memory_utilization,
"max_model_len": 8192

Severity: medium

The max_model_len is hardcoded to 8192. This could be restrictive for models with larger context windows or inefficient for models with smaller ones. To improve flexibility, consider making this a configurable parameter. You could add max_model_len to _parse_kwargs in base_llm.py with a sensible default, and then use self.max_model_len here.
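That suggestion might be sketched as follows (hypothetical helper name; 8192 is kept as the fallback since that's the value currently hardcoded, and string values from YAML are coerced to int):

```python
# Hypothetical sketch: configurable max_model_len, keeping the current
# hardcoded 8192 as the default.
def parse_max_model_len(**kwargs):
    value = int(kwargs.get("max_model_len", 8192))
    if value <= 0:
        raise ValueError("max_model_len must be positive")
    return value

assert parse_max_model_len() == 8192
assert parse_max_model_len(max_model_len="4096") == 4096
```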

@yashasviyadav30
Author

Those were already hardcoded in the original code, I kept them as-is to keep this PR focused on just the quantization support. Can open a follow-up for making them configurable.

@yashasviyadav30 yashasviyadav30 changed the title feat: enable quantization support for vLLM backend Feat: enable quantization support for vLLM backend Mar 2, 2026
@yashasviyadav30 yashasviyadav30 changed the title Feat: enable quantization support for vLLM backend feat: enable quantization support for vLLM backend Mar 2, 2026


Development

Successfully merging this pull request may close these issues.

Quantization parameter not working in vLLM backend (cloud-edge LLM example)
