Conversation
Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request significantly enhances the configurability of vLLM integrations by providing a dedicated mechanism to pass sampling-related keyword arguments directly through the configuration. This change allows for more granular control over generation parameters, aligning the sampler configuration with the existing engine configuration capabilities. The update propagates these new configuration options across the generation and rollout layers.
Code Review
This pull request adds support for passing arbitrary sampler keyword arguments to the vLLM sampler, which increases its flexibility. The changes are well-structured, introducing a sampler_kwargs field in VllmConfig and plumbing it through to the sampler's __call__ method. My review includes a couple of suggestions for improvement in tunix/generate/vllm_sampler.py related to logging verbosity and exception handling to better align with the repository's style guide for robustness and maintainability.
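For readers skimming the thread, here is a minimal sketch of the shape the review describes (the class internals are illustrative stand-ins, not the actual implementation, and the field is later renamed `sampling_kwargs` during review):

```python
import dataclasses
from typing import Any, Dict


@dataclasses.dataclass
class VllmConfig:
  # Sampler args passed through without additional processing,
  # e.g. temperature, stop, etc.
  sampler_kwargs: Dict[str, Any] = dataclasses.field(default_factory=dict)


class VllmSampler:
  """Illustrative stand-in for the real class in tunix/generate/vllm_sampler.py."""

  def __init__(self, config: VllmConfig):
    self.config = config

  def __call__(self, prompts, **kwargs):
    # Config-level kwargs act as defaults; call-time kwargs are merged on
    # top before vLLM's SamplingParams is built from the result.
    merged = {**self.config.sampler_kwargs, **kwargs}
    return merged
```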
wang2yn84 left a comment
Can you add a test for this?
tunix/generate/vllm_sampler.py
Outdated
```python
)

# vLLM sampler args that can be directly passed in without additional
# processing, e.g. temperature, stop, etc.
sampler_kwargs: Dict[str, Any] = dataclasses.field(default_factory=dict)
```
Can we rename to sampling_kwargs?
tunix/rl/rollout/base_rollout.py
Outdated
```python
rollout_vllm_kwargs: dict[str, Any] = dataclasses.field(default_factory=dict)

# Additional keyword arguments forwarded directly to the vLLM sampler.
rollout_vllm_sampler_kwargs: dict[str, Any] = dataclasses.field(default_factory=dict)
```
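As a usage sketch, configuring a rollout with both knobs might look like the following (the import path comes from the hunk above; the `RolloutConfig` class name and the specific argument values are assumptions):

```python
from tunix.rl.rollout import base_rollout  # path from the hunk above

config = base_rollout.RolloutConfig(  # class name assumed
    # Forwarded to the vLLM engine constructor (pre-existing field).
    rollout_vllm_kwargs={"gpu_memory_utilization": 0.8},
    # Forwarded directly to the vLLM sampler (added in this PR).
    rollout_vllm_sampler_kwargs={"temperature": 0.7, "top_p": 0.95},
)
```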
I added a couple of unit tests. LMK what you think.
tests/generate/vllm_sampler_test.py
Outdated
```python
    self.repo_id, enable_lora=self.enable_lora
)

base_utils.show_hbm_usage("After loading tunix model")
```
Let's remove this in the test?
Sounds good! Will remove!
tests/generate/vllm_sampler_test.py
Outdated
```python
def test_vllm_sampler_sampling_kwargs(self):
  """Test that sampling kwargs are correctly applied to sampling_params."""
  tunix_model, _ = self.load_llama3_model(
```
Since we are not testing the correctness of the output, shall we use the dummy_model_creator that Tunix offers instead of the real HF model?
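A rough sketch of that suggestion (the `test_utils.dummy_model_creator` location, the sampler's constructor signature, and the `build_sampling_params` accessor are all assumptions for illustration):

```python
def test_vllm_sampler_sampling_kwargs(self):
  """Checks that configured sampling kwargs reach sampling_params."""
  tunix_model = test_utils.dummy_model_creator()  # hypothetical helper location
  config = vllm_sampler.VllmConfig(
      sampling_kwargs={"temperature": 0.7, "top_p": 0.9}
  )
  sampler = vllm_sampler.VllmSampler(tunix_model, config)  # signature assumed
  params = sampler.build_sampling_params()  # hypothetical accessor
  self.assertEqual(params.temperature, 0.7)
```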
tunix/generate/vllm_sampler.py
Outdated
```python
sampling_params.seed = seed

if kwargs:
  self.config.sampling_kwargs.update(kwargs)
```
This one should be sampling_kwargs, since it's not going to the LLM constructor but to the sampling_params object instead.
I mean the kwargs: where do they come from?
Those are the kwargs coming from the generate() method. They can be passed in dynamically.
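In other words, something like this at the call sites (generate() is named in the comment above; the import path and constructor signature are assumed from the file paths in this thread):

```python
from tunix.generate.vllm_sampler import VllmConfig, VllmSampler  # path assumed

prompts = ["Hello"]
sampler = VllmSampler(config=VllmConfig(sampling_kwargs={"temperature": 0.7}))

# kwargs passed at call time are folded into the configured defaults via
# self.config.sampling_kwargs.update(kwargs), so they can vary per call.
out_a = sampler.generate(prompts, top_p=0.9)
out_b = sampler.generate(prompts, temperature=0.2)  # overrides the default
```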
tunix/generate/vllm_sampler.py
Outdated
```python
if self.config.sampling_kwargs:
  try:
    sampling_params.update(**kwargs)
    logging.log_first_n(
```
The vLLM config init will not be called more than once, right?
No, I think it will only be called once. The kwargs provided to the call method could theoretically change, which is why we want to keep the update call here.
Yeah, I mean if it will only be called once, maybe we don't need to use logging.log_first_n?
Sounds good! Will update this.
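That would give the method body roughly this shape after dropping log_first_n (a sketch: the exception types and messages are illustrative, and sampling_params.update is the call already used in the diff above; log_first_n suggests absl logging):

```python
from absl import logging  # assumed from log_first_n in the hunk above

if self.config.sampling_kwargs:
  try:
    sampling_params.update(**self.config.sampling_kwargs)
    logging.info("Applied sampling kwargs: %s", self.config.sampling_kwargs)
  except (TypeError, ValueError) as e:
    raise ValueError(
        f"Invalid sampling kwargs: {self.config.sampling_kwargs!r}"
    ) from e
```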
Add support for sampling kwargs to be passed in via vLLMConfig. This mirrors the recent engine_kwargs argument added to vLLMConfig.
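For example (field names per this PR; the import path and the specific engine/sampling values are illustrative):

```python
from tunix.generate.vllm_sampler import VllmConfig  # path from this thread

config = VllmConfig(
    # Forwarded to the vLLM engine constructor (the existing mechanism).
    engine_kwargs={"max_model_len": 4096},
    # Applied to the sampler's SamplingParams (added by this PR).
    sampling_kwargs={"temperature": 0.7, "stop": ["</s>"]},
)
```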
Checklist