
Fix Speculative decoding test cases #294

Merged
tarukumar merged 2 commits into opendatahub-io:main from
vaibhavjainwiz:fix_spec_dec
May 8, 2025

Conversation

@vaibhavjainwiz
Member

@vaibhavjainwiz vaibhavjainwiz commented May 6, 2025

Description

How Has This Been Tested?

Merge criteria:

  • The commits are squashed in a cohesive manner and have meaningful messages.
  • Testing instructions have been added in the PR body (for PRs involving changes that are not immediately obvious).
  • The developer has manually tested the changes and verified that they work.

Summary by CodeRabbit

  • Tests
    • Updated test configurations to use a single JSON-formatted argument for speculative decoding settings, replacing multiple separate command-line options with a consolidated configuration string. This streamlines how test parameters are provided for speculative decoding scenarios.

@vaibhavjainwiz vaibhavjainwiz requested a review from a team as a code owner May 6, 2025 11:20
@coderabbitai
Contributor

coderabbitai bot commented May 6, 2025

Walkthrough

The command-line argument structure for configuring speculative decoding in two test files was updated. Multiple separate flags for speculative decoding parameters were replaced with a single --speculative_config flag that accepts a JSON string containing all relevant configuration options.
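
For illustration, here is a minimal before/after sketch in Python of the flag consolidation described above. The list names are assumptions; the flag names and the JSON payload are taken from the change summary and the review diff quoted later in this thread.

    # Before: separate command-line flags for speculative decoding (per the
    # change summary; the exact flag syntax in the test files may differ).
    args_before = [
        "--speculative-model=/mnt/models/granite-7b-instruct-accelerator",
        "--num-speculative-tokens=5",
    ]

    # After: one JSON-valued flag carrying the same settings.
    args_after = [
        "--speculative_config",
        '{"model": "/mnt/models/granite-7b-instruct-accelerator", "num_speculative_tokens": 5}',
    ]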

Changes

  • tests/model_serving/model_runtime/vllm/speculative_decoding/test_granite_7b_lab_draft.py: Replaced the individual speculative decoding flags (--speculative-model, --num-speculative-tokens) with a single --speculative_config JSON argument.
  • tests/model_serving/model_runtime/vllm/speculative_decoding/test_granite_7b_lab_ngram.py: Consolidated multiple speculative decoding flags into a single --speculative_config JSON argument.

Sequence Diagram(s)

sequenceDiagram
    participant Tester
    participant ModelServingRuntime

    Tester->>ModelServingRuntime: Start with --speculative_config '{...}'
    ModelServingRuntime->>ModelServingRuntime: Parse JSON config for model, tokens, etc.
    ModelServingRuntime->>Tester: Run test with provided speculative decoding parameters

Poem

In the warren of code, flags once ran free,
Now bundled in JSON, as neat as can be.
Speculative dreams, in one string reside,
No more stray arguments, nowhere to hide.
With a hop and a skip, the tests all agree—
Configuration is simpler, for you and for me! 🐇


@github-actions

github-actions bot commented May 6, 2025

The following are automatically added/executed:

  • PR size label.
  • Run pre-commit
  • Run tox
  • Add PR author as the PR assignee

Available user actions:

  • To mark a PR as WIP, comment /wip on the PR. To remove the label, comment /wip cancel.
  • To block merging of a PR, comment /hold. To unblock merging, comment /hold cancel.
  • To mark a PR as approved, comment /lgtm. To remove approval, comment /lgtm cancel.
    The lgtm label is removed on each new commit push.
  • To mark a PR as verified, comment /verified. To un-verify, comment /verified cancel.
    The verified label is removed on each new commit push.
  • To cherry-pick a merged PR, comment /cherry-pick <target_branch_name> on the PR. If <target_branch_name> is valid
    and the current PR is merged, a cherry-picked PR will be created and linked to the current PR.
Supported labels

{'/hold', '/lgtm', '/verified', '/wip'}

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (1)
tests/model_serving/model_runtime/vllm/speculative_decoding/test_granite_7b_lab_draft.py (1)

20-21: Good approach to standardize speculative decoding configuration

This change aligns well with the update in the ngram test file, adopting a consistent approach by using a single --speculative_config parameter with a JSON string. The configuration properly includes the model path and the number of speculative tokens.

One minor nitpick: consider standardizing the JSON formatting style between the two files. The draft file has extra spaces in { "model"..., while the ngram file doesn't.

-    '{ "model": "/mnt/models/granite-7b-instruct-accelerator", "num_speculative_tokens": 5 }',
+    '{"model": "/mnt/models/granite-7b-instruct-accelerator", "num_speculative_tokens": 5}',
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed between 6bc91d0 and c372bed.

📒 Files selected for processing (2)
  • tests/model_serving/model_runtime/vllm/speculative_decoding/test_granite_7b_lab_draft.py (1 hunks)
  • tests/model_serving/model_runtime/vllm/speculative_decoding/test_granite_7b_lab_ngram.py (1 hunks)
🔇 Additional comments (1)
tests/model_serving/model_runtime/vllm/speculative_decoding/test_granite_7b_lab_ngram.py (1)

20-21: Good refactoring to consolidate speculative decoding configuration

The change to use a single --speculative_config parameter with a JSON string instead of multiple separate flags is a good improvement. It makes the configuration more organized and maintainable by grouping related parameters together.

The JSON format is valid and includes all necessary parameters for the ngram-based speculative decoding: model type, number of tokens, and prompt lookup maximum.
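
For reference, a hypothetical sketch of the ngram payload this comment describes. The key names and the "[ngram]" sentinel value are assumptions for illustration only; consult the test file for the exact payload.

    import json

    # Hypothetical ngram speculative-decoding config covering the three
    # parameters the review mentions: model type, number of speculative
    # tokens, and prompt lookup maximum. Values here are illustrative.
    ngram_config = {
        "model": "[ngram]",
        "num_speculative_tokens": 5,
        "prompt_lookup_max": 4,
    }
    serving_args = ["--speculative_config", json.dumps(ngram_config)]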

@tarukumar tarukumar merged commit 958a56c into opendatahub-io:main May 8, 2025
10 checks passed
@github-actions

github-actions bot commented May 8, 2025

Status of building tag latest: success.
Status of pushing tag latest to image registry: success.
