
Ci test #1854

Closed
autopear wants to merge 3 commits into vllm-project:main from autopear:ci_test

Conversation

@autopear
Collaborator

Pull Request Description

[Please provide a clear and concise description of your changes here]

Related Issues

Resolves: #[Insert issue number(s)]

Important: Before submitting, please complete the description above and review the checklist below.


Contribution Guidelines (Expand for Details)

We appreciate your contribution to aibrix! To ensure a smooth review process and maintain high code quality, please adhere to the following guidelines:

Pull Request Title Format

Your PR title should start with one of these prefixes to indicate the nature of the change:

  • [Bug]: Corrections to existing functionality
  • [CI]: Changes to build process or CI pipeline
  • [Docs]: Updates or additions to documentation
  • [API]: Modifications to aibrix's API or interface
  • [CLI]: Changes or additions to the Command Line Interface
  • [Misc]: For changes not covered above (use sparingly)

Note: For changes spanning multiple categories, use multiple prefixes in order of importance.

Submission Checklist

  • PR title includes appropriate prefix(es)
  • Changes are clearly explained in the PR description
  • New and existing tests pass successfully
  • Code adheres to project style and best practices
  • Documentation updated to reflect changes (if applicable)
  • Thorough testing completed, no regressions introduced

By submitting this PR, you confirm that you've read these guidelines and your changes align with the project's contribution standards.

Qizhong Mao and others added 2 commits December 12, 2025 14:49
vLLM encodes event batches as a timestamp plus a list of events; each
event is itself encoded as a list rather than a map.

Signed-off-by: Qizhong Mao <qizhong.mao@bytedance.com>
Signed-off-by: ae86zhizhi <550149470@qq.com>
@gemini-code-assist
Contributor

Summary of Changes

Hello @autopear, I'm Gemini Code Assist[1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly refactors the KV cache event synchronization mechanism to ensure full compatibility with vLLM's MessagePack encoding format. It involves updating the underlying MessagePack library, redesigning event data structures for array-like serialization, and overhauling the encoding and decoding logic. These changes streamline data exchange, enhance robustness, and maintain consistency with the vLLM ecosystem.

Highlights

  • MessagePack Library Migration: The MessagePack serialization library has been upgraded from github.com/shamaton/msgpack/v2 to github.com/vmihailenco/msgpack/v5, along with an update to github.com/pebbe/zmq4.
  • vLLM MessagePack Format Alignment: The KV cache event types and their MessagePack encoding/decoding logic have been refactored to align with vLLM's array-like serialization format, improving compatibility and data consistency.
  • Event Structure Enhancements: The BlockStoredEvent now uses [][]byte for TokenIDs (representing big-endian uint32s), includes a ParentBlockHash field, and renames PodName to SourcePod. Similar naming changes were applied to BlockRemovedEvent.
  • Decoder Refactoring: The DecodeEventBatch function has been completely rewritten to parse the new array-based MessagePack structure, including robust type conversion helpers and direct assignment of model and pod metadata during decoding.
  • Documentation and Test Updates: Relevant documentation (kv-event-sync.rst, README.md) and unit/integration tests (msgpack_decoder_test.go, zmq_client_test.go) have been updated to reflect the new event structures and MessagePack handling.
Ignored Files
  • Ignored by pattern: .github/workflows/** (2)
    • .github/workflows/complete-testing.yml
    • .github/workflows/nightly-performance.yml

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a major refactoring of the KV cache event synchronization to align with a new array-based msgpack format from vLLM. This includes switching the msgpack library and rewriting the event definitions, encoder, and decoder. The changes are extensive and simplify some parts of the client logic.

My review focuses on improving the robustness and testability of the new implementation. I've identified a critical issue where a panic could crash the service on malformed data and suggest replacing it with proper error handling. I also recommend enhancing the test suite to cover error paths in the new decoder and removing some leftover debug statements from the tests.

Comment on lines +410 to +427

func convertTokenIDs(tokenIDs []uint32, blockSize int) [][]byte {
	if blockSize <= 0 {
		return nil
	}

	if len(tokenIDs)%blockSize != 0 {
		panic(fmt.Sprintf("tokenIDs len=%d not divisible by blockSize=%d", len(tokenIDs), blockSize))
	}

	numBlocks := len(tokenIDs) / blockSize
	result := make([][]byte, numBlocks)

	for i := 0; i < numBlocks; i++ {
		start := i * blockSize
		end := start + blockSize
		result[i] = tokenIDsToBytes(tokenIDs[start:end])
	}
	return result
}

critical

Using panic on invalid data from an external source is risky as it can crash the service. It's much safer to return an error and let the caller handle it gracefully. This change refactors the function to return an error for invalid inputs, such as when the number of tokens is not divisible by blockSize.

After applying this suggestion, you'll need to update the call site in parseEventArray (line 134) to handle the returned error, for example:

tokens, err := convertTokenIDs(tokenIDs, blockSize)
if err != nil {
    return nil, fmt.Errorf("failed to convert token IDs: %w", err)
}
func convertTokenIDs(tokenIDs []uint32, blockSize int) ([][]byte, error) {
	if blockSize <= 0 {
		if len(tokenIDs) > 0 {
			return nil, fmt.Errorf("invalid blockSize %d with %d tokens", blockSize, len(tokenIDs))
		}
		return nil, nil
	}
	if len(tokenIDs)%blockSize != 0 {
		return nil, fmt.Errorf("tokenIDs len=%d not divisible by blockSize=%d", len(tokenIDs), blockSize)
	}

	numBlocks := len(tokenIDs) / blockSize
	result := make([][]byte, numBlocks)

	for i := 0; i < numBlocks; i++ {
		start := i * blockSize
		end := start + blockSize
		result[i] = tokenIDsToBytes(tokenIDs[start:end])
	}
	return result, nil
}
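A quick self-contained check of the error-returning refactor, exercising both the success path and the formerly panicking path (with a stub `tokenIDsToBytes`, since the real helper lives elsewhere in the PR):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// Stub of the PR's helper so this sketch compiles on its own; the real
// body may differ.
func tokenIDsToBytes(tokenIDs []uint32) []byte {
	buf := make([]byte, 4*len(tokenIDs))
	for i, id := range tokenIDs {
		binary.BigEndian.PutUint32(buf[4*i:], id)
	}
	return buf
}

// convertTokenIDs mirrors the error-returning version suggested above.
func convertTokenIDs(tokenIDs []uint32, blockSize int) ([][]byte, error) {
	if blockSize <= 0 {
		if len(tokenIDs) > 0 {
			return nil, fmt.Errorf("invalid blockSize %d with %d tokens", blockSize, len(tokenIDs))
		}
		return nil, nil
	}
	if len(tokenIDs)%blockSize != 0 {
		return nil, fmt.Errorf("tokenIDs len=%d not divisible by blockSize=%d", len(tokenIDs), blockSize)
	}
	numBlocks := len(tokenIDs) / blockSize
	result := make([][]byte, numBlocks)
	for i := 0; i < numBlocks; i++ {
		result[i] = tokenIDsToBytes(tokenIDs[i*blockSize : (i+1)*blockSize])
	}
	return result, nil
}

func main() {
	blocks, err := convertTokenIDs([]uint32{1, 2, 3, 4}, 2)
	fmt.Println(len(blocks), err == nil) // two blocks, no error

	_, err = convertTokenIDs([]uint32{1, 2, 3}, 2)
	fmt.Println(err != nil) // malformed input now yields an error, not a panic
}
```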

assert.Equal(t, now.Add(500*time.Millisecond), event.Timestamp)
},
},
func TestBlockStoredEventEncodeDecode(t *testing.T) {

medium

The new decoder implementation is quite complex. While the tests cover the success paths well, they seem to be missing coverage for several important error scenarios in DecodeEventBatch. It would be beneficial to add tests for cases like:

  • An invalid batch format (e.g., not an array of 2 elements).
  • A non-float timestamp.
  • An event that is not a msgpack array.
  • An event with an unknown tag.
  • A BlockStored event with too few fields.

Additionally, the new parsing helper functions (parseUint32, parseInt64, etc.) are critical for robustness but lack direct unit tests. Adding dedicated tests for these helpers would help ensure they correctly handle various numeric types and edge cases.
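For the helper coverage specifically, a table-driven test along these lines would do. `parseInt64` is only named in the review, so this body is an assumed stand-in, not the PR's actual implementation:

```go
package main

import "fmt"

// parseInt64 is an assumed stand-in for the decoder helper named in the
// review; msgpack decoding can surface several concrete numeric types
// for the same logical integer.
func parseInt64(v interface{}) (int64, error) {
	switch n := v.(type) {
	case int64:
		return n, nil
	case uint64:
		return int64(n), nil
	case int8:
		return int64(n), nil
	case float64:
		return int64(n), nil
	default:
		return 0, fmt.Errorf("expected integer, got %T", v)
	}
}

func main() {
	// Table-driven cases covering the numeric types and the error path.
	cases := []struct {
		in     interface{}
		wantOK bool
	}{
		{int64(7), true},
		{uint64(7), true},
		{float64(7), true},
		{"7", false},
		{nil, false},
	}
	for _, c := range cases {
		_, err := parseInt64(c.in)
		fmt.Println((err == nil) == c.wantOK)
	}
}
```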

Comment on lines +250 to +253
fmt.Println("Type:", receivedEvent.Type)
fmt.Println("BlockHashes:", receivedEvent.BlockHashes)
fmt.Println("TokenIDs:", receivedEvent.TokenIDs)
fmt.Println("Timestamp:", receivedEvent.Timestamp)

medium

These fmt.Println statements appear to be leftover from debugging. They should be removed to keep the test output clean.
