[FEAT] Add replay from trace strategy #620
VincentG1234 wants to merge 5 commits into vllm-project:main from
Conversation
Force-pushed from 008633f to a66034b
This pull request has merge conflicts that must be resolved before it can be merged.
Add trace replay capability to GuideLLM for reproducing real-world request patterns from trace files. This enables time-based request rate replay and synthetic prompt generation matching trace token counts.

- Add `TraceReplayStrategy` for scheduling requests at precise timestamps
- Add `ReplayProfile` for configuring trace-based benchmarking
- Add `TraceSyntheticDatasetDeserializer` for generating prompts from traces
- Support `max_requests` truncation to limit trace length

This is a minimal implementation to address issue #597. Full Mooncake format support, E2E tests, and documentation will follow in subsequent PRs.

Signed-off-by: Vincent Gimenes <vincent.gimenes@gmail.com>
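For reference, a minimal trace file in the format this PR describes might look like the sample below. The field names come from the PR description; the exact timestamp semantics (here assumed to be seconds relative to the start of the trace) are an assumption for illustration.

```python
# Hypothetical example of the .jsonl trace format added in this PR.
# Field names (timestamp, input_length, output_length) are from the PR
# description; timestamp units here are assumed to be seconds since trace start.
import json

sample_trace = """\
{"timestamp": 0.0, "input_length": 512, "output_length": 128}
{"timestamp": 0.7, "input_length": 256, "output_length": 64}
{"timestamp": 1.9, "input_length": 1024, "output_length": 256}
"""

# Each line is an independent JSON object (JSON Lines format).
records = [json.loads(line) for line in sample_trace.splitlines()]
for rec in records:
    print(rec["timestamp"], rec["input_length"], rec["output_length"])
```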
Force-pushed from 7f893fb to 780be20
It would be great to get an example of how to produce the JSONL, because I can't find a way to do it in LiteLLM, for example.
Yeah, that's true: most frameworks won't produce this exact JSONL directly, and that's somewhat intentional. The idea here is to define a minimal, framework-agnostic canonical replay format, not something tied to a specific tracing stack. In practice, the required fields (timestamp, input token count, output token count) already exist almost everywhere, just under slightly different names, so a small mapping step is usually enough. I agree it's not the best UX on its own, but it felt like the right minimal base for the feature. We can then iterate on top of it with helpers/converters for common sources like LiteLLM or Langfuse, and extend it later (e.g. an optional prompt field, multiple timestamp formats, richer metadata) without breaking the core idea. Happy to adjust the direction if maintainers prefer something more opinionated or integrated from the start.
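The "small mapping step" mentioned above could be sketched as follows. This is not part of the PR; the source-side field names (`start_time`, `prompt_tokens`, `completion_tokens`) are hypothetical stand-ins for whatever your tracing stack actually emits.

```python
# Sketch of converting a generic request log into the trace .jsonl format.
# Source field names are hypothetical; adapt them to your tracing stack.
import json

def convert_log_to_trace(log_records, out_path):
    """Write trace lines with timestamps relative to the first request."""
    log_records = sorted(log_records, key=lambda r: r["start_time"])
    t0 = log_records[0]["start_time"]
    with open(out_path, "w") as f:
        for rec in log_records:
            trace_line = {
                "timestamp": rec["start_time"] - t0,       # seconds since trace start
                "input_length": rec["prompt_tokens"],
                "output_length": rec["completion_tokens"],
            }
            f.write(json.dumps(trace_line) + "\n")

# Example usage with fabricated records:
logs = [
    {"start_time": 100.0, "prompt_tokens": 300, "completion_tokens": 50},
    {"start_time": 101.5, "prompt_tokens": 120, "completion_tokens": 40},
]
convert_log_to_trace(logs, "trace.jsonl")
```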
Summary
- `replay` benchmarking strategy that reproduces real-world request patterns from trace log files (`.jsonl`)
- `max_requests` and `max_seconds` CLI options to limit the number of requests processed from a trace

Motivation
This change addresses issue #597 by enabling users to benchmark their vLLM servers using real production traces. Instead of synthetic load patterns, users can now replay exact request arrival times and token distributions from their actual workloads for more realistic performance testing.
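The core idea behind timestamp-based replay can be sketched in a few lines. This is an illustrative simplification, not GuideLLM's actual `TraceReplayStrategy` implementation: each request is dispatched at its recorded offset from the start of the replay, and `max_requests` simply truncates the trace.

```python
# Minimal sketch of timestamp-based replay dispatch (illustrative only).
import time

def replay(trace, send_request, max_requests=None):
    """trace: list of dicts with a relative 'timestamp' field (seconds)."""
    if max_requests is not None:
        trace = trace[:max_requests]          # max_requests truncation
    start = time.monotonic()
    for rec in trace:
        # Sleep until this request's recorded offset from the replay start.
        delay = rec["timestamp"] - (time.monotonic() - start)
        if delay > 0:
            time.sleep(delay)
        send_request(rec)

sent = []
replay(
    [{"timestamp": 0.0}, {"timestamp": 0.05}, {"timestamp": 0.1}],
    sent.append,
    max_requests=2,
)
print(len(sent))  # 2
```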
Changes
- `TraceReplayStrategy` scheduler strategy for timestamp-based request dispatching
- `ReplayProfile` class for configuring trace-based benchmarking parameters
- `TraceSyntheticDatasetDeserializer` to generate prompts matching trace input/output lengths
- `TraceReader` utility for reading `.jsonl` trace files with `timestamp`, `input_length`, `output_length` fields
- `Entrypoint` updated to handle replay profile and dataset configuration
- `max_requests` and `max_seconds` truncation support to limit trace replay length

Testing
- `pytest tests/unit/scheduler/test_trace_replay.py` (pass)
- `pytest tests/unit/benchmark/test_replay_profile.py` (pass)
- `pytest tests/unit/data/deserializers/test_trace_synthetic.py` (pass)
- Added tests: scheduling accuracy, boundary conditions, malformed trace handling, empty trace cases, `max_requests` truncation
Also tested it quickly in practice with a Colab notebook.
Next Steps (this PR)
Out of Scope (future PRs or not)