This repository was archived by the owner on Jun 23, 2025. It is now read-only.

benchmark locust tool feature request: update locust requests to match LPG requests #818

Closed as not planned

Description

@annapendleton

Currently, Locust accepts a simple input format: a plain list of prompts. Output lengths are not part of this format, which was done to make it easy to use different benchmarking datasets. LPG only works with the single-dataset format, which enables LPG to send requests with varying max output lengths.

LPG currently loads the dataset directly into the container and, at runtime, filters out any prompts whose input length or output length exceeds the configured maximum.
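A minimal sketch of this kind of filtering, assuming a ShareGPT-style JSON file and a tokenizer passed in by the caller; the function names, field names, and length caps below are illustrative, not LPG's actual code:

```python
import json

# Illustrative caps; LPG applies similar limits at runtime.
MAX_INPUT_LEN = 1024
MAX_OUTPUT_LEN = 1024


def filter_dataset(path, tokenizer):
    """Keep only (prompt, output_len) pairs within the length caps.

    Assumes a ShareGPT-style file where each entry holds a prompt and a
    completion; the field names here are assumptions.
    """
    with open(path) as f:
        entries = json.load(f)

    filtered = []
    for entry in entries:
        prompt = entry["conversations"][0]["value"]
        completion = entry["conversations"][1]["value"]
        input_len = len(tokenizer.encode(prompt))
        output_len = len(tokenizer.encode(completion))
        if input_len <= MAX_INPUT_LEN and output_len <= MAX_OUTPUT_LEN:
            filtered.append((prompt, output_len))
    return filtered
```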

When sending a request, LPG uses the prompt's output length as the request's max output length.
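In effect, each request carries its own per-prompt cap rather than one global value, roughly as below (the payload field names are illustrative, not the actual server API):

```python
# Illustrative only: the per-prompt output_len becomes the request's cap.
def build_payload(prompt: str, output_len: int) -> dict:
    return {
        "prompt": prompt,
        "max_output_len": output_len,  # prompt-specific, not a global constant
    }
```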

Locust requires the following updates to match LPG's request behavior:

  1. Upload the raw dataset to the GCS bucket path in https://github.com/GoogleCloudPlatform/ai-on-gke/tree/main/benchmarks/benchmark/dataset/ShareGPT_v3_unflitered_cleaned_split
  2. In load_data.py, update the filtering to take output_len into account, and save both the prompt and output_len in the local dataset.
  3. In tasks.py, load the prompt and output_len in the load_dataset function, and use output_len for the request's max_output_len field (a rough sketch of items 2–4 follows this list).
  4. (priority TBD) Ensure continued support for the simple list-of-prompts format (backwards compatibility with the old Locust request behavior), e.g. gate the above behavior behind a flag?
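A rough sketch of how items 2–4 could fit together on the Locust side; the flag name, file layout, and helper names are placeholders, not the existing code:

```python
import json
import random

# Placeholder flag for item 4: when False, fall back to the old
# list-of-prompts behavior with a single global max output length.
USE_OUTPUT_LEN = True
DEFAULT_MAX_OUTPUT_LEN = 1024


def load_dataset(path):
    """Load the locally saved dataset produced by load_data.py.

    With USE_OUTPUT_LEN, each line is assumed to hold a JSON record with a
    prompt and its output_len; otherwise each line is just a prompt
    (old format).
    """
    samples = []
    with open(path) as f:
        for line in f:
            if USE_OUTPUT_LEN:
                record = json.loads(line)
                samples.append((record["prompt"], record["output_len"]))
            else:
                samples.append((line.rstrip("\n"), DEFAULT_MAX_OUTPUT_LEN))
    return samples


def build_request(samples):
    """Pick a sample and use its output_len as the request's max_output_len."""
    prompt, output_len = random.choice(samples)
    return {"prompt": prompt, "max_output_len": output_len}
```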
