Description
Locust currently supports a simple input format: a plain list of prompts, with no per-prompt output lengths. This was done to make it easy to plug in different benchmarking datasets. LPG, by contrast, works only with a single dataset format that includes output lengths, which lets it send requests with varying max output lengths.
LPG loads the dataset directly into the container and, at runtime, filters out any prompts whose input or output length exceeds the configured maximum. When sending a request, LPG uses the prompt's output length as the request's max output length.
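The LPG behavior described above can be sketched roughly as follows. This is an illustration only: the names `MAX_INPUT_LEN`, `MAX_OUTPUT_LEN`, `filter_dataset`, `build_request`, and the `max_output_len` request field are assumptions for the sketch, not LPG's actual identifiers.

```python
# Illustrative sketch of the LPG-style behavior described above.
# All names here are assumptions, not LPG's actual code.

MAX_INPUT_LEN = 1024   # assumed per-prompt input-length cap
MAX_OUTPUT_LEN = 1024  # assumed per-prompt output-length cap

def filter_dataset(dataset):
    """Drop prompts whose input or output length exceeds the max.

    `dataset` is assumed to be a list of (prompt, input_len, output_len).
    """
    return [
        (prompt, input_len, output_len)
        for prompt, input_len, output_len in dataset
        if input_len <= MAX_INPUT_LEN and output_len <= MAX_OUTPUT_LEN
    ]

def build_request(prompt, output_len):
    """Use the prompt's own output length as the request's max output length."""
    return {"prompt": prompt, "max_output_len": output_len}
```

The key point for the Locust changes below is the last line: each request's max output length comes from the dataset entry itself, not from a global setting.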
Locust requires these updates to match the LPG request behavior:
- Upload the raw dataset to the GCS bucket path in https://github.com/GoogleCloudPlatform/ai-on-gke/tree/main/benchmarks/benchmark/dataset/ShareGPT_v3_unflitered_cleaned_split
- In load_data.py, update the filtering to take the output_len into account, and save both the prompt and the output length in the local dataset.
- In tasks.py, load the prompt and output_len in the load_dataset function, and use the output_len as the request's max_output_len field.
- (priority TBD) Ensure continued support for the simple list-of-prompts format (backwards compatibility with the old Locust request behavior), e.g. by gating the new behavior behind a flag.
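The updates above could be sketched along these lines. Everything here is hypothetical: the function names, the `include_output_len` flag, and the JSON dataset shape are assumptions for illustration, not the current `load_data.py` / `tasks.py` implementation.

```python
# Hypothetical sketch of the proposed Locust changes. Names and the
# dataset's JSON shape are assumptions, not the actual ai-on-gke code.
import json

MAX_PROMPT_LEN = 1024  # assumed input-length cap
MAX_OUTPUT_LEN = 1024  # assumed output-length cap

def filter_and_save(raw_dataset, path):
    """load_data.py side: filter on both lengths, save prompt + output_len."""
    kept = [
        {"prompt": d["prompt"], "output_len": d["output_len"]}
        for d in raw_dataset
        if d["input_len"] <= MAX_PROMPT_LEN and d["output_len"] <= MAX_OUTPUT_LEN
    ]
    with open(path, "w") as f:
        json.dump(kept, f)

def load_dataset(path, include_output_len=True):
    """tasks.py side: return (prompt, output_len) pairs, or bare prompts
    when the flag preserves the old list-of-prompts behavior."""
    with open(path) as f:
        data = json.load(f)
    if include_output_len:
        return [(d["prompt"], d["output_len"]) for d in data]
    return [d["prompt"] for d in data]  # backwards-compatible format

def build_request(prompt, output_len=None):
    """Set max_output_len from the prompt's own output length when present."""
    req = {"prompt": prompt}
    if output_len is not None:
        req["max_output_len"] = output_len
    return req
```

With a flag like this, the old behavior (bare prompts, no per-request max_output_len) stays available, while the new path matches what LPG sends.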