[RFC] Dynamic client ramp up for redline and baseline testing #729

@rishabh6788

Description

Is your feature request related to a problem? Please describe

We recently added the long-pending client ramp-up feature to opensearch-benchmark. When enabled, it gradually ramps up clients based on the number of clients and the time period provided via the ramp-up-time-period field in the task definition. For example, with the task definition below, each client starts at an offset of client-num * (ramp-up-time-period / total-clients) seconds, which is 0 s for the 0th client (the first), 90 s for the second, and so on.

{
  "operation": "cardinality-agg-high",
  "warmup-time-period": 1800,
  "ramp-up-time-period": 1800,
  "time-period": 300,
  "target-throughput": 20,
  "clients": 20
}
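The per-client start offsets described above can be sketched as follows; this is an illustrative snippet, not opensearch-benchmark's actual implementation:

```python
# Illustrative sketch (not opensearch-benchmark internals): compute the start
# offset of each client from the task definition above.
def ramp_up_offsets(total_clients: int, ramp_up_time_period: int) -> list[float]:
    """Client N starts at N * (ramp_up_time_period / total_clients) seconds."""
    step = ramp_up_time_period / total_clients
    return [client_num * step for client_num in range(total_clients)]

offsets = ramp_up_offsets(total_clients=20, ramp_up_time_period=1800)
print(offsets[:3])  # [0.0, 90.0, 180.0] -- clients join every 90 s
```

With 20 clients and a 1800 s ramp-up period, a new client joins every 90 seconds, so full load is only reached at the end of the ramp-up window.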

Before this feature, opensearch-benchmark would start the benchmark with all 20 clients in parallel from the beginning, which would quickly overwhelm the cluster, and the user would never be able to figure out the cluster's actual breaking point.

This feature alleviates that pain and provides a means to observe how cluster performance (query latency, CPU, or JVM utilization) is impacted as the load gradually increases.

The downside is that the user needs some prior idea of roughly how many clients it takes to impact cluster performance. At the end of the run, opensearch-benchmark provides only a final result showing the final query latency and server-side throughput. The user can configure a dedicated OpenSearch cluster as a metrics datastore and chart various metrics to figure out when the cluster came under duress, but that takes considerable effort and advanced dashboarding knowledge.

Describe the solution you'd like

It would be great to have a benchmark mode where the user provides only a target QPS to achieve along with certain constraints, e.g. queries should stay below a given latency threshold, overall CPU utilization should not exceed 90%, or the query success rate should not drop below 90%. Whenever a threshold is breached, the benchmark should automatically adjust the number of clients to keep performance within the given constraints.
At the end of the run, the benchmark result should report the final QPS it was able to sustain while satisfying all constraints, along with metrics for query latency and success/error rate.
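One way the proposed auto-adjustment could work is a simple feedback loop over sampled metrics. The sketch below is purely hypothetical; none of these names exist in opensearch-benchmark today, and the additive-increase/multiplicative-decrease policy is just one possible choice:

```python
# Hypothetical control-loop sketch of the proposed mode. Periodically sample
# latency, CPU, and success rate; add a client while all constraints hold,
# and back off sharply when any constraint is breached.
from dataclasses import dataclass

@dataclass
class Constraints:
    max_latency_ms: float    # e.g. p90 query latency threshold
    max_cpu_percent: float   # e.g. 90.0
    min_success_rate: float  # e.g. 0.90

def next_client_count(clients: int, latency_ms: float, cpu: float,
                      success_rate: float, c: Constraints) -> int:
    """Additive-increase / multiplicative-decrease style adjustment."""
    breached = (latency_ms > c.max_latency_ms
                or cpu > c.max_cpu_percent
                or success_rate < c.min_success_rate)
    if breached:
        return max(1, clients // 2)  # halve the load when a threshold is violated
    return clients + 1               # otherwise probe for more throughput

c = Constraints(max_latency_ms=200.0, max_cpu_percent=90.0, min_success_rate=0.90)
print(next_client_count(10, latency_ms=150.0, cpu=70.0, success_rate=0.99, c=c))  # 11
print(next_client_count(10, latency_ms=450.0, cpu=70.0, success_rate=0.99, c=c))  # 5
```

The highest client count that never triggers a back-off would correspond to the final sustainable QPS reported at the end of the run.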

Describe alternatives you've considered

No response

Additional context

No response

Labels

RFC (Request for comment on major changes), enhancement (New feature or request)
