Which component are you using?:
/area vertical-pod-autoscaler
Is your feature request designed to solve a problem? If so describe the problem this feature should solve.:
The benchmarking code in `vertical-pod-autoscaler/benchmark` currently only reports step-latency benchmarks for the updater component. We should extend it to cover the recommender and the admission-controller as well.
Describe the solution you'd like.:
Similar benchmarking steps and output tables for the recommender and admission-controller components, matching what already exists for the updater.
We should probably wait and scrape recommender metrics during this section of the code (`autoscaler/vertical-pod-autoscaler/benchmark/main.go`, lines 300 to 318 at `2ceb540`):
```go
// Step 7: Scale up recommender (not updater yet)
fmt.Println("Scaling up recommender...")
if err := scaleUpRecommender(ctx, kubeClient); err != nil {
	return nil, fmt.Errorf("failed to scale up recommender: %v", err)
}

// Step 8: Wait for VPA recommendations
fmt.Println("Waiting for VPA recommendations...")
wait.PollUntilContextTimeout(ctx, 5*time.Second, 2*time.Minute, true, func(ctx context.Context) (bool, error) {
	vpas, _ := vpaClient.AutoscalingV1().VerticalPodAutoscalers(benchmarkNamespace).List(ctx, metav1.ListOptions{})
	withRec := 0
	for _, v := range vpas.Items {
		if v.Status.Recommendation != nil {
			withRec++
		}
	}
	fmt.Printf("  VPAs with recommendations: %d/%d\n", withRec, count)
	return withRec == count, nil
})
```
(while we are waiting for recommendations).
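Scraping during that wait could be as simple as fetching the recommender's `/metrics` endpoint each poll and pulling out the relevant histogram samples. A minimal sketch of the parsing half, assuming plain Prometheus text format (the `recommender_step_latency_seconds` metric name is illustrative, not the recommender's actual metric family):

```go
package main

import (
	"bufio"
	"fmt"
	"strconv"
	"strings"
)

// parseMetricSamples extracts sample values for a given metric family from a
// Prometheus text-format payload, keyed by the full sample name including
// labels. Comment lines and malformed lines are skipped.
func parseMetricSamples(payload, family string) map[string]float64 {
	out := map[string]float64{}
	sc := bufio.NewScanner(strings.NewReader(payload))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") || !strings.HasPrefix(line, family) {
			continue
		}
		fields := strings.Fields(line)
		if len(fields) != 2 {
			continue
		}
		v, err := strconv.ParseFloat(fields[1], 64)
		if err != nil {
			continue
		}
		out[fields[0]] = v
	}
	return out
}

func main() {
	// Example scrape body; in the benchmark this would come from an HTTP GET
	// against the recommender's /metrics endpoint during the polling loop.
	payload := `# HELP recommender_step_latency_seconds (illustrative)
recommender_step_latency_seconds_sum{step="LoadVPAs"} 1.5
recommender_step_latency_seconds_count{step="LoadVPAs"} 3
`
	samples := parseMetricSamples(payload, "recommender_step_latency_seconds")
	sum := samples[`recommender_step_latency_seconds_sum{step="LoadVPAs"}`]
	count := samples[`recommender_step_latency_seconds_count{step="LoadVPAs"}`]
	fmt.Printf("LoadVPAs avg: %.2fs\n", sum/count) // prints: LoadVPAs avg: 0.50s
}
```

A production version would more likely use `prometheus/common/expfmt` than hand parsing, but the per-poll scrape-and-record shape would be the same.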
For the admission-controller, it might make sense for it to be running from beginning to end, and we should calculate its per-request average time.
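Since the admission-controller's latency metric is a histogram, the per-request average falls out of its `_sum` and `_count` samples scraped once at the end of the run. A tiny sketch (the function name and inputs are assumptions for illustration):

```go
package main

import "fmt"

// perRequestAverage derives the mean admission latency from a Prometheus
// histogram's _sum and _count samples. The boolean guards against a run
// where no admission requests were observed.
func perRequestAverage(sumSeconds, count float64) (float64, bool) {
	if count == 0 {
		return 0, false
	}
	return sumSeconds / count, true
}

func main() {
	// Hypothetical totals scraped from the admission-controller's /metrics
	// endpoint after the benchmark finishes.
	if avg, ok := perRequestAverage(4.2, 100); ok {
		fmt.Printf("admission-controller avg: %.3fs per request\n", avg)
		// prints: admission-controller avg: 0.042s per request
	}
}
```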
Describe any alternative solutions you've considered.:
Additional context.: