increasing resources on the wva controller for ooming #51
Conversation
Signed-off-by: greg pereira <grpereir@redhat.com>
📝 Walkthrough

Updated container resource configurations in the controller-manager deployment manifest: the CPU limit increased from 500m to 1, the memory limit from 128Mi to 1Gi, and the memory request from 64Mi to 256Mi. The CPU request is unchanged at 10m. No control flow or other specification fields were modified.

Estimated code review effort: 🎯 1 (Trivial) | ⏱️ ~3 minutes

🚥 Pre-merge checks: ✅ Passed checks (2 passed)
Actionable comments posted: 1
🧹 Nitpick comments (1)
config/manager/manager.yaml (1)
137-141: CPU request:limit ratio is 1:100 (10m : 1000m). This extreme ratio means the pod is guaranteed very little CPU but can burst to 1 core. Under node contention, CPU time is shared in proportion to requests, so the controller may be heavily throttled. Consider increasing the CPU request to at least 100m if the controller needs sustained CPU for its reconciliation loops.
Suggested adjustment:

```diff
   resources:
     limits:
       cpu: "1"
       memory: 1Gi
     requests:
-      cpu: 10m
+      cpu: 100m
       memory: 256Mi
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@config/manager/manager.yaml` around lines 137 - 141, The CPU request is set extremely low (requests.cpu: 10m) while the limit (cpu: "1") allows bursting to 1 core; update the resource request so it is closer to the expected sustained usage (e.g., set requests.cpu to at least 100m or another value appropriate for the controller’s reconciliation workload) to avoid excessive throttling; locate the resources block that contains cpu: "1" and requests: cpu: 10m in the YAML and increase the requests.cpu value accordingly, keeping memory requests/limits unchanged unless you also need to adjust them.
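For readers checking the numbers, the 1:100 ratio called out above can be reproduced with a short sketch; `parse_cpu` is a hypothetical helper for illustration, not part of this repository:

```python
def parse_cpu(quantity: str) -> float:
    """Parse a Kubernetes-style CPU quantity ('10m', '1', '"1"') into cores."""
    q = quantity.strip().strip('"')
    if q.endswith("m"):
        # millicores, e.g. "10m" -> 0.01 cores
        return int(q[:-1]) / 1000.0
    # plain cores, e.g. "1" or "0.5"
    return float(q)

request = parse_cpu("10m")   # request from the manifest under review
limit = parse_cpu('"1"')     # limit from the manifest under review
print(f"request={request} cores, limit={limit} cores, ratio=1:{limit / request:.0f}")
# -> request=0.01 cores, limit=1.0 cores, ratio=1:100
```

The same parsing rule shows why the old 500m limit read as half a core: the `m` suffix denotes millicores.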
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@config/manager/manager.yaml`:
- Around line 135-141: The Helm chart template
wva-deployment-controller-manager.yaml has resource values that conflict with
the manifest; update the chart so resources.limits.cpu = "1",
resources.limits.memory = "1Gi", resources.requests.cpu = "10m", and
resources.requests.memory = "256Mi" to match the values in manager.yaml (or
alternatively change manager.yaml to the chart values if you prefer the lower
profile) — ensure the four keys (resources.limits.cpu, resources.limits.memory,
resources.requests.cpu, resources.requests.memory) are consistent between the
manifest and the Helm template.
📒 Files selected for processing (1)
config/manager/manager.yaml
```diff
   resources:
     limits:
-      cpu: 500m
-      memory: 128Mi
+      cpu: "1"
+      memory: 1Gi
     requests:
       cpu: 10m
-      memory: 64Mi
+      memory: 256Mi
```
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
# Verify the Helm chart resource values
fd -t f 'wva-deployment-controller-manager.yaml' --exec cat {} | grep -A 6 'resources:'
```

Repository: opendatahub-io/workload-variant-autoscaler
Length of output: 236
Helm chart resource values conflict with manifest; update charts/workload-variant-autoscaler/templates/manager/wva-deployment-controller-manager.yaml to match.
The Helm chart specifies lower resource limits and requests (memory limit: 512Mi, memory request: 64Mi) compared to this manifest (memory limit: 1Gi, memory request: 256Mi). Deployments using the Helm chart will remain vulnerable to OOM errors due to insufficient memory request allocation. Reconcile the resource values between both files.
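A lightweight way to guard against this kind of drift is a consistency check over the four resource keys. The sketch below hard-codes the values quoted in this review; the chart's CPU entries are an assumption (only the memory mismatch is stated above), and real tooling would parse the two YAML files instead:

```python
# Values from config/manager/manager.yaml after this PR.
manifest = {
    "limits.cpu": "1",
    "limits.memory": "1Gi",
    "requests.cpu": "10m",
    "requests.memory": "256Mi",
}
# Values reported for the Helm chart template. The CPU entries are an
# assumption; the review only states the memory mismatch.
chart = {
    "limits.cpu": "1",
    "limits.memory": "512Mi",
    "requests.cpu": "10m",
    "requests.memory": "64Mi",
}

# Collect keys where the manifest and chart disagree.
drift = {k: (manifest[k], chart[k]) for k in manifest if manifest[k] != chart[k]}
for key, (m, c) in sorted(drift.items()):
    print(f"{key}: manifest={m} chart={c}")
```

Run as a CI step, a non-empty `drift` would fail the build before the chart and manifest could diverge silently.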
backport of: red-hat-data-services#70