
Conversation


@maanas-23 maanas-23 commented Feb 3, 2026

Summary by CodeRabbit

  • New Features

    • Added optimizeGuaranteedPods configuration option to control whether guaranteed pods are eligible for optimization.
    • CPU optimization for guaranteed pods is now supported when enabled via configuration.
  • Improvements

    • Simplified memory request and limit optimization logic.


coderabbitai bot commented Feb 3, 2026

Walkthrough

A new configuration option, optimizeGuaranteedPods, is introduced to control whether guaranteed pods should be optimized. The field is defined in both the YAML file and the Go config struct, and the pod recommendation logic is refactored to respect this flag while simplifying memory handling and enabling CPU optimization for guaranteed pods.

Changes

  • Configuration Fields (config.yaml, pkg/config/config.go): Added new boolean field optimizeGuaranteedPods to enable or disable optimization of guaranteed pods via configuration.
  • Pod Optimization Logic (pkg/task/taskApplyRecommendation.go): Refactored guaranteed pod handling to check the configuration flag instead of unconditionally skipping; removed the memory-equality synchronization branch; removed the early return for guaranteed pod CPU changes, allowing CPU updates to proceed.
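From the summary, the new flag gates whether guaranteed pods enter the optimization path at all. A minimal sketch of that gating, assuming the struct name, yaml tag, and function shape (the real code in pkg/config/config.go and pkg/task/taskApplyRecommendation.go may differ):

```go
package main

import "fmt"

// Config mirrors the new option described above; the struct name and
// yaml tag here are assumptions based on the change summary.
type Config struct {
	OptimizeGuaranteedPods bool `yaml:"optimizeGuaranteedPods"`
}

// shouldOptimize sketches the refactored check: instead of
// unconditionally skipping guaranteed pods, eligibility now depends on
// the configuration flag.
func shouldOptimize(cfg Config, isGuaranteed bool) bool {
	if isGuaranteed && !cfg.OptimizeGuaranteedPods {
		return false // previous default behavior, now opt-in via config
	}
	return true
}

func main() {
	fmt.Println(shouldOptimize(Config{OptimizeGuaranteedPods: false}, true)) // false
	fmt.Println(shouldOptimize(Config{OptimizeGuaranteedPods: true}, true))  // true
}
```

Non-guaranteed pods remain eligible regardless of the flag; only the guaranteed-pod path is gated.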

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Poem

🐰 A carrot-shaped config flag now appears,
To guide which pods we optimize with care,
No more guaranteed pods left in the shadows,
Memory logic simplified, CPU runs free,
Configuration blooms—flexibility takes the stage! 🌱

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
  • Description Check: ✅ Passed. Check skipped; CodeRabbit's high-level summary is enabled.
  • Title Check: ✅ Passed. The title 'allow optimizing guaranteed pods' directly and clearly summarizes the main change: enabling optimization of guaranteed pods via a new configuration flag.
  • Docstring Coverage: ✅ Passed. No functions found in the changed files to evaluate docstring coverage; skipping the docstring coverage check.



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@pkg/task/taskApplyRecommendation.go`:
- Around lines 318-321: The memory-clamping logic in applyMemoryRecommendation() can lower a guaranteed container's request while its limit remains unchanged, converting the container to Burstable QoS on clusters that cannot reduce limits. Detect guaranteed containers first (OptimizeGuaranteedPods is true and currentMemoryLimit == currentMemoryRequest) and skip any memory update for them until limit-reduction support exists, leaving the existing math.Ceil clamp in place only for non-guaranteed containers.

Comment on lines +318 to 321

	// TODO: will be possible from 1.34
	if recommendedMemoryLimit < currentMemoryLimit {
		recommendedMemoryLimit = math.Ceil(currentMemoryLimit)
	}

⚠️ Potential issue | 🟠 Major


Prevent QoS degradation for guaranteed containers when memory limits cannot be reduced.

When OptimizeGuaranteedPods is true, guaranteed containers (where request==limit) can reach applyMemoryRecommendation() and the memory clamping logic at lines 318-321 may allow the request to be lowered while the limit remains unchanged. This converts the container to Burstable QoS on clusters that don't yet support limit reduction. To ensure guaranteed containers stay protected, skip memory updates for containers where the current limit equals the current request until limit updates are supported.

Suggested guard
 	// TODO: will be possible from 1.34
 	if recommendedMemoryLimit < currentMemoryLimit {
+		if currentMemoryLimit == currentMemoryRequest {
+			logging.Infof(ctx, "Skipping memory recommendation for guaranteed pod %s/%s; limit cannot be decreased yet", rec.PodInfo.Namespace, rec.PodInfo.Name)
+			return false, true, nil
+		}
 		recommendedMemoryLimit = math.Ceil(currentMemoryLimit)
 	}

	currentMemoryLimitQuantity := currentContainerResources.Limits[corev1.ResourceMemory]
	currentMemoryLimit := float64(currentMemoryLimitQuantity.Value()) / utils.BytesToMBDivisor

	if currentMemoryRequest == currentMemoryLimit {

We need to mark a pod that has equal request and limit for either memory or CPU for eviction, like here:

	sort.Slice(podInfosClone, func(i, j int) bool {

so that it can be evicted and recreated with corrected request and limit.
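The reviewer's idea could be sketched roughly as follows. The PodInfo fields, the needsRecreate helper, and the sort-first-for-eviction rule are all assumptions for illustration; the real podInfosClone sort in the repository carries different criteria:

```go
package main

import (
	"fmt"
	"sort"
)

// PodInfo is a simplified stand-in; the real struct in the repository
// carries more fields.
type PodInfo struct {
	Name       string
	MemRequest float64
	MemLimit   float64
	CPURequest float64
	CPULimit   float64
}

// needsRecreate marks pods whose request equals limit for memory or CPU:
// in-place resize cannot fix them while limits cannot be lowered, so
// they must be evicted and recreated with corrected requests and limits.
// (A production version would also check that limits are actually set.)
func needsRecreate(p PodInfo) bool {
	return p.MemRequest == p.MemLimit || p.CPURequest == p.CPULimit
}

func main() {
	pods := []PodInfo{
		{Name: "burstable", MemRequest: 100, MemLimit: 200, CPURequest: 0.1, CPULimit: 0.5},
		{Name: "guaranteed", MemRequest: 256, MemLimit: 256, CPURequest: 0.5, CPULimit: 0.5},
	}
	// Sort recreate candidates first so an eviction loop picks them up,
	// mirroring the sort.Slice(podInfosClone, ...) pattern the reviewer points at.
	sort.Slice(pods, func(i, j int) bool {
		return needsRecreate(pods[i]) && !needsRecreate(pods[j])
	})
	fmt.Println(pods[0].Name) // guaranteed
}
```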
