Conversation

@alt-dima

@alt-dima alt-dima commented Nov 21, 2025

Provide a description of what has been changed

Checklist

Fixes #

Relates to #

@alt-dima alt-dima requested a review from a team as a code owner November 21, 2025 12:04
@keda-automation keda-automation requested a review from a team November 21, 2025 12:04
@github-actions

Thank you for your contribution! 🙏

Please understand that we will do our best to review your PR and give you feedback as soon as possible, but please bear with us if it takes a little longer than expected.

While you are waiting, make sure to:

  • Add an entry in our changelog in alphabetical order and link related issue
  • Update the documentation, if needed
  • Add unit & e2e tests for your changes
  • GitHub checks are passing
  • Is the DCO check failing? Here is how you can fix DCO issues

Once the initial tests are successful, a KEDA member will ensure that the e2e tests are run. Once the e2e tests have been successfully completed, the PR may be merged at a later date. Please be patient.

Learn more about our contribution guide.

@snyk-io

snyk-io bot commented Nov 21, 2025

Snyk checks have passed. No issues have been found so far.

| Status | Scanner | Critical | High | Medium | Low | Total (0) |
| --- | --- | --- | --- | --- | --- | --- |
|  | Open Source Security | 0 | 0 | 0 | 0 | 0 issues |

💻 Catch issues earlier using the plugins for VS Code, JetBrains IDEs, Visual Studio, and Eclipse.

@alt-dima alt-dima force-pushed the feature/dimaal/pod-spec-lazy branch from ff658a7 to 66ffe65 Compare November 21, 2025 12:11
@JorTurFer
Member

JorTurFer commented Nov 21, 2025

/run-e2e
Update: You can check the progress here

@JorTurFer
Member

I think that avoiding checks on fields that aren't needed is always nice, but why would you like to avoid podSpec? Is there any case you're dealing with?

@alt-dima
Author

alt-dima commented Nov 21, 2025

> I think that avoiding checks on fields that aren't needed is always nice, but why would you like to avoid podSpec? Is there any case you're dealing with?

Yes! After updating from version 2.16.1 to 2.18.1 I noticed a spike in memory usage from 256 MB to 1 GB!
Some details and investigation:
https://kubernetes.slack.com/archives/CKZJ36A5D/p1763483034433569

In our clusters we do not use any fromEnv options, and in the biggest clusters thousands of pods result in a lot of calls to the Kubernetes API and high memory usage in KEDA.

I would like to understand why/when the behaviour changed. Maybe I missed it in the changelog.

[screenshot: memory usage spike after the upgrade]
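
For readers following along, here is a minimal, hypothetical sketch of the idea behind the change (not KEDA's actual code): the scale target's pod spec only needs to be resolved when some trigger metadata actually references container environment variables via a `*FromEnv` key, so the workload lookup can be skipped otherwise. The `Trigger` type and the `anyTriggerUsesFromEnv` helper below are illustrative stand-ins, not KEDA identifiers.

```go
package main

import (
	"fmt"
	"strings"
)

// Trigger is a deliberately simplified stand-in for KEDA's trigger type,
// used here only to illustrate the heuristic.
type Trigger struct {
	Type     string
	Metadata map[string]string
}

// anyTriggerUsesFromEnv reports whether any trigger metadata key references a
// container environment variable (by convention, keys ending in "FromEnv").
// If none do, resolving the scale target's pod spec can be skipped entirely,
// avoiding the extra workload lookup and its deep copy.
func anyTriggerUsesFromEnv(triggers []Trigger) bool {
	for _, t := range triggers {
		for key := range t.Metadata {
			if strings.HasSuffix(key, "FromEnv") {
				return true
			}
		}
	}
	return false
}

func main() {
	triggers := []Trigger{
		{Type: "prometheus", Metadata: map[string]string{"query": "up", "threshold": "10"}},
	}
	fmt.Println(anyTriggerUsesFromEnv(triggers)) // false: no need to fetch the pod spec
}
```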

@JorTurFer
Member

Gotcha! Now it makes sense :)
As we are using a cached client, I'm not sure whether this will solve or even reduce the load, since the manifests are already cached and requests are served from the client's local cache. Have you seen memory improvements after this fix?
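
As a rough sketch of what the cached client does (KEDA is built on controller-runtime, whose default manager client reads from the shared informer cache): a Get avoids a round trip to the API server, but it still returns a deep copy of the cached object, pod template included. The snippet below is illustrative only; `getScaleTarget` is not a KEDA function.

```go
// Package scaletarget: a sketch of a read through a controller-runtime cached
// client. The call is served from the informer cache (no API server request),
// but the returned object is a DeepCopy of the cached item, so a large pod
// template is still copied on every call.
package scaletarget

import (
	"context"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	"k8s.io/apimachinery/pkg/types"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// getScaleTarget fetches the Deployment backing a ScaledObject. With a
// manager-provided client this is a cache read plus a deep copy.
func getScaleTarget(ctx context.Context, c client.Client, ns, name string) (*appsv1.Deployment, error) {
	deploy := &appsv1.Deployment{}
	if err := c.Get(ctx, types.NamespacedName{Namespace: ns, Name: name}, deploy); err != nil {
		return nil, fmt.Errorf("getting scale target %s/%s: %w", ns, name, err)
	}
	return deploy, nil
}
```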

@alt-dima
Author

alt-dima commented Nov 21, 2025

> Gotcha! Now it makes sense :) As we are using a cached client, I'm not sure whether this will solve or even reduce the load, since the manifests are already cached and requests are served from the client's local cache. Have you seen memory improvements after this fix?

Just deployed it on staging and I see a decrease from 256 MB to 145 MB, and it also looks better in pprof now (need to solve secrets too :) ).
I can't deploy to the biggest production cluster right now to verify the drop from 1 GB of memory usage.

[screenshot: memory usage and pprof after the patch]

And to verify again, I deployed the non-patched version:
[screenshot: memory usage with the non-patched version]
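
For anyone wanting to reproduce this kind of before/after comparison, here is a generic Go sketch of exposing a pprof endpoint with the standard library; this is not necessarily how KEDA exposes profiling, it just shows the usual mechanism for capturing heap profiles.

```go
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers /debug/pprof/* handlers on the default mux
)

func main() {
	// Heap profiles can then be fetched with, e.g.:
	//   go tool pprof http://localhost:6060/debug/pprof/heap
	log.Println(http.ListenAndServe("localhost:6060", nil))
}
```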

Member

@JorTurFer JorTurFer left a comment

Nice improvement!

@JorTurFer
Member

Could you include an entry in the changelog? I think it fits nicely under Improvements.

@JorTurFer
Member

JorTurFer commented Dec 8, 2025

/run-e2e
Update: You can check the progress here

@JorTurFer
Member

It looks like the unit tests are failing, could you take a look?

@keda-automation keda-automation requested a review from a team December 9, 2025 05:36
Member

@zroubalik zroubalik left a comment

Nice improvement!

Could you please check the unit test failures and add a changelog entry?

@keda-automation keda-automation requested a review from a team December 13, 2025 14:24
Signed-off-by: Dmitriy Altuhov <[email protected]>
Signed-off-by: Dmitriy Altuhov <[email protected]>
Signed-off-by: Dmitriy Altuhov <[email protected]>
@alt-dima
Author

Actually, now I'm not sure that the memory usage was caused by this PodSpec lookup, because in version 2.18.2 it was fixed :/
So it was not related to PodSpec. From the 2.18.1 -> 2.18.2 changelog I was unable to understand which change/PR fixed the memory issue.

So maybe this PR is not so relevant anymore.

[screenshot: memory usage on 2.18.2]

@rickbrouwer
Member

Given that this PR apparently no longer fixes the memory usage issue, and given @JorTurFer's comment about the cached client, is this PR still necessary as an improvement? Is there any benefit to it? Does it still solve something?

@JorTurFer
Member

JorTurFer commented Dec 14, 2025

Although the client is cached, requesting items which are not going to be used still increases memory usage (IIRC, the controller-runtime client returns a DeepCopy of cached items to ensure consistency). I'm not sure whether this really makes sense or whether it's a micro-optimization that makes the code harder to understand.

@alt-dima, could you rebase your branch on the latest dependency versions and check whether there is any memory reduction compared to v2.18.2?
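
To make the DeepCopy point concrete, here is a small standalone illustration (using the generated k8s.io/api types, not KEDA code): deep-copying a deployment whose pod template carries many containers and env vars allocates on every copy, which is the per-Get cost a cached client adds and the cost that skipping unneeded pod-spec lookups would avoid. The container counts and sizes below are arbitrary, chosen only for illustration.

```go
package main

import (
	"fmt"
	"runtime"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Build a deployment with a moderately large pod template.
	deploy := &appsv1.Deployment{}
	for i := 0; i < 50; i++ {
		deploy.Spec.Template.Spec.Containers = append(deploy.Spec.Template.Spec.Containers, corev1.Container{
			Name: fmt.Sprintf("c%d", i),
			Env:  []corev1.EnvVar{{Name: "SOME_VAR", Value: "some-value"}},
		})
	}

	var before, after runtime.MemStats
	runtime.ReadMemStats(&before)

	// Each cached Get returns a DeepCopy; simulate 1000 reads of the object.
	copies := make([]*appsv1.Deployment, 0, 1000)
	for i := 0; i < 1000; i++ {
		copies = append(copies, deploy.DeepCopy())
	}

	runtime.ReadMemStats(&after)
	fmt.Printf("~%d KiB allocated for %d deep copies\n",
		(after.TotalAlloc-before.TotalAlloc)/1024, len(copies))
}
```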
