feat: expose lua vm pool size as flag #7049
Conversation
Signed-off-by: vie-serendipity <[email protected]>
[APPROVALNOTIFIER] This PR is NOT APPROVED.
Summary of Changes (Gemini Code Assist): This pull request enhances the system's ability to handle varying workloads by exposing the Lua VM pool sizes as configurable flags. This change allows operators to fine-tune the performance of the resource interpreters, preventing potential bottlenecks when the demand for concurrent interpretation tasks exceeds the default capacity, thereby improving overall scalability and responsiveness.
Code Review
This pull request introduces configurable Lua VM pool sizes for both configurable and third-party interpreters. It adds new command-line options, ConfigurableInterpreterLuaVMPoolSize and ThirdPartyInterpreterLuaVMPoolSize, to the agent and controller-manager, which are then passed to the resourceinterpreter.NewResourceInterpreter function. The NewConfigurableInterpreter functions for declarative and third-party interpreters are updated to accept these pool sizes, replacing previously hardcoded values. Review comments suggest using a shared constant for hardcoded pool sizes in karmadactl commands and tests, updating function comments for NewConfigurableInterpreter in pkg/resourceinterpreter/customized/declarative/configurable.go and pkg/resourceinterpreter/default/thirdparty/thirdparty.go to describe the new pool parameter, and removing an outdated TODO comment.
```go
thirdpartyInterpreter := thirdparty.NewConfigurableInterpreter(10)
configurableInterpreter := declarative.NewConfigurableInterpreter(controlPlaneInformerManager, 10)
```
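The review suggests replacing these scattered literal 10s with a shared constant. A minimal sketch of that suggestion; the constant's name and placement are assumptions, not the repository's actual code:

```go
package main

import "fmt"

// DefaultLuaVMPoolSize centralizes the Lua VM pool-size default so
// karmadactl commands and tests stop repeating the bare literal 10.
// (Name and package placement are illustrative assumptions.)
const DefaultLuaVMPoolSize = 10

func main() {
	// Both construction sites would then take the shared constant, e.g.:
	//   thirdparty.NewConfigurableInterpreter(DefaultLuaVMPoolSize)
	//   declarative.NewConfigurableInterpreter(informer, DefaultLuaVMPoolSize)
	fmt.Println(DefaultLuaVMPoolSize)
}
```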
```go
// NewConfigurableInterpreter builds a new interpreter by registering the
// event handler to the provided informer instance.
```
The function comment for NewConfigurableInterpreter should be updated to describe the new pool parameter, as per the repository's style guide.
Suggested change:

```diff
 // NewConfigurableInterpreter builds a new interpreter by registering the
-// event handler to the provided informer instance.
+// event handler to the provided informer instance. The pool parameter specifies the size of the Lua VM pool.
```
References
- All exported functions must be documented with clear and concise comments describing their purpose and behavior. (link)
```go
func NewConfigurableInterpreter(informer genericmanager.SingleClusterInformerManager, pool int) *ConfigurableInterpreter {
	return &ConfigurableInterpreter{
		configManager: configmanager.NewInterpreterConfigManager(informer),
		// TODO: set an appropriate pool size.
```
```diff
 // NewConfigurableInterpreter return a new ConfigurableInterpreter.
-func NewConfigurableInterpreter() *ConfigurableInterpreter {
+func NewConfigurableInterpreter(pool int) *ConfigurableInterpreter {
```
The function comment for NewConfigurableInterpreter should be updated to describe the new pool parameter, as per the repository's style guide.

```go
// NewConfigurableInterpreter return a new ConfigurableInterpreter with a specific Lua VM pool size.
```

References
- All exported functions must be documented with clear and concise comments describing their purpose and behavior. (link)
Hi @vie-serendipity, does each operation of the resource interpreter currently occupy a separate Lua VM?
I'm glad to see the feedback on Lua pool size. I'd like to understand more about the specific impact of the current default value of 10. Does it affect throughput in any particular way?

I understand that all current operations share the pool, but the custom interpreter and the third-party interpreter use separate pools.
Codecov Report

```
@@            Coverage Diff             @@
##           master    #7049      +/-   ##
==========================================
- Coverage   46.64%   46.56%   -0.09%
==========================================
  Files         699      700       +1
  Lines       48163    48090      -73
==========================================
- Hits        22465    22392      -73
- Misses      24002    24016      +14
+ Partials     1696     1682      -14
```
Thanks for the clarification. Based on the current implementation, each Lua script execution acquires a VM from the pool and returns it after use. Therefore, both the custom interpreter and the third-party interpreter can each handle up to 10 concurrent resource interpretations, and this concurrency limit could become a QPS bottleneck. (See karmada/pkg/resourceinterpreter/customized/declarative/luavm/lua.go, lines 73 to 79 at 1b4e9c9.)
I wanted the rb controller to create RBs as quickly as possible, so I increased its concurrency, which helped initially. But gradually, adding concurrency stopped being effective. I suspected the size limit of the Lua VM pool, since interpretation involves a significant amount of CPU-intensive work, and obviously increasing concurrency doesn't improve QPS for CPU-bound operations.
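The CPU-bound reasoning above can be made concrete with back-of-the-envelope numbers (all figures illustrative, not measurements from this PR): if each interpretation costs t seconds of CPU, throughput is capped by min(pool size, cores) / t no matter how high controller concurrency goes.

```go
package main

import "fmt"

// qpsCeiling estimates the maximum interpretations per second: effective
// parallelism is bounded by both the Lua VM pool and the CPU cores, so
// raising controller concurrency past that bound cannot raise QPS.
func qpsCeiling(poolSize, cpuCores int, perOpSeconds float64) float64 {
	parallel := poolSize
	if cpuCores < parallel {
		parallel = cpuCores
	}
	return float64(parallel) / perOpSeconds
}

func main() {
	// 10-VM pool, 8 cores, 5 ms of CPU per interpretation:
	fmt.Printf("%.0f QPS\n", qpsCeiling(10, 8, 0.005)) // min(10,8)/0.005 = 1600
	// Doubling the pool only helps up to the core count:
	fmt.Printf("%.0f QPS\n", qpsCeiling(20, 8, 0.005)) // still 1600
}
```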
Does any evidence show that the bottleneck is the size of the Lua VM pool?
I find your evidence somewhat convincing. Besides the revise operation, have any other resource interpreter operations hit bottlenecks?

This likely refers to creating or updating Work resources.
I think it's fine. Let's see what @RainbowMango thinks. Tomorrow is the community meeting; shall we just discuss it there?
Thanks to @RainbowMango's guidance at the community meeting, my assumption turned out to be unconfirmed. Further investigation of the CPU profile flame graph confirmed that the RB controller's QPS was not limited by the creation and destruction of Lua VMs. The following metrics indicate that, despite the pool size being 10, no frequent creation and destruction is happening.
However, increasing the Lua VM pool size did improve performance. I suspected lock contention on the pool, but the evidence shows that is not the case.
Finally, I suspect a deeper cause, but I don't have much familiarity with Lua itself, so I won't pursue further root-cause analysis in the short term. If the original implementer of this part has any insights, I would be very grateful to hear your troubleshooting approach.
That's nice input and gives us a good reason to dig into it.

Don't worry, @XiShanYongYe-Chang is looking into it. I believe he will leave his findings here.
@vie-serendipity Thank you for providing some direction. |





What type of PR is this?
/kind feature
What this PR does / why we need it:
We should consider exposing the Lua VM pool size as flags so that it can be adjusted. When ConcurrentXXXXSyncs is much greater than 10, 10 Lua VMs might not be enough.

Which issue(s) this PR fixes:
Fixes #
Special notes for your reviewer:
Does this PR introduce a user-facing change?: