
Conversation

@vie-serendipity
Contributor

What type of PR is this?

/kind feature

What this PR does / why we need it:

We should consider exposing the Lua VM pool size via flags so it can be adjusted. When ConcurrentXXXXSyncs is much higher than 10, 10 Lua VMs might not be enough.

Which issue(s) this PR fixes:

Fixes #

Special notes for your reviewer:

Does this PR introduce a user-facing change?:

1. `karmada-controller-manager`: Add new flags (`--configurable-interpreter-lua-vm-pool-size` and `--third-party-interpreter-lua-vm-pool-size`)
2. `karmada-agent`: Add new flags (`--configurable-interpreter-lua-vm-pool-size` and `--third-party-interpreter-lua-vm-pool-size`)

@karmada-bot karmada-bot added the kind/feature Categorizes issue or PR as related to a new feature. label Dec 26, 2025
@karmada-bot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign ikaven1024, lonelycz for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@karmada-bot karmada-bot added the size/M Denotes a PR that changes 30-99 lines, ignoring generated files. label Dec 26, 2025
@gemini-code-assist

Summary of Changes

Hello @vie-serendipity, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances the system's ability to handle varying workloads by exposing the Lua VM pool sizes as configurable flags. This change allows operators to fine-tune the performance of the resource interpreters, preventing potential bottlenecks when the demand for concurrent interpretation tasks exceeds the default capacity, thereby improving overall scalability and responsiveness.

Highlights

  • New Configuration Flags: Introduced two new command-line flags: --configurable-interpreter-lua-vm-pool-size and --third-party-interpreter-lua-vm-pool-size for both karmada-agent and karmada-controller-manager.
  • Adjustable Lua VM Pool Size: The size of the Lua Virtual Machine (VM) pools for configurable and third-party interpreters can now be dynamically adjusted via these new flags, with a default value of 10.
  • Resource Interpreter Initialization: The resourceinterpreter components now accept and utilize these configurable pool sizes during their initialization, allowing for better performance tuning.



@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces configurable Lua VM pool sizes for both configurable and third-party interpreters. It adds new command-line options, ConfigurableInterpreterLuaVMPoolSize and ThirdPartyInterpreterLuaVMPoolSize, to the agent and controller-manager, which are then passed to the resourceinterpreter.NewResourceInterpreter function. The NewConfigurableInterpreter functions for declarative and third-party interpreters are updated to accept these pool sizes, replacing previously hardcoded values. Review comments suggest using a shared constant for hardcoded pool sizes in karmadactl commands and tests, updating function comments for NewConfigurableInterpreter in pkg/resourceinterpreter/customized/declarative/configurable.go and pkg/resourceinterpreter/default/thirdparty/thirdparty.go to describe the new pool parameter, and removing an outdated TODO comment.

Comment on lines +467 to +468
thirdpartyInterpreter := thirdparty.NewConfigurableInterpreter(10)
configurableInterpreter := declarative.NewConfigurableInterpreter(controlPlaneInformerManager, 10)


medium

Similar to pkg/karmadactl/interpret/execute.go, the pool size 10 is hardcoded here. To improve maintainability and avoid magic numbers, it would be better to define a constant for this default value in a shared location and use it consistently across the codebase.

Comment on lines 43 to 44
// NewConfigurableInterpreter builds a new interpreter by registering the
// event handler to the provided informer instance.


medium

The function comment for NewConfigurableInterpreter should be updated to describe the new pool parameter, as per the repository's style guide.

Suggested change
- // NewConfigurableInterpreter builds a new interpreter by registering the
- // event handler to the provided informer instance.
+ // NewConfigurableInterpreter builds a new interpreter by registering the
+ // event handler to the provided informer instance. The pool parameter specifies the size of the Lua VM pool.
References
  1. All exported functions must be documented with clear and concise comments describing their purpose and behavior. (link)

func NewConfigurableInterpreter(informer genericmanager.SingleClusterInformerManager, pool int) *ConfigurableInterpreter {
return &ConfigurableInterpreter{
configManager: configmanager.NewInterpreterConfigManager(informer),
// TODO: set an appropriate pool size.


medium

This TODO comment can be removed as the pool size is now configurable via the pool parameter.


// NewConfigurableInterpreter return a new ConfigurableInterpreter.
- func NewConfigurableInterpreter() *ConfigurableInterpreter {
+ func NewConfigurableInterpreter(pool int) *ConfigurableInterpreter {


medium

The function comment for NewConfigurableInterpreter should be updated to describe the new pool parameter, as per the repository's style guide.

// NewConfigurableInterpreter return a new ConfigurableInterpreter with a specific Lua VM pool size.
References
  1. All exported functions must be documented with clear and concise comments describing their purpose and behavior. (link)

@zhzhuang-zju
Contributor

Hi @vie-serendipity, does each operation of the resource interpreter currently occupy a separate Lua VM?

@XiShanYongYe-Chang
Member

I'm glad to see the feedback on Lua pool size.

I would like to know more about the specific impact of the current default value of 10. Does it noticeably affect speed anywhere?

@XiShanYongYe-Chang
Member

Hi @vie-serendipity, does each operation of the resource interpreter currently occupy a separate Lua VM?

My understanding is that all current operations share the pool, but the custom interpreter and the third-party interpreter are kept separate.

@codecov-commenter


Codecov Report

❌ Patch coverage is 5.26316% with 18 lines in your changes missing coverage. Please review.
✅ Project coverage is 46.56%. Comparing base (7f726b6) to head (d073525).
⚠️ Report is 50 commits behind head on master.

Files with missing lines Patch % Lines
pkg/resourceinterpreter/interpreter.go 0.00% 7 Missing ⚠️
cmd/agent/app/options/options.go 0.00% 2 Missing ⚠️
cmd/controller-manager/app/options/options.go 0.00% 2 Missing ⚠️
pkg/karmadactl/promote/promote.go 0.00% 2 Missing ⚠️
...interpreter/customized/declarative/configurable.go 0.00% 2 Missing ⚠️
cmd/agent/app/agent.go 0.00% 1 Missing ⚠️
cmd/controller-manager/app/controllermanager.go 0.00% 1 Missing ⚠️
...sourceinterpreter/default/thirdparty/thirdparty.go 0.00% 1 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##           master    #7049      +/-   ##
==========================================
- Coverage   46.64%   46.56%   -0.09%     
==========================================
  Files         699      700       +1     
  Lines       48163    48090      -73     
==========================================
- Hits        22465    22392      -73     
- Misses      24002    24016      +14     
+ Partials     1696     1682      -14     
Flag Coverage Δ
unittests 46.56% <5.26%> (-0.09%) ⬇️


@zhzhuang-zju
Contributor

zhzhuang-zju commented Dec 26, 2025

I understand that all current operations should be shared, but custom interpreters and third-party interpreter are separate.

Thanks for the clarification. Based on the current implementation, each Lua script execution acquires a VM from the pool and returns it after use. So the custom interpreter and the third-party interpreter can each handle at most 10 concurrent resource interpretations, and this concurrency limit could become a bottleneck for QPS.

// RunScript got a lua vm from pool, and execute script with given arguments.
func (vm *VM) RunScript(script string, fnName string, nRets int, args ...interface{}) ([]lua.LValue, error) {
	a, err := vm.Pool.Get()
	if err != nil {
		return nil, err
	}
	defer vm.Pool.Put(a)

@vie-serendipity
Contributor Author

I would like to know more about the specific impact of the current default value of 10. Will it affect any particular speed?

I wanted the RB controller to create RBs as quickly as possible, so I increased its concurrency. That helped initially, but the gains gradually stopped.

I identified the size limitation of the Lua VM pool as the likely cause, since the interpreter involves a significant amount of CPU-intensive operations. Obviously, increasing concurrency doesn't improve QPS for CPU-bound operations.

@RainbowMango
Member

I identified that it was likely the size limitation of the Lua VM pool that was causing a significant amount of CPU-intensive operations. Obviously, increasing concurrency doesn't improve QPS for CPU-bound operations.

Does any evidence show that the bottleneck is the size of the Lua VM pool?

@vie-serendipity
Contributor Author

  1. Flame graphs reveal that the revise operation takes a significant amount of time, while the other client activities are mostly IO-bound; theoretically, concurrency should be able to speed up IO-bound work.
image
  2. After I increased the Lua VM pool size, the RB controller QPS increased accordingly.

@XiShanYongYe-Chang
Member

I feel that your evidence is somewhat convincing.

Besides the revise operation, have there been any other resource interpreter operations that encountered bottlenecks?

I wanted rb controller to create RB as quickly as possible

This likely refers to creating or updating work resources.

@vie-serendipity
Contributor Author

Besides the revise operation, have there been any other resource interpreter operations that encountered bottlenecks?

The detector, RB controller, and RB status controller all have this bottleneck, as they are all quite dependent on the interpreter's work.

This likely refers to creating or updating work resources.

Yes, you're absolutely right. Sorry for my mistake. I've captured a new image.
image

@XiShanYongYe-Chang
Member

I think it's fine. Let's see how @RainbowMango thinks. Tomorrow is the community meeting; should we just go ahead and talk about it?

@vie-serendipity
Contributor Author

Thanks to @RainbowMango's guidance at the community meeting, my assumption turned out to be unconfirmed.

Further investigation of the CPU profile flame graph confirmed that the RB controller's QPS was not limited by the creation and destruction of Lua VMs.

The following metrics indicate that, despite the pool size being 10, no frequent creation and destruction is happening.
image

image

However, increasing the Lua VM pool size did improve performance. I suspected lock contention on the pool, but the evidence shows that is not the case.

image

@vie-serendipity
Contributor Author

Finally, I think this might be related to the github.com/yuin/gopher-lua implementation. I have temporarily achieved the desired outcome by adjusting the pool size.

Since I don't have much familiarity with Lua itself, I won't pursue further root-cause analysis in the short term. If the person who originally implemented this part has any insights, I would be very grateful to hear their troubleshooting approach.

@RainbowMango
Member

Finally, I think this might be related to the github.com/yuin/gopher-lua implementation. I have temporarily achieved the desired outcome by adjusting the pool size.

That's a nice input and gives us a good reason to dig into it.

Since I don't have much familiarity with Lua itself, I won't pursue further root-cause analysis in the short term. If the person who originally implemented this part has any insights, I would be very grateful to hear their troubleshooting approach.

Don't worry, @XiShanYongYe-Chang is looking at it. I believe he will leave the findings here.

@RainbowMango RainbowMango added this to the v1.17 milestone Jan 6, 2026
@XiShanYongYe-Chang
Member

@vie-serendipity Thank you for providing some direction.
