Conversation

@Abhay349
Contributor

@Abhay349 Abhay349 commented Jan 5, 2026

What does this PR do?

This PR adds test cases for
SparkApplication (sparkoperator.k8s.io/v1beta2).

@karmada-bot karmada-bot added the size/L Denotes a PR that changes 100-499 lines, ignoring generated files. label Jan 5, 2026
@gemini-code-assist

Summary of Changes

Hello @Abhay349, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the test coverage for SparkApplication (sparkoperator.k8s.io/v1beta2) resource interpretation. It introduces new comprehensive test suites for InterpretComponent and InterpretHealth operations, ensuring robust validation of how Spark applications are represented and their health is determined. Additionally, existing test cases for AggregateStatus and InterpretStatus have been expanded to cover more edge cases and ensure correct behavior under various conditions, ultimately improving the reliability of Spark application management.

Highlights

  • New InterpretComponent Test Cases: Introduced a new test file for InterpretComponent of SparkApplication, covering scenarios with and without dynamic allocation, as well as non-JVM jobs to ensure correct component interpretation.
  • New InterpretHealth Test Cases: Added a new test file for InterpretHealth of SparkApplication, validating health status for completed, failed, and missing status scenarios to accurately reflect application health.
  • Expanded AggregateStatus Test Cases: Enhanced the AggregateStatus test suite to include cases for SparkApplications with a single status item and those with no status items, improving robustness for status aggregation.
  • Expanded InterpretStatus Test Cases: Updated the InterpretStatus test cases to cover situations where the observed status of a SparkApplication is empty or missing, ensuring proper handling of incomplete status data.



@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request significantly improves the test coverage for the SparkApplication resource interpreter by adding several new test cases for status aggregation, component interpretation, health checks, and status reflection. The new tests are well-structured and cover important edge cases. I've identified an issue with an incorrect expected memory value in one of the new test files and a minor formatting issue. Overall, this is a valuable contribution.

@codecov-commenter

codecov-commenter commented Jan 5, 2026

⚠️ Please install the Codecov GitHub app to ensure uploads and comments are reliably processed by Codecov.

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 46.56%. Comparing base (2d6e8f8) to head (4a02960).
⚠️ Report is 9 commits behind head on master.
❗ Your organization needs to install the Codecov GitHub app to enable full functionality.

Additional details and impacted files
```
@@            Coverage Diff             @@
##           master    #7087      +/-   ##
==========================================
+ Coverage   46.55%   46.56%   +0.01%     
==========================================
  Files         700      700              
  Lines       48091    48091              
==========================================
+ Hits        22387    22392       +5     
+ Misses      24021    24017       -4     
+ Partials     1683     1682       -1     
```

Flag: unittests, coverage 46.56% <ø> (+0.01%) ⬆️

Flags with carried forward coverage won't be shown.

@Abhay349 Abhay349 force-pushed the sparkapplication-interpreter-tests branch 2 times, most recently from e3886af to 38eee15 on January 5, 2026 at 15:01
@XiShanYongYe-Chang
Member

Thanks
/assign

@XiShanYongYe-Chang
Member

Hi @FAUST-BENCHOU Would you like to review this PR?

@FAUST-BENCHOU
Contributor

Hi @FAUST-BENCHOU Would you like to review this PR?

Sure! Let me take a look at the relevant Lua script first.

@FAUST-BENCHOU
Contributor

The core tests are covered, but the boundary-case tests are relatively weak. Below are some of my suggestions for improvement (and of course, you can extend these further).

AggregateStatus

1. Several status items have missing fields: some untested items may lack fields such as executionAttempts and submissionAttempts.
2. The comparison (and related) logic for lastSubmissionAttemptTime and terminationTime:

```lua
if currentStatus.lastSubmissionAttemptTime and (lastSubmissionAttemptTime == "" or currentStatus.lastSubmissionAttemptTime > lastSubmissionAttemptTime) then
    lastSubmissionAttemptTime = currentStatus.lastSubmissionAttemptTime
end
if currentStatus.terminationTime and (terminationTime == "" or currentStatus.terminationTime > terminationTime) then
    terminationTime = currentStatus.terminationTime
end
```
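A boundary case for the missing-fields suggestion could be sketched roughly as follows. This is a hypothetical fragment that only mirrors the key shapes quoted in this thread (operation, state, etc.); the actual testdata schema and expected defaults depend on the repository's customization-test format.

```yaml
# Hypothetical boundary case: one status item lacks the attempts fields.
# The harness keys (name/operation/observed) are assumptions, not the exact schema.
name: SparkApplication aggregates status when attempts fields are missing
operation: AggregateStatus
observed:
  applicationState:
    state: COMPLETED
  # executionAttempts and submissionAttempts intentionally omitted,
  # to exercise the nil branches in the aggregation script
```

The expected output is left unspecified here on purpose, since the defaulting behavior is defined by the Lua script under test.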

InterpretComponent

1. GPU-resource handling:

```lua
local drv_gpu = get(observedObj, {"spec", "driver", "gpu"})
if drv_gpu ~= nil then
    local gpu_name = drv_gpu.name
    local gpu_qty = to_num(drv_gpu.quantity, 0)
    if not isempty(gpu_name) and gpu_qty > 0 then
        drv_requires.resourceRequest[gpu_name] = gpu_qty
    end
end
```

2. Default values such as executor.instances, driver.memory, executor.memory, and so on.
3. Pod-template handling:

```lua
local function apply_pod_template(pt_spec, requires)
    if pt_spec == nil then
        return
    end
    local nodeSelector = clone_plain(pt_spec.nodeSelector)
    local tolerations = clone_plain(pt_spec.tolerations)
    local priority = pt_spec.priorityClassName
    local hardNodeAffinity = nil
    if pt_spec.affinity and pt_spec.affinity.nodeAffinity then
        hardNodeAffinity = clone_plain(pt_spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution)
    end
    -- Only create nodeClaim if there is content
    if nodeSelector ~= nil or tolerations ~= nil or hardNodeAffinity ~= nil then
        requires.nodeClaim = requires.nodeClaim or {}
        requires.nodeClaim.nodeSelector = nodeSelector
        requires.nodeClaim.tolerations = tolerations
        requires.nodeClaim.hardNodeAffinity = hardNodeAffinity
    end
    if not isempty(priority) then
        requires.priorityClassName = priority
    end
end
```
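For the GPU branch above, a matching component test might look roughly like this. The spec.driver.gpu shape (name, quantity) follows the Lua snippet in this comment; the surrounding harness keys are assumptions, not the repository's exact testdata schema.

```yaml
# Hypothetical case: driver requests a custom GPU resource,
# exercising the drv_gpu branch of the interpreter script.
name: SparkApplication driver with GPU resource
operation: InterpretComponent
observed:
  spec:
    driver:
      cores: 1
      memory: 512m
      gpu:
        name: nvidia.com/gpu
        quantity: 1
```

A companion case with quantity: 0 (or a missing name) would confirm that the guard clauses skip the resourceRequest entry.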

InterpretHealth

All in all, good. Maybe you can also consider scenarios where state is an empty string, or where applicationState exists but the state field is missing.
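The suggested missing-state case could be sketched like this, mirroring the InterpretHealth fragment discussed in the review comments on this PR. The exact testdata schema is an assumption.

```yaml
# Hypothetical case: applicationState exists but state is missing.
name: SparkApplication health when applicationState.state is absent
operation: InterpretHealth
observed:
  status:
    applicationState: {}
# If the script only checks state ~= "FAILED", a missing state would
# likely be reported healthy; an explicit test pins that behavior down.
```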

InterpretStatus (ReflectStatus)

Good, but maybe you can also consider that when applicationState, driverInfo, executorState, etc. are nil, a copy is still made.

@Abhay349 Thanks for your awesome job!

@FAUST-BENCHOU
Contributor

@XiShanYongYe-Chang This is just my opinion; any feedback is welcome.🙃

@Abhay349
Contributor Author

Abhay349 commented Jan 6, 2026

@FAUST-BENCHOU Thanks for the review.
Let me look into those issues.

@Abhay349
Contributor Author

Abhay349 commented Jan 6, 2026

Thanks a lot for the review and suggestions.
I’m a beginner with these interpreter tests, so this feedback really helps.
I just wanted to confirm: for cases where some status fields are missing, should we add explicit tests for those scenarios or treat them as default values?

I’ll update the tests accordingly. Thanks again!

@FAUST-BENCHOU
Copy link
Contributor

Thanks a lot for the review and suggestions I’m a beginner with these interpreter tests, so this feedback really helps. I just wanted to confirm, for cases where some status fields are missing, should we add explicit tests for those scenarios or treat them as default values?

I’ll update the tests accordingly. Thanks again!

I think we should explicitly test the boundary cases of missing fields, rather than relying solely on default values, to ensure the logic is correct when fields are missing.

@Abhay349
Contributor Author

Abhay349 commented Jan 6, 2026

Sure, I will fix it shortly.
Thanks :)

@Abhay349 Abhay349 force-pushed the sparkapplication-interpreter-tests branch from 38eee15 to 2f0a1b3 on January 6, 2026 at 19:56
@karmada-bot karmada-bot added size/XL Denotes a PR that changes 500-999 lines, ignoring generated files. and removed size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels Jan 6, 2026
@Abhay349 Abhay349 force-pushed the sparkapplication-interpreter-tests branch from 2f0a1b3 to 4a02960 on January 6, 2026 at 19:56
@Abhay349
Contributor Author

Abhay349 commented Jan 6, 2026

Hi @FAUST-BENCHOU @XiShanYongYe-Chang, I have added the required changes;
happy to iterate on further feedback.

Thanks

@XiShanYongYe-Chang
Member

Let's wait for @FAUST-BENCHOU to confirm it again.

@FAUST-BENCHOU
Contributor

@XiShanYongYe-Chang overall great!

@XiShanYongYe-Chang XiShanYongYe-Chang left a comment

Thanks @Abhay349 @FAUST-BENCHOU
/lgtm
/approve

@karmada-bot karmada-bot added the lgtm Indicates that a PR is ready to be merged. label Jan 7, 2026
@karmada-bot
Copy link
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: XiShanYongYe-Chang

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Details: Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@karmada-bot karmada-bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Jan 7, 2026
@karmada-bot karmada-bot merged commit 050353e into karmada-io:master Jan 7, 2026
21 checks passed
```yaml
state: ""
operation: InterpretHealth
output:
  healthy: true
```
Contributor

The case description is "SparkApplication unhealthy when applicationState.state is empty", so why is the output healthy?

Contributor Author

According to the logic, the empty string "" is not equal to "FAILED", so state == "" is treated as healthy.

Contributor

Got it. Maybe you can change the comment to something like "SparkApplication healthy when applicationState.state is empty".

```yaml
applicationState: {}
operation: InterpretStatus
output:
  status: {}
```
Contributor

The message is "InterpretStatus should copy applicationState even if the state field is missing", but the expected output is status: {}. This is inconsistent with the description "copy applicationState" (if it were copied, the applicationState field should be included, even if the map is empty).

Contributor Author

Apologies for the description issue.
Would "InterpretStatus does not copy applicationState when the state field is missing" work?

```yaml
# case4. SparkApplication with missing attempts fields
# case5. SparkApplication with missing executorState
# case6. SparkApplication with missing lastSubmissionAttemptTime
# case7. SparkApplication with missing terminationTime
```
Contributor

The comments need to be updated; they don't match the test cases.

@XiShanYongYe-Chang
Copy link
Member

Hi @Abhay349, just as @FAUST-BENCHOU said, some use case descriptions seem to be inconsistent with the content. Could you help revise the descriptions of the use cases? (The test cases themselves are fine.)

