feat(controller): Add ResourceTTL configuration field for DSPAs #954
base: main
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
The full list of commands accepted by this bot can be found here.

Details
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
📝 Walkthrough

Adds an optional `resourceTTL` field.

Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes

🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)

📜 Recent review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (7)
🚧 Files skipped from review as they are similar to previous changes (5)
🧰 Additional context used
🧬 Code graph analysis (1): controllers/dspipeline_params.go (1)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
🔇 Additional comments (7)

Warning
There were issues while running some tools. Please review the errors and either fix the tool's configuration or disable the tool if it's a critical failure.
🔧 golangci-lint (2.5.0)
Error: can't load config: unsupported version of the configuration: "". See https://golangci-lint.run/docs/product/migration-guide for migration instructions.
A new image has been built to help with testing out this PR. To use this image run the following:

```shell
cd $(mktemp -d)
git clone [email protected]:opendatahub-io/data-science-pipelines-operator.git
cd data-science-pipelines-operator/
git fetch origin pull/954/head
git checkout -b pullrequest 867d987c4b7c0a7fe3b781c9db1d251b28612b1d
oc new-project opendatahub
make deploy IMG="quay.io/opendatahub/data-science-pipelines-operator:pr-954"
```

More instructions here on how to deploy and test a Data Science Pipelines Application.
Actionable comments posted: 1
🧹 Nitpick comments (2)
controllers/dspipeline_params.go (1)
553-580: Consider logging on JSON marshal failure.

The implementation is correct and handles nil cases properly. However, silently returning an empty string on `json.Marshal` error (lines 576-578) could hide unexpected issues. While this is unlikely to fail with simple maps, adding a log message would improve debuggability.

🔎 Optional: Add error logging

```diff
 patchJSON, err := json.Marshal(patch)
 if err != nil {
+	// Log error for debugging but don't fail - empty patch is a safe fallback
+	// log.V(1).Info("Failed to marshal compiled pipeline spec patch", "error", err)
 	return ""
 }
 return string(patchJSON)
```

Note: This would require passing a logger to the method or using a package-level logger.
config/crd/bases/datasciencepipelinesapplications.opendatahub.io_datasciencepipelinesapplications.yaml (1)
154-161: Schema looks solid! Consider clarifying "completes" in the description.

The field definition is well-structured with appropriate type (`int32`), validation (`minimum: 0`), and clear purpose. The integration with Argo's `ttlStrategy.secondsAfterCompletion` is correctly documented.

Optional: Enhance description clarity

Consider making the description more explicit about completion states to avoid ambiguity:

```diff
 resourceTTL:
   description: |-
-    ResourceTTL specifies the number of seconds after a pipeline run completes before its
+    ResourceTTL specifies the number of seconds after a pipeline run completes (success or failure) before its
     Workflow resources (pods, etc.) are automatically deleted. This helps with cluster resource
-    cleanup. When set, the ttlStrategy.secondsAfterCompletion field is applied to all pipeline runs.
+    cleanup. When set, the ttlStrategy.secondsAfterCompletion field is applied to all pipeline runs.
+    Setting to 0 results in immediate deletion after completion.
   format: int32
   minimum: 0
   type: integer
```

This clarifies:
- Completion includes both success and failure states
- The behavior when set to `0` (edge case documentation)
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (9)
- api/v1/dspipeline_types.go
- api/v1/zz_generated.deepcopy.go
- config/crd/bases/datasciencepipelinesapplications.opendatahub.io_datasciencepipelinesapplications.yaml
- config/internal/apiserver/default/deployment.yaml.tmpl
- controllers/dspipeline_params.go
- controllers/dspipeline_params_test.go
- controllers/testdata/declarative/case_2/deploy/cr.yaml
- controllers/testdata/declarative/case_2/expected/created/apiserver_deployment.yaml
- controllers/testutil/util.go
🧰 Additional context used
🧬 Code graph analysis (3)
controllers/dspipeline_params_test.go (4)
- controllers/dspipeline_params.go (1): DSPAParams (48-111)
- api/v1/dspipeline_types.go (1): APIServer (79-173)
- controllers/dspipeline_fake_controller.go (1): CreateNewTestObjects (65-67)
- controllers/testutil/util.go (2): CreateDSPAWithResourceTTL (280-290), CreateEmptyDSPA (214-236)

controllers/dspipeline_params.go (1)
- api/v1/dspipeline_types.go (1): APIServer (79-173)

controllers/testutil/util.go (1)
- api/v1/dspipeline_types.go (3): DataSciencePipelinesApplication (448-453), MLMD (319-326), APIServer (79-173)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
- GitHub Check: functest
- GitHub Check: precommit
- GitHub Check: dspo-tests
- GitHub Check: dspo-byoargo-tests
- GitHub Check: unittest
🔇 Additional comments (11)
controllers/testdata/declarative/case_2/deploy/cr.yaml (1)

17-17: LGTM! The `resourceTTL: 3600` field is correctly placed under `apiServer` and uses a reasonable test value (1 hour). This aligns with the new `ResourceTTL` field in the APIServer struct.

config/internal/apiserver/default/deployment.yaml.tmpl (1)

151-154: LGTM! The conditional injection of `COMPILED_PIPELINE_SPEC_PATCH` is correctly implemented. The env var is only added when the patch value is non-empty, and the single-quoted JSON value is appropriate for YAML syntax.

api/v1/zz_generated.deepcopy.go (1)

66-70: LGTM! The auto-generated deep copy logic for `ResourceTTL` follows the established pattern for pointer fields, correctly allocating new memory and copying the value only when non-nil.

api/v1/dspipeline_types.go (1)

166-172: LGTM! Well-designed API addition. Using `*int32` correctly distinguishes between unset (nil) and zero (immediate cleanup). The `Minimum=0` validation prevents negative values, and the documentation clearly explains the TTL behavior.

controllers/testdata/declarative/case_2/expected/created/apiserver_deployment.yaml (1)

109-110: LGTM! The expected `COMPILED_PIPELINE_SPEC_PATCH` env var correctly reflects the `resourceTTL: 3600` from the input CR, with proper JSON formatting for the `ttlStrategy` patch.

controllers/testutil/util.go (1)

280-290: LGTM! The test helper follows the established pattern of other `CreateDSPAWith*` functions, correctly setting all required fields and using a pointer to the TTL parameter.

controllers/dspipeline_params_test.go (2)

320-402: LGTM! Comprehensive test coverage for the patch builder. Good test scenarios covering nil APIServer, unset ResourceTTL, zero value, and large values. The JSON unmarshaling verification ensures the patch structure is correct.

404-423: LGTM! Integration test validates end-to-end patch generation. Properly verifies that `ExtractParams` populates `CompiledPipelineSpecPatch` with the expected `ttlStrategy` structure.

controllers/dspipeline_params.go (2)

107-110: LGTM! Well-documented new field. The `CompiledPipelineSpecPatch` field is clearly documented for its purpose of applying global workflow configuration patches.

639-640: LGTM! Correct integration point. The patch is built and assigned at the appropriate point in `ExtractParams`, after `ProxyConfig` is set and before logging initialization uses `p.Name`.

config/crd/bases/datasciencepipelinesapplications.opendatahub.io_datasciencepipelinesapplications.yaml (1)

154-161: AI summary contains incorrect field path. The AI summary states the field is located at `spec.apiServer.pipelineStore.resourceTTL`, but the actual schema correctly places it at `spec.apiServer.resourceTTL` (a sibling to `pipelineStore`, not a child). The implementation in the CRD is correct; only the summary description is inaccurate.
Change to PR detected. A new PR build was completed. |
Actionable comments posted: 1
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
- config/overlays/make-deploy/kustomization.yaml
- controllers/apiserver_test.go
- controllers/dspipeline_params_test.go
- controllers/mlmd_test.go
- controllers/testutil/util.go
- controllers/workflow_controller_test.go
🚧 Files skipped from review as they are similar to previous changes (1)
- controllers/dspipeline_params_test.go
🧰 Additional context used
🧬 Code graph analysis (4)
controllers/mlmd_test.go (1)
controllers/testutil/util.go (1)
BoolPtr(263-265)
controllers/testutil/util.go (1)
api/v1/dspipeline_types.go (4)
DataSciencePipelinesApplication (448-453), MLMD (319-326), APIServer (79-173), DSPASpec (25-58)
controllers/workflow_controller_test.go (1)
controllers/testutil/util.go (1)
BoolPtr(263-265)
controllers/apiserver_test.go (1)
controllers/testutil/util.go (1)
BoolPtr(263-265)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: functest
- GitHub Check: dspo-tests
- GitHub Check: dspo-byoargo-tests
- GitHub Check: unittest
🔇 Additional comments (6)
controllers/mlmd_test.go (1)
26-26: LGTM! Consistent refactoring to centralized helper. The refactoring from local `boolPtr` to `testutil.BoolPtr` improves code maintainability by consolidating the boolean pointer helper function across the test suite. Also applies to: 44-44, 135-135, 178-178, 268-268, 340-340, 412-412

controllers/testutil/util.go (3)

262-265: LGTM! Well-implemented helper function. The exported `BoolPtr` helper follows Go conventions and consolidates boolean pointer creation across the test suite, improving code maintainability.

281-291: LGTM! Proper test helper for ResourceTTL feature. The `CreateDSPAWithResourceTTL` helper correctly initializes a DSPA with:
- DSPVersion set to "v2" (required for the feature)
- PodToPodTLS disabled for testing
- MLMD enabled (required dependency)
- ResourceTTL field properly set

This follows the existing pattern of other helper functions and aligns with the PR objectives.

217-217: LGTM! Consistent refactoring. Existing helper functions correctly updated to use the new centralized `BoolPtr` function. Also applies to: 257-257, 271-271, 299-299

controllers/apiserver_test.go (1)

24-24: LGTM! Consistent test utility adoption. The refactoring to use `testutil.BoolPtr` aligns with the codebase-wide consolidation of boolean pointer helpers. Also applies to: 39-39, 137-137

controllers/workflow_controller_test.go (1)

24-24: LGTM! Completes the test utility refactoring. The refactoring to use `testutil.BoolPtr` completes the codebase-wide consolidation of boolean pointer helpers across all test files. Also applies to: 38-38, 137-137, 224-224, 280-280
```diff
 - name: controller
-  newName: quay.io/opendatahub/data-science-pipelines-operator
-  newTag: main
+  newName: quay.io/gmfrasca/dspo
+  newTag: rttl-v0.0.1
```
Revert personal repository reference before merging.
The image configuration points to a personal repository (quay.io/gmfrasca/dspo:rttl-v0.0.1) instead of the official repository. This must be reverted to the original values before this PR can be merged to main, as it would break production deployments.
🔎 Expected configuration for production

```diff
 images:
   - name: controller
-    newName: quay.io/gmfrasca/dspo
-    newTag: rttl-v0.0.1
+    newName: quay.io/opendatahub/data-science-pipelines-operator
+    newTag: main
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```yaml
- name: controller
  newName: quay.io/opendatahub/data-science-pipelines-operator
  newTag: main
```
🤖 Prompt for AI Agents
In config/overlays/make-deploy/kustomization.yaml around lines 10 to 12, the
image is set to a personal repository (quay.io/gmfrasca/dspo:rttl-v0.0.1) —
revert it to the official production image and tag: replace newName and newTag
with the original values used in main/production (retrieve exact values from the
main branch, deployment docs, or the canonical kustomization in the repo),
update the file accordingly, and commit with a note that the personal repository
reference was removed.
e1bd22b to e44573f (force-push)
Change to PR detected. A new PR build was completed. |
api/v1/dspipeline_types.go (Outdated)

```go
// cleanup. When set, the ttlStrategy.secondsAfterCompletion field is applied to all pipeline runs.
// +kubebuilder:validation:Optional
// +kubebuilder:validation:Minimum=0
ResourceTTL *int32 `json:"resourceTTL,omitempty"`
```
Optional: It might be nice to make this a duration string but seconds is okay.
I like this idea, better UX. updated
api/v1/dspipeline_types.go (Outdated)

```go
// cleanup. When set, the ttlStrategy.secondsAfterCompletion field is applied to all pipeline runs.
// +kubebuilder:validation:Optional
// +kubebuilder:validation:Minimum=0
ResourceTTL *int32 `json:"resourceTTL,omitempty"`
```
Why do we need this to be a pointer? I think 0 is equivalent to unset so we don't need to differentiate.
So 0 is not equivalent to unset - setting secondsAfterCompletion to 0 in the workflow's TTLStrategy means its resources are immediately cleaned up after completion, whereas unset means the resources will have an indefinite TTL.
controllers/dspipeline_params.go (Outdated)

```go
p.ProxyConfig = dsp.Spec.Proxy

// Build compiled pipeline spec patch from DSPA fields
p.CompiledPipelineSpecPatch = p.buildCompiledPipelineSpecPatch()
```
Optional: It's a bit confusing that buildCompiledPipelineSpecPatch is a method that doesn't directly set p.CompiledPipelineSpecPatch. In other words, DSPAParms reads itself to derive the value but it can also cache the derived value.
updated to just use SetupCompiledPipelineSpecPatch() to directly modify p.CompiledPipelineSpecPatch.
Change to PR detected. A new PR build was completed. |
Actionable comments posted: 0
🧹 Nitpick comments (2)
controllers/dspipeline_params.go (1)
553-580: Good implementation with clear extensibility plan. The patch builder correctly constructs the TTL strategy JSON and includes helpful comments for future enhancements. The logic properly returns an empty string when no patch fields are configured.

Optional: Consider logging marshal errors

Currently, if `json.Marshal` fails on line 575, the error is silently discarded. While unlikely given the simple data structure, logging the error would aid debugging:

```diff
 patchJSON, err := json.Marshal(patch)
 if err != nil {
+	// This should never happen with our controlled data structure, but log just in case
+	// Note: We don't have access to logger here, consider adding it as a parameter
+	// or handling this error differently in ExtractParams
 	return ""
 }
```

Since the method doesn't have access to a logger and marshal failures are extremely unlikely here, this is purely a defensive programming suggestion.
controllers/testutil/util.go (1)
281-291: LGTM! Well-structured test helper for ResourceTTL. The `CreateDSPAWithResourceTTL` function correctly configures a minimal DSPA suitable for testing the new ResourceTTL feature. The v2 setup with MLMD enabled and PodToPodTLS disabled is appropriate for test scenarios.

📝 Optional: Consider adding a doc comment

While the function name is self-explanatory, adding a doc comment would align with Go best practices for exported functions:

```diff
+// CreateDSPAWithResourceTTL creates a DSPA configured for v2 with the specified ResourceTTL value.
 func CreateDSPAWithResourceTTL(ttl int32) *dspav1.DataSciencePipelinesApplication {
```

Note: This is purely optional since other `Create*` helpers in this file also lack doc comments.
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (12)
- api/v1/dspipeline_types.go
- api/v1/zz_generated.deepcopy.go
- config/crd/bases/datasciencepipelinesapplications.opendatahub.io_datasciencepipelinesapplications.yaml
- config/internal/apiserver/default/deployment.yaml.tmpl
- controllers/apiserver_test.go
- controllers/dspipeline_params.go
- controllers/dspipeline_params_test.go
- controllers/mlmd_test.go
- controllers/testdata/declarative/case_2/deploy/cr.yaml
- controllers/testdata/declarative/case_2/expected/created/apiserver_deployment.yaml
- controllers/testutil/util.go
- controllers/workflow_controller_test.go
🚧 Files skipped from review as they are similar to previous changes (6)
- controllers/apiserver_test.go
- api/v1/zz_generated.deepcopy.go
- controllers/workflow_controller_test.go
- controllers/testdata/declarative/case_2/deploy/cr.yaml
- config/crd/bases/datasciencepipelinesapplications.opendatahub.io_datasciencepipelinesapplications.yaml
- controllers/mlmd_test.go
🧰 Additional context used
🧬 Code graph analysis (1)
controllers/testutil/util.go (1)
api/v1/dspipeline_types.go (3)
MLMD (319-326), APIServer (79-173), DSPASpec (25-58)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: dspo-tests
- GitHub Check: unittest
- GitHub Check: dspo-byoargo-tests
- GitHub Check: functest
🔇 Additional comments (7)
api/v1/dspipeline_types.go (1)

167-172: LGTM! Well-designed optional TTL field. The pointer semantics correctly distinguish between unset (nil, indefinite TTL) and zero (immediate cleanup). The validation and documentation are clear.

config/internal/apiserver/default/deployment.yaml.tmpl (1)

155-158: LGTM! Proper conditional environment variable injection. The patch is correctly emitted only when present, and the single-quote wrapping is appropriate for the JSON value.

controllers/testdata/declarative/case_2/expected/created/apiserver_deployment.yaml (1)

111-112: LGTM! Test expectation matches implementation. The expected JSON structure correctly represents the TTL strategy patch with the configured 3600-second value.

controllers/dspipeline_params_test.go (2)

320-402: LGTM! Comprehensive test coverage for patch building. The test cases cover all relevant scenarios: nil APIServer, unset ResourceTTL, and various TTL values including edge cases (0 for immediate cleanup, large values). The JSON validation approach is sound.

404-435: LGTM! Well-structured integration tests. The tests properly verify both the presence and absence of the patch through the full ExtractParams flow, ensuring the feature integrates correctly with the parameter extraction logic.

controllers/dspipeline_params.go (1)

639-640: LGTM! Patch generation integrated correctly. The compiled pipeline spec patch is built and assigned at the appropriate point in the parameter extraction flow, ensuring it's available for template rendering.

controllers/testutil/util.go (1)

217-217: LGTM! Clean refactor to export the bool pointer helper. The new exported `BoolPtr` helper is correctly implemented and all usages have been consistently updated throughout the file. This improves reusability across test packages. Also applies to: 257-257, 262-265, 271-271, 299-299
Change to PR detected. A new PR build was completed. |
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In
@config/crd/bases/datasciencepipelinesapplications.opendatahub.io_datasciencepipelinesapplications.yaml:
- Around line 154-161: The resourceTTL field's regex pattern currently allows
out-of-order unit strings (e.g., "1s2h") which will break when parsed by
time.ParseDuration; update the pattern on resourceTTL (the pattern property) to
enforce descending unit order (hours then minutes then seconds) such as using an
optional h then optional m then optional s sequence, and if the API
intentionally restricts to only s/m/h (excluding ns/us/ms), update the
description text (resourceTTL description) to document this limitation.
🧹 Nitpick comments (1)
controllers/dspipeline_params.go (1)
559-565: Consider validating TTL duration bounds. The conversion from `Duration.Seconds()` (float64) to `int32` truncates fractional seconds and could overflow for extremely large durations. While this is unlikely in practice, consider adding validation to ensure the value is positive and within a reasonable range.

💡 Example validation logic

```diff
 // Add TTL strategy if ResourceTTL is configured
 if p.APIServer != nil && p.APIServer.ResourceTTL != nil {
+	duration := p.APIServer.ResourceTTL.Duration
+	// Validate duration is positive and fits in int32
+	if duration <= 0 {
+		p.CompiledPipelineSpecPatch = ""
+		return
+	}
 	// Convert duration to seconds (Argo Workflows expects seconds)
-	seconds := int32(p.APIServer.ResourceTTL.Duration.Seconds())
+	secondsFloat := duration.Seconds()
+	if secondsFloat > float64(^uint32(0)>>1) {
+		// Duration too large for int32
+		p.CompiledPipelineSpecPatch = ""
+		return
+	}
+	seconds := int32(secondsFloat)
 	patch["ttlStrategy"] = map[string]interface{}{
 		"secondsAfterCompletion": seconds,
 	}
 }
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (12)
- api/v1/dspipeline_types.go
- api/v1/zz_generated.deepcopy.go
- config/crd/bases/datasciencepipelinesapplications.opendatahub.io_datasciencepipelinesapplications.yaml
- config/internal/apiserver/default/deployment.yaml.tmpl
- controllers/apiserver_test.go
- controllers/dspipeline_params.go
- controllers/dspipeline_params_test.go
- controllers/mlmd_test.go
- controllers/testdata/declarative/case_2/deploy/cr.yaml
- controllers/testdata/declarative/case_2/expected/created/apiserver_deployment.yaml
- controllers/testutil/util.go
- controllers/workflow_controller_test.go
🚧 Files skipped from review as they are similar to previous changes (5)
- config/internal/apiserver/default/deployment.yaml.tmpl
- controllers/workflow_controller_test.go
- controllers/testdata/declarative/case_2/expected/created/apiserver_deployment.yaml
- controllers/mlmd_test.go
- controllers/testdata/declarative/case_2/deploy/cr.yaml
🧰 Additional context used
🧬 Code graph analysis (3)
controllers/apiserver_test.go (1)
controllers/testutil/util.go (1)
BoolPtr(263-265)
controllers/dspipeline_params_test.go (4)
- controllers/dspipeline_params.go (1): DSPAParams (48-111)
- api/v1/dspipeline_types.go (1): APIServer (79-175)
- controllers/dspipeline_fake_controller.go (1): CreateNewTestObjects (65-67)
- controllers/testutil/util.go (2): CreateEmptyDSPA (214-236), BoolPtr (263-265)
controllers/testutil/util.go (1)
api/v1/dspipeline_types.go (3)
MLMD (321-328), APIServer (79-175), DSPASpec (25-58)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: dspo-tests
- GitHub Check: functest
- GitHub Check: unittest
- GitHub Check: dspo-byoargo-tests
🔇 Additional comments (11)
controllers/apiserver_test.go (1)

24-24: LGTM! Clean refactor to shared helper. The migration from a local `boolPtr` to the exported `testutil.BoolPtr` consolidates the helper function in one location, improving maintainability across the test suite. Also applies to: 39-39, 137-137

api/v1/zz_generated.deepcopy.go (1)

66-70: LGTM! Correct deepcopy pattern for pointer field. The autogenerated deepcopy logic correctly handles the `ResourceTTL` pointer by checking for nil, allocating a new `metav1.Duration`, and copying the value. This follows the same pattern used for other pointer fields in the struct.

api/v1/dspipeline_types.go (1)

167-174: LGTM! Well-defined field with appropriate validation. The `ResourceTTL` field is properly defined with:
- Clear documentation explaining its purpose and behavior
- Appropriate use of pointer type (distinguishes unset from 0s, as confirmed in past review discussions)
- Pattern validation for duration strings
- Correct kubebuilder markers for CRD generation

controllers/dspipeline_params_test.go (3)

321-417: LGTM! Comprehensive test coverage for TTL patch generation. The test covers multiple scenarios including:
- Edge cases (no TTL, nil APIServer)
- Zero duration (immediate cleanup)
- Various duration formats (hours, minutes)
- Proper JSON structure validation for `ttlStrategy.secondsAfterCompletion`

The table-driven approach makes it easy to add additional test cases in the future.

419-438: LGTM! Good end-to-end validation of ResourceTTL extraction. The test verifies the complete flow from DSPA with ResourceTTL through parameter extraction to JSON patch generation, including proper duration-to-seconds conversion.

440-450: LGTM! Validates the no-TTL scenario. The test confirms that when `ResourceTTL` is not set, the `CompiledPipelineSpecPatch` remains empty, ensuring the patch is only generated when explicitly configured.

controllers/testutil/util.go (2)

217-217: LGTM! Proper export of helper function. Exporting `BoolPtr` with a clear docstring and updating all internal usages ensures consistent pointer creation across the test suite and enables reuse in other test packages. Also applies to: 257-257, 262-265, 271-271, 299-299

281-291: LGTM! Well-structured test helper for ResourceTTL. The `CreateDSPAWithResourceTTL` function follows the established pattern for DSPA factory functions and provides appropriate defaults for testing the ResourceTTL feature in a DSPv2 context.

controllers/dspipeline_params.go (2)

108-110: LGTM: Clear documentation and appropriate type. The new field is well-documented and using `string` for the JSON patch is appropriate for passing to the API server as an environment variable.

643-644: Appropriate placement for patch setup. Calling `SetupCompiledPipelineSpecPatch()` at this point ensures the patch is built after the APIServer configuration is copied but before it's used in downstream operations.

config/crd/bases/datasciencepipelinesapplications.opendatahub.io_datasciencepipelinesapplications.yaml (1)

155-159: Clear and helpful description. The description effectively explains the purpose of `resourceTTL`, its impact on workflow resources, and provides practical examples. The mention of `ttlStrategy.secondsAfterCompletion` helps users understand the underlying Argo Workflows mechanism.
.../bases/datasciencepipelinesapplications.opendatahub.io_datasciencepipelinesapplications.yaml
- Add an optional resourceTTL to DSPA.APIServer CRD
- When configured, sets the number of seconds after a workflow execution that any resources created by a pipeline managed by the APIServer are cleaned up/deleted

Signed-off-by: Giulio Frasca <[email protected]>
Signed-off-by: Giulio Frasca <[email protected]>
Signed-off-by: Giulio Frasca <[email protected]>
Change to PR detected. A new PR build was completed. |
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In @api/v1/dspipeline_types.go:
- Around line 167-174: The kubebuilder pattern on ResourceTTL currently allows
an empty string because all time components are optional; update the validation
so at least one time component (hours, minutes, or seconds) is required for
metav1.Duration by changing the Pattern to require at least one unit (h, m, or
s) using a positive lookahead or equivalent regex and/or add a
kubebuilder:validation:MinLength constraint so empty strings are rejected; apply
this change to the Pattern annotation on the ResourceTTL field.
🧹 Nitpick comments (1)
controllers/dspipeline_params.go (1)
561-561: Optional: Consider documenting precision loss for sub-second durations. The conversion `int32(p.APIServer.ResourceTTL.Duration.Seconds())` truncates fractional seconds. For example, "1.5s" becomes 1 second. While the CRD validation pattern permits fractional durations (e.g., `1.5h`), this is unlikely to be an issue for TTL use cases where sub-second precision is rarely needed. Consider adding a brief inline comment noting that fractional seconds are truncated if this behavior might be unexpected.

📝 Optional documentation improvement

```diff
 // Add TTL strategy if ResourceTTL is configured
 if p.APIServer != nil && p.APIServer.ResourceTTL != nil {
-	// Convert duration to seconds (Argo Workflows expects seconds)
+	// Convert duration to seconds (Argo Workflows expects seconds; fractional seconds are truncated)
 	seconds := int32(p.APIServer.ResourceTTL.Duration.Seconds())
 	patch["ttlStrategy"] = map[string]interface{}{
 		"secondsAfterCompletion": seconds,
 	}
 }
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (12)
- api/v1/dspipeline_types.go
- api/v1/zz_generated.deepcopy.go
- config/crd/bases/datasciencepipelinesapplications.opendatahub.io_datasciencepipelinesapplications.yaml
- config/internal/apiserver/default/deployment.yaml.tmpl
- controllers/apiserver_test.go
- controllers/dspipeline_params.go
- controllers/dspipeline_params_test.go
- controllers/mlmd_test.go
- controllers/testdata/declarative/case_2/deploy/cr.yaml
- controllers/testdata/declarative/case_2/expected/created/apiserver_deployment.yaml
- controllers/testutil/util.go
- controllers/workflow_controller_test.go
🚧 Files skipped from review as they are similar to previous changes (6)
- controllers/apiserver_test.go
- config/internal/apiserver/default/deployment.yaml.tmpl
- controllers/testutil/util.go
- controllers/testdata/declarative/case_2/expected/created/apiserver_deployment.yaml
- config/crd/bases/datasciencepipelinesapplications.opendatahub.io_datasciencepipelinesapplications.yaml
- api/v1/zz_generated.deepcopy.go
🧰 Additional context used
🧬 Code graph analysis (4)
controllers/workflow_controller_test.go (1)
- controllers/testutil/util.go (1)
  - `BoolPtr` (263-265)

controllers/mlmd_test.go (1)
- controllers/testutil/util.go (1)
  - `BoolPtr` (263-265)

controllers/dspipeline_params.go (1)
- api/v1/dspipeline_types.go (1)
  - `APIServer` (79-175)

controllers/dspipeline_params_test.go (4)
- controllers/dspipeline_params.go (1)
  - `DSPAParams` (48-111)
- api/v1/dspipeline_types.go (1)
  - `APIServer` (79-175)
- controllers/dspipeline_fake_controller.go (1)
  - `CreateNewTestObjects` (65-67)
- controllers/testutil/util.go (3)
  - `CreateDSPAWithResourceTTL` (281-291)
  - `CreateEmptyDSPA` (214-236)
  - `BoolPtr` (263-265)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: dspo-tests
- GitHub Check: dspo-byoargo-tests
- GitHub Check: functest
- GitHub Check: unittest
🔇 Additional comments (8)
controllers/workflow_controller_test.go (1)
24-24: LGTM! Consistent use of `testutil.BoolPtr`.

The refactoring to use `testutil.BoolPtr` improves test code consistency and eliminates duplication of the helper function across test files.

Also applies to: 38-38, 137-137, 224-224, 280-280
controllers/testdata/declarative/case_2/deploy/cr.yaml (1)
17-17: LGTM! Valid test case for ResourceTTL.

The addition of `resourceTTL: 1h` provides appropriate test coverage for the new TTL configuration field.

controllers/mlmd_test.go (1)
26-26: LGTM! Refactoring to shared helper.

Consistent migration to `testutil.BoolPtr` eliminates code duplication and aligns with the broader test refactoring effort.

Also applies to: 44-44, 135-135, 178-178, 268-268, 340-340, 412-412
controllers/dspipeline_params_test.go (2)
322-418: Comprehensive test coverage for TTL patch generation.

The test cases cover important scenarios including edge cases (0s, various durations) and nil handling. The logic correctly verifies JSON structure and field values.
Optional: Minor test structure suggestion
The test cases mix `expectedPatch` and `expectedFields`, where cases with `expectedPatch: ""` fall through to the empty assertion. Consider simplifying by using only `expectedFields` (set to nil for empty cases) to make the intent clearer:

```go
{
	name: "no ResourceTTL set - empty patch",
	params: DSPAParams{
		APIServer: &dspav1.APIServer{Deploy: true},
	},
	expectedFields: nil, // Explicitly nil instead of expectedPatch: ""
},
```

Then simplify the assertion logic:

```go
if tc.expectedFields != nil {
	var actualFields map[string]interface{}
	err := json.Unmarshal([]byte(tc.params.CompiledPipelineSpecPatch), &actualFields)
	require.NoError(t, err)
	assert.Equal(t, tc.expectedFields, actualFields)
} else {
	assert.Empty(t, tc.params.CompiledPipelineSpecPatch)
}
```
420-451: LGTM! Integration tests validate end-to-end TTL behavior.

Tests confirm that `ExtractParams` correctly produces patches when ResourceTTL is set and remains empty otherwise. The assertions properly validate the JSON structure and expected values (3600 seconds for 1 hour).

controllers/dspipeline_params.go (3)
107-110: LGTM! Clear field addition with good documentation.

The new `CompiledPipelineSpecPatch` field is well-documented and appropriately scoped for holding JSON patches applied to compiled pipeline specs.
553-585: Excellent improvement on error handling.

The marshaling error is now properly logged (line 580), which addresses the previous review concern about silent error swallowing. The fallback to an empty string is appropriate since it allows the operator to continue functioning while logging the issue for investigation.
646-647: LGTM! Proper invocation with correct logger.

The method is invoked at the appropriate point in the parameter extraction flow, and the logger is correctly passed as an argument.
```go
		expectedFields map[string]interface{}
	}{
		{
			name: "no ResourceTTL set - empty patch",
```
Should there be an additional test case where ResourceTTL is set with an empty string?
Adding an empty string for the ResourceTTL in a DSPA would be rejected by the kubernetes API, therefore there is not a good way to add a unit test to verify against this. I did add a new test to check "empty" duration, but the exact case isn't reachable from a user perspective
Signed-off-by: Giulio Frasca <[email protected]>
Change to PR detected. A new PR build was completed. |
Description of your changes:
Adds a new field, `resourceTTL`, to the APIServer item in DSPA CRDs. If set (it is not by default), DSPO will apply a workflow spec patch to all future executions of any PipelineRun, which will clean up/delete resources such as Pods after n seconds, where n is the value provided to the `resourceTTL` field.

Testing instructions
- … `COMPILED_PIPELINE_SPEC_PATCH` is present
- … `COMPILED_PIPELINES_SPEC_PATCH` with some simple JSON string and the ttl of 1234.
- … `COMPILED_PIPELINE_SPEC_PATCH` is no longer present again.

Checklist
Summary by CodeRabbit
New Features
Tests
Chores
✏️ Tip: You can customize this high-level summary in your review settings.