Conversation
Codecov Report
❌ Patch coverage is …
Additional details and impacted files:

@@             Coverage Diff              @@
##               main   #12613      +/-  ##
===========================================
  Coverage     75.05%   75.06%    +0.01%
- Complexity     6449     6500       +51
===========================================
  Files           375      376        +1
  Lines         24937    25057      +120
  Branches       3214     3261       +47
===========================================
+ Hits          18716    18808       +92
- Misses         4896     4906       +10
- Partials      1325     1343       +18
/gha run pipeline=regression

⏳ System test verification started: link The following 6 job(s) will be executed:
Tests will start after successful build completion.

🎉 System test verification passed: link

/gha run pipeline=regression

⏳ System test verification started: link The following 6 job(s) will be executed:
Tests will start after successful build completion.

🎉 System test verification passed: link

/gha run pipeline=regression kubeVersion=1.30

⏳ System test verification started: link The following 6 job(s) will be executed:
Tests will start after successful build completion.

/gha run pipeline=regression kubeVersion=1.30.14

⏳ System test verification started: link The following 6 job(s) will be executed:
Tests will start after successful build completion.

/gha run pipeline=regression kubeVersion=oldest

⏳ System test verification started: link The following 6 job(s) will be executed:
Tests will start after successful build completion.

🎉 System test verification passed: link

❌ System test verification failed: link

🎉 System test verification passed: link
Signed-off-by: Jakub Scholz <www@scholzj.com>

Force-pushed from edddc2c to bcbbf77
/gha run pipeline=regression kubeVersion=oldest

⏳ System test verification started: link The following 6 job(s) will be executed:
Tests will start after successful build completion.

❌ System test verification failed: link

Note: The ST failures are not related to this PR (see https://cloud-native.slack.com/archives/C018247K8T0/p1775859930375999). In any case, the ST added for this feature was correctly skipped.

❌ System test verification failed: link
PaulRMellor left a comment:
Thanks. Doc updates look good. A few minor suggestions.
im-konge left a comment:
LGTM, thanks. I have just one nit and one question.
Referenced diff:

    // Create dedicated controller and broker KafkaNodePools and Kafka CR
    KubeResourceManager.get().createResourceWithWait(
        KafkaNodePoolTemplates.mixedPoolPersistentStorage(testStorage.getNamespaceName(), testStorage.getMixedPoolName(), testStorage.getClusterName(), 3)
Just a question: would it matter if we try it in separate mode (separate roles) and in mixed mode? Or does it not matter? Should we then add some tests for KafkaConnect etc.?
And thanks for writing the ST :)
In general, the logic is the same everywhere, and we have unit tests to check that it is plugged in. So I do not think we really care that much about the different roles or about Connect. Maybe a Connect test might be marginally useful. I will open an issue for tracking and get to it later (I guess nobody ever picked up help-wanted ST issues). But I do not think we need to have this in this PR.
Another useful test might be the deferred waiting ... but TBH, that is much harder to emulate in a stable way. So I am not sure it is worth it, not because of what it tests but because of the complexity and stability issues. You would need to:
- Deploy some dummy blocker application whose resource consumption blocks the required amount of your Kubernetes resources
- Resize the Kafka nodes to use more resources than are available, but less than the total capacity
- Check that it waits
- Delete the blocker application
- Check that the resizing completes
But with the Kubernetes nodes having different sizes etc., this is IMHO hard to automate in a way that works everywhere.
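For illustration, the first step of that manual test could use a blocker Pod along these lines. This is only a sketch: the Pod name, namespace, image tag, and resource figures are assumptions, not part of this PR, and the CPU request would have to be tuned to each cluster's node sizes (which is exactly the portability problem described above).

```yaml
# Hypothetical blocker Pod that reserves enough CPU on a node so that a
# subsequent Kafka in-place resize cannot be satisfied immediately.
# All names and resource figures are illustrative assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: resize-blocker
spec:
  containers:
    - name: pause
      image: registry.k8s.io/pause:3.9
      resources:
        requests:
          cpu: "3"        # tune so that free CPU < the resize target
          memory: 512Mi
```

Deleting this Pod would then free the resources so the deferred resize can complete.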
> But I do not think we need to have this in this PR.

Yeah, I agree it's not crucial for this PR; this ST is enough, thanks for that. It was more an idea of what we could write for the feature to have it covered.

> But with the Kubernetes nodes having different sizes etc., this is IMHO hard to automate in a way that works everywhere.

Yes, we had the same issue with the Drain Cleaner tests, with provisioning the Pods in different AZs on AWS and other platforms (it worked differently on each platform). So I guess this is a good manual test. Thanks.
Co-authored-by: PaulRMellor <47596553+PaulRMellor@users.noreply.github.com> Signed-off-by: Jakub Scholz <www@scholzj.com>
Signed-off-by: Jakub Scholz <www@scholzj.com>
/gha run pipeline=regression

⏳ System test verification started: link The following 6 job(s) will be executed:
Tests will start after successful build completion.

❌ System test verification failed: link

🎉 System test verification passed: link
Type of change
Description
This PR implements proposal SEP-131 and adds support for in-place pod resizing. It is based on StrimziPodSets, so it is supported for Kafka, Connect, and MM2 nodes. It is not supported for Bridge or any of the supporting deployments (UO, TO, CC, ...). As described in the proposal, this is an opt-in feature which users have to activate.
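For context, in-place pod resizing is the Kubernetes capability (behind the InPlacePodVerticalScaling feature gate in recent versions) where a container's `resizePolicy` declares whether changing its CPU or memory requests restarts the container. The fragment below is a plain Kubernetes sketch of that mechanism, not the Strimzi opt-in API (which SEP-131 defines and which is not reproduced here); the Pod and container names are assumptions.

```yaml
# Illustrative Kubernetes container spec (not Strimzi API).
# resizePolicy decides per resource whether a resize restarts the container.
apiVersion: v1
kind: Pod
metadata:
  name: example            # hypothetical name
spec:
  containers:
    - name: kafka          # hypothetical container name
      image: example/image:latest
      resizePolicy:
        - resourceName: cpu
          restartPolicy: NotRequired       # CPU can change in place
        - resourceName: memory
          restartPolicy: RestartContainer  # memory change restarts the container
      resources:
        requests:
          cpu: "1"
          memory: 1Gi
```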
Checklist