maintainer: avoid panic when maintainer bootstrap #4518
wk989898 wants to merge 27 commits into pingcap:master
Conversation
Note: Reviews paused. This branch appears to be under active development, so CodeRabbit has automatically paused this review to avoid overwhelming the author with comments on each new commit. Use the bot's commands or the checkboxes below to manage and resume reviews.
📝 Walkthrough

Replace panics in maintainer bootstrap timestamp detection with structured error returns and propagate the errors; add unit tests (one duplicated) asserting error propagation; add an integration test script exercising bootstrap-retry-after-error and register it in CI.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: 3 passed, 2 failed (warnings).
Summary of Changes (Gemini Code Assist)

This pull request enhances the maintainer bootstrap process by replacing a potential panic with explicit error handling. Previously, if critical timestamp information was missing during bootstrap, the application would panic. The changes refactor the relevant functions to return an error in such scenarios, improving the system's stability and allowing for more controlled error recovery.
Code Review
The pull request refactors the determineStartTs function in maintainer_controller_bootstrap.go to return an error instead of panicking when startTs or redoStartTs are not found. This change replaces log.Panic with errors.WrapError for more graceful error handling. The FinishBootstrap function was updated to handle this new error return. Additionally, a new test case, TestFinishBootstrapReturnsErrorWhenCheckpointMissing, was added to maintainer_controller_test.go to validate the new error-handling behavior.
```go
startTs, redoStartTs, err := c.determineStartTs(allNodesResp)
if err != nil {
	return nil, errors.Trace(err)
}
```
```diff
 if startTs == 0 {
-	log.Panic("cant not found the startTs from the bootstrap response",
-		zap.String("changefeed", c.changefeedID.Name()))
+	return 0, 0, errors.WrapError(
+		errors.ErrChangefeedInitTableTriggerDispatcherFailed,
+		errors.New("all bootstrap responses reported empty checkpointTs"),
+	)
 }
```
```go
func TestFinishBootstrapReturnsErrorWhenCheckpointMissing(t *testing.T) {
	testutil.SetUpTestServices()
	nodeManager := appcontext.GetService[*watcher.NodeManager](watcher.NodeManagerName)
	nodeManager.GetAliveNodes()["node1"] = &node.Info{ID: "node1"}

	tableTriggerEventDispatcherID := common.NewDispatcherID()
	cfID := common.NewChangeFeedIDWithName("test", common.DefaultKeyspaceName)
	ddlSpan := replica.NewWorkingSpanReplication(cfID, tableTriggerEventDispatcherID,
		common.DDLSpanSchemaID,
		common.KeyspaceDDLSpan(common.DefaultKeyspaceID), &heartbeatpb.TableSpanStatus{
			ID:              tableTriggerEventDispatcherID.ToPB(),
			ComponentStatus: heartbeatpb.ComponentState_Working,
			CheckpointTs:    1,
		}, "node1", false)
	refresher := replica.NewRegionCountRefresher(cfID, time.Minute)
	controller := NewController(cfID, 1, &mockThreadPool{},
		config.GetDefaultReplicaConfig(), ddlSpan, nil, 1000, 0, refresher, common.DefaultKeyspace, false)

	postBootstrapRequest, err := controller.FinishBootstrap(map[node.ID]*heartbeatpb.MaintainerBootstrapResponse{
		"node1": {
			ChangefeedID: cfID.ToPB(),
		},
	}, false)
	require.Nil(t, postBootstrapRequest)
	require.Error(t, err)
	code, ok := cerrors.RFCCode(err)
	require.True(t, ok)
	require.Equal(t, cerrors.ErrChangefeedInitTableTriggerDispatcherFailed.RFCCode(), code)
	require.Contains(t, err.Error(), "all bootstrap responses reported empty checkpointTs")
	require.False(t, controller.bootstrapped)
}
```
This new test case TestFinishBootstrapReturnsErrorWhenCheckpointMissing is well-designed. It specifically targets the scenario where checkpointTs is missing, ensuring that the new error handling in determineStartTs functions as expected. Verifying the error type, code, and message is crucial for robust error management.
/test mysql

/test pull-cdc-mysql-integration-light
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tests/integration_tests/bootstrap_retry_after_error/run.sh`:
- Around line 37-45: The negative log check in check_cdc_logs_not_contains can
falsely pass when the glob cdc*.log matches nothing; modify the function
(check_cdc_logs_not_contains) to first verify that at least one CDC log file
exists (e.g., use compgen -G "$work_dir/cdc*.log" or an explicit file-list
check) and fail fast with a clear error if none are found, then run grep against
the confirmed file list (so the subsequent grep -Eqs and grep -Ens are executed
only when files are present).
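A minimal sketch of the suggested guard, assuming the script's conventions (`$work_dir` and the `cdc*.log` glob, both bash); the function body below is illustrative, not the PR's actual code:

```shell
# Hypothetical helper: fail if any CDC log matches a forbidden pattern,
# but first verify the glob matches at least one file, because running
# `grep -qs` against zero files would silently report "no match".
check_cdc_logs_not_contains() {
    work_dir=$1
    pattern=$2
    # compgen -G returns non-zero when the glob matches nothing (bash builtin).
    if ! compgen -G "$work_dir/cdc*.log" >/dev/null; then
        echo "error: no CDC log files found under $work_dir" >&2
        return 1
    fi
    if grep -Eqs "$pattern" "$work_dir"/cdc*.log; then
        echo "error: unexpected pattern '$pattern' in CDC logs:" >&2
        grep -Ens "$pattern" "$work_dir"/cdc*.log >&2
        return 1
    fi
}
```

With this shape the negative check fails fast on a missing-log setup error instead of passing vacuously.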
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 756d7ee8-6050-40d4-9d00-f604a8a6eb83
📒 Files selected for processing (1)
tests/integration_tests/bootstrap_retry_after_error/run.sh
Actionable comments posted: 1
🧹 Nitpick comments (1)
maintainer/maintainer_controller_bootstrap.go (1)
178-187: Use a bootstrap-specific RFC error instead of the dispatcher-init error

These branches return `ErrChangefeedInitTableTriggerDispatcherFailed`, but the failure is a missing bootstrap checkpoint ts, not dispatcher initialization. A dedicated error code and message would improve alert routing and triage.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@maintainer/maintainer_controller_bootstrap.go` around lines 178 - 187, The branches that return errors for missing bootstrap checkpoint timestamps currently use errors.ErrChangefeedInitTableTriggerDispatcherFailed which is misleading; define a bootstrap-specific error constant (e.g. errors.ErrChangefeedInitBootstrapMissingCheckpoint) and replace the usages in both branches (the one returning for empty checkpointTs and the one guarded by c.enableRedo when redoStartTs == 0) so they wrap the new bootstrap-specific error with the existing descriptive messages (e.g. "all bootstrap responses reported empty checkpointTs" and "all bootstrap responses reported empty redoCheckpointTs"). Ensure the new constant is declared alongside other changefeed errors and used in the errors.WrapError calls instead of errors.ErrChangefeedInitTableTriggerDispatcherFailed.
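In spirit, the suggested change looks like the sketch below. The `RFCError` type here only mimics the shape of pingcap's RFC-coded errors, and the constant name, message, and code text are hypothetical, not part of the PR:

```go
package main

import (
	"errors"
	"fmt"
)

// RFCError mimics the shape of pingcap's RFC-coded errors, purely to
// illustrate the benefit of a dedicated bootstrap error for triage.
type RFCError struct {
	Code string
	Msg  string
}

func (e *RFCError) Error() string { return fmt.Sprintf("[%s] %s", e.Code, e.Msg) }

// Hypothetical bootstrap-specific error; the name, message, and RFC code
// are illustrative and not part of the PR.
var ErrBootstrapMissingCheckpoint = &RFCError{
	Code: "CDC:ErrChangefeedBootstrapMissingCheckpoint",
	Msg:  "all bootstrap responses reported empty checkpointTs",
}

// determineStartTs picks the largest reported checkpoint ts; zero means
// no node reported a usable checkpoint, which is an error, not a panic.
// (The real function's selection logic differs; this is a stand-in.)
func determineStartTs(checkpoints []uint64) (uint64, error) {
	var startTs uint64
	for _, ts := range checkpoints {
		if ts > startTs {
			startTs = ts
		}
	}
	if startTs == 0 {
		return 0, fmt.Errorf("determine start ts: %w", ErrBootstrapMissingCheckpoint)
	}
	return startTs, nil
}

// isBootstrapMissingCheckpoint lets callers and alert rules match on the
// dedicated code instead of the generic dispatcher-init error.
func isBootstrapMissingCheckpoint(err error) bool {
	var rfcErr *RFCError
	return errors.As(err, &rfcErr) &&
		rfcErr.Code == "CDC:ErrChangefeedBootstrapMissingCheckpoint"
}
```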
📒 Files selected for processing (2)
- maintainer/maintainer_controller_bootstrap.go
- tests/integration_tests/bootstrap_retry_after_error/run.sh
🚧 Files skipped from review as they are similar to previous changes (1)
- tests/integration_tests/bootstrap_retry_after_error/run.sh
```go
startTs, redoStartTs, err := c.determineStartTs(allNodesResp)
if err != nil {
	log.Panic("cant not found the startTs from the bootstrap response",
		zap.String("changefeed", c.changefeedID.Name()))
	return nil, errors.Trace(err)
}
```
Remove panic on bootstrap start-ts resolution failure
Line 93 still panics, so the maintainer can crash in the exact failure path this PR is trying to harden. Return the error instead of panicking, and propagate it without re-wrapping at Line 95.
✅ Suggested fix

```diff
 startTs, redoStartTs, err := c.determineStartTs(allNodesResp)
 if err != nil {
-	log.Panic("cant not found the startTs from the bootstrap response",
-		zap.String("changefeed", c.changefeedID.Name()))
-	return nil, errors.Trace(err)
+	log.Error("cannot determine start ts from bootstrap response",
+		zap.Stringer("changefeed", c.changefeedID),
+		zap.Error(err))
+	return nil, err
 }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```go
startTs, redoStartTs, err := c.determineStartTs(allNodesResp)
if err != nil {
	log.Error("cannot determine start ts from bootstrap response",
		zap.Stringer("changefeed", c.changefeedID),
		zap.Error(err))
	return nil, err
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@maintainer/maintainer_controller_bootstrap.go` around lines 91 - 96, The
current bootstrap code calls log.Panic when c.determineStartTs(allNodesResp)
fails and then returns errors.Trace(err); remove the panic and instead log the
failure with log.Error (including zap.String("changefeed",
c.changefeedID.Name())) and return the original err value (not re-wrapped) so
the caller can propagate the already-wrapped error from determineStartTs; update
the failing block around c.determineStartTs to perform a non-panicking log and
return nil, err.
/test pull-cdc-mysql-integration-light

3 similar comments
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tests/integration_tests/bootstrap_retry_after_error/run.sh`:
- Around line 94-96: There is an extra blank line after the line that resets the
environment variable export GO_FAILPOINTS='' in the script; remove the redundant
empty line so the export is immediately followed by the next non-empty line
(collapse the double blank line) to satisfy shfmt and fix the formatting check
in tests/integration_tests/bootstrap_retry_after_error/run.sh.
📒 Files selected for processing (1)
tests/integration_tests/bootstrap_retry_after_error/run.sh
/test pull-cdc-mysql-integration-light

14 similar comments

/test all
[APPROVALNOTIFIER] This PR is APPROVED

This pull request has been approved by: asddongmen, hongyunyan. The full list of commands accepted by this bot can be found here; the pull request process is described here.
/retest |
@wk989898: The following tests failed.

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
What problem does this PR solve?
Issue Number: close #4509
What is changed and how it works?
When FinishBootstrap fails, TiCDC should be able to retry bootstrap with a complete set of cached bootstrap responses.
Before this change, bootstrap responses were effectively consumed once collected. If bootstrap failed and a new node joined later, the next retry round could end up seeing only the newly joined node's response, instead of reusing the responses from existing nodes as well. This made the retry path incomplete.
This PR fixes that behavior by retaining cached bootstrap responses until the higher-level bootstrap process succeeds and explicitly clears them.
It also adds integration coverage for the retry path triggered by node scheduling after an initial bootstrap failure.
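The retention behavior described above can be sketched roughly as follows; all type and method names here (`Bootstrapper`, `Collect`, `FinishBootstrap`) are illustrative stand-ins, not the PR's actual identifiers:

```go
package main

import "fmt"

// Response stands in for heartbeatpb.MaintainerBootstrapResponse.
type Response struct {
	CheckpointTs uint64
}

// Bootstrapper caches per-node bootstrap responses across retry rounds.
type Bootstrapper struct {
	cached map[string]*Response
}

func NewBootstrapper() *Bootstrapper {
	return &Bootstrapper{cached: make(map[string]*Response)}
}

// Collect records a node's response; collecting never consumes the cache.
func (b *Bootstrapper) Collect(nodeID string, resp *Response) {
	b.cached[nodeID] = resp
}

// FinishBootstrap validates the cached responses. On failure the cache is
// kept intact, so a later retry (for example, after a new node joins) still
// sees every existing node's response; only success clears the cache.
func (b *Bootstrapper) FinishBootstrap() error {
	for nodeID, resp := range b.cached {
		if resp.CheckpointTs == 0 {
			return fmt.Errorf("node %s reported empty checkpointTs", nodeID)
		}
	}
	b.cached = make(map[string]*Response) // clear only after success
	return nil
}
```

Before this fix, the equivalent of `cached` was effectively drained on each attempt, so a retry could see only the newly joined node's response.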
Check List
Tests
Questions
Will it cause performance regression or break compatibility?
Do you need to update user documentation, design documentation or monitoring documentation?
Release note
Summary by CodeRabbit
Bug Fixes
Tests