subservers: fail LiT startup when integrated sub-server boot fails #1183
base: master
Conversation
Summary of Changes

Hello @ffranr, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request makes the LiT startup sequence more reliable by treating integrated sub-servers as critical. Previously, LiT could continue operating even if some of its integrated sub-servers failed to launch. Now, any failure in an integrated sub-server's boot process causes LiT to abort its own startup, so the system only reaches an operational state when all essential integrated components are functioning correctly.
Code Review
This pull request modifies the startup logic to treat integrated sub-server startup failures as fatal. The changes correctly propagate errors from StartIntegratedServers up to the main start function, causing LiT to fail on startup as intended. The documentation has also been updated to reflect this new behavior. The implementation is sound, but I have one suggestion to improve the consistency of error handling.
Force-pushed: 57ee325 → 86e8ec5
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Thanks for this @ffranr 🙏!
In addition to the feedback I've commented below, this new behaviour definitely needs itest coverage.
I think we actually started working on this at the same time, and I have a local branch with draft code implementing this plus itest coverage. If you want, I can clean that up and push it so that you can cherry-pick it to simplify things on your end. Let me know if that'd be helpful :).
ellemouton left a comment
Haven't actually looked at the diff here yet; just want to make a note in case: we should make sure that the daemon still runs, i.e. that the status server still gets served.
Force-pushed: 1772099 → 0091958
Force-pushed: 0091958 → 381d0b1
ViktorT-11 left a comment
Thanks for the updates 🚀! Leaving some feedback on top of @ellemouton's.
@ziggie1984: review reminder
Force-pushed: c910257 → 65e4e0f
ViktorT-11 left a comment
Thanks for the updates 🙏! Looking better, but commenting on a few lines I think you should just remove, to address some previously mentioned feedback.
- Fail LiT startup if any critical integrated sub-server fails to start.
- Log non-critical sub-server startup errors without aborting startup.
- Ensure critical integrated sub-servers initialize first.
- Introduce alphabetical sorting for consistent order across startup runs.
- Introduce tests for critical and non-critical sub-server startup behavior.
- Ensure failures in critical servers stop startup, while non-critical failures are tolerated.
Force-pushed: 65e4e0f → 3368bcf
ViktorT-11 left a comment
After testing this PR locally, I’ve realized that this comment isn’t really addressable with how litd is currently designed:
#1183 (comment)
The reason is that shutting down lnd will always trigger a send over shutdownInterceptor.ShutdownChannel(), which in turn also shuts down litd. This behavior stems from the single-interceptor pattern we currently use in litd, which we eventually want to move away from.
As a result, I don’t think it’s possible to keep litd’s status endpoint up and running while still shutting down lnd in response to critical sub-server startup errors.
Given that, I believe the behavior introduced by this PR is effectively the best we can do until the single-interceptor pattern is addressed.
@ellemouton, do you think it’s critical that litd, and therefore the status endpoint, remains up when critical sub-servers fail on startup (which also causes lnd to shut down)? If so, we’ll need to address the single-interceptor issue first. Otherwise, can we ship this as-is for now and tackle the interceptor issue at a later stage?
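The coupling described above can be modeled as a toy sketch; the names here are invented for illustration and this is not litd's actual interceptor code, just the essential shape of the problem: both "lnd" and "litd" watch one shared shutdown channel, so an lnd shutdown necessarily takes litd (and its status endpoint) down too.

```go
package main

import "fmt"

// litdObservesShutdown reports whether litd would see a shutdown request on
// the shared channel. With a single interceptor, there is no separate
// channel litd could keep ignoring while lnd shuts down.
func litdObservesShutdown(shutdown <-chan struct{}) bool {
	select {
	case <-shutdown:
		return true
	default:
		return false
	}
}

func main() {
	// One interceptor channel shared by "lnd" and "litd".
	shutdown := make(chan struct{})

	// A critical sub-server startup failure makes lnd request shutdown...
	close(shutdown)

	// ...and litd, watching the same channel, goes down with it, taking
	// the status endpoint along.
	if litdObservesShutdown(shutdown) {
		fmt.Println("litd: shutting down, status endpoint stops")
	}
}
```

Keeping the status endpoint alive would require giving litd its own shutdown signal, i.e. moving away from the single-interceptor pattern, which is exactly the larger refactor deferred here.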
```go
// in integrated mode.
node, err := net.NewNode(
	t.t, "FailFastTap", nil, false, false,
	"--taproot-assets.rpclisten=127.0.0.1:65536",
)
```
I believe this config flag will make the node error in litd's config validation logic, and not when tapd is attempted to be started after the wallet is unlocked.

I suggest using these config options instead, as those will actually error during tapd's startup:

```
taproot-assets.databasebackend=postgres
taproot-assets.postgres.host=tapd-postgres.invalid
```
```go
case <-time.After(15 * time.Second):
	t.Fatalf("expected tapd startup failure to be reported")
}
```
Depending on @ellemouton's response to my broader question in the main review comment, we likely want to expand the test code here to ensure that the status endpoint and the lnd node are in the state we expect them to be in.
Closes #1181