
fix: seed peer connections kept open #7687

Merged
brianp merged 2 commits into tari-project:development from martinserts:close-seed-nodes
Feb 27, 2026

Conversation

@martinserts (Contributor) commented Feb 25, 2026

Description

Bugs fixed:

  • incorrect identification of idle peers (substream_count() > 2 -> substream_count() < 3)
  • ConnectivityManager was using random_pool_refresh_interval (default 2 hrs) instead of intended update_interval (default 2 minutes) to fire cleanup logic
  • broken seed detection - get_seed_peers checked exact equality (flags == SEED). If a peer had multiple flags (SEED | COMPACT), it would be ignored.
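The seed-detection fix boils down to a one-bit check. A minimal sketch of the difference, using plain bit masks rather than the actual PeerFlags type (whose representation is assumed here for illustration):

```rust
// Illustrative bit masks, not the real PeerFlags values.
const SEED: u32 = 0b01;
const COMPACT: u32 = 0b10;

fn main() {
    // A peer that is both a seed and carries another flag.
    let flags = SEED | COMPACT;

    // Old check: exact equality misses peers carrying extra flags.
    assert!(flags != SEED);

    // Fixed check: bitwise AND detects the SEED bit regardless of other flags.
    assert!(flags & SEED != 0);
}
```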

New features / changes:

  • Seed nodes are now forcibly disconnected after a grace period, provided the node has sufficient other connections (> min_connectivity)
  • Added config entry max_seed_peer_age (default 15 minutes)
  • update_peer_sql will persist flags and features as well
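The disconnection rule combines the grace period with the connectivity guard. A sketch of that decision, with illustrative names rather than the actual manager API:

```rust
use std::time::Duration;

// Hypothetical helper mirroring the rule described above: drop a seed
// only after the grace period, and only while enough other connections remain.
fn should_disconnect_seed(
    age: Duration,
    max_seed_peer_age: Duration,
    active_connections: usize,
    min_connectivity: usize,
) -> bool {
    age > max_seed_peer_age && active_connections > min_connectivity
}

fn main() {
    let max_age = Duration::from_secs(15 * 60); // default 15 minutes

    // Old enough and well connected: disconnect.
    assert!(should_disconnect_seed(Duration::from_secs(16 * 60), max_age, 8, 3));
    // Still inside the grace period: keep.
    assert!(!should_disconnect_seed(Duration::from_secs(5 * 60), max_age, 8, 3));
    // Old enough, but connectivity too low: keep.
    assert!(!should_disconnect_seed(Duration::from_secs(16 * 60), max_age, 2, 3));
}
```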

Closes #7558

Motivation and Context

It was noticed that seed connections were kept open for a long time.

How Has This Been Tested?

Added a test.

And tested manually:

>> list-connections

Base Nodes
----------

NodeId                     | Public Key                                                       | Address                                                                | Direction | Age     | User Agent                | Seed | Info                                  
-------------------------- | ---------------------------------------------------------------- | ---------------------------------------------------------------------- | --------- | ------- | ------------------------- | ---- | ------------------------------------- 
ace05844c9c9723a8597682be3 | 1e08628960f75b7e324f010b2ee609a9e28097e9101f4d769d474a38b6ee2d76 | /ip4/51.83.102.25/tcp/18189                                            | Outbound  | 12m 33s | tari/basenode/5.3.0-pre.1 | SEED | height: 509627, hnd: 2, ss: 2, rpc: 0 
1918864cd464060b70f5158f3e | 6294d74a49720ce37f847d1da69259ac3153b66d40c319a6d44a0982245db51f | /onion3/jimpqrodmtzize3w4wilju6w6nx77ditwp5677lj5mhxlrd6v67s4qqd:18141 | Outbound  | 6m 27s  | tari/basenode/5.3.0-pre.1 |      | height: 509627, hnd: 3, ss: 2, rpc: 0 
e0e3f3a847fbf71baf30cfc79d | 7a079f2b813ddbf5e59403c80200af4137ddd8b0d3f017e2ec6b8fa2fb4f397e | /onion3/bwagn3fa7any5uig2brzbxbk7uxgf5hp2wfbpkmja2pqz37k2zn252qd:18141 | Outbound  | 12m 29s | tari/basenode/5.1.0       |      | height: 509627, hnd: 3, ss: 2, rpc: 0 
52476d7371e2d3cfdc6aa9fa8e | 6001c31cc69b7a4211bdde8a73b88a467fa6a285b2d69631b35747b2df6c7704 | /onion3/3lklntggyxiwi6b5sjwwiwkvzhsiljqyzripveh2uw5lvqarww6lyyad:18141 | Outbound  | 12m 21s | tari/basenode/5.2.0-pre.0 |      | height: 509627, hnd: 3, ss: 2, rpc: 0 
ab2fad12f448628436bcbe98d1 | 4cdfb70e0b38b60c6a3573b2870e32bc3d846419c606ea379f43650b80f38409 | /ip4/51.83.4.85/tcp/18189                                              | Outbound  | 12m 18s | tari/basenode/5.3.0-pre.1 | SEED | height: 509627, hnd: 3, ss: 2, rpc: 0 
bb8da2644a856e5d5f8a7db05c | 2ca6775cde640eff6f433c2bbdda8846e299b667317434996f388630c3b8ae04 | /onion3/a4civpppxeoryjshkezvpi6j2zj4f3pplxjxj55uzlnxaqqmr6svcsad:18141 | Outbound  | 12m 29s | tari/basenode/5.2.1-pre.2 |      | height: 509627, hnd: 3, ss: 2, rpc: 0 
d4ffef3e484b36bac72c46064a | 4e3cf7e6509d41accbed3a1098c2abf601a6b23098a03ffd992eb047e1566561 | /onion3/ngjw3rcuvpfpyg64qyqhlvjitrzq2y3r6hucrk5yqwk6uhkekqnbhead:18141 | Outbound  | 10m 12s | tari/basenode/5.2.1-pre.2 |      | height: 509627, hnd: 3, ss: 2, rpc: 0 
7 active connection(s)

Wallets
-------
No active wallet connections.
>> 

After 15 minutes had elapsed:

>> list-connections

Base Nodes
----------

NodeId                     | Public Key                                                       | Address                                                                | Direction | Age     | User Agent                | Seed | Info                                  
-------------------------- | ---------------------------------------------------------------- | ---------------------------------------------------------------------- | --------- | ------- | ------------------------- | ---- | ------------------------------------- 
1918864cd464060b70f5158f3e | 6294d74a49720ce37f847d1da69259ac3153b66d40c319a6d44a0982245db51f | /onion3/jimpqrodmtzize3w4wilju6w6nx77ditwp5677lj5mhxlrd6v67s4qqd:18141 | Outbound  | 11m 41s | tari/basenode/5.3.0-pre.1 |      | height: 509629, hnd: 3, ss: 2, rpc: 0 
bb8da2644a856e5d5f8a7db05c | 2ca6775cde640eff6f433c2bbdda8846e299b667317434996f388630c3b8ae04 | /onion3/a4civpppxeoryjshkezvpi6j2zj4f3pplxjxj55uzlnxaqqmr6svcsad:18141 | Outbound  | 17m 42s | tari/basenode/5.2.1-pre.2 |      | height: 509629, hnd: 3, ss: 2, rpc: 0 
d4ffef3e484b36bac72c46064a | 4e3cf7e6509d41accbed3a1098c2abf601a6b23098a03ffd992eb047e1566561 | /onion3/ngjw3rcuvpfpyg64qyqhlvjitrzq2y3r6hucrk5yqwk6uhkekqnbhead:18141 | Outbound  | 15m 25s | tari/basenode/5.2.1-pre.2 |      | height: 509629, hnd: 3, ss: 2, rpc: 0 
e0e3f3a847fbf71baf30cfc79d | 7a079f2b813ddbf5e59403c80200af4137ddd8b0d3f017e2ec6b8fa2fb4f397e | /onion3/bwagn3fa7any5uig2brzbxbk7uxgf5hp2wfbpkmja2pqz37k2zn252qd:18141 | Outbound  | 17m 42s | tari/basenode/5.1.0       |      | height: 509629, hnd: 3, ss: 2, rpc: 0 
ace05844c9c9723a8597682be3 | 1e08628960f75b7e324f010b2ee609a9e28097e9101f4d769d474a38b6ee2d76 | /ip4/51.83.102.25/tcp/18189                                            | Outbound  | 1m 47s  | tari/basenode/5.3.0-pre.1 | SEED | height: 509629, hnd: 3, ss: 2, rpc: 0 
52476d7371e2d3cfdc6aa9fa8e | 6001c31cc69b7a4211bdde8a73b88a467fa6a285b2d69631b35747b2df6c7704 | /onion3/3lklntggyxiwi6b5sjwwiwkvzhsiljqyzripveh2uw5lvqarww6lyyad:18141 | Outbound  | 17m 34s | tari/basenode/5.2.0-pre.0 |      | height: 509629, hnd: 3, ss: 2, rpc: 0 
ab2fad12f448628436bcbe98d1 | 4cdfb70e0b38b60c6a3573b2870e32bc3d846419c606ea379f43650b80f38409 | /ip4/51.83.4.85/tcp/18189                                              | Outbound  | 1m 47s  | tari/basenode/5.3.0-pre.1 | SEED | height: 509629, hnd: 3, ss: 2, rpc: 0 
7 active connection(s)

Wallets
-------
No active wallet connections.
>> 

What process can a PR reviewer use to test or verify this change?

Breaking Changes

  • None

@martinserts martinserts self-assigned this Feb 25, 2026
@gemini-code-assist (bot) commented

Summary of Changes

Hello @martinserts, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses several issues related to peer connectivity management, particularly concerning seed nodes. It introduces a new feature to automatically disconnect long-lived seed peer connections to improve network health and resource utilization. Additionally, it rectifies bugs in idle peer identification and seed peer database queries, ensuring more accurate peer management. The changes also include updates to configuration and persistence mechanisms to support these improvements.

Highlights

  • Seed Peer Disconnection: Implemented a mechanism to automatically disconnect seed peers that have been connected for longer than a configurable max_seed_peer_age (default 15 minutes), provided the node maintains sufficient other connections.
  • Corrected Idle Peer Identification: Fixed a bug where idle peers were incorrectly identified by changing the substream count condition from > 2 to < 3.
  • Improved Seed Peer Query Logic: Updated the database query for retrieving seed peers to correctly identify peers with the SEED flag, even if they possess other flags, using a bitwise AND operation.
  • Connectivity Manager Refresh Interval Fix: Corrected the ConnectivityManager to use the intended update_interval (default 2 minutes) instead of random_pool_refresh_interval for its cleanup logic.
  • Peer Flags and Features Persistence: Enabled the persistence of peer flags and features in the peer database during updates.
Changelog
  • base_layer/p2p/src/config.rs
    • Added max_seed_peer_age configuration option to P2pConfig.
    • Set the default value for max_seed_peer_age to 15 minutes.
  • base_layer/p2p/src/initialization.rs
    • Updated ConnectivityManager initialization to use update_interval for refresh and pass max_seed_peer_age.
  • comms/core/src/builder/mod.rs
    • Added a builder method with_max_seed_peer_age for CommsBuilder.
  • comms/core/src/connectivity/config.rs
    • Introduced max_seed_peer_age field to ConnectivityConfig.
    • Set the default value for max_seed_peer_age to 15 minutes.
  • comms/core/src/connectivity/connection_pool.rs
    • Corrected the condition for identifying inactive outbound connections from substream_count() > 2 to substream_count() < 3.
  • comms/core/src/connectivity/manager.rs
    • Implemented disconnect_seed_peers to forcibly close old seed connections.
    • Integrated disconnect_seed_peers into the on_tick lifecycle.
    • Refined proactive dialing logic to conditionally exclude seed peers based on current connectivity.
    • Added refresh_seeds_list to update the internal list of seed peers.
  • comms/core/src/connectivity/test.rs
    • Added a new test case seed_peer_release to validate the automatic disconnection of aged seed peers.
    • Imported PeerFlags for the new test.
  • comms/core/src/peer_manager/storage/database.rs
    • Modified update_peer_sql to persist flags and features in the database.
    • Updated get_seed_peers query to use a bitwise AND for accurate seed peer identification.
    • Added flags and features fields to UpdatePeerSql struct.
Activity
  • The author, martinserts, created this pull request to fix issues with seed peer connections.
  • The pull request includes a detailed description of bugs fixed, new features, motivation, and testing steps.
  • Manual testing results using list-connections are provided, demonstrating seed peers being disconnected after 15 minutes.
  • A new test case seed_peer_release was added to verify the functionality.

@martinserts martinserts changed the title fix: Seed peer connections kept open fix: seed peer connections kept open Feb 25, 2026

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request introduces several important fixes and a new feature for managing seed peer connections. It corrects the logic for identifying idle peers, fixes an issue where the wrong cleanup interval was used by the ConnectivityManager, and resolves a bug in seed peer detection that failed with combined flags. A new feature is added to forcibly disconnect seed peers after a configurable grace period (max_seed_peer_age), provided minimum connectivity is maintained. Additionally, peer flags and features are now persisted to the database. The changes are well-implemented and include a new test for the seed peer disconnection logic. I have one suggestion to improve code readability, which is a minor issue.


@brianp brianp left a comment

Looks good. The seed peer disconnection logic is solid, the min_connectivity guard prevents over-disconnection and the proactive dialer re-includes seeds when connectivity drops below threshold. Nice test coverage for the happy path too.

I've got three clarification questions inline. None of them are blockers, just want to make sure I'm reading the intent right.

     user_agent: self.user_agent.clone(),
 })
-.with_connection_pool_refresh_interval(config.dht.connectivity.random_pool_refresh_interval)
+.with_connection_pool_refresh_interval(config.dht.connectivity.update_interval)
@brianp (Collaborator) commented Feb 27, 2026

This swaps random_pool_refresh_interval (default 2h) for update_interval (default 2min). That's a 60x increase in how often the connection pool refresh runs, which now includes seed disconnection, inactive reaping, and proactive dialing.

Was this intentional? If the goal was to make seed cleanup happen more often, it might be worth a dedicated timer for that instead of cranking the whole pool maintenance cycle. If it is intentional, probably worth a note in the changelog since it'll change behavior for anyone running default configs.
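The "dedicated timer" idea could be sketched as two independent deadlines instead of one shared tick. This is illustrative only; the Ticker type and the period values are assumptions, not the manager's actual API:

```rust
use std::time::{Duration, Instant};

// Minimal rearm-on-fire timer: each maintenance task keeps its own deadline.
struct Ticker {
    period: Duration,
    next: Instant,
}

impl Ticker {
    fn new(period: Duration, now: Instant) -> Self {
        Self { period, next: now + period }
    }

    // Returns true (and rearms) once the deadline has passed.
    fn due(&mut self, now: Instant) -> bool {
        if now >= self.next {
            self.next = now + self.period;
            true
        } else {
            false
        }
    }
}

fn main() {
    let start = Instant::now();
    // Hypothetical periods: frequent seed cleanup, infrequent pool refresh.
    let mut seed_cleanup = Ticker::new(Duration::from_secs(120), start);
    let mut pool_refresh = Ticker::new(Duration::from_secs(7200), start);

    let later = start + Duration::from_secs(130);
    assert!(seed_cleanup.due(later)); // the 2-minute timer has fired
    assert!(!pool_refresh.due(later)); // the 2-hour timer has not
}
```

With separate tickers, seed cleanup could run every couple of minutes without also driving inactive reaping and proactive dialing at that rate.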

@martinserts (Author) replied:

This was intentional.
I don't think binding this to random_pool_refresh_interval (2 h) is correct.

The internal default is 60 seconds!
See comms/core/src/connectivity/config.rs:

connection_pool_refresh_interval: Duration::from_secs(60)

If the node lost connections, it would wait 2 hours before proactive dialling kicked in. This restores the responsiveness (as originally intended, I believe).

I looked for a changelog to note this in, but it seems changelogs are auto-generated here.

 pub fn get_inactive_outbound_connections_mut(&mut self, min_age: Duration) -> Vec<&mut PeerConnection> {
     self.filter_connections_mut(|conn| {
-        conn.age() > min_age && conn.handle_count() <= 1 && conn.substream_count() > 2
+        conn.age() > min_age && conn.handle_count() <= 1 && conn.substream_count() < 3
@brianp (Collaborator) commented Feb 27, 2026

This flips the condition from substream_count() > 2 to substream_count() < 3. These aren't equivalent, the old version matched connections with 3+ substreams, the new one matches connections with 0-2 substreams.

I'm assuming this is a bug fix? The function name says "inactive" connections, and fewer substreams = more inactive makes sense. Just want to confirm the old logic was wrong.
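The non-equivalence is easy to check directly; a small sketch (function names are illustrative):

```rust
// The two predicates partition substream counts differently:
fn old_check(substreams: usize) -> bool {
    substreams > 2 // matches 3, 4, 5, ... (busy connections)
}

fn new_check(substreams: usize) -> bool {
    substreams < 3 // matches 0, 1, 2 (idle connections)
}

fn main() {
    // An idle connection with no open substreams is caught only by the fix.
    assert!(!old_check(0) && new_check(0));
    // A busy connection with many substreams was wrongly reaped before.
    assert!(old_check(5) && !new_check(5));
}
```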

@martinserts (Author) replied:

Yes, a bug fix.
This behaviour was introduced in PR #4607, which states:

"only reap connections that have less than 3 substreams"

So the old condition was a bug.

     .inner_join(multi_addresses::table.on(multi_addresses::peer_id.eq(peers::peer_id)))
-    .filter(peers::flags.eq(PeerFlags::SEED.to_i32()))
+    .filter(diesel::dsl::sql::<diesel::sql_types::Bool>(&format!(
+        "flags & {} != 0",
@brianp (Collaborator) commented Feb 27, 2026

Good fix, using bitwise AND instead of exact equality means peers with SEED | OTHER_FLAG won't get missed. Since PeerFlags::SEED.to_i32() is a compile-time constant (0x01), the format string is safe here.

Minor thought: if Diesel ever adds a bitwise AND expression, it'd be nice to move off raw SQL. Not a big deal for now though.

@brianp brianp merged commit 01b107f into tari-project:development Feb 27, 2026
18 of 20 checks passed


Development

Successfully merging this pull request may close these issues.

Seed peer connections kept open

2 participants