Ensure --p2p-max-peers value is respected for current and next fork (if the next hard fork is close) #15862

@nalepae

Description

Currently, Prysm:

  1. ensures at least --p2p-max-peers peers, and
  2. ensures at least --minimum-peers-per-subnet peers per subnet (see the sketch after this list).
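
A minimal sketch of these two guarantees, with hypothetical names (this is not Prysm's actual code):

```go
package main

import "fmt"

// invariantsHold reports whether both guarantees hold: at least maxPeers
// peers in total (--p2p-max-peers) and at least minPerSubnet peers on
// every subscribed subnet (--minimum-peers-per-subnet).
func invariantsHold(total int, perSubnet map[uint64]int, maxPeers, minPerSubnet int) bool {
	if total < maxPeers {
		return false
	}
	for _, n := range perSubnet {
		if n < minPerSubnet {
			return false
		}
	}
	return true
}

func main() {
	perSubnet := map[uint64]int{0: 6, 1: 6}
	fmt.Println(invariantsHold(70, perSubnet, 70, 6)) // true
}
```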

One epoch before a hard fork, Prysm starts subscribing to the subnets of the next fork and ensures --minimum-peers-per-subnet peers per subnet for both the current and the next fork. ==> OK

However, Prysm does not try to have --p2p-max-peers peers in the next fork: it tries to have --p2p-max-peers peers in the current and the next fork combined.

As a consequence, in the worst case, during the epoch just before the fork, Prysm could have:

  • --p2p-max-peers - --minimum-peers-per-subnet peers in the current fork (64 with the default values), and only
  • --minimum-peers-per-subnet peers in the next fork (6 with the default value); see the sketch below.
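
A worked sketch of this worst case, using the default values implied by the numbers above (70 for --p2p-max-peers, 6 for --minimum-peers-per-subnet):

```go
package main

import "fmt"

func main() {
	maxPeers := 70    // --p2p-max-peers default
	minPerSubnet := 6 // --minimum-peers-per-subnet default

	// Dialing stops once the *combined* total reaches maxPeers, so the
	// next fork can be left with only its per-subnet minimum.
	nextFork := minPerSubnet
	currentFork := maxPeers - minPerSubnet

	fmt.Printf("current fork: %d peers, next fork: %d peers\n", currentFork, nextFork)
	// Output: current fork: 64 peers, next fork: 6 peers
}
```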

This issue was visible during the Holesky BPO fork #2, where a Prysm node had a majority of Lighthouse peers. (Lighthouse subscribes to the next fork only a few seconds before the fork itself.)

[Image: peer counts per fork around the fork transition]
  • Yellow curve: peers subscribed to the old fork.
  • Green curve: peers subscribed to the next fork; it starts low one epoch before the fork, then rises a few seconds before the fork.

Peer distribution:

[Image: distribution of peers by client]

(Note that Prysm behaved exactly as intended here: at least --p2p-max-peers peers in total, and at least --minimum-peers-per-subnet peers in every current- and next-fork subnet.)

Proposed solution:
One epoch before the fork, Prysm should:

  • ensure at least --minimum-peers-per-subnet peers per subnet in the current and the next fork (already done, OK).
  • ensure at least --p2p-max-peers peers in the current fork and in the next fork, counted separately (to be done); see the sketch below.
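
A rough sketch of what the second point could look like (hypothetical names and signatures, not Prysm's actual API):

```go
package main

import "fmt"

type forkDigest string

// peersToDial returns how many additional peers to look for on each fork.
// peersByFork counts connected peers per fork digest; nearFork reports
// whether the next fork is at most one epoch away.
func peersToDial(peersByFork map[forkDigest]int, maxPeers int, nearFork bool, current, next forkDigest) map[forkDigest]int {
	need := map[forkDigest]int{}
	// Always try to reach --p2p-max-peers on the current fork.
	if d := maxPeers - peersByFork[current]; d > 0 {
		need[current] = d
	}
	// One epoch before the fork, apply the same target to the next fork
	// instead of counting both forks together.
	if nearFork {
		if d := maxPeers - peersByFork[next]; d > 0 {
			need[next] = d
		}
	}
	return need
}

func main() {
	peers := map[forkDigest]int{"current": 64, "next": 6}
	fmt.Println(peersToDial(peers, 70, true, "current", "next"))
	// Output: map[current:6 next:64], i.e. both forks are topped up toward the target.
}
```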
