Description
There are discussions and simulations around improving overall message transmission by scheduling the "first" copies of messages sent out on a mesh. See here: https://ethresear.ch/t/improving-das-performance-with-gossipsub-batch-publishing/21713
Motivation
When gossipsub publishes a message, it duplicates it to all mesh peers (and to all peers if flood publish is enabled). If there are multiple messages to be sent on the same topic, the second message is added to each peer's queue and won't be sent until that peer has sent the first message. Simply changing the ordering of messages in the queues enables at least one instance of each message to reach the network before any duplicates are sent, allowing for faster overall message transmission.
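As a rough illustration of the reordering idea, here is a minimal sketch in plain Rust. The `PeerQueue` type and the per-peer rotation strategy are assumptions for illustration, not the rust-libp2p gossipsub internals:

```rust
use std::collections::VecDeque;

#[derive(Clone, Debug)]
struct Message(&'static str);

struct PeerQueue {
    peer: &'static str,
    queue: VecDeque<Message>,
}

fn make_peers() -> Vec<PeerQueue> {
    ["p1", "p2", "p3"]
        .iter()
        .map(|&p| PeerQueue { peer: p, queue: VecDeque::new() })
        .collect()
}

/// Naive publish: message A lands at the head of every queue, so message B
/// only leaves a connection after that connection has sent A.
fn publish_naive(peers: &mut [PeerQueue], msgs: &[Message]) {
    for m in msgs {
        for p in peers.iter_mut() {
            p.queue.push_back(m.clone());
        }
    }
}

/// Reordered publish: rotate the batch per peer so that each message sits at
/// the head of at least one queue.
fn publish_rotated(peers: &mut [PeerQueue], msgs: &[Message]) {
    for (i, p) in peers.iter_mut().enumerate() {
        for k in 0..msgs.len() {
            p.queue.push_back(msgs[(i + k) % msgs.len()].clone());
        }
    }
}

fn main() {
    let msgs = [Message("A"), Message("B")];

    let mut naive = make_peers();
    publish_naive(&mut naive, &msgs);
    for p in &naive {
        // Every queue is [A, B]: B cannot reach the network before a duplicate of A.
        println!("naive   {}: {:?}", p.peer, p.queue);
    }

    let mut rotated = make_peers();
    publish_rotated(&mut rotated, &msgs);
    for p in &rotated {
        // p1: [A, B], p2: [B, A], p3: [A, B]: both A and B hold a head slot.
        println!("rotated {}: {:?}", p.peer, p.queue);
    }
}
```

With the rotated ordering, every message occupies the head of at least one peer's queue, so one copy of each can leave the node before any connection starts sending duplicates.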
An implementation of this is here: libp2p#5924
Sending across multiple topics is harder to reason about because each topic has its own mesh. Messages published to different topics may not have overlapping peer sets, so ordering them is not easily done (at least in our implementation).
One general approach that I think could be beneficial is to use the ordering ability in the proposed async-channel implementation: #570.
For every message we publish, when duplicating it for each mesh peer, we randomly select one peer and tag that peer's copy as a "first" or priority message. This copy takes precedence in that peer's queue (though not above control messages).
I'm thinking that if we do this for each published message, regardless of topic and mesh, then we have a sense of priority for one copy of each published message above all the other duplicates.
Each peer could have multiple priority messages, but it will try to send those before it sends any "duplicates".
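A minimal sketch of what that per-peer ordering could look like, assuming three send classes (control ahead of "first", ahead of duplicates) and a random pick over the mesh. `Class`, `OutboundQueue`, and the `rand`-based selection are hypothetical, not the actual gossipsub or async-channel types from #570:

```rust
use rand::seq::SliceRandom; // rand = "0.8"
use std::cmp::Reverse;
use std::collections::BinaryHeap;

#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Debug)]
enum Class {
    Control,   // GRAFT/PRUNE/IHAVE/IWANT: always ahead of payload
    First,     // the one randomly tagged copy of a published message
    Duplicate, // the remaining mesh copies
}

struct OutboundQueue {
    // Min-heap on (class, seq): lower class first, FIFO within a class.
    heap: BinaryHeap<Reverse<(Class, u64, String)>>,
    seq: u64,
}

impl OutboundQueue {
    fn new() -> Self {
        Self { heap: BinaryHeap::new(), seq: 0 }
    }
    fn push(&mut self, class: Class, msg: String) {
        self.heap.push(Reverse((class, self.seq, msg)));
        self.seq += 1;
    }
    fn pop(&mut self) -> Option<(Class, String)> {
        self.heap.pop().map(|Reverse((c, _, m))| (c, m))
    }
}

fn main() {
    let mesh_peers = ["p1", "p2", "p3"];
    let mut rng = rand::thread_rng();

    // Pick one random mesh peer whose copy is tagged as the "first" message;
    // every other peer receives a Duplicate-class copy of the same payload.
    let first_peer: &str = *mesh_peers.choose(&mut rng).unwrap();

    let mut queues: Vec<(&str, OutboundQueue)> = mesh_peers
        .iter()
        .map(|&p| (p, OutboundQueue::new()))
        .collect();
    for (peer, q) in queues.iter_mut() {
        q.push(Class::Duplicate, "older queued duplicate".into());
        let class = if *peer == first_peer { Class::First } else { Class::Duplicate };
        q.push(class, "freshly published message".into());
    }

    for (peer, q) in queues.iter_mut() {
        // On the tagged peer the fresh message jumps ahead of queued
        // duplicates; everywhere else it waits its turn.
        println!("{peer}: next = {:?}", q.pop());
    }
}
```

Within a class the sequence number keeps FIFO order, so control messages still always go out first and the single tagged copy only jumps ahead of queued duplicates.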
I have no idea if this helps with propagation, but I suspect it aligns somewhat with what is mentioned in the ethresear.ch post referenced at the start of this issue.
cc @jxs @elenaf9 @cskiraly @jimmygchen
Current Implementation
--
Are you planning to do it yourself in a pull request?
Maybe