Hi! Consider this code from rust-libp2p/protocols/request-response/src/handler.rs, lines 188 to 199 in 5e3519f: the response is only read after the request has been written (sent) completely. For very large requests, depending on the bandwidth between the peers, that write alone can take tens of seconds. The same holds on the receiving end, where the response is only written after the request has been read completely (rust-libp2p/protocols/request-response/src/handler.rs, lines 136 to 159 in 5e3519f).

The most straightforward way to address this is to chunk your request into smaller pieces, for example a few MB each, as you mentioned. For streaming large amounts of data, consider using libp2p-stream. You can compose request-response together with stream as a single NetworkBehaviour, and track requests, responses, streaming progress, etc. there. Use request-response for control-message exchange, and let stream do the heavy lifting. You may use two kinds of ACK message: one for "the request has been received and the transfer has started", and another for "the transfer has completed"; the former should arrive much sooner than the latter.
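The two-ACK scheme described above can be sketched as plain control-message types. This is an illustrative sketch only: the names (TransferRequest, TransferAck, ack_on_request, etc.) are hypothetical and not part of any rust-libp2p API. In a real setup these would be the request/response types handed to the request-response behaviour (e.g. via a serde-based codec), while the payload itself travels over a libp2p-stream substream.

```rust
// Hypothetical control messages for a request-response + libp2p-stream
// composition. None of these names come from rust-libp2p itself.

/// Sent over request-response to announce a bulk transfer; the payload
/// itself would go over a separate stream.
#[derive(Debug, Clone, PartialEq)]
struct TransferRequest {
    transfer_id: u64,
    total_bytes: u64,
}

/// Two kinds of ACK: `Started` is sent immediately (fast, satisfies a
/// tight ACK deadline), `Completed` only once all bytes have arrived
/// (slow, bandwidth-bound).
#[derive(Debug, Clone, PartialEq)]
enum TransferAck {
    /// Request received; receiver is ready to accept the stream.
    Started { transfer_id: u64 },
    /// All payload bytes received over the stream.
    Completed { transfer_id: u64, received_bytes: u64 },
}

/// Receiver side: acknowledge the request right away, before any
/// payload bytes have been transferred.
fn ack_on_request(req: &TransferRequest) -> TransferAck {
    TransferAck::Started { transfer_id: req.transfer_id }
}

/// Receiver side: acknowledge completion once the stream has been drained.
fn ack_on_stream_done(req: &TransferRequest, received: u64) -> TransferAck {
    TransferAck::Completed { transfer_id: req.transfer_id, received_bytes: received }
}
```

The key design point is that the first ACK carries no payload-dependent work, so its latency is one control-message round trip regardless of how large the transfer is.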
Hi everyone,
I’m currently working on a libp2p (Rust) application where I need to transfer relatively large application-level messages (up to ~500 MB) between two peers.
At the moment, I'm using the request-response behaviour. The motivation for sticking with request-response is that it fits well with my application-level semantics (explicit request, explicit response/ACK, timeouts, peer targeting).

One hard requirement I currently have is the following:
I would like to be able to send a request and receive an ACK within ~30 seconds, even when the total payload size is several hundreds of MB.
With my current implementation, even with chunking, I'm observing that this ACK latency requirement is not met.
I understand that request-response implements a strict 1-request–1-response model and does not support streaming responses. I also understand that, for large data transfers, the usual recommendation is to use a dedicated streaming protocol.

That said, I'd like to better understand what is realistically achievable within the constraints of request-response:
Are there internal behaviours (buffering, backpressure, Yamux interaction, substream lifecycle) that fundamentally limit how fast an ACK can be produced when transferring large payloads this way?
From the maintainers’ perspective:
Is there a hard architectural reason why meeting such an ACK latency constraint is fundamentally incompatible with request-response, even with careful chunking and pipelining?
I’m not opposed to moving to a streaming protocol if necessary, but before doing so I’d like to clearly understand the design boundaries of request-response and whether this use case is fundamentally misaligned with it or simply requires a different usage pattern.
Any insights, pointers, or examples would be greatly appreciated.
Thanks a lot for your time and for the work on rust-libp2p!