Description
Is your feature request related to a problem? Please describe.
I noticed that the implementation of _put_object_stream_with_content_type
doing the multipart upload first reads all the chunks into memory (creating the upload futures) and only then awaits their execution.
This blows up memory usage when uploading large files.
Additionally, I suspect it might be the cause of the frequent "Broken pipe (32)" errors I'm seeing: could it be that we initiate the request to S3, but by the time we actually start sending data (that is, after reading all chunks from disk into memory), the server has already closed the connection out of impatience?
Describe the solution you'd like
It should incrementally add chunks to the multipart upload (as their content is read into memory) while awaiting the upload futures already created.
This kind of queue scheduling is not the most trivial with Rust async, but it is doable, e.g. with https://docs.rs/futures/latest/futures/stream/struct.FuturesUnordered.html (or possibly with multi-receiver channels like flume).
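For illustration, here is a minimal sketch of the idea using FuturesUnordered with a bounded in-flight window. The upload_part function, the chunk size, and the window size are all hypothetical placeholders here, not the library's actual internals:

```rust
use futures::stream::{FuturesUnordered, StreamExt};
use tokio::io::{AsyncRead, AsyncReadExt};

const CHUNK_SIZE: usize = 8 * 1024 * 1024; // hypothetical part size
const MAX_IN_FLIGHT: usize = 4;            // hypothetical concurrency cap

// Hypothetical stand-in for the library's per-part upload.
async fn upload_part(part_number: u32, _chunk: Vec<u8>) -> std::io::Result<u32> {
    // ... PUT the chunk to S3, return the part number / ETag ...
    Ok(part_number)
}

// Read until the buffer is full or EOF; a single read() may return less.
async fn read_full<R: AsyncRead + Unpin>(r: &mut R, buf: &mut [u8]) -> std::io::Result<usize> {
    let mut filled = 0;
    while filled < buf.len() {
        let n = r.read(&mut buf[filled..]).await?;
        if n == 0 { break; }
        filled += n;
    }
    Ok(filled)
}

async fn put_stream<R: AsyncRead + Unpin>(mut reader: R) -> std::io::Result<()> {
    let mut in_flight = FuturesUnordered::new();
    let mut part_number = 0u32;

    loop {
        // The next chunk is only read once it is about to be scheduled,
        // so at most MAX_IN_FLIGHT chunks are held in memory at a time.
        let mut chunk = vec![0u8; CHUNK_SIZE];
        let n = read_full(&mut reader, &mut chunk).await?;
        if n == 0 { break; }
        chunk.truncate(n);

        part_number += 1;
        in_flight.push(upload_part(part_number, chunk));

        // Back-pressure: once the window is full, finish one upload
        // before reading the next chunk from disk.
        if in_flight.len() >= MAX_IN_FLIGHT {
            in_flight.next().await.transpose()?;
        }
    }

    // Drain the remaining uploads.
    while let Some(done) = in_flight.next().await {
        done?;
    }
    Ok(())
}
```

The important property is that disk reads and part uploads interleave: a chunk's buffer is dropped as soon as its upload completes, instead of all chunks being materialized before the first await.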
Describe alternatives you've considered
I suppose for now the way to go is to switch to initiate_multipart_upload and put_multipart_chunk on the user side, re-implementing _put_object_stream_with_content_type with incremental scheduling (a rough sketch of the workaround follows).
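A sketch of that workaround, sequential for brevity (the FuturesUnordered pattern above could be layered on top). I'm assuming rust-s3 signatures roughly like the ones below; they may differ across versions:

```rust
use s3::bucket::Bucket;
use tokio::io::{AsyncRead, AsyncReadExt};

const CHUNK_SIZE: usize = 8 * 1024 * 1024; // part size; S3 requires >= 5 MiB except the last

async fn upload_streaming<R: AsyncRead + Unpin>(
    bucket: &Bucket,
    mut reader: R,
    path: &str,
    content_type: &str,
) -> Result<(), Box<dyn std::error::Error>> {
    // Assumed signature: initiate_multipart_upload(path, content_type) -> response with upload_id.
    let upload = bucket.initiate_multipart_upload(path, content_type).await?;
    let mut parts = Vec::new();
    let mut part_number = 0u32;

    loop {
        let mut chunk = vec![0u8; CHUNK_SIZE];
        let mut filled = 0;
        while filled < CHUNK_SIZE {
            let n = reader.read(&mut chunk[filled..]).await?;
            if n == 0 { break; }
            filled += n;
        }
        if filled == 0 { break; }
        chunk.truncate(filled);

        part_number += 1;
        // Each chunk is uploaded (and its buffer dropped) before the next
        // chunk is read, so memory stays bounded by one part.
        let part = bucket
            .put_multipart_chunk(chunk, path, part_number, &upload.upload_id, content_type)
            .await?;
        parts.push(part);
    }

    bucket
        .complete_multipart_upload(path, &upload.upload_id, parts)
        .await?;
    Ok(())
}
```

A production version would also want to abort the multipart upload on failure so incomplete parts don't linger on the bucket.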
Additional context
Related or not, the main issues I'm fighting right now are "Broken pipe" and errors like this one:
reqwest::Error { kind: Request, url: "https://...amazonaws.com/...", source: hyper_util::client::legacy::Error(SendRequest, hyper::Error(IncompleteMessage)) }
I will report back if I find out more, but I think it's consistent with the hypothesis that we initiate the request but don't start sending the body fast enough.