Replies: 2 comments 5 replies
Can you please detail the "largeness" of the flow control windows? Both the Apache and Jira issues refer to generic "large" amounts of data, but it's not clear how large "large" is.

It is true, and a well-known issue, that HTTP/1.1 over multiple connections can be slightly better than HTTP/2 on a single connection with the same number of multiplexed streams. Having said that, a well-tuned HTTP/2 system would allow sending at full speed, meaning that the server never stalls. In our experience, the server is never the problem; the problem is typically that clients are much slower to read and process the data than the server is able to send it. What client are you using?

The typical solution is to make sure that the client is fast at consuming data. Also of note: it is the client that must be configured with large "recv" windows, not the server, when the problem is that the server stalls its sends, as you describe in the mentioned tickets.

I would suggest deriving the configuration from calculations. You must also look into the network limit settings of your OS, for example on Linux. Finally, you have to benchmark it.

It is an interesting topic; keep us posted and let us know how it goes.
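As an illustration of configuring the *client's* receive windows, here is an untested sketch using Jetty's `HttpClient` over HTTP/2 (assuming the Jetty 12 client APIs `HTTP2Client.setInitialSessionRecvWindow`/`setInitialStreamRecvWindow`; the 16 MiB / 8 MiB values are purely illustrative and should be derived from your own bandwidth-delay calculations):

```java
import org.eclipse.jetty.client.HttpClient;
import org.eclipse.jetty.http2.client.HTTP2Client;
import org.eclipse.jetty.http2.client.transport.HttpClientTransportOverHTTP2;

public class LargeWindowClient
{
    public static void main(String[] args) throws Exception
    {
        HTTP2Client http2Client = new HTTP2Client();
        // Illustrative values; derive them from your bandwidth-delay product.
        http2Client.setInitialSessionRecvWindow(16 * 1024 * 1024); // whole connection
        http2Client.setInitialStreamRecvWindow(8 * 1024 * 1024);   // per stream

        HttpClient client = new HttpClient(new HttpClientTransportOverHTTP2(http2Client));
        client.start();
        try
        {
            // Hypothetical URL, for illustration only.
            client.GET("https://example.com/large-download");
        }
        finally
        {
            client.stop();
        }
    }
}
```

The session window caps the total unacknowledged bytes on the connection; the stream window caps each multiplexed download, so both need to be sized when several large streams run concurrently.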
For future record, here's the math to avoid server stalls. Assume the network bandwidth is 1 GiB/s and the latency (half round-trip) is 100 µs, so the round-trip time is 200 µs. Assuming the client can process data at zero overhead, the server can have at most bandwidth × round-trip time — the bandwidth-delay product (BDP) — in flight before it must stall waiting for a `WINDOW_UPDATE` from the client: 1 GiB/s × 200 µs ≈ 210 KiB.

The above means that the client receive window should be at least the BDP, and the `WINDOW_UPDATE` frames sent by the client must replenish the window before the server exhausts it. You may want to give a little extra room to the client window to absorb jitter. Note that the above works under the assumption that the client can process data at zero overhead.

In conclusion, it is possible to configure HTTP/2 so that the server does not stall, once the BDP is known; that should mimic the behavior of HTTP/1.1. If the client receive window is smaller than the calculation above, then HTTP/2 will stall a lot more than HTTP/1.1 and the download performance will be a lot worse.
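The arithmetic can be sketched in plain Java (the 1 GiB/s bandwidth and 100 µs one-way latency are the example's assumed figures, not measurements):

```java
// Bandwidth-delay product: the minimum receive window that lets the
// server send continuously without stalling for WINDOW_UPDATE frames.
public class Bdp
{
    // bytes/second * microseconds / 1_000_000 = bytes in flight per round trip.
    public static long bdpBytes(long bandwidthBytesPerSecond, long roundTripMicros)
    {
        return bandwidthBytesPerSecond * roundTripMicros / 1_000_000L;
    }

    public static void main(String[] args)
    {
        long bandwidth = 1L << 30;  // 1 GiB/s
        long rttMicros = 2 * 100;   // one-way latency 100 us => RTT 200 us
        long window = bdpBytes(bandwidth, rttMicros);
        System.out.println("Minimum client recv window: " + window + " bytes (~"
                + window / 1024 + " KiB)");
    }
}
```

A real deployment would measure bandwidth and RTT rather than assume them, then round the window up generously to absorb jitter and client processing overhead.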
Hi,
I am looking into this issue (https://lists.apache.org/thread/x3fv574g4y2645nxoxsc5cojorg48p24), slow performance with large HTTP/2 streams on Jetty 12.x.
Is there a known issue or tuning advice for such large cases (parameters such as initialStreamRecvWindow, initialSessionRecvWindow) where multiple large streams are multiplexed concurrently on a single connection?
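For reference, a sketch of where those two parameters are set programmatically on an embedded Jetty server (untested; assuming the Jetty 12 embedded API and a cleartext `h2c` connector, with purely illustrative values; note these are the *server's* receive windows, which govern uploads rather than the server's own sends):

```java
import org.eclipse.jetty.http2.server.HTTP2CServerConnectionFactory;
import org.eclipse.jetty.server.HttpConfiguration;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.ServerConnector;

public class TunedH2Server
{
    public static void main(String[] args) throws Exception
    {
        Server server = new Server();
        HttpConfiguration config = new HttpConfiguration();
        HTTP2CServerConnectionFactory h2c = new HTTP2CServerConnectionFactory(config);
        // The two parameters asked about (illustrative values):
        h2c.setInitialSessionRecvWindow(16 * 1024 * 1024); // whole connection
        h2c.setInitialStreamRecvWindow(8 * 1024 * 1024);   // per stream

        ServerConnector connector = new ServerConnector(server, h2c);
        connector.setPort(8080);
        server.addConnector(connector);
        server.start();
    }
}
```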