According to the HTTP/2 spec, https://tools.ietf.org/html/rfc7540:
5.1.2. Stream Concurrency
A peer can limit the number of concurrently active streams using the
SETTINGS_MAX_CONCURRENT_STREAMS parameter
...
Endpoints MUST NOT exceed the limit set by their peer.
Also:
6.5.2. Defined SETTINGS Parameters
SETTINGS_MAX_CONCURRENT_STREAMS (0x3)
It is recommended that this value be no smaller than
100, so as to not unnecessarily limit parallelism.
The SETTINGS_MAX_CONCURRENT_STREAMS parameter, like the other settings, can change over the lifetime of a connection.
Now, Apple's HTTP/2 APNS service is slightly unusual in that it sets the SETTINGS_MAX_CONCURRENT_STREAMS to 1 initially.
After an initial, successfully authenticated, request on the connection, this setting value is raised, typically to 1000.
If you perform two gun:post requests simultaneously on a newly opened connection to the Apple APNS service, the second one will fail with a stream_error. Worse, since gun does not check the stream count against the setting before opening a new stream, the second stream open attempt hits the remote peer before being rejected.
Since the HTTP/2 spec recommends a higher initial value for SETTINGS_MAX_CONCURRENT_STREAMS, this issue won't typically be seen at low levels of load. But if you use gun in a server under high load, it's not unthinkable that 101 processes are waiting for a connection to open so they can perform a request. In that case this could also fail with the spec-recommended initial SETTINGS_MAX_CONCURRENT_STREAMS value of 100.
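One caller-side mitigation (a sketch, not something gun provides) is to gate requests behind a counting semaphore sized to the peer's current SETTINGS_MAX_CONCURRENT_STREAMS, so that at most that many streams are in flight per connection. The module name stream_gate below is hypothetical; a minimal gate process could look like:

```erlang
%% Hypothetical helper: a counting gate holding Max tokens.
%% Callers block in acquire/1 until a stream slot is free.
-module(stream_gate).
-export([start_link/1, acquire/1, release/1]).

start_link(Max) ->
    {ok, spawn_link(fun() -> loop(Max, queue:new()) end)}.

%% Block until a stream slot is available.
acquire(Gate) ->
    Ref = make_ref(),
    Gate ! {acquire, self(), Ref},
    receive {granted, Ref} -> ok end.

%% Return the slot once gun:await/gun:await_body has completed.
release(Gate) ->
    Gate ! release,
    ok.

loop(0, Waiting) ->
    receive
        {acquire, Pid, Ref} ->
            %% No free slots: park the caller in FIFO order.
            loop(0, queue:in({Pid, Ref}, Waiting));
        release ->
            case queue:out(Waiting) of
                {{value, {Pid, Ref}}, Rest} ->
                    Pid ! {granted, Ref},
                    loop(0, Rest);
                {empty, _} ->
                    loop(1, Waiting)
            end
    end;
loop(Free, Waiting) ->
    receive
        {acquire, Pid, Ref} ->
            Pid ! {granted, Ref},
            loop(Free - 1, Waiting);
        release ->
            loop(Free + 1, Waiting)
    end.
```

A caller would acquire/1 before gun:post and release/1 after the response is fully read. This only approximates what gun itself could do, since the caller has no direct visibility into settings updates on the connection.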
I have opened a proof of concept PR #245 that checks the max_concurrent_streams before opening a new stream. The PR may solve part of the problem in that we don't hit the remote peer for streams over the limit.
But there may also be a more profound issue here, since the gun API design implies that you can perform requests "without worrying", with the gun process ensuring there is an open connection. But since gun internally doesn't queue up requests to wait for an available stream slot, as it does for a connection, you actually do have to worry a bit.
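Until gun queues streams itself, a caller-side workaround specifically for APNS is to perform a single request and await its response before fanning out concurrent requests, giving the server a chance to raise SETTINGS_MAX_CONCURRENT_STREAMS first. This is only a sketch; warm_up is a hypothetical helper, not part of the gun API:

```erlang
%% Hypothetical helper: issue one request and await its response before
%% allowing concurrent requests on ConnPid. After the first authenticated
%% request, APNS typically raises SETTINGS_MAX_CONCURRENT_STREAMS to ~1000.
warm_up(ConnPid, Path, Headers, Body) ->
    StreamRef = gun:post(ConnPid, Path, Headers, Body),
    case gun:await(ConnPid, StreamRef) of
        {response, fin, Status, _RespHeaders} ->
            {ok, Status};
        {response, nofin, Status, _RespHeaders} ->
            {ok, _RespBody} = gun:await_body(ConnPid, StreamRef),
            {ok, Status}
    end.
```

After warm_up/4 returns, parallel gun:post calls on the same connection no longer hit the initial limit of 1 (though they can still hit whatever limit the server raised it to).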
Sample code (not including all dependencies; only for the purpose of illustrating API usage):
test_push(DeviceId) ->
    {ok, ConnPid} = gun:open("api.development.push.apple.com", 443),
    {ok, _Protocol} = gun:await_up(ConnPid),
    RequestBody = "{ \"aps\" : { \"alert\" : \"Hello\" } }",
    spawn_link(fun() ->
        StreamRef = gun:post(ConnPid,
            [<<"/3/device/">>, DeviceId],
            [
                {<<"authorization">>, [<<"bearer ">>, auth_token()]},
                {<<"apns-topic">>, <<"test">>},
                {<<"apns-expiration">>, <<"0">>}
            ],
            RequestBody),
        case gun:await(ConnPid, StreamRef) of
            {response, fin, _Status, _Headers} ->
                no_data;
            {response, nofin, _Status, _Headers} ->
                {ok, Resp1} = gun:await_body(ConnPid, StreamRef),
                io:format("~p~n", [Resp1])
        end
    end),
    spawn_link(fun() ->
        StreamRef2 = gun:post(ConnPid,
            [<<"/3/device/">>, DeviceId],
            [
                {<<"authorization">>, [<<"bearer ">>, auth_token()]},
                {<<"apns-topic">>, <<"test">>},
                {<<"apns-expiration">>, <<"0">>}
            ],
            RequestBody),
        case gun:await(ConnPid, StreamRef2) of
            {response, fin, _Status2, _Headers2} ->
                no_data;
            {response, nofin, _Status2, _Headers2} ->
                {ok, Resp2} = gun:await_body(ConnPid, StreamRef2),
                io:format("~p~n", [Resp2])
        end
    end).
Fails with:
Error in process <0.1080.0> with exit value:
{{case_clause,
{error,
{stream_error,
{stream_error,refused_stream,'Stream reset by server.'}}}},