Client struggles with 400+ concurrent ordered consumers #1981
Hello, we have noticed a limitation in the Go client library with a large number of concurrent ordered consumers. In short, our use case is to spin up around 500 ordered ephemeral consumers and stream them via SSE, which takes around 200 microseconds. What we have noticed is that the NATS connection becomes very unstable at that scale and we stop receiving messages; we get error logs (the exact output is omitted here). With up to 100 ordered consumers it works fine. I have made a gist that recreates the situation: https://gist.github.com/ramasauskas/06c5d36e0cc0437eebb6122d1f19554d. Is there some sort of parameter we could tweak so that it handles this better?
Replies: 3 comments 4 replies
---
Your client is becoming a slow consumer, so the connection keeps getting reset. By default the server will allocate up to 64 MB of pending data to be flushed to a client, but the collection of ordered consumers instantly asks for much more than that, so the server closes the connection once it goes over that quota.
---
The argumentation makes sense, @wallyqs. Thank you for the quick reply! However, could you clarify: is it `MaxPending` that we need to tweak here?

```go
srv, err := server.NewServer(&server.Options{
	Port:          4221,
	Authorization: "token",
	JetStream:     true,
	StoreDir:      "./data",
	Debug:         true,
	MaxPending:    536870912, // <-- here?
})
if err != nil {
	return fmt.Errorf("creating server: %w", err)
}
```

So that the server can allocate up to 512 MB per connection?
---
Your application could also use multiple connections and spread the ordered consumers across them, so that no single connection becomes a slow consumer.
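A stdlib-only sketch of the distribution idea above, using round-robin assignment. The counts `numConsumers` and `numConns` are illustrative assumptions; in a real program each slot would be a separate `nats.Conn`, so no single connection's pending buffer carries the full load:

```go
package main

import "fmt"

func main() {
	const numConsumers = 500 // e.g. the ~500 ordered consumers from the question
	const numConns = 8       // hypothetical number of NATS connections to spread over

	// Round-robin: consumer i goes to connection i % numConns.
	perConn := make([]int, numConns)
	for i := 0; i < numConsumers; i++ {
		perConn[i%numConns]++
	}

	fmt.Println(perConn) // prints "[63 63 63 63 62 62 62 62]"
}
```

With 8 connections each one carries at most 63 consumers, so the pending-data pressure per connection drops by roughly a factor of 8 compared to a single connection.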
It is possible to tweak this by changing the default `max_pending` to something larger; for example, I set `max_pending = 512mib` and your example now runs fine on my setup. Changing this default means that you would be allowing a single connection to hold up to 512 MB, so with multiple connections you need to think about the worst case of all of them doing the same.
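For reference, the `max_pending = 512mib` override quoted above would go in a nats-server configuration file roughly like this (as opposed to the embedded `server.Options.MaxPending` field shown earlier); a sketch, assuming the usual NATS size-suffix syntax:

```
# nats-server.conf
max_pending: 512mib
```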