Description
Goal: reduce the number of connections to upstream servers.
Background:
For each active cluster, Envoy maintains at least one connection to each upstream endpoint, per worker, per connection pool setting.
Usually, the connection pool setting cardinality is determined by cluster attributes; it can be treated as a constant and set aside here.
For a large EDS cluster, it is wasteful to maintain a per-worker connection to every backend endpoint. For a 16-worker Envoy process in front of a 1000-endpoint cluster, if each worker thread connects only to its dedicated 1/16 of the endpoints, the number of established connections drops from 16K to 1K in the scenarios below (a worked calculation follows the list):
- normal- to high-load HTTP/2 upstreams, or even heavy-load streaming gRPC upstreams
- light-load HTTP/1 upstreams
- light-load TCP upstreams when preconnect is enabled on the cluster
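A minimal arithmetic sketch of that reduction, assuming a connection pool setting cardinality of 1 and an even 1/16 shard per worker (both are simplifying assumptions, not guaranteed Envoy behavior):

```cpp
#include <cstdint>
#include <iostream>

int main() {
  const uint64_t workers = 16;            // Envoy worker threads
  const uint64_t endpoints = 1000;        // endpoints in the EDS cluster
  const uint64_t pools_per_endpoint = 1;  // treated as a constant, per the background above

  // Today: every worker keeps at least one connection to every endpoint.
  const uint64_t current = workers * endpoints * pools_per_endpoint;  // 16000

  // Proposed: each endpoint is connected to by only its shard-owning worker.
  const uint64_t proposed = endpoints * pools_per_endpoint;  // 1000

  std::cout << "current:  " << current << " connections\n"
            << "proposed: " << proposed << " connections\n";
  return 0;
}
```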
Note that an N-worker Envoy doesn't need to use exactly 1/N of the endpoints per worker:
- As a reverse proxy, Envoy usually runs many replicas for high availability. In that situation, each worker thread could use fewer than 1/N of the endpoints and let the other Envoy replicas balance the connections.
- It is also fine for a worker to establish connections to more than 1/N of the endpoints. Envoy still benefits until this degrades back to the current full worker-to-endpoint connection graph. A sketch of how a worker could pick such a subset follows this list.
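As an illustration only (the function name `PickWorkerSubset` and the `overlap_factor` knob are hypothetical, not an Envoy API), a deterministic per-worker subset could be chosen like this, covering both the fewer-than-1/N and more-than-1/N cases:

```cpp
#include <algorithm>
#include <cstddef>
#include <string>
#include <vector>

// Hypothetical sketch: pick the endpoint subset one worker connects to.
// overlap_factor < 1.0  -> fewer than 1/N endpoints (rely on other Envoy replicas);
// overlap_factor > 1.0  -> more than 1/N endpoints (shards overlap);
// overlap_factor == num_workers -> degrades to today's full connection graph.
std::vector<std::string> PickWorkerSubset(const std::vector<std::string>& endpoints,
                                          size_t worker_index, size_t num_workers,
                                          double overlap_factor = 1.0) {
  // Contiguous 1/N shard boundaries for this worker.
  const size_t begin = endpoints.size() * worker_index / num_workers;
  const size_t shard = endpoints.size() * (worker_index + 1) / num_workers - begin;

  // Scale the shard size by the overlap factor, clamped to the full endpoint set.
  const size_t count = std::min<size_t>(
      endpoints.size(), std::max<size_t>(1, static_cast<size_t>(shard * overlap_factor)));

  std::vector<std::string> subset;
  subset.reserve(count);
  for (size_t i = 0; i < count; ++i) {
    // Wrap around so enlarged shards stay contiguous in the endpoint list.
    subset.push_back(endpoints[(begin + i) % endpoints.size()]);
  }
  return subset;
}
```

With `overlap_factor = 1.0`, 16 workers, and 1000 endpoints, each worker picks 62 or 63 endpoints and the shards exactly partition the cluster; raising the factor trades connection count for better per-worker balance.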
Pros: saves memory by maintaining fewer idle connections.
Cons: load imbalance across upstream endpoints could be amplified.
Originally posted by @mattklein123 in #8702 (comment)