cerebrium/scaling/scaling-apps.mdx
See below for more information.
</Info>

As traffic decreases, instances enter a cooldown period at reduced concurrency. If reduced concurrency is maintained for the cooldown duration, instances scale down to optimize resource usage. This automatic cycle ensures apps remain responsive while managing costs effectively.

## Scaling Configuration
### Cooldown Period

The `cooldown` parameter specifies the time window (in seconds) that must pass at reduced concurrency before an instance scales down. This prevents premature scale-down during brief traffic dips that might be followed by more requests. A longer cooldown period helps handle bursty traffic patterns but increases instance running time and cost.
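For instance, the cooldown can be set in the `[cerebrium.scaling]` table used elsewhere in this doc (`1800` is the default from the parameter table below):

```toml
[cerebrium.scaling]
cooldown = 1800  # seconds of reduced concurrency required before an instance scales down
```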
### Replica Concurrency
Since the config has specified `100` as a target for `concurrency_utilization`, the autoscaler will suggest a value of 1 replica for scale out. However, since we have `scale_buffer=3`, the application will actually scale to **(1 + 3) = 4** replicas.

In other words, the scale buffer simply adds a static number of replicas on top of the count the autoscaler suggests using the scale target.

Once this request has completed, the usual `cooldown` period will apply, and the app replica count will scale back down to the baseline of **1 replica**.
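A sketch of the configuration this walkthrough assumes, using only parameter names that appear in this doc (the baseline of one replica suggests `min_replicas = 1`):

```toml
[cerebrium.scaling]
min_replicas = 1                            # baseline the app scales back down to
scale_buffer = 3                            # static headroom added to the autoscaler's suggestion
scaling_metric = "concurrency_utilization"  # metric driving scale-out decisions
scaling_target = 100                        # target utilization percentage
```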
## Evaluation Interval

<Warning>Requires CLI version 2.1.5 or higher.</Warning>

The `evaluation_interval` parameter controls the time window (in seconds) over which the autoscaler evaluates metrics before making scaling decisions. The default is 30 seconds, with a valid range of 6-300 seconds.

```toml
[cerebrium.scaling]
evaluation_interval = 30  # Evaluate metrics over 30-second windows
```
A shorter interval makes the autoscaler more responsive to traffic spikes but may cause more frequent scaling events. A longer interval smooths out transient spikes but may delay scaling responses.

<Info>
For bursty workloads, a shorter `evaluation_interval` (e.g., 10-15 seconds)
helps the system respond quickly to demand. For steady workloads, a longer
interval provides more stable scaling behavior.
</Info>
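For example, a bursty deployment might shorten the window per the note above (the exact value here is illustrative):

```toml
[cerebrium.scaling]
evaluation_interval = 10  # shorter window: faster reaction to spikes, more frequent scaling events
```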
## Load Balancing

<Warning>Requires CLI version 2.1.5 or higher.</Warning>
The `load_balancing` parameter controls how incoming requests are distributed across your replicas. When not specified, the system automatically selects the best algorithm based on your `replica_concurrency` setting.

```toml
[cerebrium.scaling]
load_balancing = "min-connections"  # Explicitly set load balancing algorithm
```
**Default behavior**: When `load_balancing` is not set, the system uses `first-available` for `replica_concurrency <= 3` (typical for GPU workloads) and `round-robin` for higher concurrency.
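As a sketch, leaving `load_balancing` unset lets the concurrency setting drive the choice (values here are illustrative):

```toml
[cerebrium.scaling]
replica_concurrency = 1  # <= 3, so the default algorithm is "first-available"
# load_balancing = ""    # leave unset to auto-select; set explicitly to override
```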
### Available Algorithms
#### round-robin
Cycles through replicas starting from the last successful target. Each replica's concurrency limit is respected - if a replica is at capacity, the algorithm proceeds to the next one in rotation.
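To force this algorithm regardless of concurrency, you could set it explicitly (a sketch; the option name comes from the parameter table below):

```toml
[cerebrium.scaling]
load_balancing = "round-robin"  # cycle through replicas, respecting each one's concurrency limit
```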
#### first-available

Scans replicas in list order and routes each request to the first replica that can accept it.

| Characteristic | Detail |
| --- | --- |
| Selection complexity | O(1) typical, O(N) worst case |
| Latency profile | Optimal p50 when load is light, may degrade p90 under high load |
| Strategy | Linear scan from list start; returns first replica that accepts via Reserve() |
**Best for**: GPU workloads with low concurrency (`replica_concurrency <= 3`). Maximizes utilization of warm replicas before spreading load, reducing cold starts and keeping models in VRAM.
**Tradeoff**: Earlier replicas in the list handle more traffic. This is desirable for GPU workloads but may cause uneven distribution for CPU workloads.
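If you want this fill-first behavior even at higher concurrency, you could opt in explicitly (a sketch; the `replica_concurrency` value is illustrative):

```toml
[cerebrium.scaling]
load_balancing = "first-available"  # fill earlier replicas before spreading load
replica_concurrency = 4             # above the <= 3 threshold where this would be auto-selected
```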
#### min-connections
Linear scan to find the replica with the fewest in-flight requests, then attempts to reserve it. If that replica cannot accept (at capacity), falls back to trying other replicas in iteration order.

| Characteristic | Detail |
| --- | --- |
| Selection complexity | Θ(N) - always scans all replicas to find minimum |
| Latency profile | Best p90/p99 tail latency |
| Strategy | Single pass to find minimum in-flight; fallback in iteration order |
**Best for**: Workloads with variable request times (e.g., LLM inference where output length varies). Routes new requests to the least busy replica, preventing fast requests from queuing behind slow ones.
#### random-choice-2
Implements the "Power of Two Choices" algorithm: randomly samples two replicas and routes to the one with lower weight (based on active request tracking). Ties are broken randomly.

| Characteristic | Detail |
| --- | --- |
| Selection complexity | Θ(1) - constant time regardless of replica count |
| Latency profile | Good balance of p50 and p90 |
| Strategy | Sample 2 random replicas, compare weights, pick lighter one |
**Best for**: High-throughput scenarios with many replicas where selection overhead matters. Research shows this achieves exponentially better load distribution than pure random selection.
**Note**: Uses weight-based tracking rather than reservation-based concurrency limiting, making it suitable for unlimited concurrency scenarios.
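A sketch of opting into this algorithm for a large fleet (the `max_replicas` value is illustrative):

```toml
[cerebrium.scaling]
load_balancing = "random-choice-2"  # Power of Two Choices; constant-time selection per request
max_replicas = 50                   # many replicas, where per-request scan cost starts to matter
```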

The scaling parameters covered above are summarized below:

| Parameter | Type | Default | CLI Version | Description |
| --- | --- | --- | --- | --- |
| response_grace_period | integer | 3600 | 2.1.2+ | Grace period in seconds |
| cooldown | integer | 1800 | 2.1.2+ | Time window (seconds) that must pass at reduced concurrency before scaling down. Helps avoid cold starts from brief traffic dips. |
| scaling_target | integer | 100 | 2.1.2+ | Target value for scaling metric (percentage for utilization metrics, absolute value for requests_per_second) |
| evaluation_interval | integer | 30 | 2.1.5+ | Time window in seconds over which metrics are evaluated before scaling decisions (6-300s) |
| load_balancing | string | "" | 2.1.5+ | Algorithm for distributing traffic across replicas. Default: round-robin if replica_concurrency > 3, first-available otherwise. Options: round-robin, first-available, min-connections, random-choice-2 |
| roll_out_duration_seconds | integer | 0 | 2.1.2+ | Gradually send traffic to new revision after successful build. Max 600s. Keep at 0 during development. |
<Warning>
Setting min_replicas > 0 maintains warm instances for immediate response but
increases costs, since those instances keep running even when idle.
</Warning>

```toml
response_grace_period = 3600
cooldown = 1800
scaling_metric = "concurrency_utilization"
scaling_target = 100
evaluation_interval = 30
# load_balancing = "" # Auto-selects based on replica_concurrency
```