issues/disruption-probe.md
+2 lines changed: 2 additions & 0 deletions
@@ -11,6 +11,8 @@ There are several examples where application owners had to build workarounds for
We are running a custom-built database that serves real-time data. On pod startup, it is assigned a shard and synchronizes data with its siblings in the background. It can be configured to serve traffic once it has reached a certain amount of data while continuing to sync the rest of the data in the background.
This means that the cluster is in a state where we need to serve traffic for stability reasons, but can't afford to lose another pod of the same shard during that time.

+Readiness probes could be used as a workaround by increasing the shard replica count, but the replica count is tightly coupled to the application's total cost, which increases significantly when running many small clusters.
+
### Example 2)

The Elasticsearch operator has a similar problem. Elasticsearch clusters can be in different [health states (green / yellow / red)](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-health.html). If the cluster health is not green, the cluster may still be ready to serve traffic, but the system shouldn't disrupt any of its pods.
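
To make the distinction between "ready" and "safe to disrupt" concrete, here is a minimal Go sketch of the kind of check such an application could expose. It is not the Elasticsearch operator's actual implementation: it queries the real `_cluster/health` API, but the `/disruptz` endpoint, the port, and the `localhost:9200` address are assumptions made purely for illustration.

```go
// A minimal sketch, not the Elasticsearch operator's actual code: a sidecar-style
// HTTP endpoint that a hypothetical disruption probe could call. It queries the
// real /_cluster/health API and reports the pod as safe to disrupt only when the
// cluster is green, while an ordinary readiness probe can keep reporting "ready".
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
	"time"
)

// clusterHealth mirrors the one field we need from GET /_cluster/health.
type clusterHealth struct {
	Status string `json:"status"` // "green", "yellow", or "red"
}

// safeToDisrupt returns true only when the cluster reports green health.
func safeToDisrupt(esURL string) (bool, error) {
	client := &http.Client{Timeout: 2 * time.Second}
	resp, err := client.Get(esURL + "/_cluster/health")
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()

	var h clusterHealth
	if err := json.NewDecoder(resp.Body).Decode(&h); err != nil {
		return false, err
	}
	return h.Status == "green", nil
}

func main() {
	// The local Elasticsearch address and the /disruptz path are assumptions
	// made for this illustration only.
	const esURL = "http://localhost:9200"

	http.HandleFunc("/disruptz", func(w http.ResponseWriter, r *http.Request) {
		ok, err := safeToDisrupt(esURL)
		if err != nil || !ok {
			// Not green (or health unknown): the pod may still serve traffic,
			// but a voluntary disruption would be unsafe right now.
			http.Error(w, "cluster health is not green", http.StatusServiceUnavailable)
			return
		}
		fmt.Fprintln(w, "ok")
	})

	log.Fatal(http.ListenAndServe(":8080", nil))
}
```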