Lift the etcd limit from 8GiB to 100GiB #1071
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: ronaldngounou.
Force-pushed ba37b27 to c93d626
Force-pushed c93d626 to 49407a6
Lint issues fixed.
If you're doing this refactoring, I'd like to make it clear to users that the 100GB is a recommended maximum size, and not a hard limit. This would mean different text in a couple of places. I don't know what the actual hard limit is; probably need to look at the boltDB code.

Could you please suggest a wording that we should have in the meantime?
## Memory

Before: etcd has a relatively small memory footprint but its performance still depends on having enough memory. An etcd server will aggressively cache key-value data and spends most of the rest of its memory tracking watchers. Typically 8GB is enough. For heavy deployments with thousands of watchers and millions of keys, allocate 16GB to 64GB memory accordingly.

After: etcd has a relatively small memory footprint but its performance still depends on having enough memory. An etcd server will aggressively cache key-value data and spends most of the rest of its memory tracking watchers. Typically 100GB is enough. For heavy deployments with thousands of watchers and millions of keys, allocate 16GB to 64GB memory accordingly.
Suggested change: etcd has a relatively small memory footprint but its performance still depends on having enough memory. An etcd server will aggressively cache key-value data and spends most of the rest of its memory tracking watchers. Typically 8GB is enough. For heavy deployments with thousands of watchers and millions of keys, allocate 16GB to 64GB memory accordingly. 100GB is a suggested maximum size for normal environments and etcd warns at startup if the configured value exceeds it.
Within the context of this doc, "etcd has a relatively small memory ...... Typically 8GB is enough.... 100GB is a suggested maximum size for normal environments and etcd warns at startup if the configured value exceeds it" makes more sense to me.
Do we actually have a warning at 100GB? I don't have a machine I can test that on.
Reverted this change
Let's make this a limit, not a recommendation:
Suggested change: etcd has a relatively small memory footprint but its performance still depends on having enough memory. An etcd server will aggressively cache key-value data and spends most of the rest of its memory tracking watchers. Typically 8GB is enough. For heavy deployments with thousands of watchers and millions of keys, allocate 16GB to 64GB memory accordingly, up to a recommended maximum of 100GB.
Addressed
May I ask whether, after storing 50GB of data, etcd affects the cluster during compaction and defragmentation? And how long do large-scale insert/query operations take after the above operations complete?
Force-pushed 49407a6 to 8f3c651
When etcd stores large amounts of data (like 50GB, which is quite large for etcd), several things happen:

Cluster impact: the most critical impact is on write performance. If etcd is struggling, the entire Kubernetes control plane slows down, because every resource change goes through etcd.

Compaction and defragmentation:

Timing for large operations:

If you're consistently storing 50GB in etcd, that's a red flag: you might need to rethink what you're storing there. Consider whether you're inadvertently storing large ConfigMaps/Secrets or have resource leaks.
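For reference, compaction and defragmentation can be driven from the Go client (or equivalently via etcdctl). The sketch below is illustrative only and not part of this PR; the endpoint is a placeholder, and it assumes a single reachable member.

```go
// Minimal sketch: inspect the backend size, compact the key-value history up
// to the current revision, then defragment the member to reclaim the freed
// space. The endpoint below is a placeholder.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	endpoint := "127.0.0.1:2379" // placeholder endpoint

	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{endpoint},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
	defer cancel()

	// Report the current backend database size and revision for this member.
	status, err := cli.Status(ctx, endpoint)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("db size: %d bytes, revision: %d\n", status.DbSize, status.Header.Revision)

	// Compact history up to the current revision, then defragment so the
	// backend file shrinks and the freed space is returned to the filesystem.
	if _, err := cli.Compact(ctx, status.Header.Revision); err != nil {
		log.Fatal(err)
	}
	if _, err := cli.Defragment(ctx, endpoint); err != nil {
		log.Fatal(err)
	}
	fmt.Println("compaction and defragmentation requested")
}
```

Note that defragmentation blocks the targeted member while it rewrites its backend file, so on a multi-member cluster it is usually run one member at a time.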
Force-pushed 8f3c651 to 1faefe1
Per the performance optimization work linked below, the etcd storage size limit has been re-evaluated to 100GB instead of 8GB.
https://www.cncf.io/blog/2019/05/09/performance-optimization-of-etcd-in-web-scale-data-scenario/
Contributes to issue #588
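The size discussed in this PR is governed by the storage quota rather than by memory. As a hedged illustration (not part of the doc change), the sketch below sets QuotaBackendBytes on an embedded etcd server; the same limit is exposed on the etcd binary as the --quota-backend-bytes flag. The data directory and the 100GB value are placeholders chosen to match the number discussed here.

```go
// Minimal sketch: raising the backend storage quota on an embedded etcd
// server. Paths and sizes are placeholders.
package main

import (
	"log"

	"go.etcd.io/etcd/server/v3/embed"
)

func main() {
	cfg := embed.NewConfig()
	cfg.Dir = "/var/lib/etcd" // placeholder data directory

	// Raise the backend quota from the 2GB default; 100GB shown purely as an
	// illustration of the recommended maximum discussed in this PR.
	cfg.QuotaBackendBytes = 100 * 1024 * 1024 * 1024

	e, err := embed.StartEtcd(cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer e.Close()

	<-e.Server.ReadyNotify()
	log.Println("etcd server is ready")
}
```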