Conversation
Previously, when initializing the state of a binding, the CDK directly referenced the binding's `initial_state`, so any modifications made to the binding's `state` were also reflected in `initial_state`. While this worked when every binding had its own distinct `initial_state`, it caused unintended mutations whenever the same `initial_state` was shared by multiple bindings. Instead, the CDK now creates a deep copy of the `initial_state` when initializing a binding's `state`. This prevents mutations to `state` from affecting `initial_state`.
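The difference can be sketched with a minimal `Binding` class; the class and its fields are illustrative assumptions, not the actual CDK API:

```python
import copy

# Hypothetical minimal binding; names are illustrative, not the real CDK API.
class Binding:
    def __init__(self, initial_state: dict, deep_copy: bool = True):
        # Deep copying isolates the binding's mutable state from the
        # shared initial_state it was seeded from.
        self.state = copy.deepcopy(initial_state) if deep_copy else initial_state

# Old behavior: both the binding and the caller reference the same nested dict.
shared = {"cursor": {"offset": 0}}
a = Binding(shared, deep_copy=False)
a.state["cursor"]["offset"] = 42
assert shared["cursor"]["offset"] == 42  # unintended mutation of initial_state

# New behavior: mutations stay local to the binding.
shared = {"cursor": {"offset": 0}}
b = Binding(shared)
b.state["cursor"]["offset"] = 42
assert shared["cursor"]["offset"] == 0  # initial_state is untouched
```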
When a user's Stripe account has several thousand connected accounts, memory usage limits how many accounts we can concurrently capture from (see 85aab13). This means any memory efficiency improvements can improve connector performance and allow us to increase the number of concurrent workers/subtasks allowed in the `PriorityQueueConfig`. This commit addresses some low-hanging fruit with a notable impact on the connector's memory usage. Instead of each binding creating its own `initial_state` and `all_account_ids` list, these can be created once and passed into the resource creation functions. Testing locally with ~7,000 connected accounts, I've observed lower memory usage when all bindings are backfilling (~80% down to ~65%). I also want to see what memory usage looks like when all bindings are backfilled and we're rapidly cycling through accounts to catch them up incrementally. I'm not confident my attempts to replicate that locally adequately reflect actual production captures, so I'd like to merge these memory improvements, observe memory usage in production, then modify settings in `priority_capture.py` to take advantage of the freshly gained memory efficiency.
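The share-once pattern described above can be sketched as follows; the function and parameter names are assumptions for illustration, not the actual `source-stripe-native` API:

```python
# Illustrative sketch: build the large shared objects once, then pass them
# into each resource's creation function instead of rebuilding per binding.
def build_initial_state(account_ids: list[str]) -> dict:
    # One state object shared (read-only) across all bindings.
    return {account_id: {"backfilled": False} for account_id in account_ids}

def make_resource(name: str, initial_state: dict, all_account_ids: list[str]) -> dict:
    # Each resource reuses the shared objects rather than allocating its own.
    return {"name": name, "initial_state": initial_state, "account_ids": all_account_ids}

all_account_ids = [f"acct_{i}" for i in range(3)]
shared_state = build_initial_state(all_account_ids)

resources = [make_resource(n, shared_state, all_account_ids) for n in ("charges", "refunds")]

# Both resources point at the same objects; no per-binding duplication.
assert resources[0]["initial_state"] is resources[1]["initial_state"]
```

Sharing one `initial_state` across bindings is only safe because the CDK change above deep copies it when seeding each binding's `state`.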
A production capture with thousands of connected accounts is currently at ~55% memory usage. It has backfilled all of those accounts and is rapidly cycling through accounts to get them caught up incrementally. After this PR is merged, I'll leave another comment with that capture's reduced memory usage.
Following up after merging this PR, that earlier capture is now at ~33% memory usage. |
Description:
This PR's scope includes:
- Deep copying `initial_state` when initializing a binding's `state` in `estuary-cdk`. This prevents unintended mutations to `initial_state` via `state`.
- Creating `initial_state` and `all_account_ids` once in `source-stripe-native`. Instead of generating these potentially large objects for each binding and holding them in memory, it makes sense to only generate them once since they're calculated to the same value.

See individual commits for more details.
The memory unlocked by these improvements can be used to concurrently capture from more connected accounts. But before changing those settings, I'd like to observe and note how much these improvements affect actual production captures' memory usage. After noting how much these changes gain us, I'll put up a separate PR that tweaks settings in `priority_capture.py` to concurrently capture from more accounts.

Workflow steps:
(How does one use this feature, and how has it changed)
Documentation links affected:
(list any documentation links that you created, or existing ones that you've identified as needing updates, along with a brief description)
Notes for reviewers:
Tested on a local stack. With ~7,000 connected accounts all still needing to be backfilled, the connector's memory usage was ~65% (down from 80%).