
[shell-operator] chore: reduce memory consumption#871

Open
ldmonster wants to merge 1 commit into main from chore/reduce-memory-consumption

Conversation

@ldmonster
Collaborator

Summary

This PR targets several high-frequency allocation patterns across the hook execution and event processing pipeline. Together these changes reduce GC pressure and peak memory usage, especially for hooks with large snapshots or high-churn informers.

Changes

1. Serialization-free deep copy for jq filter results (pkg/filter/jq/apply.go)

Replaced the json.Marshal / json.Unmarshal round-trip in deepCopyAny with a recursive type-switch walker. The old approach serialized every filter result to JSON and back just to get a copy — the new implementation copies maps, slices, and primitives directly without any serialization. This also preserves original numeric types (e.g. int stays int) instead of coercing everything to float64.
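
For reference, a minimal sketch of the walker; the shipped `deepCopyAny` may handle more cases, but the shape of the recursion is the point:

```go
package jq

// deepCopyAny (sketch): copy maps and slices recursively, return everything
// else as-is. Strings, bools, numbers, and nil are immutable values in Go,
// so passing them through keeps the original numeric type (int stays int).
func deepCopyAny(v any) any {
	switch val := v.(type) {
	case map[string]any:
		dst := make(map[string]any, len(val))
		for k, item := range val {
			dst[k] = deepCopyAny(item)
		}
		return dst
	case []any:
		dst := make([]any, len(val))
		for i, item := range val {
			dst[i] = deepCopyAny(item)
		}
		return dst
	default:
		return val
	}
}
```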

2. Streaming JSON for binding context files (pkg/hook/hook.go, pkg/hook/binding_context/binding_context.go)

Added WriteJson(io.Writer) to BindingContextList and rewired prepareBindingContextJsonFile to stream JSON directly to the file via json.NewEncoder. Previously the entire binding context was marshaled into a []byte (often multi-MB for synchronization events with many objects) and then written in a second step. The streaming approach cuts peak memory by eliminating this intermediate buffer. Also switched Json() from MarshalIndent to Marshal — the indentation was never needed by hook scripts and added ~30% overhead in both CPU and memory.
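
A rough sketch of the streaming path; the real `BindingContextList` type and the `prepareBindingContextJsonFile` signature are simplified here for illustration:

```go
package bindingcontext // real package name may differ

import (
	"encoding/json"
	"io"
	"os"
)

// BindingContextList stands in for the real type; only the streaming
// pattern matters for this sketch.
type BindingContextList []map[string]any

// WriteJson encodes the list straight to w. json.NewEncoder writes through
// the io.Writer, so the whole document is never held in an intermediate []byte.
func (bcl BindingContextList) WriteJson(w io.Writer) error {
	return json.NewEncoder(w).Encode(bcl)
}

// prepareBindingContextJsonFile now streams to the file (signature simplified).
func prepareBindingContextJsonFile(path string, bcl BindingContextList) error {
	f, err := os.Create(path)
	if err != nil {
		return err
	}
	defer f.Close()
	return bcl.WriteJson(f)
}
```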

3. Pooled MD5 hashers for checksum computation (pkg/utils/checksum/checksum.go)

Introduced a sync.Pool for the hash.Hash instances created by md5.New() and used by CalculateChecksum. Every watch event computes at least one checksum for the filter result, so the pool eliminates one md5.New() allocation per event. Also added CalculateChecksumOfBytes([]byte) to avoid the []byte → string → []byte round-trip that occurred when the checksum was computed from filter output bytes.
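
Sketch of the pooled path, assuming hex-encoded output; details of the shipped implementation may differ:

```go
package checksum

import (
	"crypto/md5"
	"encoding/hex"
	"hash"
	"sync"
)

// md5Pool hands out reusable hashers so hot paths don't call md5.New() per event.
var md5Pool = sync.Pool{
	New: func() any { return md5.New() },
}

// CalculateChecksumOfBytes hashes data without converting it to a string first.
func CalculateChecksumOfBytes(data []byte) string {
	h := md5Pool.Get().(hash.Hash)
	defer func() {
		h.Reset() // hand a clean hasher back to the pool
		md5Pool.Put(h)
	}()
	h.Write(data)
	return hex.EncodeToString(h.Sum(nil))
}
```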

4. Slice reuse in proxy logger (pkg/executor/executor.go)

Changed pl.buf = []byte{} to pl.buf = pl.buf[:0] to reuse the existing backing array instead of allocating a new slice on every log line.
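
The slice semantics in one picture (type trimmed to the relevant field; `resetBuf` is a stand-in name):

```go
package executor

// proxyLogger is reduced to the one field that matters for this sketch.
type proxyLogger struct {
	buf []byte
}

func (pl *proxyLogger) resetBuf() {
	// Old: pl.buf = []byte{} dropped the backing array, so the next append reallocated.
	// New: length 0 with capacity preserved; subsequent appends reuse the same array.
	pl.buf = pl.buf[:0]
}
```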

5. Checksum byte-path in filter (pkg/kube_events_manager/filter.go)

Switched all three CalculateChecksum(string(filteredBytes)) call sites to CalculateChecksumOfBytes(filteredBytes), eliminating a needless copy of the filter output on every watch event.
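
Framed as a hypothetical wrapper just to show the before/after; the actual change is a one-line swap at each call site:

```go
package kubeeventsmanager // illustrative; the real call sites are in pkg/kube_events_manager/filter.go

import (
	"github.com/flant/shell-operator/pkg/utils/checksum" // import path assumed from the repo layout
)

// filterChecksum does not exist in the code; it only frames the swap.
func filterChecksum(filteredBytes []byte) string {
	// Before: string(filteredBytes) allocated and copied the whole filter output.
	//   return checksum.CalculateChecksum(string(filteredBytes))
	// After: hash the bytes directly.
	return checksum.CalculateChecksumOfBytes(filteredBytes)
}
```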

Signed-off-by: Pavel Okhlopkov <pavel.okhlopkov@flant.com>
@ldmonster ldmonster self-assigned this Apr 19, 2026
@ldmonster ldmonster added the enhancement New feature or request label Apr 19, 2026
@ldmonster ldmonster changed the title [chore] reduce memory consumption [shell-operator] chore: reduce memory consumption Apr 19, 2026