Memory leak from weave-gitops-mccp-cluster-service deployment in v0.28.0 #3762

Open
@naman8827

Description

Describe the bug
We're experiencing a critical memory leak with the weave-gitops-mccp-cluster-service deployment in our production environment. The deployment, which manages CAPI clusters, exhibits abnormal memory growth from 2 GB to 15 GB over approximately two days. When memory usage reaches the node's capacity, the node transitions to a NotReady state, causing all pods scheduled on that node to crash.
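
For reference, the growth is easy to track per pod with metrics-server. The namespace and label selector below are assumptions and should be adjusted to the actual installation:

```shell
# Sample the cluster-service pod's memory every 5 minutes (requires metrics-server).
# Namespace "flux-system" and the label selector are assumptions; adjust to your install.
while true; do
  date
  kubectl top pod -n flux-system \
    -l app.kubernetes.io/name=cluster-service --no-headers
  sleep 300
done
```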

To Reproduce
Install weave-gitops-enterprise using Helm chart version v0.28.0 on a cluster running Kubernetes 1.29.

Actual behaviour
Memory consumption progressively increases from 2 GB to 15 GB within a 48-hour period, eventually causing node failure and pod crashes.
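
Since cluster-service is a Go binary, a heap profile would show where the memory is being retained. This is only a sketch: it assumes the pod exposes Go's net/http/pprof endpoints, which may not be the case in v0.28.0, and the deployment name and port are placeholders:

```shell
# Port-forward to the cluster-service pod (deployment name and port 8000 are assumptions).
kubectl -n flux-system port-forward deploy/weave-gitops-mccp-cluster-service 8000:8000 &

# Capture a heap profile, assuming net/http/pprof is registered on that port.
go tool pprof -top http://localhost:8000/debug/pprof/heap

# Diffing two profiles taken hours apart highlights the growing allocation sites:
#   go tool pprof -top -base heap1.pb.gz heap2.pb.gz
```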

Expected behaviour
Pods should maintain stable memory usage and operate healthily without memory leaks.

Additional context
Is this a known issue in v0.28.0 that is fixed in a newer release? Alternatively, would setting a memory limit in the Helm chart help contain the node failures, even at the cost of periodic OOM-kill pod restarts? (A sketch of such an override follows the version info below.)

  • Weave GitOps version: v0.28.0
  • Kubernetes version: 1.29
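
As a stopgap while the leak is investigated, something like the following values could cap the container so an OOM kill restarts only this pod instead of taking down the node. The values key path, release name, namespace, and chart reference are guesses and must be checked against the actual weave-gitops-enterprise values.yaml:

```shell
# Hypothetical values override; verify the key path against the chart's values.yaml.
cat > cluster-service-limits.yaml <<'EOF'
cluster-service:
  resources:
    requests:
      memory: 2Gi
    limits:
      memory: 6Gi
EOF

# Release name, repo alias, and namespace are assumptions; adjust to your install.
helm upgrade weave-gitops-enterprise mccp/mccp \
  --version 0.28.0 -n flux-system -f cluster-service-limits.yaml
```

With a limit in place, the kubelet OOM-kills only this container when it exceeds the limit, so the blast radius is one pod restart rather than a NotReady node.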
