Describe the bug
After upgrading from ArgoCD v3.1.4 to v3.4.1, applications using Kustomize overlays that reference remote git repositories enter a stuck "refreshing" state in the UI and cannot display their resource tree. The application-controller continuously panics with a nil pointer dereference in GetSyncedRefSources (util/argo/argo.go:528) whenever it processes the refresh queue for affected applications.

The application's reconciliation completes successfully, but the refresh queue processor crashes immediately afterwards. This creates an infinite loop that prevents the UI from loading the resource tree; the ArgoCD server times out after ~53 seconds with the error "application refresh deadline exceeded".
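For illustration, here is a minimal, self-contained Go sketch of the class of failure we suspect. The types and field names are hypothetical stand-ins, not the actual argo-cd definitions: the idea is that a per-source entry a pre-3.4 status object never populated gets dereferenced without a nil check.

package main

import "fmt"

// Hypothetical, simplified stand-ins for the Application source and
// per-source sync status types; the real argo-cd structs differ.
type Source struct{ Ref string }
type SyncedSource struct{ Revision string }

func main() {
	sources := []Source{{Ref: "gitops"}}
	// Assumption: a status written by v3.1.x carries no per-source entry,
	// so the lookup below yields a nil pointer after the upgrade.
	synced := []*SyncedSource{nil}

	refSources := map[string]SyncedSource{}
	for i, src := range sources {
		if src.Ref == "" {
			continue
		}
		// Dereferencing the nil entry reproduces the class of panic seen in
		// the controller logs: "invalid memory address or nil pointer dereference".
		refSources[src.Ref] = *synced[i]
	}
	fmt.Println(refSources)
}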
To Reproduce
- Upgrade ArgoCD from v3.1.4 to v3.4.1
- Have an application with the following structure:
  - Uses a Kustomize overlay in one repo (gitops-repo)
  - The kustomization references resources from another remote GitHub repo
  - Has argocd-image-updater annotations enabled
- Application YAML example:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: example-app
  namespace: argocd-dev
  annotations:
    argocd-image-updater.argoproj.io/git-repository: https://github.com/myorg/gitops-repo.git
    argocd-image-updater.argoproj.io/git-branch: main
    argocd-image-updater.argoproj.io/write-back-method: git
    argocd-image-updater.argoproj.io/write-back-target: kustomization:/dev/overlays/example-app
    argocd-image-updater.argoproj.io/image-list: example-app=123456789012.dkr.ecr.us-east-1.amazonaws.com/example-app
    argocd-image-updater.argoproj.io/example-app.update-strategy: latest
    argocd-image-updater.argoproj.io/example-app.force-update: "true"
spec:
  project: dev
  destination:
    name: target-cluster
    namespace: example-app-prod
  sources:
    - repoURL: https://github.com/myorg/gitops-repo.git
      targetRevision: HEAD
      ref: gitops
      path: dev/overlays/example-app
      kustomize: {}
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
      allowEmpty: false
  ignoreDifferences:
    - group: apps
      kind: Deployment
      name: example-app
      jsonPointers:
        - /spec/replicas
- Kustomization overlay at dev/overlays/example-app/kustomization.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - https://github.com/myorg/example-app//infra/kubernetes
images:
  - name: 123456789012.dkr.ecr.us-east-1.amazonaws.com/example-app
    newTag: sha-abc1234
- Attempt to view the application in the ArgoCD UI
- The UI shows a perpetual "refreshing" spinner and cannot display pods/replicasets
- Check application-controller logs for continuous panic messages
The issue appears when:
- Kustomize overlay in gitops repo references remote Kubernetes manifests from another repo
- Application has automated sync enabled
- argocd-image-updater is configured to update the kustomization.yaml
Expected behavior
Applications should refresh normally after ArgoCD upgrade, and the UI should display the resource tree without timing out. The application-controller should not panic when processing the refresh queue for applications using Kustomize overlays that reference remote repositories.
Screenshots
N/A - UI shows spinning loader indefinitely
Version
argocd: v3.4.1+c6e2b6b.dirty
BuildDate: 2025-05-02T18:48:39Z
GitCommit: c6e2b6b89fc8edb45a7c658b5b5f59aa2bade86e
GitTreeState: dirty
GoVersion: go1.24.1
Compiler: gc
Platform: linux/amd64
argocd-server: v3.4.1+c6e2b6b.dirty
BuildDate: 2025-05-02T18:48:39Z
GitCommit: c6e2b6b89fc8edb45a7c658b5b5f59aa2bade86e
GitTreeState: dirty
GoVersion: go1.24.1
Compiler: gc
Platform: linux/amd64
Kustomize Version: v5.7.0 2025-03-04T19:36:59Z
Helm Version: v3.17.1+g76bbd2a
Kubectl Version: v0.35.3
Jsonnet Version: v0.21.0
Logs
Application-controller logs showing the recurring panic:
time="2026-05-08T18:20:48Z" level=info msg="Reconciliation completed" app-namespace=argocd-dev application=example-app comparison-level=3 comparison_with_nothing_ms=0 dest-name=target-cluster dest-namespace=example-app-prod dest-server= patch_ms=0 project=dev
refresh_app_conditions_ms=0 setop_ms=0 time_ms=15077
time="2026-05-08T18:20:48Z" level=error msg="Recovered from panic: runtime error: invalid memory address or nil pointer dereference\ngoroutine 307 [running]:\nruntime/debug.Stack()\n\t/usr/local/go/src/runtime/debug/stack.go:26
+0x64\ngithub.com/argoproj/argo-cd/v3/controller.(*ApplicationController).processAppRefreshQueueItem.func1()\n\t/go/src/github.com/argoproj/argo-cd/controller/appcontroller.go:1704 +0x7c\npanic({0x3bd0e00?, 0x8ae3750?})\n\t/usr/local/go/src/runtime/panic.go:860
+0x12c\ngithub.com/argoproj/argo-cd/v3/util/argo.GetSyncedRefSources(...)\n\t/go/src/github.com/argoproj/argo-cd/util/argo/argo.go:528\ngithub.com/argoproj/argo-cd/v3/controller.(*appStateManager).GetRepoObjs(0x1a3ad87392c0, {0x56002a0, 0x8f2bcc0}, 0x1a3ad90ed408, {0x1a3adb3d1300, 0x1,
0x1}, {0x1a3ad8bad200, 0x1b}, {0x1a3adf4acd00, ...}, ...)\n\t/go/src/github.com/argoproj/argo-cd/controller/state.go:234 +0x149c\ngithub.com/argoproj/argo-cd/v3/controller.(*appStateManager).CompareAppState(0x1a3ad87392c0, 0x1a3ad90ed408, 0x1a3ae23e5448, {0x1a3adf4acd00, 0x1, 0x1},
{0x1a3adb3d1300, 0x1, 0x1}, 0x1, ...)\n\t/go/src/github.com/argoproj/argo-cd/controller/state.go:636
+0x4834\ngithub.com/argoproj/argo-cd/v3/controller.(*ApplicationController).processAppRefreshQueueItem(0x1a3ad9106400)\n\t/go/src/github.com/argoproj/argo-cd/controller/appcontroller.go:1841 +0x1484\n..." appkey=argocd-dev/example-app
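The "Recovered from panic" prefix in the trace suggests the controller runs each refresh-queue item under a deferred recover, which would explain why the process stays up while the same item panics again on every pass (and why restarting the pods does not help). A minimal sketch of that pattern, as an assumption about the control flow rather than the actual appcontroller.go code:

package main

import (
	"log"
	"runtime/debug"
)

// processItem runs one queue item under a deferred recover, logging the
// panic and stack instead of crashing the process; the item is simply
// retried and panics again on the next pass.
func processItem(process func()) {
	defer func() {
		if r := recover(); r != nil {
			log.Printf("Recovered from panic: %v\n%s", r, debug.Stack())
		}
	}()
	process()
}

func main() {
	for i := 0; i < 3; i++ { // the real controller requeues indefinitely
		processItem(func() {
			var refSource *string
			_ = *refSource // nil pointer dereference, as in the report
		})
	}
}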
ArgoCD server logs showing timeout:
time="2026-05-08T18:11:01Z" level=error msg="finished call" grpc.code=Unknown grpc.component=server grpc.error="error getting cached app resource tree: error getting application by query: application refresh deadline exceeded" grpc.method=ResourceTree grpc.method_type=unary
grpc.service=application.ApplicationService grpc.start_time="2026-05-08T18:10:08Z" grpc.time_ms=52777.023 peer.address="127.0.0.1:46858" protocol=grpc
Additional Context
- The applications function correctly (pods are running, synced, healthy)
- Only the UI display and refresh mechanism are affected
- Restarting application-controller pods does not resolve the issue
- Disabling automated sync does not stop the panics
- The issue appears to be related to how v3.4.1 handles application state with Kustomize overlays that reference remote repositories during/after upgrades from v3.1.4
- Image: quay.io/argoproj/argocd:v3.4.1
- Kubernetes: EKS
- Multiple applications are affected with similar configuration patterns (Kustomize overlay pointing to remote repo)