kubeappsapi flux plugin fails to register helmrepositories #7852
Description
Describe the bug
When using the helm plugin, the UI loads the kubeapps config.
When using the flux plugin, the UI cannot load the kubeapps config.
The kubeapps kubeappsapis flux plugin reports:
Error: failed to initialize plugins server: failed to register plugins: plug-in "name:\"fluxv2.packages\" version:\"v1alpha1\"" failed to register due to: CRD [source.toolkit.fluxcd.io/v1beta2, Resource=helmrepositories] is not valid
i.e. the helmrepositories resource is reported as invalid.
To Reproduce
Steps to reproduce the behavior:
Setup:
Spin up your applications (in my use case, multiple applications with boot-order dependencies are required) using flux resources, and bootstrap the repos in the usual way:
flux create source git \
${application} \
--url=ssh://${application_repo} \
--branch=${branch} \
--secret-ref=chart-auth \
--cluster=${application_cluster} \
--timeout="30s" \
--namespace=${application_namespace} \
--export > ./flux/${application}/source.yaml
# only include --depends-on on one of the application charts
flux create kustomization ${application} \
--source=${application} \
--path=./flux/${application}/ \
--prune=false \
--interval=2m \
--namespace=${customer_name} \
--wait=true \
--depends-on=${another_application} \
--target-namespace=${application_namespace} \
--export > ./flux/${application}/${application}.yaml
flux create helmrelease ${application} \
--chart=./app-charts/${application} \
--reconcile-strategy=ChartVersion \
--target-namespace=${application_namespace} \
--source=GitRepository/${application} \
--namespace=${application_name} \
--values-from=Secret/${application}-${application_namespace} \
--export > ./flux/${application}/release.yaml
Push and bootstrap this. Remember that ${another_application} must have a healthcheck endpoint.
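Before installing Kubeapps, a quick sanity check that the sources, kustomizations and releases created above have reconciled can save debugging time; a sketch using the flux CLI (assumes it is installed and pointed at the cluster):

```shell
# List reconciliation status of the flux objects created above.
# `|| true` keeps each check non-fatal when run without cluster access.
flux get sources git --all-namespaces || true
flux get kustomizations --all-namespaces || true
flux get helmreleases --all-namespaces || true
```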
- Run:
kubectl create ns kubeapps
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install kubeapps --namespace=kubeapps bitnami/kubeapps
kubectl create --namespace kubeapps serviceaccount kubeapps-operator
kubectl create clusterrolebinding kubeapps-operator \
--clusterrole=cluster-admin \
--serviceaccount=kubeapps:kubeapps-operator
kubectl apply --namespace kubeapps -f ./kubeapps/kubeapps-operator-token.yaml
- Port forward to the UI:
kubectl port-forward -n kubeapps svc/kubeapps 8080:80
- Go to localhost:8080 in your browser and open the network tab in devtools.
- See the kubeapps config load in the UI (helm plugin enabled).
- Run:
helm uninstall --namespace kubeapps kubeapps
kubectl delete --namespace kubeapps serviceaccount kubeapps-operator
kubectl delete clusterrolebinding kubeapps-operator
kubectl delete ns kubeapps
- Run:
kubectl create ns kubeapps
helm install kubeapps --namespace=kubeapps bitnami/kubeapps \
--set packaging.helm.enabled=false,packaging.flux.enabled=true
# or: --values values.yaml (the YAML equivalent of the --set flags above)
kubectl create --namespace kubeapps serviceaccount kubeapps-operator
kubectl create clusterrolebinding kubeapps-operator \
--clusterrole=cluster-admin \
--serviceaccount=kubeapps:kubeapps-operator
kubectl apply --namespace kubeapps -f ./kubeapps/kubeapps-operator-token.yaml
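For reference, the values.yaml referred to in the comment above would hold the YAML equivalent of the --set flags used for the flux-enabled install (standard layout of the bitnami/kubeapps chart values):

```yaml
packaging:
  helm:
    enabled: false
  flux:
    enabled: true
```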
- Port forward to the UI:
kubectl port-forward -n kubeapps svc/kubeapps 8080:80
- Go to localhost:8080 in your browser and open the network tab in devtools.
- See the kubeapps config fail to load in the UI (flux plugin enabled).
Notes
Inspection of the configmaps:
- kubeapps-frontend-config
- kubeapps-internal-kubeappsapis-configmap
- kubeapps-clusters-config
- kubeapps-internal-dashboard-config
shows no diff between the two installs.
The Service resources are also identical.
Logs
kubectl logs -n kubeapps kubeapps-internal-kubeappsapis-6465c6f67-hx696
I0618 12:13:16.886280 1 root.go:36] "The component 'kubeapps-apis' has been configured with" serverOptions={"Port":50051,"PluginDirs":["/plugins/fluxv2-packages","/plugins/resources"],"ClustersConfigPath":"/config/clusters.conf","PluginConfigPath":"/config/kubeapps-apis/plugins.conf","PinnipedProxyURL":"http://kubeapps-internal-pinniped-proxy.kubeapps:3333","PinnipedProxyCACert":"","GlobalHelmReposNamespace":"kubeapps","UnsafeLocalDevKubeconfig":false,"QPS":50,"Burst":100}
I0618 12:13:17.264302 1 main.go:23] +fluxv2 RegisterWithGRPCServer
I0618 12:13:17.264349 1 server.go:71] +fluxv2 NewServer(kubeappsCluster: [default], pluginConfigPath: [/config/kubeapps-apis/plugins.conf]
I0618 12:13:17.356769 1 utils.go:194] Redis [PING]: PONG
I0618 12:13:17.357351 1 utils.go:208] "Redis [CONFIG GET maxmemory]" maxmemory="209715200"
I0618 12:13:17.357411 1 chart_cache.go:90] +NewChartCache(chartCache, Redis<kubeapps-redis-master.kubeapps.svc.cluster.local:6379 db:0>)
I0618 12:13:17.357709 1 chart_cache.go:194] +runWorker(chartCache-worker-0)
I0618 12:13:17.357745 1 chart_cache.go:207] +processNextWorkItem(chartCache-worker-0)
I0618 12:13:17.357842 1 chart_cache.go:194] +runWorker(chartCache-worker-1)
I0618 12:13:17.357862 1 chart_cache.go:207] +processNextWorkItem(chartCache-worker-1)
I0618 12:13:17.357927 1 server.go:85] +fluxv2 using custom config: [{{3 3 3} 300 none false}]
I0618 12:13:17.358358 1 watcher_cache.go:142] +NewNamespacedResourceWatcherCache(repoCache, source.toolkit.fluxcd.io/v1beta2, Resource=helmrepositories, Redis<kubeapps-redis-master.kubeapps.svc.cluster.local:6379 db:0>)
Error: failed to initialize plugins server: failed to register plugins: plug-in "name:\"fluxv2.packages\" version:\"v1alpha1\"" failed to register due to: CRD [source.toolkit.fluxcd.io/v1beta2, Resource=helmrepositories] is not valid
Usage:
kubeapps-apis [flags]
Flags:
--clusters-config-path string Configuration for clusters
--config string config file
--global-repos-namespace string Namespace of global repositories for the helm plugin (default "kubeapps")
-h, --help help for kubeapps-apis
--kube-api-burst int set Kubernetes API client Burst limit (default 15)
--kube-api-qps float32 set Kubernetes API client QPS limit (default 10)
--pinniped-proxy-ca-cert string Path to certificate authority to use with requests to pinniped-proxy service
--pinniped-proxy-url string internal url to be used for requests to clusters configured for credential proxying via pinniped (default "http://kubeapps-internal-pinniped-proxy.kubeapps:3333")
--plugin-config-path string Configuration for plugins
--plugin-dir strings A directory to be scanned for .so plugins. May be specified multiple times. (default [.])
--port int The port on which to run this api server. Both gRPC and HTTP requests will be served on this port. (default 50051)
--unsafe-local-dev-kubeconfig if true, it will use the local kubeconfig at the KUBECONFIG env var instead of using the inCluster configuration.
-v, --version version for kubeapps-apis
Error: failed to initialize plugins server: failed to register plugins: plug-in "name:\"fluxv2.packages\" version:\"v1alpha1\"" failed to register due to: CRD [source.toolkit.fluxcd.io/v1beta2, Resource=helmrepositories] is not valid
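One way to narrow the registration failure down is to compare the API version the plugin asks for (source.toolkit.fluxcd.io/v1beta2) with the versions the installed HelmRepository CRD actually serves; a version mismatch after a Flux upgrade is one possible, unconfirmed, cause. A rough sketch, assuming kubectl access to the affected cluster (list_crd_versions is a hypothetical helper doing a crude line-based parse, for illustration only):

```shell
# Print version names (v1, v1beta2, ...) from a CRD manifest read on stdin.
# Crude parse of `kubectl get crd ... -o yaml` list items like "- name: v1beta2";
# a robust check would use `-o jsonpath` or jq instead.
list_crd_versions() {
  awk '$1 == "-" && $2 == "name:" && $3 ~ /^v[0-9]/ { print $3 }'
}

# Compare the output with the v1beta2 version the flux plugin tries to register.
# Redirection and `|| true` keep this non-fatal when no cluster is reachable.
kubectl get crd helmrepositories.source.toolkit.fluxcd.io -o yaml 2>/dev/null \
  | list_crd_versions || true
```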
Expected behavior
- Using the flux plugin as documented does not break config loading in the UI.
- The Kubeapps flux plugin can register the valid flux resource helmrepositories, and reports genuinely invalid resources as errors.
Desktop (please complete the following information):
- Kubeapps Chart Version: kubeapps-15.1.1
- Kubeapps version: 2.10.0
kubectl version
Client Version: v1.29.1
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.29.5-eks-49c6de4
- Helm version: "v3.14.4", GitCommit:"81c902a123462fd4052bc5e9aa9c513c4c8fc142"
- Go Version: 1.21.9
Additional Information
The initial deployment issue also arose when using a bad helmrelease chart, even though that chart deploys fine using Helm alone.
After fixing the broken helmrelease chart, I tried with an app that has a long boot time:
kubectl logs -n flux-system helm-controller-5f7457c9dd-gmjrx | rg error | jq
{
"level": "error",
"ts": "2024-06-19T10:56:43.726Z",
"msg": "Reconciler error",
"controller": "helmrelease",
"controllerGroup": "helm.toolkit.fluxcd.io",
"controllerKind": "HelmRelease",
"HelmRelease": {
"name": "long-boot-app",
"namespace": "application"
},
"namespace": "application",
"name": "long-boot-app",
"reconcileID": "1bbd523e-5fb1-41c7-8c32-88d8c50c339b",
"error": "terminal error: exceeded maximum retries: cannot remediate failed release"
}
with the same kubeapps result.
Following a flux suspend and flux resume of the long-boot-time application's helmrelease, flux's helm controller reports the application as in sync. Deleting the kubeappsapis container results in the same error in the kubeappsapis logs, even though the app is running and is reported as healthy by both k8s and flux.
Amending the timeout and retry count of the long-boot-time application results in zero errors in the flux controllers, and the apps run; kubeappsapis still reports the same error and the UI does not load.
The issue seems to affect applications that have dependencies and are deployed in waves.
Expectation: errors in chart or release remediation should not break the Kubeapps UI; they should instead be reported as errors.