The release pipeline on the GitOps cluster repo (bootjob) has a few issues:
- It does not truly run in parallel; as the number of namespaces and installed charts grows, the pipeline duration grows dramatically.
- A single `kubectl apply` failure stops the entire pipeline before all of the valid k8s manifests have been applied.
- Failure notifications can be sent via Slack, but nobody is pinged directly (at least I can't get the direct-message settings to work).
- The Makefile that runs during the release pipeline is spaghetti and not very flexible.
We run ArgoCD to sync our non-Jenkins-X config repo to our preprod environment, and it is very intuitive and flexible. There is a proposal in the Kubernetes Slack #jenkins-x-dev channel to replace the above process with Argo CD.

The most basic POC could look something like:
- Install the ArgoCD helm chart via Terraform, with a lifecycle configuration to ignore all future changes. Normally we would manage helm releases via helmfile, but because we need to bootstrap the cluster and run a first ArgoCD sync, we can use the Terraform helm provider:
```hcl
resource "helm_release" "argocd_bootstrap" {
  chart            = "argo-cd"
  create_namespace = true
  namespace        = var.namespace
  name             = "argocd"
  version          = "5.5.7"
  repository       = "https://argoproj.github.io/argo-helm"

  values = [
    jsonencode({
      "controller" : {
        "serviceAccount" : {
          "annotations" : {
            "iam.gke.io/gcp-service-account" : "argocd-${var.cluster_name}@${var.gcp_project}.iam.gserviceaccount.com"
          }
        }
      },
      "repoServer" : {
        "autoscaling" : { "enabled" : true, "minReplicas" : 2 },
        "initContainers" : [
          {
            "name" : "download-tools",
            "image" : "ghcr.io/helmfile/helmfile:v0.147.0",
            "command" : ["sh", "-c"],
            "args" : [
              "wget -qO /custom-tools/argo-cd-helmfile.sh https://raw.githubusercontent.com/travisghansen/argo-cd-helmfile/master/src/argo-cd-helmfile.sh && chmod +x /custom-tools/argo-cd-helmfile.sh && mv /usr/local/bin/helmfile /custom-tools/helmfile"
            ],
            "volumeMounts" : [{ "mountPath" : "/custom-tools", "name" : "custom-tools" }]
          }
        ],
        "serviceAccount" : {
          "annotations" : {
            "iam.gke.io/gcp-service-account" : "argocd-${var.cluster_name}@${var.gcp_project}.iam.gserviceaccount.com"
          }
        },
        "volumes" : [{ "name" : "custom-tools", "emptyDir" : {} }],
        "volumeMounts" : [
          { "mountPath" : "/usr/local/bin/argo-cd-helmfile.sh", "name" : "custom-tools", "subPath" : "argo-cd-helmfile.sh" },
          { "mountPath" : "/usr/local/bin/helmfile", "name" : "custom-tools", "subPath" : "helmfile" }
        ]
      },
      "server" : {
        "autoscaling" : { "enabled" : true, "minReplicas" : 2 },
        "ingress" : {
          "enabled" : true,
          "annotations" : {
            "nginx.ingress.kubernetes.io/backend-protocol" : "HTTPS",
            "nginx.ingress.kubernetes.io/force-ssl-redirect" : "true",
            "nginx.ingress.kubernetes.io/ssl-passthrough" : "true"
          },
          "hosts" : ["argocd.${var.apex_domain}"]
        },
        "serviceAccount" : {
          "annotations" : {
            "iam.gke.io/gcp-service-account" : "argocd-${var.cluster_name}@${var.gcp_project}.iam.gserviceaccount.com"
          }
        }
      }
    })
  ]

  set {
    name  = "server.config.configManagementPlugins"
    value = <<-EOT
      - name: helmfile
        init: # Optional command to initialize application source directory
          command: ["argo-cd-helmfile.sh"]
          args: ["init"]
        generate: # Command to generate manifests YAML
          command: ["argo-cd-helmfile.sh"]
          args: ["generate"]
    EOT
  }

  set {
    name  = "configs.credentialTemplates.https-creds.url"
    value = regex("\\w+://\\w+\\.\\w+", var.jx_git_url)
  }

  set_sensitive {
    name  = "configs.credentialTemplates.https-creds.username"
    value = var.jx_bot_username
  }

  set_sensitive {
    name  = "configs.credentialTemplates.https-creds.password"
    value = var.jx_bot_token
  }

  dynamic "set" {
    for_each = var.helm_settings
    content {
      name  = set.key
      value = set.value
    }
  }

  lifecycle {
    ignore_changes = all
  }
}
```
- Use Terraform to configure ArgoCD to sync the config-root folder of the dev gitops repo to the dev GKE cluster. Maybe we can package this as a separate helm chart called `argo-cd-apps` or something:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: dev
spec:
  generators:
    - git:
        repoURL: https://github.com/{{.Values.jxRequirements.cluster.environmentGitOwner}}/{{.Values.jxRequirements.environments.0.repository}}
        revision: HEAD
        directories:
          - path: helmfiles/*
          # - path: config-root/customresourcedefinitions
          # - path: config-root/namespaces/*
  template:
    metadata:
      name: '{{path.basename}}'
    spec:
      project: default
      source:
        repoURL: https://github.com/{{.Values.jxRequirements.cluster.environmentGitOwner}}/{{.Values.jxRequirements.environments.0.repository}}
        targetRevision: HEAD
        path: '{{path}}'
        plugin:
          env:
            - name: HELMFILE_USE_CONTEXT_NAMESPACE
              value: "true"
            - name: HELM_TEMPLATE_OPTIONS
              value: --skip-tests
      destination:
        server: https://kubernetes.default.svc
        namespace: '{{path.basename}}'
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
        syncOptions:
          - CreateNamespace=true
```
- We should probably figure out whether we want to continue rendering Kubernetes templates and committing them back to the cluster repo at PR time, or whether we should just use Argo to sync directly from the helmfile. There's an example bot and a GitHub Action that post the output of `helmfile diff` or `argo diff` as a PR comment.
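For the diff-as-PR-comment idea, a minimal GitHub Actions sketch could look like the following. This is an illustration, not the referenced bot/action; the workflow name, helmfile invocation, and use of the `gh` CLI are assumptions:

```yaml
# Hypothetical sketch: post `helmfile diff` output as a PR comment.
name: helmfile-diff-comment
on: pull_request
jobs:
  diff:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Run helmfile diff
        # `|| true` so a non-zero exit (diff present) doesn't fail the job
        run: helmfile diff > diff.txt || true
      - name: Comment on PR
        env:
          GH_TOKEN: ${{ github.token }}
        run: gh pr comment "${{ github.event.pull_request.number }}" --body-file diff.txt
```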
- After the first sync, ArgoCD manages its own helm chart installation
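A hedged sketch of what that self-management could look like: an Application that points ArgoCD back at the same argo-helm chart, so post-bootstrap upgrades flow through Argo rather than Terraform. The namespace, release name, and sync policy here are assumptions:

```yaml
# Hypothetical: ArgoCD managing its own chart installation after bootstrap.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: argocd
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://argoproj.github.io/argo-helm
    chart: argo-cd
    targetRevision: 5.5.7
    helm:
      releaseName: argocd
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated:
      selfHeal: true
```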
I have some rough ideas here:
jenkins-x/terraform-google-jx#228
https://github.com/joshuasimon-taulia/helm-argo-cd-apps/commit/4572c611887190bc4f9640e177a8e902ff7b6558
This is what the demo ApplicationSet generates in my Atlantis project:

When you click on the "namespace" application, the UI drills down into your actual k8s objects:
