Description
Most Kubernetes users are interested in configuring applications. Running containerized workloads is the original, primary purpose of Kubernetes, and even cluster services / add-ons are applications. GitOps is primarily focused on deploying applications as well, and it's obviously the core use case for Helm.
So, what do we need to address to handle applications in kpt?
- Create a way to pass non-KRM files through function input/output #3118
- ConfigMap generation #3119
- Develop a way to handle application configuration #3210
- Figure out how to handle secrets #3125
- Figure out how to represent per-environment customizations, for dev and prod #3280
- We need to try set-namespace for this use case (see the Kptfile sketch after this list)
- We'll want a way to capture the Kubernetes deployment context automatically -- I've called this "mini-kubeconfig" (a purely hypothetical shape is sketched after this list). It's also similar to the ArgoCD ApplicationSet target generator.
- We need to figure out a reasonable variant-constructor pattern for the case of creating a specific application from a generic blueprint. Cluster services don't have this issue because there's usually just one per cluster.
- Make it easier to write value transformers #3155 (comment): set-image, set-labels, etc. need to be able to specify their source locations so that ApplyReplacements isn't needed to use them (see the ApplyReplacements sketch after this list).
- More generally, we need a clear model for how to pass inputs to a package, especially deployable packages. Flesh out the input data model and patterns #3396
- We may want something similar to the Flux and ArgoCD image updaters to watch a container image repo and push updates for new images. I wonder if we could use or adapt one of those existing updaters.
- New resource types in the Backstage plugin: Deployment, Service, Ingress, Gateway, GatewayClass, HTTPRoute, PersistentVolumeClaim, StatefulSet, DaemonSet, HorizontalPodAutoscaler. Support for external secrets. Prometheus Operator types. Types relevant to cluster add-ons, such as CustomResourceDefinition, ClusterRole, ClusterRoleBinding, MutatingWebhookConfiguration, and ValidatingWebhookConfiguration. Maybe others as we dig into some specific applications. Possibly also Istio resources and/or Argo Rollouts.
- Support for commonly used recommended labels and well-known annotations, such as kubectl.kubernetes.io/default-container
- Support for Prometheus annotations and OpenTelemetry environment variables (see the Deployment sketch after this list)
- A function that generates RBAC Roles for the resource types in a package might be useful. Maybe there's something we can use from https://github.com/kubernetes-sigs/kubebuilder-declarative-pattern?
- Linkage between the config UI and a live-state UI, such as the Kubernetes Dashboard or, ideally, a Kubernetes Backstage plugin
- For cluster services / add-ons specifically, we'll likely want to use an app-of-apps pattern. In that case, we'd need to look at what we need to do to support RootSync and RepoSync in packages (see the RootSync sketch after this list). For example, we'd probably want support for them in the Backstage UI plugin. I can also imagine needing a function to update pinned commits.
- Best practices for off-the-shelf blueprints. For example, last-mile customizations that are just general Kubernetes resource attributes and not specific to blueprint components could be omitted from the blueprints. They'd be handled in general authoring logic, such as in the Backstage plugin and/or functions.
- An approach for versioning off-the-shelf packages, similar to public Helm charts. I insisted on sequential versioning in Porch rather than semantic versioning in order to simplify the continuous deployment model. A concept of major version could be used to select the upstream blueprint revision stream. We'd eventually want to support rebase (kpt pkg rebase #2548).
- We will want to provision a namespace for the application before deploying it for the first time. That could be an interesting use case for dynamic dependencies, similar to Crossplane provider package dependencies, as opposed to nested subpackages (Reconsider whether we need statically nested subpackages #3343). Kraan (https://fidelity.github.io/kraan/docs/design/) supports dependencies by "layer". Flux supports package-level dependencies: https://fluxcd.io/docs/components/kustomize/kustomization/#kustomization-dependencies.
- A lifecycle annotation for enabling/disabling deployment of a resource, similar to local-config, the Config Sync management annotation, or a tombstone, but for the purpose of making blueprint resources optionally deployed, similar to the idea of disabling functions in the pipeline (see the annotation examples after this list)
- We may want to support skaffold.dev config as well.
- Document how to approach UX #3145: This is going to increase the surface area of the UI quite a bit, and my guess is that some UX work will be required to make it less overwhelming:
  - Lots of resource types, some of which, like Pod, have lots of attributes
  - Several resource cross-references
  - Multiple components
  - Multiple deployments, such as for dev and prod
- We may want to revisit versioning for off-the-shelf packages, particularly if we're going to build in more versioning functionality, as discussed in Maintain a version field, similar to upstream info #2544.
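To make the set-namespace item concrete, here's a minimal sketch of a deployable package's Kptfile that runs it; the package and namespace names are placeholders:

```yaml
# Minimal Kptfile pipeline that sets the namespace on all namespaced
# resources in the package. Package and namespace names are placeholders.
apiVersion: kpt.dev/v1
kind: Kptfile
metadata:
  name: my-app
pipeline:
  mutators:
    - image: gcr.io/kpt-fn/set-namespace:v0.2.0
      configMap:
        namespace: my-app-prod
```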
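For the "mini-kubeconfig" idea, a purely hypothetical shape just to anchor the discussion; nothing here is defined yet, and every field name is invented:

```yaml
# Hypothetical "mini-kubeconfig" ConfigMap capturing deployment context.
# All data fields below are invented for illustration only.
apiVersion: v1
kind: ConfigMap
metadata:
  name: deployment-context # hypothetical name
  annotations:
    config.kubernetes.io/local-config: "true" # input only, not deployed
data:
  clusterName: prod-us-east1
  projectID: example-project
  region: us-east1
```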
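For the value-transformer item, this is the kind of ApplyReplacements config needed today just to route a single value into a target field; if set-image, set-labels, etc. could declare source locations directly, this extra step would go away. Resource names and field paths below are illustrative:

```yaml
# Routes an image value from a ConfigMap into matching Deployment containers.
# Names and field paths are illustrative.
apiVersion: fn.kpt.dev/v1alpha1
kind: ApplyReplacements
metadata:
  name: propagate-image
  annotations:
    config.kubernetes.io/local-config: "true"
replacements:
  - source:
      kind: ConfigMap
      name: app-config
      fieldPath: data.image
    targets:
      - select:
          kind: Deployment
        fieldPaths:
          - spec.template.spec.containers.[name=app].image
```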
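For the recommended labels, default-container annotation, Prometheus annotations, and OpenTelemetry environment variables, an illustrative Deployment fragment; the app name, port, and endpoint values are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  labels:
    app.kubernetes.io/name: my-app
    app.kubernetes.io/component: server
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: my-app
  template:
    metadata:
      labels:
        app.kubernetes.io/name: my-app
      annotations:
        # well-known annotation: the container kubectl defaults to
        kubectl.kubernetes.io/default-container: app
        # conventional Prometheus scrape annotations
        prometheus.io/scrape: "true"
        prometheus.io/port: "8080"
    spec:
      containers:
        - name: app
          image: example.com/my-app:v1 # placeholder
          env:
            # standard OpenTelemetry SDK environment variables
            - name: OTEL_SERVICE_NAME
              value: my-app
            - name: OTEL_EXPORTER_OTLP_ENDPOINT
              value: http://otel-collector:4317 # placeholder
```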
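For the app-of-apps item, supporting RootSync/RepoSync in packages would mean handling resources like this; the repo URL, branch, and directory are placeholders:

```yaml
# Config Sync RootSync pointing at a root deployment repo.
# Repo URL, branch, and dir are placeholders.
apiVersion: configsync.gke.io/v1beta1
kind: RootSync
metadata:
  name: root-sync
  namespace: config-management-system
spec:
  sourceFormat: unstructured
  git:
    repo: https://example.com/platform/deployment-repo
    branch: main
    dir: clusters/prod
    auth: none
```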
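For the lifecycle-annotation item, the two existing annotations it resembles, shown together on one resource for reference (the resource name is a placeholder; the proposed annotation itself doesn't exist yet):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: optional-ingress # placeholder
  annotations:
    # kpt/kustomize convention: local-only resource, excluded from apply
    config.kubernetes.io/local-config: "true"
    # Config Sync: declared resource that Config Sync stops actively managing
    configmanagement.gke.io/managed: "disabled"
```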
My current opinion is that multi-cluster specialization and multi-cluster rollout are somewhat independent concerns, but I may change that opinion as we dig into this more.
We plan to look at these common cluster services / add-ons as test cases:
- cert-manager
- NGINX Ingress Controller
- ExternalDNS
- Monitoring: Prometheus, Alertmanager, Grafana, kube-state-metrics
- Logging: Elasticsearch, Fluentd, Kibana
We should also try deploying all of our own components: the Porch server and controllers, Config Sync, the ResourceGroup controller, and Backstage.
At some point, we should also try the Ghost application (chart, rendered) we looked at previously. It involved multiple components, so it's another case for dependencies, whether via app of apps, static subpackages, or dynamic dependencies. It's somewhat unusual in that it's an off-the-shelf app rather than a bespoke app, an off-the-shelf cluster component, or an off-the-shelf app platform like Knative, KubeVela, Spark, Kubeflow, etc.
Once we figure out how to handle applications natively, we can look into automating Helm chart import, rendering and patching Helm charts, and so on.