---
title: Custom Resource Definitions
description: How to handle creating and using CRDs.
sidebar_position: 7
---

This section of the Best Practices Guide deals with creating and using Custom
Resource Definition objects.

When working with Custom Resource Definitions (CRDs), it is important to
distinguish two different pieces:

- There is a declaration of a CRD. This is the YAML file that has the kind
  `CustomResourceDefinition`.
- Then there are resources that _use_ the CRD. Say a CRD defines
  `foo.example.com/v1`. Any resource that has `apiVersion: example.com/v1` and
  kind `Foo` is a resource that uses the CRD (see the sketch below).
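
For illustration, a minimal sketch of those two pieces might look like the
following. The group `example.com` and kind `Foo` mirror the example above; the
`foos.example.com` name and the placeholder schema are hypothetical, not taken
from any real chart.

```yaml
# The declaration: the CustomResourceDefinition object itself.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com
spec:
  group: example.com
  names:
    kind: Foo
    plural: foos
  scope: Namespaced
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          # Placeholder schema that accepts arbitrary fields.
          x-kubernetes-preserve-unknown-fields: true
---
# A resource that uses the CRD.
apiVersion: example.com/v1
kind: Foo
metadata:
  name: example-foo
```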

## Install a CRD Declaration Before Using the Resource

Helm is optimized to load as many resources into Kubernetes as fast as possible.
By design, Kubernetes can take an entire set of manifests and bring them all
online (this is called the reconciliation loop).

But there's a difference with CRDs.

For a CRD, the declaration must be registered before any resources of that
CRD's kind(s) can be used. And the registration process sometimes takes a few
seconds.

### Method 1: Let `helm` Do It For You

With the arrival of Helm 3, we removed the old `crd-install` hooks in favor of a
simpler method. There is now a special directory called `crds` that you can
create in your chart to hold your CRDs. These CRDs are not templated, but will
be installed by default when running a `helm install` for the chart. If the CRD
already exists, it will be skipped with a warning. If you wish to skip the CRD
installation step, you can pass the `--skip-crds` flag.
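
As a rough sketch, a chart using this layout might look like the following (the
chart and file names here are hypothetical):

```text
mychart/
  Chart.yaml
  values.yaml
  crds/
    foo-crd.yaml        # plain CustomResourceDefinition manifests, not templated
  templates/
    foo-instance.yaml   # resources that use the CRD, templated as usual
```

Running `helm install` against such a chart installs the contents of `crds/`
first (skipping any CRD that already exists), then renders and installs the
resources under `templates/`. Passing `--skip-crds` omits the first step.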

#### Some caveats (and explanations)

There is no support at this time for upgrading or deleting CRDs using Helm. This
was an explicit decision after much community discussion due to the danger of
unintentional data loss. Furthermore, there is currently no community consensus
around how to handle CRDs and their lifecycle. As this evolves, Helm will add
support for those use cases.

The `--dry-run` flag of `helm install` and `helm upgrade` is not currently
supported for CRDs. The purpose of "Dry Run" is to validate that the output of
the chart will actually work if sent to the server. But CRDs are a modification
of the server's behavior. Helm cannot install the CRD on a dry run, so the
discovery client will not know about that Custom Resource (CR), and validation
will fail. You can alternatively move the CRDs to their own chart or use `helm
template` instead.
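
For instance, with a chart at the hypothetical path `./mychart` that ships both
a CRD and resources that use it, rendering locally with `helm template` avoids
the discovery problem:

```console
# Validation of the custom resources may fail, because the CRD cannot be
# installed during a dry run:
$ helm install my-release ./mychart --dry-run

# Rendering the chart locally sidesteps the problem:
$ helm template my-release ./mychart
```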

Another important point to consider in the discussion around CRD support is how
the rendering of templates is handled. One of the distinct disadvantages of the
`crd-install` method used in Helm 2 was the inability to properly validate
charts due to changing API availability (a CRD is actually adding another
available API to your Kubernetes cluster). If a chart installed a CRD, `helm` no
longer had a valid set of API versions to work against. This is also the reason
behind removing templating support from CRDs. With the new `crds` method of CRD
installation, we now ensure that `helm` has completely valid information about
the current state of the cluster.

### Method 2: Separate Charts

Another way to do this is to put the CRD definition in one chart, and then put
any resources that use that CRD in _another_ chart.

In this method, each chart must be installed separately. However, this workflow
may be more useful for cluster operators who have admin access to a cluster.
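
As a sketch, assuming hypothetical charts named `foo-crds` (containing only the
CRD declarations) and `foo-app` (containing the resources that use them), the
two installs would be run one after the other:

```console
$ helm install foo-crds ./foo-crds
$ helm install foo-app ./foo-app
```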