# CRD Rename Migration Guide

Starting with Chart Version v0.26.0 (App Version v1.7.0), the Temporal Worker Controller renames its two primary CRDs and one field reference:

| Old name | New name |
|---|---|
| `TemporalWorkerDeployment` | `WorkerDeployment` |
| `TemporalConnection` | `Connection` |
| `WorkerResourceTemplate.spec.temporalWorkerDeploymentRef` | `WorkerResourceTemplate.spec.workerDeploymentRef` |

The upgrade path is straightforward. See the [upgrade guide](migration-crd-rename.md) for details.
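To get a feel for what the rename means for manifests kept in source control, here is a minimal text-substitution sketch (illustrative only; the three patterns do not overlap each other, and a structure-aware tool such as `yq` is safer for real manifests):

```shell
# Apply the v1.7 renames from the table above to a manifest on stdin.
sed -e 's/temporalWorkerDeploymentRef/workerDeploymentRef/g' \
    -e 's/TemporalWorkerDeployment/WorkerDeployment/g' \
    -e 's/TemporalConnection/Connection/g' <<'EOF'
apiVersion: temporal.io/v1alpha1
kind: TemporalWorkerDeployment
EOF
# -> apiVersion: temporal.io/v1alpha1
# -> kind: WorkerDeployment
```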
12+
13+
## Downgrading from v1.7 to v1.6
14+
15+
There are some important things to consider if you want to roll back
16+
(downgrade) the installed version of Temporal Worker Controller after upgrading to v1.7.0.
17+
18+
> **Warning**: You **should not perform a rollback/downgrade of the Temporal
19+
> Worker Controller CRDs Helm Chart**. Doing so is a potentially
20+
> **destructive** operation that can cause your Temporal Worker Deployments to
21+
> be deleted.
22+
>
23+
> See [here][crd-pruning] for more details.
24+
25+
[crd-pruning]: https://github.com/temporalio/temporal-worker-controller/blob/main/docs/crd-management.md#crd-rollback-and-field-pruning
To downgrade the Temporal Worker Controller itself, run:

```bash
helm rollback <RELEASE_NAME> <REVISION_NUMBER>
```

Here `<RELEASE_NAME>` is the Helm release associated with the Temporal Worker
Controller Helm Chart (**not** the CRDs Chart) and `<REVISION_NUMBER>` is the
Helm release revision number to roll back to. You can find both with:

```bash
helm history -n <TWC_NAMESPACE> <TWC_RELEASE_NAME>
```

Here `<TWC_NAMESPACE>` is the Kubernetes Namespace you installed the Temporal
Worker Controller in and `<TWC_RELEASE_NAME>` is the name of the Helm release
associated with the Temporal Worker Controller Helm Chart.

Once you have downgraded the Temporal Worker Controller, you will need to take
some corrective actions depending on how far down the migration path you went
when upgrading to the v1.7 Temporal Worker Controller release.
---

If you upgraded the Temporal Worker Controller to v1.7 -- i.e. you successfully
completed Step 2 of the upgrade guide -- but **did not** complete Step 3 (migrating your
resources), run the following `kubectl` commands to remove the CRD rename
validation guard from the old `TemporalWorkerDeployment` and `TemporalConnection`
Custom Resource Definitions:

```bash
kubectl patch crd temporalworkerdeployments.temporal.io --type='json' -p='[{"op": "remove", "path": "/spec/versions/0/schema/openAPIV3Schema/properties/spec/x-kubernetes-validations"}]'
kubectl patch crd temporalconnections.temporal.io --type='json' -p='[{"op": "remove", "path": "/spec/versions/0/schema/openAPIV3Schema/properties/spec/x-kubernetes-validations"}]'
```

You will also need to manually remove the `migration-guard` finalizer that the v1.7
controller added to your `TemporalWorkerDeployment` and `TemporalConnection` resources.

Get a list of all the original `TemporalWorkerDeployment` object names and UIDs:

```bash
kubectl get -n <NAMESPACE> temporalworkerdeployments.temporal.io -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.uid}{"\n"}{end}'
```

For each of the `TemporalWorkerDeployments` listed above:

```bash
kubectl patch -n <NAMESPACE> temporalworkerdeployments/<TWD_NAME> --type=merge -p='{"metadata":{"finalizers":["temporal.io/delete-protection"]}}'
```
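A note on the patch above: `--type=merge` replaces the entire `finalizers` list rather than removing a single entry, which is why the full desired list is spelled out. If your objects carry additional finalizers, you can compute the remaining list with `jq` instead of writing it by hand. A sketch on sample data (the string `temporal.io/migration-guard` is an assumed finalizer name for illustration; check the actual strings on your objects first):

```shell
# Compute metadata.finalizers minus the migration-guard entry; the result
# can be fed to `kubectl patch --type=merge`. The echoed JSON stands in
# for `kubectl get ... -o json` output.
echo '{"metadata":{"finalizers":["temporal.io/delete-protection","temporal.io/migration-guard"]}}' \
  | jq -c '{metadata: {finalizers: (.metadata.finalizers - ["temporal.io/migration-guard"])}}'
# -> {"metadata":{"finalizers":["temporal.io/delete-protection"]}}
```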
Get a list of all the original `TemporalConnection` object names and UIDs:

```bash
kubectl get -n <NAMESPACE> temporalconnections.temporal.io -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.uid}{"\n"}{end}'
```

For each of the `TemporalConnections` listed above:

```bash
kubectl patch -n <NAMESPACE> temporalconnections/<TC_NAME> --type=merge -p='{"metadata":{"finalizers":[]}}'
```
---

If you upgraded the Temporal Worker Controller to v1.7 and completed Step 3
of the upgrade guide (i.e. you successfully migrated your resources), you will need to
manually restore the OwnerReferences for your Kubernetes Deployments to point
at the original `TemporalWorkerDeployment` resources.

To do so, first get a list of all the original `TemporalWorkerDeployment`
object names and UIDs:

```bash
kubectl get -n <NAMESPACE> temporalworkerdeployments.temporal.io -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.uid}{"\n"}{end}'
```

Then get a list of all the Kubernetes `Deployments` that are now owned by the new
`WorkerDeployment` resources:

```bash
kubectl get deployments -n <NAMESPACE> -o json | jq -r '
  .items[] | select(
    .metadata.ownerReferences // [] | any(.kind == "WorkerDeployment")
  ) | .metadata.name
'
```
Then, for each of the Kubernetes Deployments listed above, run the
following `kubectl` command to reset its OwnerReferences
back to the original `TemporalWorkerDeployment` custom resource:

```bash
kubectl patch -n <NAMESPACE> deployment <DEPLOYMENT_NAME> --type='merge' -p '
{
  "metadata": {
    "ownerReferences": [
      {
        "apiVersion": "temporal.io/v1alpha1",
        "kind": "TemporalWorkerDeployment",
        "name": "<TWD_NAME>",
        "uid": "<TWD_UID>",
        "controller": true,
        "blockOwnerDeletion": true
      }
    ]
  }
}'
```

Replace `<TWD_NAME>` and `<TWD_UID>` with the name and UID of the correct
`TemporalWorkerDeployment` custom resource that you printed out
earlier. It's important that the UID string is correct: if the Kubernetes
garbage collector does not recognize the UID, it will treat those `Deployments` as
orphaned and delete them.
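Because a mistyped UID is what makes this step dangerous, it may be safer to generate the patch body with `jq` from shell variables than to hand-edit the JSON. A sketch (the name and UID values are placeholders):

```shell
# Build the ownerReferences merge-patch body from shell variables.
TWD_NAME="my-worker"                           # placeholder name
TWD_UID="00000000-0000-0000-0000-000000000000" # placeholder UID
jq -n --arg name "$TWD_NAME" --arg uid "$TWD_UID" '{
  metadata: {
    ownerReferences: [{
      apiVersion: "temporal.io/v1alpha1",
      kind: "TemporalWorkerDeployment",
      name: $name,
      uid: $uid,
      controller: true,
      blockOwnerDeletion: true
    }]
  }
}'
```

The resulting JSON can then be passed to `kubectl patch -n <NAMESPACE> deployment <DEPLOYMENT_NAME> --type=merge -p "$(...)"`, which avoids copy-paste mistakes in the UID.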
Confirm that your `Deployments` are now owned by the original `TemporalWorkerDeployment` resources:

```bash
kubectl get deployments -n <NAMESPACE> -o json | jq -r '
  .items[] | select(
    .metadata.ownerReferences // [] | any(.kind == "TemporalWorkerDeployment")
  ) | .metadata.name
'
```
151+
If you completed Step 4 above and modified `WorkerResourceTemplate` resources,
152+
you will also need to reset the `OwnerReferences` for those resources as well.
153+
154+
```bash
155+
kubectl get workerresourcetemplates -n <NAMESPACE> -o json | jq -r '
156+
.items[] | select(
157+
.metadata.ownerReferences // [] | any(.kind == "WorkerDeployment")
158+
) | .metadata.name
159+
'
160+
```
161+
162+
Then, for each of the `WorkerResourceTemplate` resources listed above, execute
163+
the following `kubectl` command to reset the OwnerReferences of Kubernetes
164+
Deployments back to the original `TemporalWorkerDeployment` custom resources:
165+
166+
```bash
167+
kubectl patch -n <NAMESPACE> wrt <WRT_NAME> --type='merge' -p '
168+
{
169+
"metadata": {
170+
"ownerReferences": [
171+
{
172+
"apiVersion": "temporal.io/v1alpha1",
173+
"kind": "TemporalWorkerDeployment",
174+
"name": "<TWD_NAME>",
175+
"uid": "<TWD_UID>",
176+
"controller": true,
177+
"blockOwnerDeletion": true
178+
}
179+
]
180+
}
181+
}'
182+
```
183+
184+
Again, replace `<TWD_NAME>` and `<TWD_UID>` with the correct
185+
`TemporalWorkerDeployment` custom resource's name and UID you printed out
186+
earlier. It's important that the UID string is correct, because if Kubernetes GC
187+
does not recognize the UID, it will treat those `WorkerResourceTemplates` as
188+
orphaned and delete them.
189+
190+
Confirm that your `WorkerResourceTemplates` are now owned by the original `TemporalWorkerDeployment` resources:
191+
```bash
192+
kubectl get workerresourcetemplates -n <NAMESPACE> -o json | jq -r '
193+
.items[] | select(
194+
.metadata.ownerReferences // [] | any(.kind == "TemporalWorkerDeployment")
195+
) | .metadata.name
196+
'
197+
```
Now you can safely delete the `WorkerDeployment` and `Connection` resources without
deleting any `Deployments` or `WorkerResourceTemplates`. Before deleting each `WorkerDeployment`
resource, you will need to remove the `deletion-protection` finalizer that the v1.7 controller
added to it:

```bash
kubectl patch -n <NAMESPACE> workerdeployments/<WD_NAME> --type=merge -p='{"metadata":{"finalizers":[]}}'
```

You'll notice that because you did not roll back the CRD chart, there is still a
deprecation warning on the `TemporalWorkerDeployment` and `TemporalConnection` resources.
This can be safely ignored. If you have already safely migrated ownership away from all
`WorkerDeployment` resources, you could also roll back the CRD chart to v0.25.0. Rolling
the CRDs back before that migration is complete is very risky: rolling back the CRDs deletes
all `WorkerDeployment` and `Connection` instances, and any `Deployments` and
`WorkerResourceTemplates` still owned by the deleted `WorkerDeployment` resources will be
deleted along with them.

To recap, here is how to confirm that no `WorkerDeployment` owns any `Deployments` or `WorkerResourceTemplates` in any namespace:

```bash
kubectl get deployments -A -o json | jq -r '
  .items[] | select(
    .metadata.ownerReferences // [] | any(.kind == "WorkerDeployment")
  ) | .metadata.name
'
kubectl get workerresourcetemplates -A -o json | jq -r '
  .items[] | select(
    .metadata.ownerReferences // [] | any(.kind == "WorkerDeployment")
  ) | .metadata.name
'
```

And here is how to confirm that you no longer have any `WorkerDeployment` or `Connection` resources in any namespace:

```bash
kubectl get workerdeployments -A
kubectl get connections -A
```
