
Commit 642f9ec

Revised ECK upgrade instructions (#6088) (#6098)
Co-authored-by: Arianna Laudazzi <[email protected]> (cherry picked from commit d8aa26b)
1 parent f6410f7 commit 642f9ec

File tree

1 file changed, +34 -23 lines changed

docs/operating-eck/upgrading-eck.asciidoc

@@ -9,28 +9,27 @@ endif::[]

This page provides instructions on how to upgrade the ECK operator.

-For Elastic Stack upgrade, check <<{p}-upgrading-stack,Upgrade the Elastic Stack version>>.
+For upgrades of Elastic Stack applications like Elasticsearch or Kibana, check <<{p}-upgrading-stack,Upgrade the Elastic Stack version>>.

[float]
[id="{p}-ga-upgrade"]
-== Upgrade to ECK {eck_version}
-
-ECK reached general availability (GA) status with the link:https://www.elastic.co/blog/elastic-cloud-on-kubernetes-ECK-is-now-generally-available[release of version 1.0.0]. The latest available GA version is {eck_version}. It is compatible with the previous GA releases (1.0.x and higher) and the beta release (1.0.0-beta1), and can be upgraded in-place (<<{p}-upgrade-instructions, with a few exceptions>>) by applying the new set of deployment manifests. Previous alpha releases, up to and including version 0.9.0, are not compatible with the GA and beta releases and link:https://www.elastic.co/guide/en/cloud-on-k8s/1.0-beta/k8s-upgrading-eck.html[require extra work to upgrade].
+== Before you upgrade to ECK {eck_version}
+The upgrade process results in an update to all the existing managed resources. This potentially triggers a rolling restart of all Elasticsearch and Kibana pods. This <<{p}-beta-to-ga-rolling-restart, list>> details the target versions that cause a rolling restart. If you have a large Elasticsearch cluster or multiple Elastic Stack deployments, the rolling restart could cause performance degradation. When you plan to upgrade ECK for production workloads, take into consideration the time required to upgrade the ECK operator plus the time required to roll all managed workloads and Elasticsearch clusters. For more information, check how to <<{p}-beta-to-ga-rolling-restart, control rolling restarts during the upgrade>>.
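
Before scheduling the upgrade window, it can help to confirm which operator version is currently running. A minimal sketch, assuming a default YAML-manifest installation where the operator runs as the `elastic-operator` StatefulSet in the `elastic-system` namespace (adjust the names if your installation differs):

[source,shell]
----
# Print the operator container image; its tag is the currently installed ECK version
kubectl get statefulset elastic-operator -n elastic-system \
  -o jsonpath='{.spec.template.spec.containers[0].image}'
----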

Before upgrading, refer to the <<release-notes-{eck_version}, release notes>> to make sure that the release does not contain any breaking changes that could affect you. The <<release-highlights-{eck_version},release highlights document>> provides more details and possible workarounds for any breaking changes or known issues in each release.

-Note that the release notes and highlights only list the changes since the last release. If you are skipping over any intermediate versions during the upgrade -- such as going directly from 1.0.0-beta1 to {eck_version} -- review the release notes and highlights of each of the skipped releases to fully understand all the breaking changes you might encounter during and after the upgrade.
+Note that the release notes and highlights only list the changes since the last release. If during the upgrade you skip any intermediate versions and go, for example, from 1.0.0 directly to {eck_version}, review the release notes and highlights of each of the skipped releases to understand all the breaking changes you might encounter during and after the upgrade.

[float]
[id="{p}-upgrade-instructions"]
== Upgrade instructions

-CAUTION: The upgrade process results in an update to all the existing managed resources. This potentially triggers a rolling restart of all Elasticsearch and Kibana pods. This <<{p}-beta-to-ga-rolling-restart, list>> details the affected target versions that will cause a rolling restart. If you have a large Elasticsearch cluster or multiple Elastic Stack deployments, the rolling restart could cause a performance degradation. When you plan to upgrade ECK for production workloads, take into consideration the time required to upgrade the ECK operator plus the time required to roll all managed workloads and Elasticsearch clusters. Furthermore, <<{p}-beta-to-ga-rolling-restart, Guidance>> is available on controlling this process more gracefully.
+[float]
+=== Upgrading from ECK 1.6 or earlier

-Operator Lifecycle Manager (OLM) and OpenShift OperatorHub users that run with automatic upgrades enabled, are advised to set the `set-default-security-context` link:https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-operator-config.html[operator flag] explicitly before upgrading to ECK 2.0. If not set ECK can fail to link:https://github.com/elastic/cloud-on-k8s/issues/5061[auto-detect] the correct security context configuration and Elasticsearch Pods may not be allowed to run.


-Release 1.7.0 moves the link:https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/[CustomResourceDefinitions] (CRD) used by ECK to the v1 version. If you upgrade from a previous version of ECK, the new version of the CRDs replaces the existing CRDs. If you cannot remove the current ECK installation because you have production workloads that must not be deleted, the following approach is recommended.
+Release 1.7.0 moved the link:https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/[CustomResourceDefinitions] (CRD) used by ECK to the v1 version. If you upgrade from a previous version of ECK, the new version of the CRDs replaces the existing CRDs. If you cannot remove the current ECK installation because you have production workloads that must not be deleted, the following approach is recommended.

[source,shell,subs="attributes,callouts"]
.If you are installing using the YAML manifests: replace existing CRDs
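
The body of this source block falls outside the hunk shown above. As a rough sketch of what an in-place CRD replacement for a YAML-manifest installation typically involves (the download URLs follow the pattern used later in this commit and are an assumption here, not the committed text):

[source,shell]
----
# Hypothetical sketch: replace the existing CRDs with the v1 definitions,
# then apply the new operator manifest
kubectl replace -f https://download.elastic.co/downloads/eck/{eck_version}/crds.yaml
kubectl apply -f https://download.elastic.co/downloads/eck/{eck_version}/operator.yaml
----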
@@ -68,21 +67,37 @@ helm upgrade elastic-operator elastic/eck-operator -n elastic-system

If you are using ECK through an OLM-managed distribution channel like link:https://operatorhub.io[operatorhub.io] or the OpenShift OperatorHub then the CRD version upgrade will be handled by OLM for you and you do not need to take special action.

-This will update the ECK installation to the latest binary and update the CRDs and other ECK resources in the cluster. If you are upgrading from the beta version, ensure that your Elasticsearch, Kibana, and APM Server manifests are updated to use the `v1` API version instead of `v1beta1` after the upgrade.
+[float]
+=== Upgrading from ECK 1.9 or earlier
+
+Operator Lifecycle Manager (OLM) and OpenShift OperatorHub users who run with automatic upgrades enabled are advised to set the `set-default-security-context` link:https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-operator-config.html[operator flag] explicitly before upgrading to ECK 2.0 or later. If not set, ECK can fail to link:https://github.com/elastic/cloud-on-k8s/issues/5061[auto-detect] the correct security context configuration and Elasticsearch Pods may not be allowed to run.
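
One way to set the flag explicitly before the operator upgrade is to edit the operator configuration. A hedged sketch, assuming the configuration lives under the `eck.yaml` key of the `elastic-operator` ConfigMap in the `elastic-system` namespace (OLM-based installations may use a different namespace or ConfigMap name):

[source,shell]
----
# Add an explicit value to the operator configuration, for example:
#   set-default-security-context: false   # OpenShift typically manages security contexts itself
#   set-default-security-context: true    # most other Kubernetes distributions
kubectl edit configmap elastic-operator -n elastic-system
----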
+
+[float]
+=== Upgrading from ECK 2.0 or later
+
+There are no special instructions to follow if you upgrade from any 2.x version to {eck_version}. Use the upgrade method applicable to your installation of choice.
+
+.If you are using our YAML manifests
+[source,shell,subs="attributes,callouts"]
+----
+kubectl apply -f https://download.elastic.co/downloads/eck/{eck_version}/crds.yaml
+kubectl apply -f https://download.elastic.co/downloads/eck/{eck_version}/operator.yaml
+----
+.If you are using Helm
+[source,shell,subs="attributes,callouts"]
+----
+helm upgrade elastic-operator elastic/eck-operator -n elastic-system
+----
+This will update the ECK installation to the latest binary and update the CRDs and other ECK resources in the cluster.
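
After either method, a quick way to confirm the operator rollout completed is to watch the StatefulSet and check its logs. A small sketch, assuming the default `elastic-system` namespace and `elastic-operator` StatefulSet name used in the commands above:

[source,shell]
----
# Wait for the upgraded operator Pod to become ready, then inspect recent logs
kubectl rollout status statefulset/elastic-operator -n elastic-system
kubectl logs statefulset/elastic-operator -n elastic-system --tail=20
----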
+

[float]
[id="{p}-beta-to-ga-rolling-restart"]
== Control rolling restarts during the upgrade

-Upgrading the operator results in a one-time update to existing managed resources in the cluster. This potentially triggers a rolling restart of pods by Kubernetes to apply those changes. The following table shows the target version that would cause a rolling restart.
+Upgrading the operator results in a one-time update to existing managed resources in the cluster. This potentially triggers a rolling restart of pods by Kubernetes to apply those changes. The following list contains the ECK operator versions that would cause a rolling restart after they have been installed.

-* 1.6
-* 1.9
-* 2.0
-* 2.1
-* 2.2
-* 2.4
-* 2.5
+1.6, 1.9, 2.0, 2.1, 2.2, 2.4, 2.5

If you have a very large Elasticsearch cluster or multiple Elastic Stack deployments, this rolling restart might be disruptive or inconvenient. To have more control over when the pods belonging to a particular deployment should be restarted, you can <<{p}-exclude-resource,add an annotation>> to the corresponding resources to temporarily exclude them from being managed by the operator. When the time is convenient, you can remove the annotation and let the rolling restart go through.

@@ -91,7 +106,7 @@ CAUTION: Once a resource is excluded from being managed by ECK, you will not be
[source,shell,subs="attributes,callouts"]
.Exclude Elastic resources from being managed by the operator
----
-ANNOTATION='eck.k8s.elastic.co/managed=false' <1>
+ANNOTATION='eck.k8s.elastic.co/managed=false'

# Exclude a single Elasticsearch resource named "quickstart"
kubectl annotate --overwrite elasticsearch quickstart $ANNOTATION
@@ -103,20 +118,16 @@ kubectl annotate --overwrite elastic --all $ANNOTATION
for NS in $(kubectl get ns -o=custom-columns='NAME:.metadata.name' --no-headers); do kubectl annotate --overwrite elastic --all $ANNOTATION -n $NS; done
----

-<1> Before ECK 1.1.0, the annotation used to exclude resources was `common.k8s.elastic.co/pause=true`.
-
Once the operator has been upgraded and you are ready to let the resource become managed again (triggering a rolling restart of pods in the process), remove the annotation.


[source,shell,subs="attributes,callouts"]
.Resume Elastic resource management by the operator
----
-RM_ANNOTATION='eck.k8s.elastic.co/managed-' <1>
+RM_ANNOTATION='eck.k8s.elastic.co/managed-'

# Resume management of a single Elasticsearch cluster named "quickstart"
kubectl annotate elasticsearch quickstart $RM_ANNOTATION
----
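
To check that a previously excluded resource is managed again, you can list its annotations; if `eck.k8s.elastic.co/managed` no longer appears, the operator has resumed management. A quick sketch, reusing the "quickstart" example name from above:

[source,shell]
----
# Print all annotations on the Elasticsearch resource
kubectl get elasticsearch quickstart -o jsonpath='{.metadata.annotations}'
----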

-<1> Before ECK 1.1.0, the annotation used to exclude resources was `common.k8s.elastic.co/pause=true`.
-
NOTE: The ECK source repository contains a link:{eck_github}/tree/{eck_release_branch}/hack/annotator[shell script] to assist with mass addition/deletion of annotations.
