Commit 6da485d

Merge pull request rook#16566 from obnoxxx/prep-v1.18.3

build: set release version to v1.18.3

2 parents 13ba6e9 + a3d5d27 commit 6da485d

File tree

11 files changed: +26 −26 lines changed

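This commit is a mechanical tag bump from `v1.18.2` to `v1.18.3` across docs and example manifests. A minimal sketch of how such a bump could be scripted (the `sed` invocation and the stand-in file are illustrative only, not the release tooling Rook actually uses):

```shell
#!/bin/sh
# Illustrative sketch only: replace the previous release tag with the new one.
# Rook's real release process may differ; this just mirrors what the diff shows.
OLD=v1.18.2
NEW=v1.18.3
# Demonstrate on a temporary file standing in for e.g. deploy/examples/images.txt:
tmp=$(mktemp)
printf 'docker.io/rook/ceph:%s\n' "$OLD" > "$tmp"
sed "s/${OLD}/${NEW}/g" "$tmp"   # prints docker.io/rook/ceph:v1.18.3
rm -f "$tmp"
```

Running the same substitution over the eleven files listed below would reproduce every hunk in this commit.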

Documentation/Getting-Started/quickstart.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -36,7 +36,7 @@ To configure the Ceph storage cluster, at least one of these local storage optio
 A simple Rook cluster is created for Kubernetes with the following `kubectl` commands and [example manifests](https://github.com/rook/rook/blob/master/deploy/examples).
 
 ```console
-$ git clone --single-branch --branch v1.18.2 https://github.com/rook/rook.git
+$ git clone --single-branch --branch v1.18.3 https://github.com/rook/rook.git
 cd rook/deploy/examples
 kubectl create -f crds.yaml -f common.yaml -f csi-operator.yaml -f operator.yaml
 kubectl create -f cluster.yaml
````

Documentation/Storage-Configuration/Monitoring/ceph-monitoring.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -48,7 +48,7 @@ There are two sources for metrics collection:
 From the root of your locally cloned Rook repo, go the monitoring directory:
 
 ```console
-$ git clone --single-branch --branch v1.18.2 https://github.com/rook/rook.git
+$ git clone --single-branch --branch v1.18.3 https://github.com/rook/rook.git
 cd rook/deploy/examples/monitoring
 ```
````

Documentation/Upgrade/rook-upgrade.md

Lines changed: 15 additions & 15 deletions

````diff
@@ -95,11 +95,11 @@ With this upgrade guide, there are a few notes to consider:
 
 Unless otherwise noted due to extenuating requirements, upgrades from one patch release of Rook to
 another are as simple as updating the common resources and the image of the Rook operator. For
-example, when Rook v1.18.2 is released, the process of updating from v1.18.0 is as simple as running
+example, when Rook v1.18.3 is released, the process of updating from v1.18.0 is as simple as running
 the following:
 
 ```console
-git clone --single-branch --depth=1 --branch v1.18.2 https://github.com/rook/rook.git
+git clone --single-branch --depth=1 --branch v1.18.3 https://github.com/rook/rook.git
 cd rook/deploy/examples
 ```
 
@@ -111,7 +111,7 @@ Then, apply the latest changes from v1.18, and update the Rook Operator image.
 
 ```console
 kubectl apply -f common.yaml -f crds.yaml -f csi-operator.yaml
-kubectl -n rook-ceph set image deploy/rook-ceph-operator rook-ceph-operator=rook/ceph:v1.18.2
+kubectl -n rook-ceph set image deploy/rook-ceph-operator rook-ceph-operator=rook/ceph:v1.18.3
 ```
 
 As exemplified above, it is a good practice to update Rook common resources from the example
@@ -146,7 +146,7 @@ In order to successfully upgrade a Rook cluster, the following prerequisites mus
 ## Rook Operator Upgrade
 
 The examples given in this guide upgrade a live Rook cluster running `v1.17.8` to
-the version `v1.18.2`. This upgrade should work from any official patch release of Rook v1.17 to any
+the version `v1.18.3`. This upgrade should work from any official patch release of Rook v1.17 to any
 official patch release of v1.18.
 
 Let's get started!
@@ -173,7 +173,7 @@ by the Operator. Also update the Custom Resource Definitions (CRDs).
 Get the latest common resources manifests that contain the latest changes.
 
 ```console
-git clone --single-branch --depth=1 --branch v1.18.2 https://github.com/rook/rook.git
+git clone --single-branch --depth=1 --branch v1.18.3 https://github.com/rook/rook.git
 cd rook/deploy/examples
 ```
 
@@ -212,7 +212,7 @@ The largest portion of the upgrade is triggered when the operator's image is upd
 When the operator is updated, it will proceed to update all of the Ceph daemons.
 
 ```console
-kubectl -n $ROOK_OPERATOR_NAMESPACE set image deploy/rook-ceph-operator rook-ceph-operator=rook/ceph:v1.18.2
+kubectl -n $ROOK_OPERATOR_NAMESPACE set image deploy/rook-ceph-operator rook-ceph-operator=rook/ceph:v1.18.3
 ```
 
 ### **3. Update Ceph CSI Custom Images **
@@ -236,16 +236,16 @@ watch --exec kubectl -n $ROOK_CLUSTER_NAMESPACE get deployments -l rook_cluster=
 ```
 
 As an example, this cluster is midway through updating the OSDs. When all deployments report `1/1/1`
-availability and `rook-version=v1.18.2`, the Ceph cluster's core components are fully updated.
+availability and `rook-version=v1.18.3`, the Ceph cluster's core components are fully updated.
 
 ```console
 Every 2.0s: kubectl -n rook-ceph get deployment -o j...
 
-rook-ceph-mgr-a req/upd/avl: 1/1/1 rook-version=v1.18.2
-rook-ceph-mon-a req/upd/avl: 1/1/1 rook-version=v1.18.2
-rook-ceph-mon-b req/upd/avl: 1/1/1 rook-version=v1.18.2
-rook-ceph-mon-c req/upd/avl: 1/1/1 rook-version=v1.18.2
-rook-ceph-osd-0 req/upd/avl: 1// rook-version=v1.18.2
+rook-ceph-mgr-a req/upd/avl: 1/1/1 rook-version=v1.18.3
+rook-ceph-mon-a req/upd/avl: 1/1/1 rook-version=v1.18.3
+rook-ceph-mon-b req/upd/avl: 1/1/1 rook-version=v1.18.3
+rook-ceph-mon-c req/upd/avl: 1/1/1 rook-version=v1.18.3
+rook-ceph-osd-0 req/upd/avl: 1// rook-version=v1.18.3
 rook-ceph-osd-1 req/upd/avl: 1/1/1 rook-version=v1.17.8
 rook-ceph-osd-2 req/upd/avl: 1/1/1 rook-version=v1.17.8
 ```
@@ -257,13 +257,13 @@ An easy check to see if the upgrade is totally finished is to check that there i
 # kubectl -n $ROOK_CLUSTER_NAMESPACE get deployment -l rook_cluster=$ROOK_CLUSTER_NAMESPACE -o jsonpath='{range .items[*]}{"rook-version="}{.metadata.labels.rook-version}{"\n"}{end}' | sort | uniq
 This cluster is not yet finished:
 rook-version=v1.17.8
-rook-version=v1.18.2
+rook-version=v1.18.3
 This cluster is finished:
-rook-version=v1.18.2
+rook-version=v1.18.3
 ```
 
 ### **5. Verify the updated cluster**
 
-At this point, the Rook operator should be running version `rook/ceph:v1.18.2`.
+At this point, the Rook operator should be running version `rook/ceph:v1.18.3`.
 
 Verify the CephCluster health using the [health verification doc](health-verification.md).
````
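The "is the upgrade finished" check in the upgrade guide above reduces the per-deployment `rook-version` labels with `sort | uniq`. A self-contained sketch of that same logic on made-up sample labels (no cluster required; the label list stands in for the `kubectl` jsonpath output):

```shell
#!/bin/sh
# Offline sketch of the upgrade-completion check: the cluster is fully upgraded
# once the per-deployment rook-version labels collapse to a single value.
# Sample data below is invented to show one mid-upgrade deployment.
labels='rook-version=v1.17.8
rook-version=v1.18.3
rook-version=v1.18.3'
distinct=$(printf '%s\n' "$labels" | sort | uniq | wc -l)
if [ "$distinct" -eq 1 ]; then
  echo "upgrade complete"
else
  echo "still upgrading"   # this sample has two distinct versions
fi
```

With the mixed sample above the check reports the upgrade as unfinished; once every label reads `rook-version=v1.18.3`, `sort | uniq` yields a single line and the check passes.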

deploy/charts/rook-ceph/values.yaml

Lines changed: 1 addition & 1 deletion

```diff
@@ -7,7 +7,7 @@ image:
   repository: docker.io/rook/ceph
   # -- Image tag
   # @default -- `master`
-  tag: v1.18.2
+  tag: v1.18.3
   # -- Image pull policy
   pullPolicy: IfNotPresent
```

deploy/examples/direct-mount.yaml

Lines changed: 1 addition & 1 deletion

```diff
@@ -19,7 +19,7 @@ spec:
       serviceAccountName: rook-ceph-default
       containers:
         - name: rook-direct-mount
-          image: docker.io/rook/ceph:v1.18.2
+          image: docker.io/rook/ceph:v1.18.3
          command: ["/bin/bash"]
          args: ["-m", "-c", "/usr/local/bin/toolbox.sh"]
          imagePullPolicy: IfNotPresent
```

deploy/examples/images.txt

Lines changed: 1 addition & 1 deletion

```diff
@@ -1,4 +1,4 @@
-docker.io/rook/ceph:v1.18.2
+docker.io/rook/ceph:v1.18.3
 gcr.io/k8s-staging-sig-storage/objectstorage-sidecar:v20240513-v0.1.0-35-gefb3255
 quay.io/ceph/ceph:v19.2.3
 quay.io/ceph/cosi:v0.1.2
```

deploy/examples/operator-openshift.yaml

Lines changed: 1 addition & 1 deletion

```diff
@@ -691,7 +691,7 @@ spec:
       serviceAccountName: rook-ceph-system
       containers:
         - name: rook-ceph-operator
-          image: docker.io/rook/ceph:v1.18.2
+          image: docker.io/rook/ceph:v1.18.3
          args: ["ceph", "operator"]
          securityContext:
            runAsNonRoot: true
```

deploy/examples/operator.yaml

Lines changed: 1 addition & 1 deletion

```diff
@@ -617,7 +617,7 @@ spec:
       serviceAccountName: rook-ceph-system
       containers:
         - name: rook-ceph-operator
-          image: docker.io/rook/ceph:v1.18.2
+          image: docker.io/rook/ceph:v1.18.3
          args: ["ceph", "operator"]
          securityContext:
            runAsNonRoot: true
```

deploy/examples/osd-purge.yaml

Lines changed: 1 addition & 1 deletion

```diff
@@ -28,7 +28,7 @@ spec:
       serviceAccountName: rook-ceph-purge-osd
       containers:
         - name: osd-removal
-          image: docker.io/rook/ceph:v1.18.2
+          image: docker.io/rook/ceph:v1.18.3
          # TODO: Insert the OSD ID in the last parameter that is to be removed
          # The OSD IDs are a comma-separated list. For example: "0" or "0,2".
          # If you want to preserve the OSD PVCs, set `--preserve-pvc true`.
```

deploy/examples/toolbox-job.yaml

Lines changed: 2 additions & 2 deletions

```diff
@@ -10,7 +10,7 @@ spec:
     spec:
       initContainers:
         - name: config-init
-          image: docker.io/rook/ceph:v1.18.2
+          image: docker.io/rook/ceph:v1.18.3
          command: ["/usr/local/bin/toolbox.sh"]
          args: ["--skip-watch"]
          imagePullPolicy: IfNotPresent
@@ -29,7 +29,7 @@ spec:
            mountPath: /var/lib/rook-ceph-mon
       containers:
         - name: script
-          image: docker.io/rook/ceph:v1.18.2
+          image: docker.io/rook/ceph:v1.18.3
          volumeMounts:
            - mountPath: /etc/ceph
              name: ceph-config
```
