etcd/etcd-backup-restore/etcd-backup.adoc (+10 −2)

@@ -19,6 +19,14 @@ Back up your cluster's etcd data by performing a single invocation of the backup
  After you have an etcd backup, you can xref:../../backup_and_restore/control_plane_backup_and_restore/disaster_recovery/scenario-2-restoring-cluster-state.adoc#dr-restoring-cluster-state[restore to a previous cluster state].

- xref:../../backup_and_restore/control_plane_backup_and_restore/backing-up-etcd.adoc#backing-up-etcd-data_backup-etcd[Backing up etcd data]:: To back up etcd, you create an etcd snapshot and back up the resources for the static pods. You can save the backup and used it later if you need to restore etcd.
+ // Backing up etcd data
+ include::modules/backup-etcd.adoc[leveloffset=+1]
- xref:../../backup_and_restore/control_plane_backup_and_restore/backing-up-etcd.adoc#creating-automated-etcd-backups_backup-etcd[Creating automated etcd backups]:: You can automate recurring and single backups. Automated backups is a Technology Preview feature.
+ [role="_additional-resources"]
+ .Additional resources
+ * xref:../../hosted_control_planes/hcp_high_availability/hcp-recovering-etcd-cluster.adoc#hcp-recovering-etcd-cluster[Recovering an unhealthy etcd cluster]
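For orientation, a minimal sketch of the backup step that the new `backup-etcd.adoc` include covers: take an etcd snapshot and back up the static pod resources. It assumes the standard `cluster-backup.sh` helper shipped on {product-title} control plane nodes; the node name is a placeholder.

[source,terminal]
----
# Start a root debug session on a control plane node (placeholder name).
$ oc debug --as-root node/<control_plane_node>

# Switch to the host filesystem inside the debug shell.
sh-5.1# chroot /host

# Take an etcd snapshot and back up the static pod resources.
# The script writes snapshot_<timestamp>.db and
# static_kuberesources_<timestamp>.tar.gz to the target directory.
sh-5.1# /usr/local/bin/cluster-backup.sh /home/core/assets/backup
----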
etcd/etcd-backup-restore/etcd-disaster-recovery.adoc (+45 −7)

@@ -13,24 +13,62 @@ The disaster recovery documentation provides information for administrators on h
  Disaster recovery requires you to have at least one healthy control plane host.
  ====

- xref:../../backup_and_restore/control_plane_backup_and_restore/disaster_recovery/quorum-restoration.adoc#dr-quorum-restoration[Quorum restoration]:: This solution handles situations where you have lost the majority of your control plane hosts, leading to etcd quorum loss and the cluster going offline. This solution does not require an etcd backup.
+ == Quorum restoration
+ You can use the `quorum-restore.sh` script to restore etcd quorum on clusters that are offline due to quorum loss. When quorum is lost, the {product-title} API becomes read-only. After quorum is restored, the {product-title} API returns to read/write mode.
+ // Restoring etcd quorum for high availability clusters
…
+ * xref:../../installing/installing_bare_metal/upi/installing-bare-metal.adoc#installing-bare-metal[Installing a user-provisioned cluster on bare metal]
+ * xref:../../installing/installing_bare_metal/bare-metal-expanding-the-cluster.adoc#replacing-a-bare-metal-control-plane-node_bare-metal-expanding[Replacing a bare-metal control plane node]

  [NOTE]
  ====
  If you have a majority of your control plane nodes still available and have an etcd quorum, xref:../../etcd/etcd-backup-restore/replace-unhealthy-etcd-member.adoc#replace-unhealthy-etcd-member[replace a single unhealthy etcd member].
  ====
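As a brief sketch of how the `quorum-restore.sh` script is typically invoked, assuming it is installed at `/usr/local/bin/quorum-restore.sh` on the control plane host that you choose as the recovery host; the host name and SSH key path are placeholders.

[source,terminal]
----
# Connect to the chosen recovery control plane host.
$ ssh -i <ssh_key_path> core@<recovery_control_plane_host>

# Restore quorum from the local etcd data on this host.
[core@<recovery_control_plane_host> ~]$ sudo -E /usr/local/bin/quorum-restore.sh

# From a workstation, verify that the API is writable again and the nodes recover.
$ oc get nodes
----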
- xref:../../backup_and_restore/control_plane_backup_and_restore/disaster_recovery/scenario-2-restoring-cluster-state.adoc#dr-restoring-cluster-state[Restoring to a previous cluster state]:: This solution handles situations where you want to restore your cluster to a previous state, for example, if an administrator deletes something critical. If you have taken an etcd backup, you can restore your cluster to a previous state.
+ == Restoring to a previous cluster state
+ To restore the cluster to a previous state, you must have previously backed up the `etcd` data by creating a snapshot. You will use this snapshot to restore the cluster state. For more information, see "Backing up etcd data".

  If applicable, you might also need to xref:../../backup_and_restore/control_plane_backup_and_restore/disaster_recovery/scenario-3-expired-certs.adoc#dr-recovering-expired-certs[recover from expired control plane certificates].

  [WARNING]
  ====
  Restoring to a previous cluster state is a destructive and destabilizing action to take on a running cluster. This procedure should only be used as a last resort.
- Before performing a restore, see xref:../../backup_and_restore/control_plane_backup_and_restore/disaster_recovery/scenario-2-restoring-cluster-state.adoc#dr-scenario-2-restoring-cluster-state-about_dr-restoring-cluster-state[About restoring to a previous cluster state] for more information on the impact to the cluster.
+ Before performing a restore, see "About restoring to a previous cluster state" for more information on the impact to the cluster.
  ====
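A minimal sketch of the restore invocation itself, assuming the standard `cluster-restore.sh` helper and a backup directory already copied to the recovery control plane host. The full procedure in the linked module includes prerequisite steps, such as stopping the static pods on the other control plane nodes, that this fragment omits.

[source,terminal]
----
# On the recovery control plane host, with the backup directory containing
# snapshot_<timestamp>.db and static_kuberesources_<timestamp>.tar.gz:
[core@<recovery_control_plane_host> ~]$ sudo -E /usr/local/bin/cluster-restore.sh /home/core/assets/backup

# After the restore completes, verify that the control plane recovers.
$ oc get nodes
$ oc -n openshift-etcd get pods -l app=etcd
----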
- xref:../../backup_and_restore/control_plane_backup_and_restore/disaster_recovery/scenario-3-expired-certs.adoc#dr-recovering-expired-certs[Recovering from expired control plane certificates]:: This solution handles situations where your control plane certificates have expired. For example, if you shut down your cluster before the first certificate rotation, which occurs 24 hours after installation, your certificates will not be rotated and will expire. You can follow this procedure to recover from expired control plane certificates.
…
+ * xref:../../backup_and_restore/control_plane_backup_and_restore/backing-up-etcd.adoc#backing-up-etcd-data_backup-etcd[Backing up etcd data]
+ * xref:../../installing/installing_bare_metal/upi/installing-bare-metal.adoc#installing-bare-metal[Installing a user-provisioned cluster on bare metal]
+ * xref:../../networking/accessing-hosts.adoc#accessing-hosts[Creating a bastion host to access {product-title} instances and the control plane nodes with SSH]
+ * xref:../../installing/installing_bare_metal/bare-metal-expanding-the-cluster.adoc#replacing-a-bare-metal-control-plane-node_bare-metal-expanding[Replacing a bare-metal control plane node]
…
  xref:../../backup_and_restore/control_plane_backup_and_restore/disaster_recovery/about-disaster-recovery.adoc#dr-testing-restore-procedures_about-dr[Testing restore procedures]:: Test the restore procedure to ensure that your automation and workload handle the new cluster state gracefully.
+ [role="_additional-resources"]
+ .Additional resources
+ * xref:../../backup_and_restore/control_plane_backup_and_restore/disaster_recovery/scenario-2-restoring-cluster-state.adoc#dr-restoring-cluster-state[Restoring to a previous cluster state]
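For the expired-certificate scenario that the removed descriptor summarized, the cluster can recover automatically, but pending node-bootstrapper certificate signing requests usually need manual approval. A hedged sketch of that check, using standard `oc` commands; the CSR name is a placeholder.

[source,terminal]
----
# List certificate signing requests; requests created while the kubelet
# client certificates were expired show up here as Pending.
$ oc get csr

# Inspect a request to confirm that it comes from the node-bootstrapper.
$ oc describe csr <csr_name>

# Approve each valid pending request.
$ oc adm certificate approve <csr_name>
----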
etcd/etcd-backup-restore/replace-unhealthy-etcd-member.adoc (+29 −6)

@@ -17,12 +17,35 @@ If the control plane certificates are not valid on the member being replaced, th
  If a control plane node is lost and a new one is created, the etcd cluster Operator handles generating the new TLS certificates and adding the node as an etcd member.
  ====

- xref:../../backup_and_restore/control_plane_backup_and_restore/replacing-unhealthy-etcd-member.adoc#restore-identify-unhealthy-etcd-member_replacing-unhealthy-etcd-member[Identifying an unhealthy etcd member]:: Identify an unhealthy etcd member in your cluster.
- xref:../../backup_and_restore/control_plane_backup_and_restore/replacing-unhealthy-etcd-member.adoc#restore-determine-state-etcd-member_replacing-unhealthy-etcd-member[Determining the state of the unhealthy etcd member]:: Confirm why your etcd member is unhealthy by determining its state:
+ // Determining the state of the unhealthy etcd member
- * The machine for the etcd member is not running or its node is not ready
- * The etcd pod for the etcd member is crashlooping
- * The machine for a bare metal etcd member is not running or its node is not ready
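As a sketch of how you might confirm which of these states applies before choosing a procedure, using standard `oc` queries rather than anything taken from the included module:

[source,terminal]
----
# Check whether the node backing the etcd member is Ready.
$ oc get nodes -l node-role.kubernetes.io/master

# Check whether the etcd pod for the member is running or crashlooping.
$ oc -n openshift-etcd get pods -l app=etcd

# For machine API-managed control planes, check the machine status as well.
$ oc -n openshift-machine-api get machines -l machine.openshift.io/cluster-api-machine-role=master
----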
+ == Replacing the unhealthy etcd member
- xref:../../backup_and_restore/control_plane_backup_and_restore/replacing-unhealthy-etcd-member.adoc#replacing-the-unhealthy-etcd-member[Replacing the unhealthy etcd member]:: Replace your etcd member by completing steps that are specific to the state of the etcd member.
+ Depending on the state of your unhealthy etcd member, use one of the following procedures:
+ * Replacing an unhealthy etcd member whose machine is not running or whose node is not ready
+ * Installing a primary control plane node on an unhealthy cluster
+ * Replacing an unhealthy etcd member whose etcd pod is crashlooping
+ * Replacing an unhealthy stopped baremetal etcd member
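The machine-not-running and crashlooping procedures both end up removing the failed member so that the etcd cluster Operator can add a replacement. A partial, hedged sketch of that common step, assuming the documented pattern of running `etcdctl` from an etcd pod on a healthy node; the pod name and member ID are placeholders, and the full procedures include additional steps, such as removing the old member's secrets.

[source,terminal]
----
# Open a shell in an etcd pod on a healthy control plane node.
$ oc rsh -n openshift-etcd etcd-<healthy_control_plane_node>

# List the members and note the ID of the unhealthy one.
sh-5.1# etcdctl member list -w table

# Remove the unhealthy member so that a replacement can be added.
sh-5.1# etcdctl member remove <member_id>
----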
+ // Replacing an unhealthy etcd member whose machine is not running or whose node is not ready
…
+ * xref:../../machine_management/control_plane_machine_management/cpmso-troubleshooting.adoc#cpmso-ts-etcd-degraded_cpmso-troubleshooting[Recovering a degraded etcd Operator]
+ * link:https://docs.redhat.com/en/documentation/assisted_installer_for_openshift_container_platform/2024/html/installing_openshift_container_platform_with_the_assisted_installer/expanding-the-cluster#installing-primary-control-plane-node-unhealthy-cluster_expanding-the-cluster[Installing a primary control plane node on an unhealthy cluster]
+ // Replacing an unhealthy etcd member whose etcd pod is crashlooping