backup_and_restore/application_backup_and_restore/oadp-intro.adoc (+2 -2)
@@ -13,7 +13,7 @@ toc::[]
The {oadp-first} product safeguards customer applications on {product-title}. It offers comprehensive disaster recovery protection, covering {product-title} applications, application-related cluster resources, persistent volumes, and internal images. {oadp-short} is also capable of backing up both containerized applications and virtual machines (VMs).
ifndef::openshift-rosa,openshift-rosa-hcp[]
-However, {oadp-short} does not serve as a disaster recovery solution for xref:../../backup_and_restore/control_plane_backup_and_restore/backing-up-etcd.adoc#backup-etcd[etcd] or {OCP-short} Operators.
+However, {oadp-short} does not serve as a disaster recovery solution for xref:../../etcd/etcd-backup.adoc#backing-up-etcd-data_etcd-backup[etcd] or {OCP-short} Operators.
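
For context on the product described above: an {oadp-short} application backup is driven by a Velero `Backup` custom resource in the Operator's namespace. A minimal sketch, assuming the default `openshift-adp` namespace and using `example-backup` and `example-app` as placeholder names:

[source,terminal]
----
$ cat <<EOF | oc apply -f -
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: example-backup        # placeholder backup name
  namespace: openshift-adp    # default OADP Operator namespace (assumption)
spec:
  includedNamespaces:
  - example-app               # placeholder: application namespace to protect
EOF
----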

backup_and_restore/control_plane_backup_and_restore/disaster_recovery/scenario-2-restoring-cluster-state.adoc

* xref:../../../installing/installing_bare_metal/bare-metal-expanding-the-cluster.adoc#replacing-a-bare-metal-control-plane-node_bare-metal-expanding[Replacing a bare-metal control plane node]

backup_and_restore/graceful-cluster-restart.adoc (+2 -2)
@@ -14,7 +14,7 @@ Even though the cluster is expected to be functional after the restart, the clus
* Node failure due to hardware
* Network connectivity issues

-If your cluster fails to recover, follow the steps to xref:../backup_and_restore/control_plane_backup_and_restore/disaster_recovery/scenario-2-restoring-cluster-state.adoc#dr-restoring-cluster-state[restore to a previous cluster state].
+If your cluster fails to recover, follow the steps to xref:../etcd/etcd-backup.adoc#dr-scenario-2-restoring-cluster-state-about_etcd-backup[restore to a previous cluster state].
-* See xref:../backup_and_restore/control_plane_backup_and_restore/disaster_recovery/scenario-2-restoring-cluster-state.adoc#dr-restoring-cluster-state[Restoring to a previous cluster state] for how to use an etcd backup to restore if your cluster failed to recover after restarting.
+* See xref:../etcd/etcd-backup.adoc#dr-scenario-2-restoring-cluster-state-about_etcd-backup[Restoring to a previous cluster state] for how to use an etcd backup to restore if your cluster failed to recover after restarting.
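
The restart flow that both of these xrefs support ends with verifying that nodes rejoin the cluster and approving any pending certificate signing requests (CSRs). A minimal sketch of that verification, with `<csr_name>` as a placeholder:

[source,terminal]
----
$ oc get nodes                           # all nodes should report Ready
$ oc get csr                             # list CSRs; look for Pending entries
$ oc adm certificate approve <csr_name>  # approve each pending CSR
----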

backup_and_restore/graceful-cluster-shutdown.adoc (+2 -2)
@@ -10,7 +10,7 @@ This document describes the process to gracefully shut down your cluster. You mi
== Prerequisites
-* Take an xref:../backup_and_restore/control_plane_backup_and_restore/backing-up-etcd.adoc#backing-up-etcd-data_backup-etcd[etcd backup] prior to shutting down the cluster.
+* Take an xref:../etcd/etcd-backup.adoc#backing-up-etcd-data_etcd-backup[etcd backup] prior to shutting down the cluster.
+
[IMPORTANT]
====
@@ -22,7 +22,7 @@ For example, the following conditions can cause the restarted cluster to malfunc
* Node failure due to hardware
* Network connectivity issues
-If your cluster fails to recover, follow the steps to xref:../backup_and_restore/control_plane_backup_and_restore/disaster_recovery/scenario-2-restoring-cluster-state.adoc#dr-restoring-cluster-state[restore to a previous cluster state].
+If your cluster fails to recover, follow the steps to xref:../etcd/etcd-backup.adoc#dr-scenario-2-restoring-cluster-state-about_etcd-backup[restore to a previous cluster state].
-* Take an xref:../backup_and_restore/control_plane_backup_and_restore/backing-up-etcd.adoc#backing-up-etcd-data_backup-etcd[etcd backup] prior to hibernating the cluster.
+* Take an xref:../etcd/etcd-backup.adoc#backing-up-etcd-data_etcd-backup[etcd backup] prior to hibernating the cluster.
+
[IMPORTANT]
====
@@ -26,7 +26,7 @@ For example, the following conditions can cause the resumed cluster to malfuncti
* Node failure due to hardware
* Network connectivity issues
-If your cluster fails to recover, follow the steps to xref:../backup_and_restore/control_plane_backup_and_restore/disaster_recovery/scenario-2-restoring-cluster-state.adoc#dr-restoring-cluster-state[restore to a previous cluster state].
+If your cluster fails to recover, follow the steps to xref:../etcd/etcd-backup.adoc#dr-scenario-2-restoring-cluster-state-about_etcd-backup[restore to a previous cluster state].
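
All three hunks above point the reader at the same etcd backup prerequisite. The documented procedure runs a backup script from a debug shell on a control plane node; a minimal sketch, with `<node_name>` as a placeholder:

[source,terminal]
----
$ oc debug --as-root node/<node_name>
sh-4.4# chroot /host
sh-4.4# /usr/local/bin/cluster-backup.sh /home/core/assets/backup
----

The script saves a `snapshot_<timestamp>.db` file and a `static_kuberesources_<timestamp>.tar.gz` file to the target directory.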

backup_and_restore/index.adoc (+5 -4)
@@ -12,25 +12,26 @@ toc::[]
As a cluster administrator, you might need to stop an {product-title} cluster for a period and restart it later. Some reasons for restarting a cluster are that you need to perform maintenance on a cluster or want to reduce resource costs. In {product-title}, you can perform a xref:../backup_and_restore/graceful-cluster-shutdown.adoc#graceful-shutdown-cluster[graceful shutdown of a cluster] so that you can easily restart the cluster later.
-You must xref:../backup_and_restore/control_plane_backup_and_restore/backing-up-etcd.adoc#backup-etcd[back up etcd data] before shutting down a cluster; etcd is the key-value store for {product-title}, which persists the state of all resource objects. An etcd backup plays a crucial role in disaster recovery. In {product-title}, you can also xref:../backup_and_restore/control_plane_backup_and_restore/replacing-unhealthy-etcd-member.adoc#replacing-unhealthy-etcd-member[replace an unhealthy etcd member].
+You must xref:../etcd/etcd-backup.adoc#backup-etcd_etcd-backup[back up etcd data] before shutting down a cluster; etcd is the key-value store for {product-title}, which persists the state of all resource objects. An etcd backup plays a crucial role in disaster recovery. In {product-title}, you can also xref:../etcd/etcd-backup.adoc#replace-unhealthy-etcd-member_etcd-backup[replace an unhealthy etcd member].
When you want to get your cluster running again, xref:../backup_and_restore/graceful-cluster-restart.adoc#graceful-restart-cluster[restart the cluster gracefully].
[NOTE]
====
-A cluster's certificates expire one year after the installation date. You can shut down a cluster and expect it to restart gracefully while the certificates are still valid. Although the cluster automatically retrieves the expired control plane certificates, you must still xref:../backup_and_restore/control_plane_backup_and_restore/disaster_recovery/scenario-3-expired-certs.adoc#dr-recovering-expired-certs[approve the certificate signing requests (CSRs)].
+A cluster's certificates expire one year after the installation date. You can shut down a cluster and expect it to restart gracefully while the certificates are still valid. Although the cluster automatically retrieves the expired control plane certificates, you must still xref:../etcd/etcd-backup.adoc#dr-scenario-3-recovering-expired-certs_etcd-backup[approve the certificate signing requests (CSRs)].
====
-You might run into several situations where {product-title} does not work as expected, such as:
+You might run into several situations where {product-title} does not work as expected, such as:
* You have a cluster that is not functional after the restart because of unexpected conditions, such as node failure or network connectivity issues.
* You have deleted something critical in the cluster by mistake.
* You have lost the majority of your control plane hosts, leading to etcd quorum loss.
-You can always recover from a disaster situation by xref:../backup_and_restore/control_plane_backup_and_restore/disaster_recovery/scenario-2-restoring-cluster-state.adoc#dr-restoring-cluster-state[restoring your cluster to its previous state] using the saved etcd snapshots.
+You can always recover from a disaster situation by xref:../etcd/etcd-backup.adoc#dr-scenario-2-restoring-cluster-state-about_etcd-backup[restoring your cluster to its previous state] using the saved etcd snapshots.
[role="_additional-resources"]
.Additional resources
+* xref:../etcd/etcd-backup.adoc#etcd-backup[Backing up and restoring etcd data]
* xref:../machine_management/deleting-machine.adoc#machine-lifecycle-hook-deletion-etcd_deleting-machine[Quorum protection with machine lifecycle hooks]
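
For the "replace an unhealthy etcd member" flow referenced in this overview, the first step is identifying the unhealthy member from inside a running etcd pod. A minimal sketch, with `etcd-<control_plane_node>` as a placeholder pod name:

[source,terminal]
----
$ oc -n openshift-etcd rsh etcd-<control_plane_node>
sh-4.4# etcdctl member list -w table      # note the ID of the failing member
sh-4.4# etcdctl endpoint health --cluster # confirm which endpoint is unhealthy
----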

disconnected/updating/disconnected-update.adoc (+1 -1)
@@ -20,7 +20,7 @@ Use the following procedures to update a cluster in a disconnected environment w
* You must provision a local container image registry with the container images for your update, as described in xref:../../disconnected/updating/mirroring-image-repository.adoc#mirroring-ocp-image-repository[Mirroring {product-title} images].
* You must have access to the cluster as a user with `admin` privileges.
See xref:../../authentication/using-rbac.adoc#using-rbac[Using RBAC to define and apply permissions].
-* You must have a recent xref:../../backup_and_restore/control_plane_backup_and_restore/backing-up-etcd.adoc#backup-etcd[etcd backup] in case your update fails and you must xref:../../backup_and_restore/control_plane_backup_and_restore/disaster_recovery/scenario-2-restoring-cluster-state.adoc#dr-restoring-cluster-state[restore your cluster to a previous state].
+* You must have a recent xref:../../etcd/etcd-backup.adoc#backing-up-etcd-data_etcd-backup[etcd backup] in case your update fails and you must xref:../../etcd/etcd-backup.adoc#dr-restoring-cluster-state_etcd-backup[restore your cluster to a previous state].
* You have updated all Operators previously installed through Operator Lifecycle Manager (OLM) to a version that is compatible with your target release. Updating the Operators ensures they have a valid update path when the default OperatorHub catalogs switch from the current minor version to the next during a cluster update. See xref:../../operators/admin/olm-upgrading-operators.adoc#olm-upgrading-operators[Updating installed Operators] for more information on how to check compatibility and, if necessary, update the installed Operators.
* You must ensure that all machine config pools (MCPs) are running and not paused. Nodes associated with a paused MCP are skipped during the update process. You can pause the MCPs if you are performing a canary rollout update strategy.
* If your cluster uses manually maintained credentials, update the cloud provider resources for the new release. For more information, including how to determine if this is a requirement for your cluster, see xref:../../updating/preparing_for_updates/preparing-manual-creds-update.adoc#preparing-manual-creds-update[Preparing to update a cluster with manually maintained credentials].
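
For the machine config pool prerequisite above: whether an MCP is paused is a boolean in its spec, so you can check and clear it directly. A minimal sketch, with `<mcp_name>` as a placeholder:

[source,terminal]
----
$ oc get mcp <mcp_name> -o jsonpath='{.spec.paused}'  # "true" means the pool's nodes are skipped
$ oc patch mcp/<mcp_name> --type merge --patch '{"spec":{"paused":false}}'
----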
-* xref:../../backup_and_restore/control_plane_backup_and_restore/disaster_recovery/scenario-3-expired-certs.adoc#dr-scenario-3-recovering-expired-certs_dr-recovering-expired-certs[Recovering from expired control plane certificates]
+* xref:../../etcd/etcd-backup.adoc#dr-scenario-3-recovering-expired-certs_etcd-backup[Recovering from expired control plane certificates]
-* xref:../../backup_and_restore/control_plane_backup_and_restore/disaster_recovery/scenario-3-expired-certs.adoc#dr-scenario-3-recovering-expired-certs_dr-recovering-expired-certs[Recovering from expired control plane certificates]
+* xref:../../etcd/etcd-backup.adoc#dr-scenario-3-recovering-expired-certs_etcd-backup[Recovering from expired control plane certificates]
* xref:../installing/installing_bare_metal/upi/installing-bare-metal.adoc[Installing a user-provisioned cluster on bare metal]
-//* xref:../installing/installing_bare_metal/ipi/ipi-install-expanding-the-cluster.adoc#replacing-a-bare-metal-control-plane-node_ipi-install-expanding[Replacing a bare-metal control plane node]
+* xref:../installing/installing_bare_metal/bare-metal-expanding-the-cluster.adoc#replacing-a-bare-metal-control-plane-node_bare-metal-expanding[Replacing a bare-metal control plane node]
* xref:../etcd/etcd-backup.adoc#backing-up-etcd-data_etcd-backup[Backing up etcd data]
* xref:../installing/installing_bare_metal/upi/installing-bare-metal.adoc[Installing a user-provisioned cluster on bare metal]
* xref:../networking/accessing-hosts.adoc#accessing-hosts[Accessing hosts on Amazon Web Services in an installer-provisioned infrastructure cluster]
-//* xref:../installing/installing_bare_metal/ipi/ipi-install-expanding-the-cluster.adoc#replacing-a-bare-metal-control-plane-node_ipi-install-expanding[Replacing a bare-metal control plane node]
+* xref:../installing/installing_bare_metal/bare-metal-expanding-the-cluster.adoc#replacing-a-bare-metal-control-plane-node_bare-metal-expanding[Replacing a bare-metal control plane node]
// Issues and workarounds for restoring a persistent storage state
== Backing up and restoring etcd on a hosted cluster on {aws-short}
-If you use {hcp} for {product-title}, the process to back up and restore etcd is different from xref:../backup_and_restore/control_plane_backup_and_restore/backing-up-etcd.adoc#backing-up-etcd-data_backup-etcd[the usual etcd backup process].
+If you use {hcp} for {product-title}, the process to back up and restore etcd is different from xref:../etcd/etcd-backup.adoc#backup-etcd_etcd-backup[the usual etcd backup process].
The following procedures are specific to {hcp} on {aws-short}.
-* xref:../../backup_and_restore/control_plane_backup_and_restore/replacing-unhealthy-etcd-member.adoc#replacing-the-unhealthy-etcd-member[Replacing an unhealthy etcd member]
+* xref:../../etcd/etcd-backup.adoc#replace-unhealthy-etcd-member_etcd-backup[Replacing an unhealthy etcd member]
-* xref:../../backup_and_restore/control_plane_backup_and_restore/backing-up-etcd.adoc#backing-up-etcd-data_backup-etcd[Backing up etcd]
+* xref:../../etcd/etcd-backup.adoc#backup-etcd_etcd-backup[Backing up and restoring etcd data]
* xref:../../installing/installing_bare_metal/bare-metal-postinstallation-configuration.adoc#bmo-config-using-bare-metal-operator_bare-metal-postinstallation-configuration[Configuration using the Bare Metal Operator]
-* See xref:../../../backup_and_restore/control_plane_backup_and_restore/disaster_recovery/scenario-3-expired-certs.adoc#dr-recovering-expired-certs[Recovering from expired control plane certificates] for more information about recovering kubelet certificates.
+* See xref:../../../etcd/etcd-backup.adoc#dr-scenario-3-recovering-expired-certs_etcd-backup[Recovering from expired control plane certificates] for more information about recovering kubelet certificates.
-* See xref:../../../backup_and_restore/control_plane_backup_and_restore/disaster_recovery/scenario-3-expired-certs.adoc#dr-recovering-expired-certs[Recovering from expired control plane certificates] for more information about recovering kubelet certificates.
+* See xref:../../../etcd/etcd-backup.adoc#dr-scenario-3-recovering-expired-certs_etcd-backup[Recovering from expired control plane certificates] for more information about recovering kubelet certificates.