@@ -412,9 +412,9 @@ You can also perform the following modifications:

You can execute DDL operations after replication has already been configured. Depending on the type of DDL operation, additional considerations are required.

### Adding new objects (Tables, Partitions, Indexes)
### Add new objects (Tables, Partitions, Indexes)

#### Adding tables (or partitions)
#### Add tables (or partitions)

When new tables (or partitions) are created, start writes on the new objects only after they have been added to replication; this ensures that all changes from the time of object creation are replicated. If tables (or partitions) already have existing data before they are added to replication, follow the bootstrap process described in [Bootstrap a target universe](#bootstrap-a-target-universe).
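
For illustration only, adding a newly created (and still empty) table to an existing replication group is typically a yb-admin call run against the target universe; the master addresses, replication group ID, and table ID below are placeholders, not part of the documented procedure:

```sh
# Sketch only: replace the placeholders with your own values.
# Look up the table ID of the new table on the source universe.
./bin/yb-admin \
    -master_addresses <source_master_addresses> \
    list_tables include_table_id

# Add the table to the existing replication group (run against the target universe).
./bin/yb-admin \
    -master_addresses <target_master_addresses> \
    alter_universe_replication <replication_group_id> \
    add_table <table_id>
```

A successful run reports `Replication altered successfully`, as in the output shown below.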

@@ -466,9 +466,9 @@ When new tables (or partitions) are created, to ensure that all changes from the
Replication altered successfully
```

#### Adding indexes in unidirectional replication
#### Add indexes in unidirectional replication

To add a new index to an empty table, follow the same steps as described in [Adding Tables (or Partitions)](#adding-tables-or-partitions).
To add a new index to an empty table, follow the same steps as described in [Add Tables (or Partitions)](#add-tables-or-partitions).

However, to add a new index to a table that already has data, the following additional steps are required to ensure that the index has all the updates:

@@ -523,13 +523,13 @@ However, to add a new index to a table that already has data, the following addi
Replication altered successfully
```

#### Adding YCQL indexes in bidirectional replication
#### Add YCQL indexes in bidirectional replication

Stop all write traffic when adding a new index to a YCQL table that is bidirectionally replicated.

Follow the same steps as described in [Adding indexes in unidirectional replication](#adding-indexes-in-unidirectional-replication), followed by bootstrapping the index on the target universe and adding it to the source universe (steps 4 and 8 in the opposite direction).
Follow the same steps as described in [Add indexes in unidirectional replication](#add-indexes-in-unidirectional-replication), followed by bootstrapping the index on the target universe and adding it to the source universe (steps 4 and 8 in the opposite direction).
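
As a rough sketch of the reverse-direction part (bootstrapping the index on the target universe and adding it to the source), with all addresses, group IDs, and table IDs as placeholders:

```sh
# Sketch only: replace the placeholders with your own values.
# Bootstrap the new index table on the target universe, which acts as the
# producer for the reverse replication direction.
./bin/yb-admin \
    -master_addresses <target_master_addresses> \
    bootstrap_cdc_producer <index_table_id>

# Add the index table to the reverse replication group (run against the source
# universe, the consumer in this direction), passing the bootstrap ID returned
# by the previous command.
./bin/yb-admin \
    -master_addresses <source_master_addresses> \
    alter_universe_replication <reverse_replication_group_id> \
    add_table <index_table_id> <bootstrap_id>
```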

#### Adding YSQL indexes in bidirectional replication
#### Add YSQL indexes in bidirectional replication

New YSQL indexes are automatically added to xCluster replication if the YSQL table being indexed is bidirectionally replicated.
Adding new indexes is supported even if the table being indexed contains data and is actively receiving writes on both universes.
@@ -540,7 +540,7 @@ Create the [index](../../../../api/ysql/the-sql-language/statements/ddl_create_i
If the `CREATE INDEX` DDL statement is issued on only one universe, it will time out and fail.
{{< /note >}}
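
For example, a minimal sketch (the table, column, and index names are hypothetical) is to issue the identical statement on each universe:

```sql
-- Run this same statement on both the source and target universes.
-- Hypothetical table and column; replace with your own.
CREATE INDEX idx_orders_customer_id ON orders (customer_id);
```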

### Removing objects
### Remove objects

Objects (tables, indexes, partitions) need to be removed from replication before they can be dropped as follows:
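
For instance, removing a table from replication is typically a yb-admin call like the following sketch (the master addresses, replication group ID, and table ID are placeholders); once the table is no longer part of any replication, it can be dropped on each universe with a regular `DROP TABLE`:

```sh
# Sketch only: replace the placeholders with your own values.
# Remove the table from the replication group (run against the target universe).
./bin/yb-admin \
    -master_addresses <target_master_addresses> \
    alter_universe_replication <replication_group_id> \
    remove_table <table_id>
```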

@@ -593,26 +593,30 @@ Alters involving adding/removing columns or modifying data types require replica
```output
Replication enabled successfully
```
#### Adding a column with a non-volatile default value

#### Add a column with a non-volatile default value

When adding a new column with a (non-volatile) default expression, make sure to perform the schema modification on the target with the _computed_ default value.

For example, say you have a replicated table `test_table`.

1. Pause replication on both sides.
1. Execute the `ADD COLUMN` command on the source:

```sql
ALTER TABLE test_table ADD COLUMN test_column TIMESTAMP DEFAULT NOW();
```

1. Run the preceding `ALTER TABLE` command with the computed default value on the target as follows:

- The computed default value can be retrieved from the `attmissingval` column in the `pg_attribute` catalog table.
- Retrieve the computed default value from the `attmissingval` column in the `pg_attribute` catalog table.

Example:

```sql
SELECT attmissingval FROM pg_attribute WHERE attrelid='test_table'::regclass AND attname='test_column';
```

```output
attmissingval
-------------------------------
@@ -621,6 +625,7 @@ For example, say you have a replicated table `test_table`.
```

- Execute the `ADD COLUMN` command on the target with the computed default value.

```sql
ALTER TABLE test_table ADD COLUMN test_column TIMESTAMP DEFAULT '2024-01-09 12:29:11.88894';
```
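
To double-check that both universes ended up with the same computed default, you can re-run the catalog query on the target and compare the value (a sketch using the same hypothetical table and column):

```sql
SELECT attmissingval FROM pg_attribute
WHERE attrelid = 'test_table'::regclass AND attname = 'test_column';
```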
@@ -47,15 +47,15 @@ Add tables to replication in the following sequence:
1. Create the table on the Standby.
1. Add the table to the replication.

For instructions on adding tables to replication, refer to [Adding tables (or partitions)](../async-deployment/#adding-tables-or-partitions).
For instructions on adding tables to replication, refer to [Add tables (or partitions)](../async-deployment/#add-tables-or-partitions).

### Drop table

Remove tables from replication in the following sequence:

1. Remove the table from replication.

For instructions on removing tables from replication, refer to [Removing objects](../async-deployment/#removing-objects).
For instructions on removing tables from replication, refer to [Remove objects](../async-deployment/#remove-objects).

1. Drop the table from both Primary and Standby databases separately.

70 changes: 55 additions & 15 deletions docs/content/stable/manage/upgrade-deployment.md
@@ -26,6 +26,8 @@ YugabyteDB is a distributed database that can be installed on multiple nodes. Up

The `data`, `log`, and `conf` directories are typically stored in a fixed location that remains unchanged during the upgrade process. This ensures that the cluster's data and configuration settings are preserved throughout the upgrade.

For more information on upgrading YugabyteDB, refer to [Upgrade FAQ](/stable/faq/operations-faq/#upgrade).

## Important information

{{< warning >}}
@@ -51,30 +53,40 @@ To upgrade YugabyteDB to a version based on a different version of PostgreSQL (f

- You can upgrade from one stable version to another in one go, even across major versions, as long as they are in the same major YSQL version. For information on performing major YSQL version upgrades, refer to [YSQL major upgrade](../ysql-major-upgrade-yugabyted/).

### Backups and point-in-time-recovery

- Backups

- Backups taken on a newer version cannot be restored to universes running a previous version.
- Backups taken during the upgrade cannot be restored to universes running a previous version.
- Backups taken before the upgrade _can_ be used for restore to the new version.

- [Point-in-time-restore](../backup-restore/point-in-time-recovery/) (PITR)
- [Point-in-time-recovery](../backup-restore/point-in-time-recovery/) (PITR)
> @hari90 @hulien22 Can you please check if this change is correct for OSS upgrade? i.e., while upgrading the OSS YBDB, the user doesn't need to delete the pitr config beforehand and YBDB can handle it as described?


When you start the [upgrade](#upgrade-phase), the PITR change history is invalidated. This means that after an upgrade starts, you will no longer be able to access or restore to any time before the upgrade was started - _regardless of the outcome of the upgrade_.

During the [monitoring phase](#monitor-phase) (that is, after upgrading but before finalizing or rolling back), any attempt to perform any PITR-based actions (such as rewind or clone a database to a point in time, back up and restore a database with PITR, or inspect a database at a point in time) will fail.

After [finalizing](#a-finalize-phase) or [rolling back](#b-rollback-phase) the upgrade, PITR-based actions become available again. However, keep in mind the following:

- If you have PITR enabled, you must disable it before performing an upgrade. Re-enable it only after the upgrade is either finalized or rolled back.
- After the upgrade, PITR cannot be done to a time before the upgrade.
- After finalizing, you cannot perform a PITR-based action targeting a time before the upgrade was started.
- After rollback, you cannot perform a PITR-based action targeting a time before the upgrade was started.

- YSQL
If PITR has been enabled on the YSQL database `yugabyte`, disable it before starting the upgrade.

- For additional information on upgrading universes that have Enhanced PostgreSQL Compatibility Mode, refer to [Enhanced PostgreSQL Compatibility Mode](../../reference/configuration/postgresql-compatibility/).
If you are performing a [major YSQL upgrade](../ysql-major-upgrade-yugabyted/), and have PITR enabled, delete the configuration before performing the upgrade. Recreate it only after the upgrade is either finalized or rolled back.

- For information on upgrading or enabling cost-based optimizer, refer to [Enable cost-based optimizer](../../best-practices-operations/ysql-yb-enable-cbo/).
### YSQL

If you upgrade to v2025.2 and the universe already has cost-based optimizer enabled, the following features are enabled by default:
- For additional information on upgrading universes that have Enhanced PostgreSQL Compatibility Mode, refer to [Enhanced PostgreSQL Compatibility Mode](../../reference/configuration/postgresql-compatibility/).

- Auto Analyze (ysql_enable_auto_analyze=true)
- YugabyteDB bitmap scan (yb_enable_bitmapscan=true)
- Parallel query (yb_enable_parallel_append=true)
- For information on upgrading or enabling cost-based optimizer, refer to [Enable cost-based optimizer](../../best-practices-operations/ysql-yb-enable-cbo/).

For more information, refer to [Upgrade FAQ](/stable/faq/operations-faq/#upgrade).
If you upgrade to v2025.2 and the universe already has cost-based optimizer enabled, the following features are enabled by default:

- Auto Analyze (ysql_enable_auto_analyze=true)
- YugabyteDB bitmap scan (yb_enable_bitmapscan=true)
- Parallel query (yb_enable_parallel_append=true)

## Upgrade YugabyteDB cluster

@@ -303,14 +315,42 @@ Use the following procedure to roll back all YB-Masters:

## Upgrades with xCluster

When you have unidirectional xCluster replication, it is recommended to upgrade the target cluster before the source. After the target cluster is upgraded and finalized, you can proceed to upgrade the source cluster.
### Unidirectional xCluster

If you have bidirectional xCluster replication, then you should upgrade and finalize both clusters at the same time. Perform the upgrade steps for each cluster individually and monitor both of them. If you encounter any issues, roll back both clusters. If everything appears to be in good condition, finalize both clusters with as little delay as possible.
When you have unidirectional xCluster replication, upgrade the target cluster before the source. After the target cluster is upgraded and finalized, you can proceed to upgrade the source cluster.

{{< note title="Note" >}}
xCluster replication requires the target cluster version to the same or later than the source cluster version. The setup of a new xCluster replication will fail if this check fails. Existing replications will automatically pause if the source cluster is finalized before the target cluster.
{{< note title="Target cluster version" >}}
xCluster replication requires the target cluster version to be the same or later than the source cluster version. Setup of a new xCluster replication will fail if this check fails. Existing replications will automatically pause if the source cluster is finalized before the target cluster.
{{< /note >}}

To upgrade clusters in transactional xCluster, the sequence is as follows:

1. Upgrade the target.
1. Finalize the upgrade on the target.
1. Validate that reads work on the target.
1. Upgrade the source.
1. Perform validation tests ([Monitor phase](#monitor-phase)).

Perform any application-level tests as needed (including, if needed, application-level failovers).

1. Finalize the upgrade on the source.

### Bidirectional xCluster

If you have bidirectional xCluster replication, then you should upgrade and finalize both clusters at the same time. Perform the upgrade steps for each cluster individually and monitor both of them. If you encounter any issues, roll back both clusters. If everything appears to be in good condition, finalize both clusters with as little delay as possible.

The sequence is as follows:

1. Upgrade B.
1. Upgrade A.
1. Perform validation tests ([Monitor phase](#monitor-phase) for both clusters).

Perform any application-level tests as needed (including, if needed, application-level failovers).

Note that any operations that rely on [PITR](#backups-and-point-in-time-recovery) are not available during the monitoring phase.

1. Finalize the upgrade on A, and finalize the upgrade on B. Do these as close to simultaneously as possible.

## Advanced - enable volatile AutoFlags during monitoring

{{< warning title="Important" >}}
2 changes: 2 additions & 0 deletions docs/content/stable/manage/ysql-major-upgrade-local.md
@@ -58,6 +58,8 @@ Performing a YSQL major upgrade on a universe with [CDC with logical replication
CREATE USER yugabyte_upgrade WITH SUPERUSER PASSWORD '<strong_password>';
```

- If you have PITR enabled, delete the configuration before performing the upgrade (see the sketch below). Recreate it only after the major upgrade is either finalized or rolled back.
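
A hedged sketch of finding, deleting, and later recreating a PITR schedule with yb-admin; the master addresses, schedule ID, interval, retention, and keyspace are placeholders:

```sh
# Sketch only: replace the placeholders with your own values.
# List existing PITR (snapshot) schedules and note the schedule ID.
./bin/yb-admin -master_addresses <master_addresses> list_snapshot_schedules

# Delete the schedule before starting the major upgrade.
./bin/yb-admin -master_addresses <master_addresses> delete_snapshot_schedule <schedule_id>

# After the upgrade is finalized (or rolled back), recreate the schedule,
# for example with a 1-hour interval and 10-hour retention on the yugabyte database.
./bin/yb-admin -master_addresses <master_addresses> \
    create_snapshot_schedule 60 600 ysql.yugabyte
```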

### Precheck

New PostgreSQL major versions add many new features and performance improvements, but also remove some older unsupported features and data types. You can only upgrade after you remove all deprecated features and data types from your databases.
2 changes: 2 additions & 0 deletions docs/content/stable/manage/ysql-major-upgrade-yugabyted.md
@@ -58,6 +58,8 @@ Performing a YSQL major upgrade on a universe with [CDC with logical replication
CREATE USER yugabyte_upgrade WITH SUPERUSER PASSWORD '<strong_password>';
```

- If you have PITR enabled, delete the configuration before performing the upgrade. Recreate it only after the major upgrade is either finalized or rolled back.

### Precheck

New PostgreSQL major versions add many new features and performance improvements, but also remove some older unsupported features and data types. You can only upgrade after you remove all deprecated features and data types from your databases.
2 changes: 1 addition & 1 deletion docs/content/stable/releases/ybdb-releases/v2025.1.md
@@ -731,7 +731,7 @@ For instructions on enabling bitmap-scan support, refer to [enable_bitmapscan](/

- [Advisory locks](/v2025.1/explore/transactions/explicit-locking/#advisory-locks). Enables session-based and transactional advisory locks for coordination and concurrency control in distributed environments. {{<tags/feature/ga idea="812">}}

- [Non-disruptive adding of indexes for xCluster](/v2025.1/deploy/multi-dc/async-replication/async-transactional-tables/). Adding an index to a database configured with [bi-directional xCluster Replication](/v2025.1/deploy/multi-dc/async-replication/async-deployment/#adding-ysql-indexes-in-bidirectional-replication) (or other non-transactional xCluster Repolication) is now a non-disruptive, no-downtime operation. {{<tags/feature/ga idea="1536">}}
- [Non-disruptive adding of indexes for xCluster](/v2025.1/deploy/multi-dc/async-replication/async-transactional-tables/). Adding an index to a database configured with [bi-directional xCluster Replication](/v2025.1/deploy/multi-dc/async-replication/async-deployment/#add-ysql-indexes-in-bidirectional-replication) (or other non-transactional xCluster Replication) is now a non-disruptive, no-downtime operation. {{<tags/feature/ga idea="1536">}}

- [PITR-flashback query](/v2025.1/manage/backup-restore/time-travel-query/). Allows querying historical database state by specifying a past timestamp, aiding in auditing, debugging, and data analysis. {{<tags/feature/ea idea="1182">}}

@@ -121,6 +121,8 @@ When [upgrading universes](../../manage-deployments/upgrade-software-install/) i

Note that switchover operations can potentially fail if the DR primary and replica are at different versions.

Refer to [Upgrades with xCluster and xCluster DR](../../manage-deployments/upgrade-software-install/#upgrades-with-xcluster-and-xcluster-dr).

## xCluster DR vs xCluster Replication

xCluster refers to all YugabyteDB deployments with two or more universes, and has two major flavors:
@@ -30,13 +30,11 @@ Ensure the universes have the following characteristics:

PITR is used by DR during failover to restore the database to a consistent state. Note that if the DR replica universe already has PITR configured, that configuration is replaced by the DR configuration.

Prepare your database and tables on the DR primary. Make sure the database and tables aren't already being used for xCluster replication; databases and tables can only be used in one replication at a time. The DR primary can be empty or have data. If the DR primary has a lot of data, the DR setup will take longer because the data must be copied in full to the DR replica before on-going asynchronous replication starts.

During DR setup in semi-automatic mode, create objects on the DR replica as well.
- They have network connectivity; see [Networking for xCluster](../../../prepare/networking/#networking-for-xcluster). If the source and target universe Master and TServer nodes use DNS addresses, those addresses must be resolvable on all nodes.

DR performs a full copy of the data to be replicated on the DR primary, and restores data on the DR replica from the DR primary.
Before starting DR, YugabyteDB Anywhere verifies network connectivity from every node in the DR replica universe to every node in the DR primary universe to rule out VPC misconfigurations or other network issues.

After DR is configured, the DR replica is only available for reads.
If your network policy blocks ping packets and you want to skip this connectivity precheck, you can disable it by setting the **Enable network connectivity check for xCluster** Global Runtime Configuration option (config key `yb.xcluster.network_connectivity_check.enabled`) to `false`. Refer to [Manage runtime configuration settings](../../administer-yugabyte-platform/manage-runtime-config/). Note that only a Super Admin user can modify Global configuration settings.

### Best practices

@@ -53,6 +51,14 @@ After DR is configured, the DR replica is only be available for reads.

## Set up disaster recovery

Prepare your database and tables on the DR primary. Make sure the database and tables aren't already being used for xCluster replication; databases and tables can only be used in one replication at a time. The DR primary can be empty or have data. If the DR primary has a lot of data, the DR setup will take longer because the data must be copied in full to the DR replica before ongoing asynchronous replication starts.

During DR setup in semi-automatic mode, create objects on the DR replica as well.

DR performs a full copy of the data to be replicated on the DR primary, and restores data on the DR replica from the DR primary.

After DR is configured, the DR replica is only available for reads.

To set up disaster recovery for a universe, do the following:

1. Navigate to your DR primary universe **xCluster Disaster Recovery** tab, and select the replication configuration.