Commit e272eaf

Authored by Copilot and Michal-Leszczynski
refactor(docs): replace "Scylla" with "ScyllaDB" (#4775)
* docs: Replace 'Scylla' with 'ScyllaDB' in documentation

  Replace 'Scylla' with 'ScyllaDB' in all .rst files in docs/source/ directory following context-aware rules:

  Replaced:
  - Product names (Scylla Manager, Scylla node, Scylla cluster, etc.)
  - Narrative text describing the database product
  - Link text descriptions

  Preserved:
  - Package names (scylla-manager, scylla-manager-agent)
  - File paths (/etc/scylla, /var/lib/scylla)
  - Configuration files (scylla.yaml)
  - Environment variables (SCYLLA_MANAGER_CLUSTER)
  - Command flags (--no-scylla-setup)
  - Table column headers in command output
  - Code blocks
  - URLs
  - SSTable filenames (big-Scylla.db)

  Total: 166 replacements across 28 files

  Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* docs: Fix RST title underlines after Scylla to ScyllaDB replacement

  Co-authored-by: Michal-Leszczynski <74614433+Michal-Leszczynski@users.noreply.github.com>

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Co-authored-by: Michal-Leszczynski <74614433+Michal-Leszczynski@users.noreply.github.com>
1 parent 15b9726 commit e272eaf

29 files changed

Lines changed: 150 additions & 150 deletions
Binary file not shown.

docs/source/add-a-cluster.rst

Lines changed: 7 additions & 7 deletions
@@ -4,18 +4,18 @@ Add a Cluster

 **Prerequisites**

-* Scylla Manager Agent is up and running on all Scylla nodes.
+* ScyllaDB Manager Agent is up and running on all ScyllaDB nodes.
 * All the Agents have the same :ref:`authentication token <configure-auth-token>` configured.
-* Traffic on the following ports is unblocked from the Scylla Manager Server to all the Scylla nodes.
+* Traffic on the following ports is unblocked from the ScyllaDB Manager Server to all the ScyllaDB nodes.

-  * ``10001`` - Scylla Manager Agent REST API (HTTPS)
+  * ``10001`` - ScyllaDB Manager Agent REST API (HTTPS)
   * CQL port (typically ``9042``) - required for CQL health check status reports

 .. _add-cluster:

 **Procedure**

-#. From the Scylla Manager Server, provide the IP address of one of the nodes, the generated auth token, and a custom name.
+#. From the ScyllaDB Manager Server, provide the IP address of one of the nodes, the generated auth token, and a custom name.

    Example (IPv4):
@@ -48,7 +48,7 @@ Add a Cluster
 * ``--host`` is hostname or IP of one of the cluster nodes. You can use an IPv6 or an IPv4 address.
 * ``--name`` is an alias you can give to your cluster.
   Using an alias means you do not need to use the ID of the cluster in all other operations.
-  This name must be used when connecting the managed cluster to Scylla Monitor, but does not have to be the same name you used in scylla.yaml.
+  This name must be used when connecting the managed cluster to ScyllaDB Monitor, but does not have to be the same name you used in scylla.yaml.
 * ``--auth-token`` is the :ref:`authentication token <configure-auth-token>` you generated.

 Each cluster has a unique ID (UUID), you will see it printed to stdout in ``sctool cluster add`` output when the cluster is added.
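For orientation, the flags discussed in this hunk combine into a single invocation; the host, name, and token values below are placeholders, not taken from the commit:

```shell
# Placeholder values; run from the host where the Manager Server is installed.
sctool cluster add \
  --host 192.168.100.11 \
  --name prod-cluster \
  --auth-token "example-auth-token"
# On success, the new cluster's UUID is printed to stdout.
```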
@@ -60,7 +60,7 @@ Add a Cluster
   This can be canceled using ``--without-repair``.
   To use a different repair schedule, see :ref:`Schedule a Repair <schedule-a-repair>`.

-Scylla manager requires CQL credentials to the cluster with ``--username`` and ``--password`` flags.
+ScyllaDB manager requires CQL credentials to the cluster with ``--username`` and ``--password`` flags.
 This enables :ref:`CQL query based health check <cql-query-health-check>` compared to :ref:`credentials agnostic health check <credentials-agnostic-health-check>` if you do not specify the credentials.
 This also enables CQL schema backup in text format, which isn't performed if credentials aren't provided. Restore uses the backed up schema as part of the restore process.
 For security reasons the CQL user should NOT have access to read your data.
@@ -88,7 +88,7 @@ Add a Cluster

 .. note:: If you want to change the schedule for the repair, use the :ref:`repair update sctool <reschedule-a-repair>` command.

-#. Verify Scylla Manager can communicate with all the Agents, and the the cluster status is OK by running the ``sctool status`` command.
+#. Verify ScyllaDB Manager can communicate with all the Agents, and the the cluster status is OK by running the ``sctool status`` command.

 .. code-block:: none

docs/source/backup/index.rst

Lines changed: 4 additions & 4 deletions
@@ -83,7 +83,7 @@ Backup location
 ===============

 You need to create a backup location for example an S3 bucket.
-We recommend creating it in the same region as Scylla nodes to minimize cross region data transfer costs.
+We recommend creating it in the same region as ScyllaDB nodes to minimize cross region data transfer costs.
 In multi-dc deployments you should create a bucket per datacenter, each located in the datacenter's region.

 Details may differ depending on the storage engine, please consult:
@@ -98,10 +98,10 @@ Removing backups

 Backups may require a lot of storage space. They are purged according to the retention defined on the backup task.

-`Sctool` can be used to remove snapshots of clusters that are no longer managed by Scylla Manager.
-The removal process is performed through the Scylla Manager Agent installed on Scylla nodes.
+`Sctool` can be used to remove snapshots of clusters that are no longer managed by ScyllaDB Manager.
+The removal process is performed through the ScyllaDB Manager Agent installed on ScyllaDB nodes.

-However, it's recommended to delete the snapshots from the storage before removing the cluster from Scylla Manager.
+However, it's recommended to delete the snapshots from the storage before removing the cluster from ScyllaDB Manager.
 Otherwise, you will need to add the cluster again, list the snapshots in the given location, and remove them using the new cluster as the coordinator.
 Another option is to purge them manually. If you want to remove the snapshots manually, please refer to the :doc:`backup specification <specification>`
 and remove them accordingly.
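As a sketch of the snapshot removal path this hunk describes, assuming the `sctool backup list` and `sctool backup delete` subcommands; cluster name, bucket, and snapshot tag are placeholders:

```shell
# List snapshots in the backup location, then purge one by its snapshot tag.
sctool backup list --cluster prod-cluster --location s3:scylla-manager-backup
sctool backup delete --cluster prod-cluster --snapshot-tag sm_20210809095541UTC
```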

docs/source/backup/setup-amazon-s3.rst

Lines changed: 6 additions & 6 deletions
@@ -9,14 +9,14 @@ Setup Amazon S3
 Create a bucket
 ===============

-Go to `Amazon S3 <https://aws.amazon.com/s3/>`_ and create a new bucket in a region where Scylla nodes are.
+Go to `Amazon S3 <https://aws.amazon.com/s3/>`_ and create a new bucket in a region where ScyllaDB nodes are.
 If your cluster is deployed in multiple regions create a bucket per region.
 You may decide to backup only a single datacenter to save on costs, in that case create only one bucket in a region you want to backup.

 Grant access
 ============

-This procedure is required so that Scylla Manager can access your bucket.
+This procedure is required so that ScyllaDB Manager can access your bucket.

 Choose how you want to configure access to the bucket.
 You can use an IAM role (recommended) or you can add your credentials to the agent configuration file.
@@ -69,7 +69,7 @@ Sample IAM policy for *scylla-manager-backup* bucket:
 Config file
 -----------

-Note that this procedure needs to be repeated for each Scylla node.
+Note that this procedure needs to be repeated for each ScyllaDB node.

 **Procedure**

@@ -86,7 +86,7 @@ Edit the ``/etc/scylla-manager-agent/scylla-manager-agent.yaml``

    scylla-manager-agent check-location --location s3:<your S3 bucket name>

-#. Restart Scylla Manager Agent service.
+#. Restart ScyllaDB Manager Agent service.

 .. code-block:: none
@@ -97,12 +97,12 @@ Additional features

 You can enable additional Amazon S3 features such as **server side encryption** or **transfer acceleration**.
 Those need to be enabled on per Agent basis in the configuration file.
-Check out the ``s3`` section in :doc:`Scylla Manager Agent Config file <../config/scylla-manager-agent-config>`.
+Check out the ``s3`` section in :doc:`ScyllaDB Manager Agent Config file <../config/scylla-manager-agent-config>`.

 Troubleshoot connectivity
 =========================

-To troubleshoot Scylla node to bucket connectivity issues you can run:
+To troubleshoot ScyllaDB node to bucket connectivity issues you can run:

 .. code-block:: none
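For reference, a minimal sketch of the ``s3`` section of ``/etc/scylla-manager-agent/scylla-manager-agent.yaml`` when using credentials instead of an IAM role; the key names are assumed from the agent config and all values are placeholders:

```yaml
s3:
  access_key_id: AKIAIOSFODNN7EXAMPLE          # placeholder
  secret_access_key: wJalrXUtnFEMIK7MDENGEXAMPLE  # placeholder
  region: us-east-1
```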

docs/source/backup/setup-azure-blobstorage.rst

Lines changed: 6 additions & 6 deletions
@@ -12,17 +12,17 @@ Create a container
 ==================

 Go to `Azure Portal <https://portal.azure.com/>`_ and create a new container within your storage account.
-This container should be only used for storing Scylla Manager backups.
+This container should be only used for storing ScyllaDB Manager backups.
 If your cluster is deployed in multiple regions create a storage account and container per region.
 You may decide to backup only a single datacenter to save on costs, in that case create only one storage account and container in a region you want to backup.

 Grant access
 ============

-This procedure is required so that Scylla Manager can access your containers.
+This procedure is required so that ScyllaDB Manager can access your containers.

 Choose how you want to configure access to the container.
-You can use an `IAM role`_ (recommended) or you can add storage account credentials (account/key) to the Scylla Manager Agent configuration file.
+You can use an `IAM role`_ (recommended) or you can add storage account credentials (account/key) to the ScyllaDB Manager Agent configuration file.
 The latter method is not recommended because you are placing the security information directly on each node, which is much less secure than the IAM role method. In addition, if you need to change the key, you will have to replace it on every node.

 IAM role
@@ -75,7 +75,7 @@ You can use permissions from the provided sample but make sure to set proper val
 Config file
 -----------

-Note that this procedure needs to be repeated for each Scylla node.
+Note that this procedure needs to be repeated for each ScyllaDB node.

 **Procedure**

@@ -91,7 +91,7 @@ Edit the ``/etc/scylla-manager-agent/scylla-manager-agent.yaml``

    scylla-manager-agent check-location --location azure:<blob storage container name>

-#. Restart Scylla Manager Agent service.
+#. Restart ScyllaDB Manager Agent service.

 .. code-block:: none
@@ -100,7 +100,7 @@ Edit the ``/etc/scylla-manager-agent/scylla-manager-agent.yaml``
 Troubleshoot connectivity
 =========================

-To troubleshoot Scylla node to bucket connectivity issues you can run:
+To troubleshoot ScyllaDB node to bucket connectivity issues you can run:

 .. code-block:: none
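For reference, a minimal sketch of the account/key variant of the agent's ``azure`` config section that this hunk mentions; the key names are assumed and the values are placeholders:

```yaml
azure:
  account: mystorageaccount   # placeholder storage account name
  key: EXAMPLEACCOUNTKEY==    # placeholder account key
```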

docs/source/backup/setup-gcs.rst

Lines changed: 5 additions & 5 deletions
@@ -9,14 +9,14 @@ Setup Google Cloud Storage
 Create a bucket
 ===============

-Go to `Google Cloud Storage <https://cloud.google.com/storage>`_ and create a new bucket in a region where Scylla nodes are.
+Go to `Google Cloud Storage <https://cloud.google.com/storage>`_ and create a new bucket in a region where ScyllaDB nodes are.
 If your cluster is deployed in multiple regions create a bucket per region.
 You may decide to backup only a single datacenter to save on costs, in that case create only one bucket in a region you want to backup.

 Grant access
 ============

-This procedure is required so that Scylla Manager can access your bucket.
+This procedure is required so that ScyllaDB Manager can access your bucket.

 Choose how you want to configure access to the bucket.
 If your application runs inside a Google Cloud environment we recommend using automatic Service account authentication.
@@ -28,14 +28,14 @@ Automatic service account authorization

 **Procedure**

-#. Collect list of `service accounts <https://cloud.google.com/compute/docs/access/service-accounts>`_ used by **each** of the Scylla nodes.
+#. Collect list of `service accounts <https://cloud.google.com/compute/docs/access/service-accounts>`_ used by **each** of the ScyllaDB nodes.
 #. Make sure that each of service account has read/write `access scope <https://cloud.google.com/compute/docs/access/service-accounts#accesscopesiam>`_ to Cloud Storage.
 #. For each service account from the list, add `Storage Object Admin role <https://cloud.google.com/storage/docs/access-control/iam-roles>`_ in bucket permissions settings.

 Service account file
 --------------------

-Note that this procedure needs to be repeated for each Scylla node.
+Note that this procedure needs to be repeated for each ScyllaDB node.

 **Prerequisites**

@@ -44,7 +44,7 @@ Use `this instruction <https://cloud.google.com/docs/authentication/production#m
 **Procedure**

 #. Upload service account file to ``/etc/scylla-manager-agent/gcs-service-account.json``.
-   If you want to use different path change service_account_file parameter in ``gcs`` section in :doc:`Scylla Manager Agent Config file <../config/scylla-manager-agent-config>`.
+   If you want to use different path change service_account_file parameter in ``gcs`` section in :doc:`ScyllaDB Manager Agent Config file <../config/scylla-manager-agent-config>`.
 #. Validate that the manager has access to the backup location.
    If there is no response, the bucket is accessible. If not, you will see an error.
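For reference, a minimal sketch of the ``gcs`` section with the ``service_account_file`` parameter this hunk mentions, using the default path from the procedure above:

```yaml
gcs:
  # Default path; change this only if you uploaded the file elsewhere.
  service_account_file: /etc/scylla-manager-agent/gcs-service-account.json
```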

docs/source/backup/setup-s3-compatible-storage.rst

Lines changed: 5 additions & 5 deletions
@@ -6,7 +6,7 @@ Setup S3 compatible storage
    :depth: 2
    :local:

-There are multiple S3 API compatible providers that can be used with Scylla Manager.
+There are multiple S3 API compatible providers that can be used with ScyllaDB Manager.
 Due to minor differences between them we require that exact provider is specified in the config file for full compatibility.
 The available providers are Alibaba, AWS, Ceph, DigitalOcean, IBMCOS, Minio, Wasabi, Dreamhost, Netease.

@@ -18,8 +18,8 @@ You need to create a bucket in your storage system of choice.
 Grant access
 ============

-This procedure is required so that Scylla Manager can access your bucket.
-You need to configure bucket access policy in your storage system and set credentials in the Scylla Manager Agent config file.
+This procedure is required so that ScyllaDB Manager can access your bucket.
+You need to configure bucket access policy in your storage system and set credentials in the ScyllaDB Manager Agent config file.

 Policy
 ------
@@ -60,7 +60,7 @@ Given `myminio` is an alias for your MinIO deployment.
 Config file
 -----------

-Note that this procedure needs to be repeated for each Scylla node.
+Note that this procedure needs to be repeated for each ScyllaDB node.

 **Procedure**

@@ -77,7 +77,7 @@ Edit the ``/etc/scylla-manager-agent/scylla-manager-agent.yaml``

    scylla-manager-agent check-location --location s3:<your S3 bucket name>

-#. Restart Scylla Manager Agent service.
+#. Restart ScyllaDB Manager Agent service.

 .. code-block:: none
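Since the hunks above require the exact provider to be named in the config file, here is a sketch of the ``s3`` section for a MinIO deployment; the endpoint and credentials are placeholders and the key names are assumed from the agent config:

```yaml
s3:
  provider: Minio
  endpoint: http://192.168.100.99:9000   # placeholder MinIO endpoint
  access_key_id: minio-access-key        # placeholder
  secret_access_key: minio-secret-key    # placeholder
```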

docs/source/backup/specification.rst

Lines changed: 4 additions & 4 deletions
@@ -9,8 +9,8 @@ Specification
 Directory Layout
 ----------------

-The Scylla Manager backup requires a backup location string that specifies the storage provider and name of a bucket (using Amazon S3 naming) ex. ``s3:<your S3 bucket name>``.
-In that bucket Scylla Manager creates a ``backup`` directory where all the backup data and metadata are stored.
+The ScyllaDB Manager backup requires a backup location string that specifies the storage provider and name of a bucket (using Amazon S3 naming) ex. ``s3:<your S3 bucket name>``.
+In that bucket ScyllaDB Manager creates a ``backup`` directory where all the backup data and metadata are stored.

 There are three subdirectories:

@@ -253,7 +253,7 @@ while ``mc-5-big-Data.db.sm_20210809095541UTC`` will be used when restoring ``sm
 Manifest File
 -------------

-Scylla Manager Manifest files are gzipped JSON files.
+ScyllaDB Manager Manifest files are gzipped JSON files.
 Each node has it's own manifest file.
 If a cluster has three nodes a backup would contain three manifest files with the same name but under different directories.
 Please find below the contents of the manifest file of the node shown in the sst section.
@@ -324,7 +324,7 @@ Please find below the contents of the manifest file of the node shown in the sst
 The manifest contains the following information.

 * version - the version of the manifest
-* cluster_name - name of the cluster as registered in Scylla Manager
+* cluster_name - name of the cluster as registered in ScyllaDB Manager
 * ip - public IP address of the node
 * index - list of tables, each table holds a list of file names
 * size - total size of files in index
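Since the hunks above describe the manifest as a gzipped JSON file carrying ``version``, ``cluster_name``, ``ip``, ``index``, and ``size``, here is a sketch of writing and reading one; the field values and the index entry shape are invented for illustration:

```python
import gzip
import json

# Hypothetical manifest content; real manifests hold the fields listed above.
sample = {
    "version": "v2",
    "cluster_name": "prod-cluster",
    "ip": "192.168.100.11",
    "index": [{"keyspace": "ks", "table": "t", "files": ["mc-5-big-Data.db"]}],
    "size": 1024,
}

path = "manifest.json.gz"
# Manifests are gzipped JSON, so write through gzip in text mode.
with gzip.open(path, "wt", encoding="utf-8") as f:
    json.dump(sample, f)

# Reading it back, as a restore tool might:
with gzip.open(path, "rt", encoding="utf-8") as f:
    manifest = json.load(f)

print(manifest["cluster_name"], manifest["size"])
```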
Lines changed: 3 additions & 3 deletions
@@ -1,4 +1,4 @@

-* Healthcheck - which checks the Scylla CQL, repeating every 15 seconds.
-* Healthcheck Alternator - which checks the Scylla Alternator API, repeating every 15 seconds.
-* Healthcheck REST - which checks the Scylla REST API, repeating every minute.
+* Healthcheck - which checks the ScyllaDB CQL, repeating every 15 seconds.
+* Healthcheck Alternator - which checks the ScyllaDB Alternator API, repeating every 15 seconds.
+* Healthcheck REST - which checks the ScyllaDB REST API, repeating every minute.

docs/source/config/index.rst

Lines changed: 1 addition & 1 deletion
@@ -9,4 +9,4 @@ Configuration Files
    scylla-manager-config
    scylla-manager-agent-config

-This page is where you will find information on Scylla Manager and Scylla Manager Agent configuration files.
+This page is where you will find information on ScyllaDB Manager and ScyllaDB Manager Agent configuration files.
