docs/source/add-a-cluster.rst (+7 -7)
@@ -4,18 +4,18 @@ Add a Cluster
 
 **Prerequisites**
 
-* Scylla Manager Agent is up and running on all Scylla nodes.
+* ScyllaDB Manager Agent is up and running on all ScyllaDB nodes.
 * All the Agents have the same :ref:`authentication token <configure-auth-token>` configured.
-* Traffic on the following ports is unblocked from the Scylla Manager Server to all the Scylla nodes.
+* Traffic on the following ports is unblocked from the ScyllaDB Manager Server to all the ScyllaDB nodes.
 
-  * ``10001`` - Scylla Manager Agent REST API (HTTPS)
+  * ``10001`` - ScyllaDB Manager Agent REST API (HTTPS)
   * CQL port (typically ``9042``) - required for CQL health check status reports
 
 .. _add-cluster:
 
 **Procedure**
 
-#. From the Scylla Manager Server, provide the IP address of one of the nodes, the generated auth token, and a custom name.
+#. From the ScyllaDB Manager Server, provide the IP address of one of the nodes, the generated auth token, and a custom name.
 
    Example (IPv4):
@@ -48,7 +48,7 @@ Add a Cluster
 * ``--host`` is the hostname or IP of one of the cluster nodes. You can use an IPv6 or an IPv4 address.
 * ``--name`` is an alias you can give to your cluster.
   Using an alias means you do not need to use the ID of the cluster in all other operations.
-  This name must be used when connecting the managed cluster to Scylla Monitor, but does not have to be the same name you used in scylla.yaml.
+  This name must be used when connecting the managed cluster to ScyllaDB Monitor, but does not have to be the same name you used in scylla.yaml.
 * ``--auth-token`` is the :ref:`authentication token <configure-auth-token>` you generated.
 
 Each cluster has a unique ID (UUID); you will see it printed to stdout in the ``sctool cluster add`` output when the cluster is added.
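The flags described above combine into a single registration command. A minimal sketch, with placeholder values for the host, alias, and token (the IP address and cluster name here are illustrative, not taken from the source):

```console
sctool cluster add --host 198.51.100.11 --name prod-cluster --auth-token <auth-token>
```

On success, the new cluster's UUID is printed to stdout, although with the ``--name`` alias you rarely need it.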
@@ -60,7 +60,7 @@ Add a Cluster
   This can be canceled using ``--without-repair``.
   To use a different repair schedule, see :ref:`Schedule a Repair <schedule-a-repair>`.
 
-  Scylla manager requires CQL credentials to the cluster with ``--username`` and ``--password`` flags.
+  ScyllaDB Manager requires CQL credentials to the cluster, passed with the ``--username`` and ``--password`` flags.
   This enables the :ref:`CQL query based health check <cql-query-health-check>` instead of the :ref:`credentials agnostic health check <credentials-agnostic-health-check>` used when no credentials are specified.
   This also enables CQL schema backup in text format, which isn't performed if credentials aren't provided. Restore uses the backed-up schema as part of the restore process.
   For security reasons the CQL user should NOT have access to read your data.
@@ -88,7 +88,7 @@ Add a Cluster
 
 .. note:: If you want to change the schedule for the repair, use the :ref:`repair update sctool <reschedule-a-repair>` command.
 
-#. Verify Scylla Manager can communicate with all the Agents, and the the cluster status is OK by running the ``sctool status`` command.
+#. Verify ScyllaDB Manager can communicate with all the Agents and that the cluster status is OK by running the ``sctool status`` command.
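The verification step can be sketched as follows, assuming the cluster was registered under the alias ``prod-cluster`` (``-c`` selects the cluster by name or UUID):

```console
sctool status -c prod-cluster
```

Every node should report a reachable Agent and a healthy CQL status; consult the sctool reference for your Manager version for the exact output columns.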
docs/source/backup/index.rst (+4 -4)
@@ -83,7 +83,7 @@ Backup location
 ===============
 
 You need to create a backup location, for example an S3 bucket.
-We recommend creating it in the same region as Scylla nodes to minimize cross region data transfer costs.
+We recommend creating it in the same region as ScyllaDB nodes to minimize cross-region data transfer costs.
 In multi-dc deployments you should create a bucket per datacenter, each located in the datacenter's region.
 
 Details may differ depending on the storage engine, please consult:
@@ -98,10 +98,10 @@ Removing backups
 
 Backups may require a lot of storage space. They are purged according to the retention defined on the backup task.
 
-`Sctool` can be used to remove snapshots of clusters that are no longer managed by Scylla Manager.
-The removal process is performed through the Scylla Manager Agent installed on Scylla nodes.
+``sctool`` can be used to remove snapshots of clusters that are no longer managed by ScyllaDB Manager.
+The removal process is performed through the ScyllaDB Manager Agent installed on ScyllaDB nodes.
 
-However, it's recommended to delete the snapshots from the storage before removing the cluster from Scylla Manager.
+However, it's recommended to delete the snapshots from the storage before removing the cluster from ScyllaDB Manager.
 Otherwise, you will need to add the cluster again, list the snapshots in the given location, and remove them using the new cluster as the coordinator.
 Another option is to purge them manually. If you want to remove the snapshots manually, please refer to the :doc:`backup specification <specification>`.
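The list-then-remove flow described above can be sketched with sctool. The flag names below are assumed from common sctool conventions, so verify them with ``sctool backup list --help`` and ``sctool backup delete --help`` for your Manager version:

```console
# List snapshots under a location, including ones taken by clusters
# that are no longer managed (placeholders in angle brackets).
sctool backup list -c <cluster> --all-clusters --location s3:<bucket>

# Remove a snapshot by its tag.
sctool backup delete -c <cluster> --snapshot-tag sm_20210809095541UTC
```

The snapshot tag format (``sm_<timestamp>UTC``) matches the example used in the backup specification document.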
docs/source/backup/setup-azure-blobstorage.rst (+6 -6)
@@ -12,17 +12,17 @@ Create a container
 ==================
 
 Go to `Azure Portal <https://portal.azure.com/>`_ and create a new container within your storage account.
-This container should be only used for storing Scylla Manager backups.
+This container should only be used for storing ScyllaDB Manager backups.
 If your cluster is deployed in multiple regions, create a storage account and container per region.
 You may decide to back up only a single datacenter to save on costs; in that case, create only one storage account and container in the region you want to back up.
 
 Grant access
 ============
 
-This procedure is required so that Scylla Manager can access your containers.
+This procedure is required so that ScyllaDB Manager can access your containers.
 
 Choose how you want to configure access to the container.
-You can use an `IAM role`_ (recommended) or you can add storage account credentials (account/key) to the Scylla Manager Agent configuration file.
+You can use an `IAM role`_ (recommended) or you can add storage account credentials (account/key) to the ScyllaDB Manager Agent configuration file.
 The latter method is not recommended because you are placing the security information directly on each node, which is much less secure than the IAM role method. In addition, if you need to change the key, you will have to replace it on every node.
 
 IAM role
@@ -75,7 +75,7 @@ You can use permissions from the provided sample but make sure to set proper val
 Config file
 -----------
 
-Note that this procedure needs to be repeated for each Scylla node.
+Note that this procedure needs to be repeated for each ScyllaDB node.
 
 **Procedure**
 
@@ -91,7 +91,7 @@ Edit the ``/etc/scylla-manager-agent/scylla-manager-agent.yaml``
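The last hunk above edits ``/etc/scylla-manager-agent/scylla-manager-agent.yaml``. For the non-recommended account/key method, the fragment would look roughly like this; the key names are an assumption, so confirm them against the reference config shipped with the agent:

```yaml
# /etc/scylla-manager-agent/scylla-manager-agent.yaml (fragment)
# NOTE: key names below are assumed; check the agent's reference config.
azure:
  account: <storage-account-name>
  key: <storage-account-access-key>
```

Restart the agent after editing the file so the change takes effect.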
docs/source/backup/setup-gcs.rst (+5 -5)
@@ -9,14 +9,14 @@ Setup Google Cloud Storage
 Create a bucket
 ===============
 
-Go to `Google Cloud Storage <https://cloud.google.com/storage>`_ and create a new bucket in a region where Scylla nodes are.
+Go to `Google Cloud Storage <https://cloud.google.com/storage>`_ and create a new bucket in a region where ScyllaDB nodes are.
 If your cluster is deployed in multiple regions, create a bucket per region.
 You may decide to back up only a single datacenter to save on costs; in that case, create only one bucket in the region you want to back up.
 
 Grant access
 ============
 
-This procedure is required so that Scylla Manager can access your bucket.
+This procedure is required so that ScyllaDB Manager can access your bucket.
 
 Choose how you want to configure access to the bucket.
 If your application runs inside a Google Cloud environment, we recommend using automatic service account authentication.
@@ -28,14 +28,14 @@ Automatic service account authorization
 
 **Procedure**
 
-#. Collect list of `service accounts <https://cloud.google.com/compute/docs/access/service-accounts>`_ used by **each** of the Scylla nodes.
+#. Collect the list of `service accounts <https://cloud.google.com/compute/docs/access/service-accounts>`_ used by **each** of the ScyllaDB nodes.
 #. Make sure that each service account has a read/write `access scope <https://cloud.google.com/compute/docs/access/service-accounts#accesscopesiam>`_ to Cloud Storage.
 #. For each service account from the list, add the `Storage Object Admin role <https://cloud.google.com/storage/docs/access-control/iam-roles>`_ in the bucket permissions settings.
 
 Service account file
 --------------------
 
-Note that this procedure needs to be repeated for each Scylla node.
+Note that this procedure needs to be repeated for each ScyllaDB node.
 
 **Prerequisites**
@@ -44,7 +44,7 @@ Use `this instruction <https://cloud.google.com/docs/authentication/production#m
 **Procedure**
 
 #. Upload the service account file to ``/etc/scylla-manager-agent/gcs-service-account.json``.
-   If you want to use different path change service_account_file parameter in ``gcs`` section in :doc:`Scylla Manager Agent Config file <../config/scylla-manager-agent-config>`.
+   If you want to use a different path, change the ``service_account_file`` parameter in the ``gcs`` section of the :doc:`ScyllaDB Manager Agent Config file <../config/scylla-manager-agent-config>`.
 #. Validate that the manager has access to the backup location.
    If there is no response, the bucket is accessible. If not, you will see an error.
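The ``service_account_file`` override mentioned in step 1 is a one-line change in the agent configuration. A sketch of the relevant fragment, using the default upload path from the text (only change the value if you uploaded the file elsewhere):

```yaml
# /etc/scylla-manager-agent/scylla-manager-agent.yaml (fragment)
gcs:
  service_account_file: /etc/scylla-manager-agent/gcs-service-account.json
```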
docs/source/backup/specification.rst (+4 -4)
@@ -9,8 +9,8 @@ Specification
 Directory Layout
 ----------------
 
-The Scylla Manager backup requires a backup location string that specifies the storage provider and name of a bucket (using Amazon S3 naming) ex. ``s3:<your S3 bucket name>``.
-In that bucket Scylla Manager creates a ``backup`` directory where all the backup data and metadata are stored.
+The ScyllaDB Manager backup requires a backup location string that specifies the storage provider and the name of a bucket (using Amazon S3 naming), e.g. ``s3:<your S3 bucket name>``.
+In that bucket, ScyllaDB Manager creates a ``backup`` directory where all the backup data and metadata are stored.
 
 There are three subdirectories:
@@ -253,7 +253,7 @@ while ``mc-5-big-Data.db.sm_20210809095541UTC`` will be used when restoring ``sm
 Manifest File
 -------------
 
-Scylla Manager Manifest files are gzipped JSON files.
+ScyllaDB Manager manifest files are gzipped JSON files.
 Each node has its own manifest file.
 If a cluster has three nodes, a backup contains three manifest files with the same name but under different directories.
 Please find below the contents of the manifest file of the node shown in the sst section.
@@ -324,7 +324,7 @@ Please find below the contents of the manifest file of the node shown in the sst
 The manifest contains the following information.
 
 * version - the version of the manifest
-* cluster_name - name of the cluster as registered in Scylla Manager
+* cluster_name - name of the cluster as registered in ScyllaDB Manager
 * ip - public IP address of the node
 * index - list of tables, each table holds a list of file names