
Commit 45854d0

Minor docs update (#1361)
- Update grafana dashboard install
- Add operational procedure for migrating to galera replication mode
1 parent 844818b commit 45854d0

File tree

2 files changed: +49 −4 lines changed


docs/import-grafana-dashboard.md

Lines changed: 4 additions & 3 deletions
@@ -31,8 +31,9 @@ options:
   Name of the Prometheus datasource. Default: "Prometheus"
 
 export GRAFANA_USERNAME=admin
-export GRAFANA_URL=https://grafana.sjc3.rackspacecloud.com
-export GRAFANA_PASSWORD=your_admin_password
+export GRAFANA_URL=`awk -F': ' '/custom_host/{print "https://" $2}' /etc/genestack/helm-configs/grafana/grafana-helm-overrides.yaml`
+export GRAFANA_PASSWORD=`kubectl -n grafana get secret grafana -o jsonpath='{.data.admin-password}' |base64 -d`
 
-python import_dashboards.py --dir /opt/genestack/etc/grafana-dashboards/ --datasource Prometheus
+source /opt/genestack/scripts/genestack.rc
+python3 /opt/genestack/scripts/import-grafana-dashboard.py --dir /opt/genestack/etc/grafana-dashboards/ --datasource Prometheus
 ```
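The two backtick substitutions in the new export lines can be tried locally before running them against a cluster. A minimal sketch, using a hypothetical overrides file and a dummy base64 value rather than the real cluster data, showing what each expression produces:

``` shell
# Hypothetical stand-in for
# /etc/genestack/helm-configs/grafana/grafana-helm-overrides.yaml
cat > /tmp/grafana-helm-overrides.yaml <<'EOF'
custom_host: grafana.example.com
EOF

# Same awk expression as above: split on ": ", match the custom_host key,
# and prepend the https scheme to its value.
GRAFANA_URL=$(awk -F': ' '/custom_host/{print "https://" $2}' /tmp/grafana-helm-overrides.yaml)
echo "$GRAFANA_URL"   # https://grafana.example.com

# Kubernetes secrets come back base64-encoded, hence the `base64 -d` in the
# GRAFANA_PASSWORD line. Dummy value shown here, not a real admin password.
echo 'c2VjcmV0cGFzcw==' | base64 -d; echo   # secretpass
```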

docs/infrastructure-mariadb-ops.md

Lines changed: 45 additions & 1 deletion
@@ -158,7 +158,7 @@ for more information.
 maria-restore   True     Success   mariadb-cluster   26s
 ```
 
-## Fixing Replication
+## Fixing Master-Slave Replication
 
 The MariaDB Operator can handle most cluster issues automatically, but
 sometimes you’ll need to roll up your sleeves and step in to fix things.
@@ -256,3 +256,47 @@ Identify master log file and position from the backup file:
 2025-01-28 22:22:55 64 [Note] Master 'mariadb-operator': Slave SQL thread initialized, starting replication in log 'FIRST' at position 4, relay log './mariadb-cluster-relay-bin-mariadb@002doperator.000001' position: 4; GTID position '0-11-638858622'
 2025-01-28 22:22:55 63 [Note] Master 'mariadb-operator': Slave I/O thread: connected to master 'repl@mariadb-cluster-1.mariadb-cluster-internal.openstack.svc.cluster.local:3306',replication starts at GTID position '0-11-638858622'
 ```
+
+## Switching from master/slave replication to galera replication mode
+
+If the mariadb cluster was originally set up using master/slave replication,
+a switch to galera replication is only possible with a freshly bootstrapped
+cluster. The procedure below will rebuild the entire database and restore it
+from the most recent backup.
+
+!!! warning
+    Before switching the replication mode, ensure that you create a database
+    backup before deleting the cluster and that your mariadb operator is
+    running version 0.38.1 or higher [see pr #1250](https://github.com/rackerlabs/genestack/pull/1250).
+    Otherwise, automatic failover will not work for the galera cluster.
+
+Check the operator versions with
+
+``` shell
+kubectl -n mariadb-system get pods -o="custom-columns=NAME:.spec.containers[0].name,IMAGE:.spec.containers[0].image"
+```
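The warning above requires operator version 0.38.1 or higher. One way to check a tag against that minimum is a `sort -V` (version sort) comparison; a small sketch with a hypothetical `current` value (substitute the tag from the IMAGE column of the command above):

``` shell
required=0.38.1
current=0.38.1   # hypothetical: paste the operator image tag here

# sort -V orders version strings numerically; if the required version sorts
# first (or the two are equal), the running operator satisfies the minimum.
lowest=$(printf '%s\n%s\n' "$required" "$current" | sort -V | head -n1)
if [ "$lowest" = "$required" ]; then
    echo "operator version ok"
else
    echo "operator too old, upgrade before switching replication mode"
fi
```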
+
+``` shell
+# Delete the database and persistent volumes
+kubectl -n openstack delete mariadb/mariadb-cluster
+kubectl -n openstack delete pvc -l app.kubernetes.io/instance=mariadb-cluster
+
+# Rebuild the cluster with galera replication
+kubectl -n openstack apply -k /etc/genestack/kustomize/mariadb-cluster/galera
+
+kubectl -n openstack wait mariadb mariadb-cluster --for=condition=Ready
+
+# Restore the database from the last backup
+kubectl -n openstack apply -f - <<EOT
+apiVersion: k8s.mariadb.com/v1alpha1
+kind: Restore
+metadata:
+  name: restore-mariadb
+spec:
+  mariaDbRef:
+    name: mariadb-cluster
+  backupRef:
+    name: mariadb-backup
+EOT
+
+# Delete the restore job
+kubectl -n openstack delete restore/restore-mariadb
+```
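For reference, the kustomize overlay applied above switches the `MariaDB` resource over to Galera. A minimal sketch of what such a resource looks like with the mariadb-operator `v1alpha1` API is shown below; the actual overlay under /etc/genestack is not shown in this diff and will likely set additional options:

``` yaml
apiVersion: k8s.mariadb.com/v1alpha1
kind: MariaDB
metadata:
  name: mariadb-cluster
  namespace: openstack
spec:
  replicas: 3
  galera:
    enabled: true   # multi-primary Galera instead of master/slave replication
```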

0 commit comments
