Commit 2c84428: ZDM Helm charts for Kubernetes
1 parent 08142af
23 files changed: +10478 -0 lines

grafana-dashboards/ZDM Proxy Dashboard v2.json

+9,283 (large diff not rendered by default)

kubernetes/README.md

+56
@@ -0,0 +1,56 @@
Usage:

1. Create a dedicated namespace for ZDM Proxy.

```kubectl create ns zdm-proxy```

2. Update the `values.yaml` file to reflect the configuration of your ZDM Proxy environment. All referenced files should be placed
in the same directory as the values file. The Helm chart will automatically create Kubernetes secrets for the passwords,
TLS certificates and the Secure Connect Bundle. A minimal sketch of the values file is shown below.
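
For illustration, here is a sketch of what such a values file might contain. The connectivity and credential key names below are assumptions made for this example (check the chart's own `values.yaml` for the real schema); `count`, `primaryCluster` and `resources` are the keys used with `--set` elsewhere in this README, and the sample values are taken from `demo.tape` in this folder.

```
# Hypothetical sketch only -- consult zdm/values.yaml for the actual key names.
count: 3                          # number of proxy pods (see step 7)
primaryCluster: ORIGIN            # ORIGIN or TARGET (see step 7)
resources:
  requests:
    cpu: 1000m                    # reduced allocation, see step 3
    memory: 2000Mi
  limits:
    cpu: 1000m
    memory: 2000Mi
origin:                           # assumed section name
  contactPoints: 192.168.65.254
  username: cassandra
  password: cassandra
target:                           # assumed section name
  secureConnectBundle: secure-connect-bundle-target.zip
  username: <astra-client-id>     # placeholder
  password: <astra-client-secret> # placeholder
```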

3. Install the Helm chart in the desired Kubernetes namespace.

```helm -n zdm-proxy install zdm-proxy zdm```

The default resource allocations (memory and CPU) are designed for a production environment. If you see pods pending
due to insufficient resources, try the following command instead:

```
helm -n zdm-proxy install --set resources.requests.cpu=1000m --set resources.requests.memory=2000Mi \
  --set resources.limits.cpu=1000m --set resources.limits.memory=2000Mi zdm-proxy zdm
```

4. Verify that all components are up and running.

```kubectl -n zdm-proxy get svc,ep,po,secret -o wide --show-labels```

You can also run `kubectl -n zdm-proxy logs pod/zdm-proxy-0` and check for the following entries in the log,
which mean the ZDM Proxy is working as expected:

```
time="2022-12-14T21:19:57Z" level=info msg="Proxy connected and ready to accept queries on 172.25.132.116:9042"
time="2022-12-14T21:19:57Z" level=info msg="Proxy started. Waiting for SIGINT/SIGTERM to shutdown."
```

5. Optionally, you can install the monitoring components defined in the `monitoring` subfolder.

6. To generate example load, you can use the [NoSQLBench](https://docs.nosqlbench.io/) tool. Check out the deployment scripts in the `nosqlbench` subfolder.

7. Basic ZDM Proxy operations.

- Switch the primary cluster to the target (all proxy pods will automatically roll-restart after the change).

```helm -n zdm-proxy upgrade zdm-proxy zdm --set primaryCluster=TARGET```

- Scale to a different number of proxy pods.

```helm -n zdm-proxy upgrade zdm-proxy zdm --set count=5```

Note: if you've already switched the primary cluster to the target, make sure you add `--set primaryCluster=TARGET`
to this command line as well. An alternative is to directly edit `zdm/values.yaml` and then run the Helm upgrade; the relevant keys are shown below.
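
A sketch of those keys in `zdm/values.yaml` (the key names are taken from the `--set` flags above; the comments are illustrative):

```
primaryCluster: TARGET   # ORIGIN or TARGET
count: 5                 # desired number of proxy pods
```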

8. When you're done, run `helm uninstall` to remove all objects.

```helm -n zdm-proxy uninstall zdm-proxy```

![Demo](zdm-k8s-ccm-astra.gif)

kubernetes/demo.tape

+104
@@ -0,0 +1,104 @@
# record with: $ vhs demo.tape

Output zdm-k8s-ccm-astra.gif

Set FontSize 12
Set Width 800
Set Height 600

Type "ls"
Enter
Sleep 500ms

Type "cd zdm"
Enter
Sleep 500ms

Type "cp ~/Downloads/secure-connect-test.zip ./secure-connect-bundle-target.zip"
Enter
Sleep 200ms

Type "echo 'Change username and password of C* clusters in values.yaml'"
Enter
Sleep 1s

Type "echo 'Change contact point of origin cluster to host IP address of Minikube (host.minikube.internal), e.g. 192.168.65.254'"
Enter
Sleep 1s

Type "cd .."
Enter
Sleep 200ms

Type "ccm create demo -v 4.1.5 -n 1"
Enter
Sleep 2s

Type "ccm start"
Enter
Sleep 5s

Type "ccm updateconf authenticator:PasswordAuthenticator"
Enter
Sleep 1s

Type "ccm updateconf broadcast_rpc_address:192.168.65.254"
Enter
Sleep 1s

Type "ccm stop && ccm start"
Enter
Sleep 10s

Type "ccm node1 cqlsh -u cassandra -p cassandra"
Enter
Sleep 500ms

Type "CREATE KEYSPACE IF NOT EXISTS test WITH replication = {'class': 'SimpleStrategy', 'replication_factor': '1'} AND durable_writes = true;"
Enter
Sleep 700ms

Type "CREATE ROLE 'KjaeRUazLmLjPYYYjfikZyIB' with SUPERUSER = true and LOGIN = true and PASSWORD = '1GDI7naKuFY.SSdWu4j,4OB79-sPtLDZAzr.Gjsb5_135n1SjJz+y1Zs6jKGdem7-XZWknhWJzHlSFrZ631U.3X1iloZt.ZNP971yHC4Dfi6o1qvbDKs0_zdUhZ_-KDF';"
Enter
Sleep 700ms

Type "exit"
Enter
Sleep 200ms

Type "kubectl create ns zdm-proxy"
Enter
Sleep 500ms

Type "helm -n zdm-proxy install zdm-proxy zdm"
Enter
Sleep 3s

Type "kubectl get pods -n zdm-proxy"
Enter
Sleep 500ms

Type "kubectl logs zdm-proxy-0 -n zdm-proxy"
Enter
Sleep 3s

Type "kubectl apply -f ./monitoring"
Enter
Sleep 2s

Type "kubectl get pods -n zdm-proxy"
Enter
Sleep 500ms

Type "echo 'Change username and password of C* in nosqlbench.yml'"
Enter
Sleep 1s

Type "kubectl apply -f ./nosqlbench"
Enter
Sleep 3s

Type "kubectl get pods -n zdm-proxy"
Enter
Sleep 5s

kubernetes/monitoring/README.md

+22
@@ -0,0 +1,22 @@
Usage:

1. Install Prometheus and Grafana in the `zdm-proxy` namespace to monitor the ZDM Proxy instances.
Please note that Grafana dashboards are not imported automatically; one possible provisioning approach is sketched below.

```
kubectl create ns zdm-proxy
kubectl apply -f ./monitoring
```
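
One way to auto-provision the dashboard (a sketch, not part of these manifests): create a ConfigMap from `grafana-dashboards/ZDM Proxy Dashboard v2.json`, mount it at `/var/lib/grafana/dashboards/`, and mount a standard Grafana dashboard provider config such as the one below at `/etc/grafana/provisioning/dashboards/` in the Grafana deployment. The ConfigMap name and provider name here are arbitrary examples. Alternatively, import the JSON manually through the Grafana UI.

```
# Hypothetical ConfigMap -- not included in this commit.
kind: ConfigMap
apiVersion: v1
metadata:
  name: grafana-dashboard-provider
  namespace: zdm-proxy
data:
  zdm-dashboards.yaml: |
    apiVersion: 1
    providers:
      - name: zdm
        type: file
        options:
          path: /var/lib/grafana/dashboards
```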

2. If you are running on Minikube, you can access the Prometheus and Grafana URLs with:

```
minikube service prometheus -n zdm-proxy --url
minikube service grafana -n zdm-proxy --url
```

3. To remove all Prometheus and Grafana components, execute:

```
kubectl delete -f ./monitoring
```

+20
@@ -0,0 +1,20 @@
kind: ConfigMap
metadata:
  name: grafana-datasources
  namespace: zdm-proxy
apiVersion: v1
data:
  ZDM-Prometheus.yaml: |+
    apiVersion: 1

    deleteDatasources:
      - name: "Prometheus"

    datasources:
      - name: Prometheus
        type: prometheus
        access: proxy
        url: http://prometheus:9090
        isDefault: true
        version: 1
        editable: true

+13
@@ -0,0 +1,13 @@
apiVersion: v1
kind: Service
metadata:
  name: grafana
  namespace: zdm-proxy
spec:
  type: NodePort
  ports:
    - port: 3000
      targetPort: 3000
      name: client
  selector:
    app: grafana

kubernetes/monitoring/grafana.yml

+38
@@ -0,0 +1,38 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana
  namespace: zdm-proxy
spec:
  selector:
    matchLabels:
      app: grafana
  replicas: 1
  template:
    metadata:
      labels:
        app: grafana
    spec:
      containers:
        - name: grafana
          env:
            - name: GF_AUTH_ANONYMOUS_ENABLED
              value: "true"
          image: grafana/grafana
          ports:
            - containerPort: 3000
              name: client
          resources:
            requests:
              cpu: 500m
              memory: 128Mi
            limits:
              cpu: 500m
              memory: 256Mi
          volumeMounts:
            - name: grafana-datasources
              mountPath: /etc/grafana/provisioning/datasources/
      volumes:
        - name: grafana-datasources
          configMap:
            name: grafana-datasources

@@ -0,0 +1,14 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-config
  namespace: zdm-proxy
data:
  prometheus.yml: |-
    global:
      scrape_interval: 15s
    scrape_configs:
      - job_name: 'zdm_proxy'
        scrape_interval: 5s
        static_configs:
          - targets: ['zdm-proxy-metrics-0:14001', 'zdm-proxy-metrics-1:14001', 'zdm-proxy-metrics-2:14001']

@@ -0,0 +1,13 @@
apiVersion: v1
kind: Service
metadata:
  name: prometheus
  namespace: zdm-proxy
spec:
  type: NodePort
  ports:
    - port: 9090
      targetPort: 9090
      name: prometheus
  selector:
    app: prometheus

kubernetes/monitoring/prometheus.yml

+35
@@ -0,0 +1,35 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus
  namespace: zdm-proxy
spec:
  selector:
    matchLabels:
      app: prometheus
  replicas: 1
  template:
    metadata:
      labels:
        app: prometheus
    spec:
      containers:
        - name: prometheus
          image: prom/prometheus
          ports:
            - containerPort: 9090
              name: client
          resources:
            requests:
              cpu: 200m
              memory: 256Mi
            limits:
              cpu: 300m
              memory: 256Mi
          volumeMounts:
            - name: prometheus-config
              mountPath: /etc/prometheus/
      volumes:
        - name: prometheus-config
          configMap:
            name: prometheus-config

kubernetes/nosqlbench/README.md

+18
@@ -0,0 +1,18 @@
> The deployment scripts support only username and password authentication against the ZDM Proxy.
> Adjust the credentials in the _nosqlbench.yml_ file.

Usage:

1. Install [NoSQLBench](https://docs.nosqlbench.io/) and run a simple workload against the ZDM Proxy. Note that
the pod is not terminated after the test completes.

```
kubectl create ns zdm-proxy
kubectl apply -f ./nosqlbench
```

2. To remove all deployed artifacts, execute:

```
kubectl delete -f ./nosqlbench
```
