NFS example + doc updates for OpenShift newer releases; also fixes issue #45 (#48)
* Adjusting dev100 CPU requirements so the pod can be scheduled on a 1-CPU node
* Updating Helm chart templates
* Fix delete example in documentation
* Adjust individual names for ClusterRole and ClusterRoleBinding
* Added NFS support
* Added -r switch to restore Helm
* Changed dev100 persist to work on Minikube - use default storage class
* Added doc updates for OpenShift newer releases
* Fixed missing timing info in log for config-sync-check at <show><redundancy><detail>
* Updated maintainer details

README.md (+28 -13)
@@ -9,7 +9,7 @@ This repository explains how to install a Solace PubSub+ Software Message Broker
This document is applicable to any platform supporting Kubernetes, with specific hints on how to set up a simple single-node MiniKube deployment on a Unix-based machine. To view examples of other platforms see:

- [Deploying a Solace PubSub+ Software Message Broker HA group onto a Google Kubernetes Engine](https://github.com/SolaceProducts/solace-gke-quickstart)
- - [Deploying a Solace PubSub+ Software Message Broker HA Group onto an OpenShift 3.7 or 3.9 platform](https://github.com/SolaceProducts/solace-openshift-quickstart)
+ - [Deploying a Solace PubSub+ Software Message Broker HA Group onto an OpenShift 3.10 or 3.11 platform](https://github.com/SolaceProducts/solace-openshift-quickstart)
- Deploying a Solace PubSub+ Software Message Broker HA Group onto Amazon EKS (Amazon Elastic Container Service for Kubernetes): follow the [AWS documentation](https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html) to set up EKS then this guide to deploy.

## Description of the Solace PubSub+ Software Message Broker
@@ -62,18 +62,18 @@ To load the docker image into a docker registry, follow the steps specific to th
Deploy message broker Pods and Service to the cluster.

- The [Kubernetes `helm`](https://github.com/kubernetes/helm/blob/master/README.md) tool is used to manage this deployment. A deployment is defined by a "helm chart", which consists of templates and values. The values specify the particular configuration properties in the templates.
+ The [Kubernetes Helm](https://github.com/kubernetes/helm/blob/master/README.md) tool is used to manage this deployment. A deployment is defined by a "Helm chart", which consists of templates and values. The values specify the particular configuration properties in the templates.

The following diagram illustrates the template structure used for the Solace Deployment chart. Note that the minimum is shown in this diagram to give you some background regarding the relationships and major functions.

cd solace-kubernetes-quickstart/solace # location of the solace Helm chart
```

* Next, prepare your environment and customize your chart by executing the `configure.sh` script and pass it the required parameters:
@@ -85,7 +85,7 @@ cd solace-kubernetes-quickstart/solace # location of the solace helm chart
|`-c`| OPTIONAL: The cloud environment you will be running in, current options are [aws\|gcp]. NOTE: if you are not using dynamic provisioned persistent disks, or, if you are running a local MiniKube environment, this option can be left out. |
|`-v`| OPTIONAL: The path to a `values.yaml` example/custom file to use. The default file is `values-examples/dev100-direct-noha.yaml`|

- The location of the `configure.sh` script is in the `../scripts` directory, relative to the `solace` chart. Executing the configuration script will install the required version of the `helm` tool if needed, as well as customize the `solace` helm chart to your desired configuration.
+ The location of the `configure.sh` script is in the `../scripts` directory, relative to the `solace` chart. Executing the configuration script will install the required version of the Helm tool if needed, as well as customize the `solace` Helm chart to your desired configuration.

When customizing the `solace` chart by the script, the `values.yaml` located in the root of the chart will be replaced with what is specified in the argument `-v <value-file>`. A number of examples are provided in the `values-examples/` directory, for details refer to [this section](#other-message-broker-deployment-configurations).
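To make the options above concrete, a hypothetical invocation selecting a cloud environment and a values example could look like the sketch below; the `-c` and `-v` flags are the documented optional parameters, and any required `configure.sh` parameters not visible in this hunk would still need to be supplied:

```sh
cd ~/workspace/solace-kubernetes-quickstart/solace   # chart directory
# -c picks the cloud environment (aws|gcp), -v picks the values example to copy over values.yaml
../scripts/configure.sh -c gcp -v values-examples/prod1k-persist-ha-provisionPvc.yaml
```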
@@ -103,7 +103,7 @@ cd ~/workspace/solace-kubernetes-quickstart/solace
- * Finally, use `helm` to install the deployment from the `solace` chart location, using your generated `values.yaml` file:
+ * Finally, use Helm to install the deployment from the `solace` chart location, using your generated `values.yaml` file:

```sh
cd ~/workspace/solace-kubernetes-quickstart/solace
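# A minimal sketch of the install step, assuming the Helm v2 CLI used throughout
# this guide and the generated values.yaml in the chart root (hypothetical command):
#   helm install . -f values.yaml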
@@ -116,7 +116,7 @@ To modify a deployment, refer to the section [Upgrading/modifying the message br
### Validate the Deployment

- Now you can validate your deployment on the command line. In this example an HA cluster is deployed with po/XXX-XXX-solace-0 being the active message broker/pod. The notation XXX-XXX is used for the unique release name that `helm` dynamically generates, e.g: "tinseled-lamb".
+ Now you can validate your deployment on the command line. In this example an HA cluster is deployed with po/XXX-XXX-solace-0 being the active message broker/pod. The notation XXX-XXX is used for the unique release name that Helm dynamically generates, e.g: "tinseled-lamb".

```sh
prompt:~$ kubectl get statefulsets,services,pods,pvc,pv
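# Hypothetical follow-up checks using the XXX-XXX release name from above
# (the exact service and pod names depend on your deployment):
#   kubectl describe service XXX-XXX-solace    # service details, including external IP
#   kubectl logs XXX-XXX-solace-0              # logs of the active message broker pod in this example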
@@ -251,7 +251,20 @@ Use the external Public IP to access the cluster. If a port required for a proto
## Upgrading/modifying the message broker cluster

- To upgrade/modify the message broker cluster, make the required modifications to the chart in the `solace-kubernetes-quickstart/solace` directory as described next, then run the `helm` tool from here. When passing multiple `-f <values-file>` to helm, the override priority will be given to the last (right-most) file specified.
+ To upgrade/modify the message broker cluster, make the required modifications to the chart in the `solace-kubernetes-quickstart/solace` directory as described next, then run the Helm tool from here. When passing multiple `-f <values-file>` to Helm, the override priority will be given to the last (right-most) file specified.
+
+ ### Restoring Helm if not available
+
+ Before getting into the details of how to make changes to a deployment, note that when using a new machine to access the deployment, Helm may not be available. This can be the case when e.g. using a cloud shell, which may be terminated at any time.
+
+ To restore Helm, run the configure script with the `-r` option:
+
+ ```
+ cd ~/workspace/solace-kubernetes-quickstart/solace
+ ../scripts/configure.sh -r
+ ```
+
+ Now Helm shall be available, e.g. `helm list` shall no longer return an error message.

### Upgrading the cluster
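As a hedged illustration of the right-most-wins rule for multiple values files mentioned above, an upgrade with Helm v2 could be sketched as follows (the second values file name is hypothetical):

```sh
cd ~/workspace/solace-kubernetes-quickstart/solace
# later -f files override earlier ones; my-overrides.yaml (hypothetical) wins over values.yaml
helm upgrade XXX-XXX . -f values.yaml -f my-overrides.yaml
```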
@@ -344,14 +357,15 @@ Use Helm to delete a deployment, also called a release:
helm delete XXX-XXX
```

- > Note: In some versions, Helm may return an error even if the deletion was successful.
-
- Check what has remained from the deployment, which should only return a single line with svc/kubernetes.
+ Check what has remained from the deployment, which should only return a single line with svc/kubernetes:

```
kubectl get statefulsets,services,pods,pvc,pv
+ NAME                 TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
+ service/kubernetes   ClusterIP   XX.XX.XX.XX   <none>        443/TCP   XX
```
- > Note: In some versions, Helm may not be able to clean up all the deployment artifacts, e.g.: pvc/ and pv/. Check their existence with `kubectl get all` and if necessary, use `kubectl delete` to delete those.
+
+ > Note: In some versions, Helm may not be able to clean up all the deployment artifacts, e.g.: pvc/ and pv/. If necessary, use `kubectl delete` to delete those.
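If such leftovers do exist, a generic cleanup sketch would be the following; the names are placeholders, so substitute the actual PVC/PV names reported for your release:

```sh
# list any persistent volume claims and volumes left behind by the release
kubectl get pvc,pv
# delete leftovers by name (placeholder names shown)
kubectl delete pvc <pvc-name>
kubectl delete pv <pv-name>
```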
## Other Message Broker Deployment Configurations
@@ -363,6 +377,7 @@ The `solace-kubernetes-quickstart/solace/values-examples` directory provides exa
* `prod1k-direct-noha-localDirectory`: production, up to 1000 connections, non-HA, bind the PVC to a local directory on the host node
* `prod1k-direct-noha-provisionPvc`: production, up to 1000 connections, non-HA, bind the PVC to a provisioned PersistentVolume (PV) in Kubernetes
* `prod1k-persist-ha-provisionPvc`: production, up to 1000 connections, HA, to bind the PVC to a provisioned PersistentVolume (PV) in Kubernetes
+ * `prod1k-persist-ha-nfs`: production, up to 1000 connections, HA, to dynamically bind the PVC to an NFS volume provided by an NFS server, exposed as storage class `nfs`. Note: "root_squash" configuration is supported on the NFS server.
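The new `prod1k-persist-ha-nfs` example above assumes the cluster already exposes a storage class named `nfs` (for example via an NFS provisioner). A rough, hypothetical sketch of checking that prerequisite and selecting the example follows; exact `configure.sh` parameters depend on your environment:

```sh
# confirm a storage class named "nfs" exists, as expected by the example values file
kubectl get storageclass nfs
# customize the chart with the NFS values example (other required parameters omitted)
cd ~/workspace/solace-kubernetes-quickstart/solace
../scripts/configure.sh -v values-examples/prod1k-persist-ha-nfs.yaml
```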
Similar value-files can be defined extending above examples:

scripts/configure.sh

echo "`date` INFO: Installed helm $(helm version --client --short)"
+ else
+   echo "Automated install of helm is not supported on Windows. Please refer to https://github.com/helm/helm#install to install it manually then re-run this script."
+   exit -1
+ fi
fi
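Where the script cannot install Helm automatically (the Windows branch added above), a manual install can be verified afterwards with the same client check the script uses, for example:

```sh
# confirm the Helm client is installed and on the PATH
helm version --client --short
```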

# Deploy tiller
@@ -137,20 +148,25 @@ else
cd solace
fi

- # Ensure current dir is within the chart - e.g solace-kubernetes-quickstart/solace
- if [ ! -d "templates" ]; then
-   echo "`date` INFO: Must be in the chart directory, exiting. Current dir is $(pwd)."
-   exit -1
+ # Copy and customize values.yaml
+ if [[ "${values_file}" != "" ]]; then
+   # Ensure current dir is within the chart - e.g solace-kubernetes-quickstart/solace
+   if [ ! -d "templates" ]; then
+     echo "`date` INFO: Must be in the chart directory, exiting. Current dir is $(pwd)."