
Commit 66142a2

Ingress, scaling and storage enhancements (#114)

* Documented support and use of Ingress
* Support to individually specify all PubSub+ scaling parameters; also fixes enhancement #99 (Add New Helm Chart Parameter for Broker Message Spool Limit)
* Support to use a single mount storage-group for the broker
* Allow smaller Monitor pod CPU, memory and storage requirements in an HA deployment
* Fixed Helm error with custom service annotation; fixes issue #112 ([BUG] Helm error with custom service annotation)
1 parent 78d40b6 commit 66142a2
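The individually specifiable scaling parameters this commit introduces can be supplied as Helm value overrides. A hedged sketch follows: the parameter paths (solace.systemScaling.*, storage.size) are taken from the updated NOTES.txt template in this commit, but the file name, release name and all numbers are illustrative placeholders, not recommendations.

```shell
# Sketch of a values override exercising the new individual scaling parameters.
# Paths come from the updated NOTES.txt template; numbers are placeholders.
cat > scaling-values.yaml <<'EOF'
solace:
  systemScaling:
    maxConnections: 1000
    maxQueueMessages: 1000
    maxSpoolUsage: 10000
    cpu: "2"
    memory: 12Gi
storage:
  size: 50Gi
EOF
# Hypothetical use against the chart:
#   helm upgrade --install my-release solacecharts/pubsubplus -f scaling-values.yaml
cat scaling-values.yaml
```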


44 files changed (+5037, -3078 lines)

.github/workflows/build-test.yml

Lines changed: 188 additions & 167 deletions
Large diffs are not rendered by default.

LICENSE

Lines changed: 201 additions & 201 deletions

README.md

Lines changed: 123 additions & 123 deletions

docs/PubSubPlusK8SDeployment.md

Lines changed: 1037 additions & 833 deletions

docs/helm-charts/create-chart-variants.sh

Lines changed: 1 addition & 0 deletions
@@ -53,5 +53,6 @@ for variant in '' '-dev' '-ha' ;
   sed -i 's%helm repo add.*%helm repo add openshift-helm-charts https://charts.openshift.io%g' pubsubplus-openshift"$variant"/README.md
   sed -i 's%solacecharts/pubsubplus%openshift-helm-charts/pubsubplus-openshift%g' pubsubplus-openshift"$variant"/README.md
   sed -i 's@`solace/solace-pubsub-standard`@`registry.connect.redhat.com/solace/pubsubplus-standard`@g' pubsubplus-openshift"$variant"/README.md
+  sed -i 's/kubectl/oc/g' pubsubplus-openshift"$variant"/templates/NOTES.txt
   helm package pubsubplus-openshift"$variant"
 done
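The one added line rewrites kubectl commands to their OpenShift `oc` equivalents in the packaged chart's NOTES.txt. A minimal stand-alone sketch of that substitution, run against a throwaway file rather than the real chart tree:

```shell
# Demonstrate the added sed substitution on a stand-in NOTES.txt.
printf 'kubectl get pods --show-labels -w\nkubectl get svc\n' > NOTES.txt
sed -i 's/kubectl/oc/g' NOTES.txt   # GNU sed; BSD/macOS sed needs: sed -i '' ...
cat NOTES.txt
# -> oc get pods --show-labels -w
# -> oc get svc
```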

pubsubplus/.helmignore

Lines changed: 21 additions & 21 deletions
All 21 lines were removed and re-added with identical visible content (likely a line-ending or other whitespace-only change). The file reads:

# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj

pubsubplus/Chart.yaml

Lines changed: 29 additions & 29 deletions
The file was removed and re-added in full (likely a line-ending change); the only content difference is the chart version bump:

@@ -1,29 +1,29 @@
 apiVersion: v2
 description: Deploy Solace PubSub+ Event Broker Singleton or HA redundancy group onto a Kubernetes Cluster
 name: pubsubplus
-version: 3.0.0
+version: 3.1.0
 icon: https://solaceproducts.github.io/pubsubplus-kubernetes-quickstart/images/PubSubPlus.png
 kubeVersion: '>= 1.10.0-0'
 maintainers:
 - name: Solace Community Forum
   url: https://solace.community/
 - name: Solace Support
   url: https://solace.com/support/
 home: https://dev.solace.com
 sources:
 - https://github.com/SolaceProducts/pubsubplus-kubernetes-quickstart
 keywords:
 - solace
 - pubsubplus
 - pubsub+
 - pubsub
 - messaging
 - advanced event broker
 - event broker
 - event mesh
 - event streaming
 - data streaming
 - event integration
 - middleware
 annotations:
   charts.openshift.io/name: PubSub+ Event Broker
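The version bump to 3.1.0 is a minor-version increment, consistent with the backwards-compatible feature additions in this commit. A quick way to read the field back, sketched against a stand-in file rather than a real chart checkout:

```shell
# Read the chart version field; against a real checkout this would be
# pubsubplus/Chart.yaml. The heredoc is a stand-in with just the bumped field.
cat > Chart-sample.yaml <<'EOF'
apiVersion: v2
name: pubsubplus
version: 3.1.0
EOF
grep '^version:' Chart-sample.yaml   # -> version: 3.1.0
```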

pubsubplus/LICENSE

Lines changed: 201 additions & 201 deletions

pubsubplus/README.md

Lines changed: 117 additions & 113 deletions

pubsubplus/templates/NOTES.txt

Lines changed: 98 additions & 89 deletions
The file was removed and re-added in full (likely a line-ending change); the substantive change is the new solace.systemScaling branch in the "Performance and resource requirements" section. Consolidated view, with the added lines marked "+":

@@ -1,89 +1,98 @@
 
 == Check Solace PubSub+ deployment progress ==
 Deployment is complete when a PubSub+ pod representing an active event broker node's label reports "active=true".
 Watch progress by running:
 kubectl get pods --namespace {{ .Release.Namespace }} --show-labels -w | grep {{ template "solace.fullname" . }}
 
 For troubleshooting, refer to ***TroubleShooting.md***
 
 == TLS support ==
 {{- if not .Values.tls.enabled }}
 TLS has not been enabled for this deployment.
 {{- else }}
 TLS is enabled, using secret {{ .Values.tls.serverCertificatesSecret }} for server certificates configuration.
 {{- end }}
 
 == Admin credentials and access ==
 {{- if not .Values.solace.usernameAdminPassword }}
 *********************************************************************
 * An admin password was not specified and has been auto-generated.
 * You must retrieve it and provide it as value override
 * if using Helm upgrade otherwise your cluster will become unusable.
 *********************************************************************
 
 {{- end }}
 Username : admin
 Admin password : echo `kubectl get secret --namespace {{ .Release.Namespace }} {{ template "solace.fullname" . }}-secrets -o jsonpath="{.data.username_admin_password}" | base64 --decode`
 Use the "semp" service address to access the management API via browser or a REST tool, see Services access below.
 
 == Image used ==
 {{ .Values.image.repository }}:{{ .Values.image.tag }}
 
 == Storage used ==
 {{- if and ( .Values.storage.persistent ) ( .Values.storage.useStorageClass ) }}
 Using persistent volumes via dynamic provisioning, ensure specified StorageClass exists: `kubectl get sc {{ .Values.storage.useStorageClass }}`
 {{- else if .Values.storage.persistent}}
 Using persistent volumes via dynamic provisioning with the "default" StorageClass, ensure it exists: `kubectl get sc | grep default`
 {{- end }}
 {{- if and ( not .Values.storage.persistent ) ( not .Values.storage.hostPath ) ( not .Values.storage.existingVolume ) }}
 *******************************************************************************
 * This deployment is using pod-local ephemeral storage.
 * Note that any configuration and stored messages will be lost at pod restart.
 *******************************************************************************
 For production purposes it is recommended to use persistent storage.
 {{- end }}
 
 == Performance and resource requirements ==
+{{- if .Values.solace.systemScaling }}
+Max supported number of client connections: {{ .Values.solace.systemScaling.maxConnections }}
+Max number of queue messages, in millions of messages: {{ .Values.solace.systemScaling.maxQueueMessages }}
+Max spool usage, in MB: {{ .Values.solace.systemScaling.maxSpoolUsage }}
+Requested cpu, in cores: {{ .Values.solace.systemScaling.cpu }}
+Requested memory: {{ .Values.solace.systemScaling.memory }}
+Requested storage: {{ .Values.storage.size }}
+{{- else }}
 {{- if contains "dev" .Values.solace.size }}
 This is a minimum footprint deployment for development purposes. For guaranteed performance, specify a different solace.size value.
 {{- else }}
 The requested connection scaling tier for this deployment is: max {{ substr 4 10 .Values.solace.size }} connections.
 {{- end }}
 Following resources have been requested per PubSub+ pod:
 echo `kubectl get statefulset --namespace {{ .Release.Namespace }} {{ template "solace.fullname" . }} -o jsonpath="Minimum resources: {.spec.template.spec.containers[0].resources.requests}"`
+{{- end }}
 
 == Services access ==
 To access services from pods within the k8s cluster, use these addresses:
 
 echo -e "\nProtocol\tAddress\n"`kubectl get svc --namespace {{ .Release.Namespace }} {{ template "solace.fullname" . }} -o jsonpath="{range .spec.ports[*]}{.name}\t{{ template "solace.fullname" . }}.{{ .Release.Namespace }}.svc.cluster.local:{.port}\n"`
 
 To access from outside the k8s cluster, perform the following steps.
 
 {{- if contains "NodePort" .Values.service.type }}
 
 Obtain the NodePort IP and service ports:
 
 export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[*].status.addresses[0].address}"); echo $NODE_IP
 # Use following ports with any of the NodeIPs
 echo -e "\nProtocol\tAddress\n"`kubectl get svc --namespace {{ .Release.Namespace }} {{ template "solace.fullname" . }} -o jsonpath="{range .spec.ports[*]}{.name}\t<NodeIP>:{.nodePort}\n"`
 
 {{- else if contains "LoadBalancer" .Values.service.type }}
 
 Obtain the LoadBalancer IP and the service addresses:
 NOTE: At initial deployment it may take a few minutes for the LoadBalancer IP to be available.
 Watch the status with: 'kubectl get svc --namespace {{ .Release.Namespace }} -w {{ template "solace.fullname" . }}'
 
 export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ template "solace.fullname" . }} --template "{{"{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}"}}"); echo SERVICE_IP=$SERVICE_IP
 # Ensure valid SERVICE_IP is returned:
 echo -e "\nProtocol\tAddress\n"`kubectl get svc --namespace {{ .Release.Namespace }} {{ template "solace.fullname" . }} -o jsonpath="{range .spec.ports[*]}{.name}\t$SERVICE_IP:{.port}\n"`
 
 {{- else if contains "ClusterIP" .Values.service.type }}
 
 NOTE: The specified k8s service type for this deployment is "ClusterIP" and it is not exposing services externally.
 
 For local testing purposes you can use port-forward in a background process to map pod ports to local host, then use these service addresses:
 
 kubectl port-forward --namespace {{ .Release.Namespace }} svc/{{ template "solace.fullname" . }} $(echo `kubectl get svc --namespace {{ .Release.Namespace }} {{ template "solace.fullname" . }} -o jsonpath="{range .spec.ports[*]}{.targetPort}:{.port} "`) &
 echo -e "\nProtocol\tAddress\n"`kubectl get svc --namespace {{ .Release.Namespace }} {{ template "solace.fullname" . }} -o jsonpath="{range .spec.ports[*]}{.name}\t127.0.0.1:{.targetPort}\n"`
 
 {{- end }}
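The "Admin password" line in the template above pipes a kubectl jsonpath result through `base64 --decode`, since Kubernetes stores Secret data base64-encoded. The decode step in isolation, with a stand-in value instead of a live secret:

```shell
# The real input would come from:
#   kubectl get secret <release>-secrets -o jsonpath="{.data.username_admin_password}"
# Here we encode a stand-in value ourselves, then decode it the same way.
encoded=$(printf 'example-admin-password' | base64)
printf '%s' "$encoded" | base64 --decode   # -> example-admin-password
```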

0 commit comments
