# Event Store

[Event Store](https://eventstore.org/) is an open-source,
functional database with Complex Event Processing in JavaScript.

## TL;DR

```shell
helm repo add ameier38 https://ameier38.github.io/charts
helm repo update
helm install -n eventstore ameier38/eventstore
```

> The default username and password for the admin interface
> is `admin:changeit`.

## Introduction

This chart bootstraps an [Event Store](https://hub.docker.com/r/eventstore/eventstore/)
deployment on a [Kubernetes](http://kubernetes.io) cluster
using the [Helm](https://helm.sh) package manager.

## Prerequisites

- Kubernetes 1.4+ with Beta APIs enabled
- PV provisioner support in the underlying infrastructure (only required when persisting data)

## Installing the Chart

Add the Event Store Charts repository.

```shell
helm repo add ameier38 https://ameier38.github.io/charts
helm repo update
```

To install the Event Store chart with the release name `eventstore`:

```shell
helm install -n eventstore ameier38/eventstore
```

The above commands install Event Store with the default configuration.
The [configuration](#configuration) section below lists the parameters
that can be configured during installation.

## Deleting the Chart

Delete the `eventstore` release.

```shell
helm delete eventstore --purge
```

This command removes all the Kubernetes components
associated with the chart and deletes the release.

## Configuration

The following table lists the configurable parameters of the Event Store chart and their default values.

| Parameter                            | Description                                                                   | Default                      |
| ------------------------------------ | ----------------------------------------------------------------------------- | ---------------------------- |
| `image`                              | Container image name                                                          | `eventstore/eventstore`      |
| `imageTag`                           | Container image tag                                                           | `release-4.1.1-hotfix1`      |
| `imagePullPolicy`                    | Container pull policy                                                         | `IfNotPresent`               |
| `imagePullSecrets`                   | Specify image pull secrets                                                    | `nil`                        |
| `clusterSize`                        | The number of nodes in the cluster                                            | `3`                          |
| `admin.serviceType`                  | Service type for the admin interface                                          | `ClusterIP`                  |
| `admin.proxyImage`                   | NGINX image for admin interface proxy                                         | `nginx`                      |
| `admin.proxyImageTag`                | NGINX image tag                                                               | `latest`                     |
| `podDisruptionBudget.enabled`        | Enable a pod disruption budget for nodes                                      | `false`                      |
| `podDisruptionBudget.minAvailable`   | Number of pods that must still be available after eviction                    | `2`                          |
| `podDisruptionBudget.maxUnavailable` | Number of pods that can be unavailable after eviction                         | `nil`                        |
| `extIp`                              | External IP address                                                           | `0.0.0.0`                    |
| `intHttpPort`                        | Internal HTTP port                                                            | `2112`                       |
| `extHttpPort`                        | External HTTP port                                                            | `2113`                       |
| `intTcpPort`                         | Internal TCP port                                                             | `1112`                       |
| `extTcpPort`                         | External TCP port                                                             | `1113`                       |
| `gossipAllowedDiffMs`                | The allowed clock drift, in ms, between nodes before gossip is rejected       | `600000`                     |
| `eventStoreConfig`                   | Additional Event Store parameters                                             | `{}`                         |
| `scavenging.enabled`                 | Enable the scavenging CronJob for all nodes                                   | `false`                      |
| `scavenging.image`                   | The image to use for the scavenging CronJob                                   | `lachlanevenson/k8s-kubectl` |
| `scavenging.imageTag`                | The image tag to use for the scavenging CronJob                               | `latest`                     |
| `scavenging.schedule`                | The schedule to use for the scavenging CronJob                                | `0 2 * * *`                  |
| `persistence.enabled`                | Enable persistence using a PVC                                                | `false`                      |
| `persistence.existingClaim`          | Provide an existing PVC                                                       | `nil`                        |
| `persistence.accessMode`             | Access mode for the PVC                                                       | `ReadWriteOnce`              |
| `persistence.size`                   | Size of the data volume                                                       | `8Gi`                        |
| `persistence.mountPath`              | Mount path of the data volume                                                 | `/var/lib/eventstore`        |
| `persistence.annotations`            | Annotations for the PVC                                                       | `{}`                         |
| `resources`                          | CPU/memory resource requests/limits                                           | Memory: `256Mi`, CPU: `100m` |
| `nodeSelector`                       | Node labels for pod assignment                                                | `{}`                         |
| `podAnnotations`                     | Pod annotations                                                               | `{}`                         |
| `tolerations`                        | Toleration labels for pod assignment                                          | `[]`                         |
| `affinity`                           | Affinity settings for pod assignment                                          | `{}`                         |

Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`
or create a `values.yaml` file and use `helm install --values values.yaml`.

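For example, common overrides can be collected in a `values.yaml` file. The sketch below is illustrative, not a recommendation: the sizes are arbitrary, and the `EVENTSTORE_RUN_PROJECTIONS` entry assumes the chart forwards `eventStoreConfig` keys to Event Store as-is (see the parameter list under Additional Resources for valid settings).

```yaml
# Illustrative values.yaml -- adjust sizes and settings for your environment
clusterSize: 3
persistence:
  enabled: true
  size: 16Gi
# Assumption: eventStoreConfig entries are passed through to Event Store
# unchanged; EVENTSTORE_RUN_PROJECTIONS is shown only as an example key.
eventStoreConfig:
  EVENTSTORE_RUN_PROJECTIONS: All
```

You would then install with `helm install -n eventstore --values values.yaml ameier38/eventstore`.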
## Scaling Persistent Volume

After running Event Store for a while you may outgrow the initial volume.
The steps below walk you through checking disk usage and resizing the volume if necessary.

You can check the disk usage of the StatefulSet pods using the `df` command:

```shell
kubectl exec eventstore-0 df
Filesystem     1K-blocks     Used Available Use% Mounted on
overlay         28056816 11530904  15326860  43% /
tmpfs              65536        0     65536   0% /dev
tmpfs            7976680        0   7976680   0% /sys/fs/cgroup
/dev/nvme0n1p9  28056816 11530904  15326860  43% /etc/hosts
shm                65536        4     65532   1% /dev/shm
/dev/nvme1n1     8191416  2602160   5572872  32% /var/lib/eventstore --> PVC usage
tmpfs            7976680       12   7976668   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs            7976680        0   7976680   0% /sys/firmware
```

> If the `Use%` for the `/var/lib/eventstore` mount is unacceptably high,
> follow one of the two options below depending on how your cluster is set up.

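If you want to script this check rather than eyeball the output, the `Use%` column can be parsed with `awk`. A minimal sketch, with sample `df` output inlined for illustration; in a live cluster you would pipe `kubectl exec eventstore-0 -- df /var/lib/eventstore` into the same filter:

```shell
# Parse the Use% column for the Event Store data mount from df output.
# The inlined sample stands in for a live `kubectl exec ... df` call.
df_output='Filesystem     1K-blocks    Used Available Use% Mounted on
/dev/nvme1n1     8191416 2602160   5572872  32% /var/lib/eventstore'

# df prints the filesystem line second; Use% is the fifth column.
usage=$(printf '%s\n' "$df_output" | awk 'NR==2 { sub(/%/, "", $5); print $5 }')
echo "PVC usage: ${usage}%"
```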
### __Option 1__: Resize PVC created with volume expansion enabled

You can check whether volume expansion is enabled on the PVC's StorageClass by running:

```shell
kubectl get storageclass <pvc storageclass> -o yaml
```

If volume expansion is enabled, you should see the following in the output:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
...
allowVolumeExpansion: true
...
```

1. First resize the PVC for each StatefulSet pod.
   ```shell
   kubectl edit pvc data-eventstore-0
   ```
   > This opens the specification in your default text editor.
   ```yaml
   ...
   spec:
     accessModes:
     - ReadWriteOnce
     resources:
       requests:
         storage: 8Gi --> change this to the desired size
   ...
   ```
   > Save and close the file after editing. If you get the error
   > `only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize`,
   > then the StorageClass has not enabled volume expansion. No worries! Skip down to Option 2.
2. Delete the StatefulSet but keep the pods.
   ```shell
   kubectl delete sts --cascade=false eventstore
   ```
3. Update the chart values with the new storage request value that you edited in step (1).
   ```shell
   helm upgrade eventstore ameier38/eventstore --set 'persistence.size=<value from step (1)>'
   ```

### __Option 2__: Resize PVC created without volume expansion enabled

This process is more involved, but it is also a good exercise in backing up the database. We will
use AWS S3 as the storage backend, but the process works just as well for other backends such as GCS.

1. Connect to one of the StatefulSet pods.
   ```shell
   kubectl exec -it eventstore-0 sh
   ```
2. Install Python (required for the AWS CLI).
   ```shell
   apt-get update
   apt-get install python3
   export PATH="$PATH:/usr/local/bin"
   ```
   > ref: https://docs.aws.amazon.com/cli/latest/userguide/install-linux-python.html
3. Install pip.
   ```shell
   curl -O https://bootstrap.pypa.io/get-pip.py
   python3 get-pip.py
   ```
4. Install the AWS CLI.
   ```shell
   pip install awscli
   ```
   > ref: https://docs.aws.amazon.com/cli/latest/userguide/install-linux.html
5. Configure the AWS CLI.
   ```shell
   aws configure
   ```
   > Enter your credentials when prompted.
6. Dry run the copy.
   ```shell
   aws s3 cp --recursive /var/lib/eventstore/ s3://<bucket>/backup/eventstore/20190830/ --dryrun
   ```
   > Change the S3 path to your preferred destination. If the copy operation looks good, proceed to the next step.
7. Copy the files.
   ```shell
   aws s3 cp --recursive /var/lib/eventstore/ s3://<bucket>/backup/eventstore/20190830/
   ```
   > ref: https://eventstore.org/docs/server/database-backup/#backing-up-a-database
8. Create a new Event Store cluster with the new volume size. It is recommended to set `allowVolumeExpansion: true`
   on your cluster's StorageClass before creating the cluster; this makes future resizes easier by
   following the steps in Option 1 above.
   See [the documentation](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims)
   for more details.
9. Repeat steps (1) through (5) on the new cluster's StatefulSet pod.
10. Copy the backup files from S3.
    ```shell
    aws s3 cp s3://<bucket>/backup/eventstore/20190830/chaser.chk /var/lib/eventstore/truncate.chk
    aws s3 cp --recursive --exclude="truncate.chk" s3://<bucket>/backup/eventstore/20190830 /var/lib/eventstore
    ```
    > ref: https://eventstore.org/docs/server/database-backup/#restoring-a-database
11. Restart the StatefulSet pods.
    ```shell
    kubectl delete $(kubectl get pod -o name -l app.kubernetes.io/component=database,app.kubernetes.io/instance=eventstore)
    ```
12. Check the logs to ensure Event Store is processing the chunks.
    ```shell
    kubectl logs -f eventstore-0
    ...
    [00001,12,11:16:47.264] CACHED TFChunk #1-1 (chunk-000001.000000) in 00:00:00.0000495.
    [00001,12,11:17:07.681] Completing data chunk 1-1...
    ...
    ```

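Step 8 above recommends enabling volume expansion on the new cluster's StorageClass. A minimal sketch of such a StorageClass; the name and the AWS EBS provisioner are illustrative, so substitute the provisioner your cluster actually uses:

```yaml
# Illustrative StorageClass with volume expansion enabled.
# The provisioner shown is for AWS EBS; substitute your own.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp2-expandable
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
allowVolumeExpansion: true
```

With this in place, future resizes only require editing the PVC as shown in Option 1.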
## Additional Resources

- [Event Store Docs](https://eventstore.org/docs/)
- [Event Store Parameters](https://eventstore.org/docs/server/command-line-arguments/index.html#parameter-list)
- [Event Store Docker Container](https://github.com/EventStore/eventstore-docker)
- [Chart Template Guide](https://github.com/helm/helm/tree/master/docs/chart_template_guide)