
Commit 629c1d7

Add docker-compose setup
1 parent d6a76f2 commit 629c1d7

15 files changed: +1726 −256 lines

.env

Lines changed: 0 additions & 13 deletions
This file was deleted.

README.md

Lines changed: 5 additions & 71 deletions
@@ -18,6 +18,7 @@ An easy to use exporter which will export Zeebe records to a configured Kafka to
 - [Zeebe Kafka Exporter](#zeebe-kafka-exporter)
   * [Supported Zeebe versions](#supported-zeebe-versions)
   * [Backwards compatibility](#backwards-compatibility)
+  * [Docker](#docker)
   * [Quick start](#quick-start)
   * [Usage](#usage)
     + [Kafka configuration](#kafka-configuration)
@@ -33,34 +34,19 @@ An easy to use exporter which will export Zeebe records to a configured Kafka to
   * [Build from source](#build-from-source)
     + [Prerequisites](#prerequisites)
     + [Building](#building)
-  * [Backwards compatibility](#backwards-compatibility-1)
   * [Report issues or contact developers](#report-issues-or-contact-developers)
   * [Create a Pull Request](#create-a-pull-request)
   * [Commit Message Guidelines](#commit-message-guidelines)
   * [Contributor License Agreement](#contributor-license-agreement)
 
 ## Supported Zeebe versions
 
-Version 1.x and 2.x is compatible with the following Zeebe versions:
+Each exporter uses the Zeebe protocol according to Zeebe of the same version.
 
-- 0.23.x
-- 0.24.x
-- 0.25.x
-- 0.26.x
-
-Version 3.x is compatible with the following Zeebe versions:
-
-- 1.0
-
-## Backwards compatibility
-
-As there is currently only a single maintainer, only the latest major version will be maintained and
-supported.
-
-At the moment, the only guarantees for backwards compatibility are:
+### Docker
 
-- the exporter's configuration
-- the serde module
+A ready to use Docker compose setup is provided for local development.
+Checkout [./docker-compose-8.6-kafka](./docker-compose-8.6-kafka/README.md) for details.
 
 ## Quick start
 
@@ -274,58 +260,6 @@ public final class MyClass {
 }
 ```
 
-### Docker
-
-The [docker-compose.yml](/docker-compose.yml) found in the root of the project is a good example of
-how you can deploy Zeebe, Kafka, and connect them via the exporter.
-
-To run it, first build the correct exporter artifact which `docker-compose` can find. From the root
-of the project, run:
-
-```shell
-mvn install -DskipTests -Dexporter.finalName=zeebe-kafka-exporter
-```
-
-> It's important here to note that we set the artifact's final name - this allows us to use a fixed
-> name in the `docker-compose.yml` in order to mount the file to the Zeebe container.
-
-Then you start the services - they can be started in parallel with no worries.
-
-```shell
-docker-compose up -d
-```
-
-> If you wish to stop these containers, remember that some of them create volumes, so unless you
-> plan on reusing those make sure to bring everything down using `docker-compose down -v`.
-
-The services started are the following:
-
-- zeebe: with the gateway port (26500) opened
-- kafka: with the standard 9092 port opened for internal communication, and port 29092 for external
-- consumer: a simple kafkacat image which will print out every record published on any topic
-  starting with `zeebe`
-- zookeeper: required to start Kafka
-
-Once everything is up and running, use your Zeebe cluster as you normally would. For example, given
-a workflow at `~/workflow.bpmn`, you could deploy it as:
-
-```shell
-zbctl --insecure deploy ~/workflow.bpmn
-```
-
-After this, you can see the messages being consumed by the consumer running:
-
-```shell
-docker logs -f consumer
-```
-
-> You may see some initial error logs from the consumer - this happens while the Kafka broker isn't
-> fully up, but it should stop once kafkacat can connect to it.
-
-> The first time a record of a certain kind (e.g. deployment, job, workflow, etc.) is published, it
-> will create a new topic for it. The consumer is refreshing the list of topics every second, which
-> means that for that first message there may be a bit of delay.
-
 ## Reference
 
 The exporter uses a Kafka producer to push records out to different topics based on the incoming

docker-compose-8.6-kafka/.env

Lines changed: 41 additions & 0 deletions
## Image versions ##
# renovate: datasource=docker depName=camunda/connectors-bundle
CAMUNDA_CONNECTORS_VERSION=8.6.8
# renovate: datasource=docker depName=camunda/zeebe
CAMUNDA_ZEEBE_VERSION=8.6.9
# renovate: datasource=docker depName=camunda/identity
CAMUNDA_IDENTITY_VERSION=8.6.8
# renovate: datasource=docker depName=camunda/operate
CAMUNDA_OPERATE_VERSION=8.6.9
# renovate: datasource=docker depName=camunda/tasklist
CAMUNDA_TASKLIST_VERSION=8.6.9
# renovate: datasource=docker depName=camunda/optimize
CAMUNDA_OPTIMIZE_VERSION=8.6.5
# renovate: datasource=docker depName=camunda/web-modeler-restapi
CAMUNDA_WEB_MODELER_VERSION=8.6.7
# renovate: datasource=docker depName=elasticsearch
ELASTIC_VERSION=8.15.5
KEYCLOAK_SERVER_VERSION=24.0.5
# renovate: datasource=docker depName=axllent/mailpit
MAILPIT_VERSION=v1.20.7
POSTGRES_VERSION=14.5-alpine
HOST=localhost
KEYCLOAK_HOST=localhost

## Configuration ##
# By default the zeebe api is public; when setting this to `identity`, a valid zeebe client token is required
ZEEBE_AUTHENTICATION_MODE=identity
ZEEBE_CLIENT_ID=zeebe
ZEEBE_CLIENT_SECRET=zecret

# Set to 'true' to enable resource based authorizations for users and groups
# This can be used to limit access for users or groups to view/update specific
# processes and decisions in Operate and Tasklist
RESOURCE_AUTHORIZATIONS_ENABLED=false

# Set to 'true' to enable multi-tenancy across all components
# This requires use of identity for authentication
#
# ZEEBE_AUTHENTICATION_MODE=identity
#
MULTI_TENANCY_ENABLED=true
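Compose reads this `.env` file from the project directory and substitutes the `${VAR}` references in the compose file. A minimal sketch of the same lookup in plain shell, using a scratch copy of two entries from above (nothing here touches the repo):

```shell
# Sketch: docker compose sources .env and interpolates ${VAR} in the
# compose file; the same substitution reproduced with a scratch file.
dir=$(mktemp -d)
cat > "$dir/.env" <<'EOF'
CAMUNDA_ZEEBE_VERSION=8.6.9
ELASTIC_VERSION=8.15.5
EOF
set -a              # export everything sourced below
. "$dir/.env"
set +a
zeebe_image="camunda/zeebe:${CAMUNDA_ZEEBE_VERSION}"
es_image="docker.elastic.co/elasticsearch/elasticsearch:${ELASTIC_VERSION}"
echo "$zeebe_image"
echo "$es_image"
```

Bumping a version in `.env` (as the renovate comments automate) therefore changes every image tag that references it, without touching the yaml.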
Lines changed: 5 additions & 0 deletions
es:
  settings:
    index:
      number_of_replicas: 0
Lines changed: 1 addition & 0 deletions
CAMUNDA_MODELER_CLUSTERS_0_AUTHENTICATION: oauth
Lines changed: 1 addition & 0 deletions
CAMUNDA_MODELER_CLUSTERS_0_AUTHENTICATION: none

docker-compose-8.6-kafka/README.md

Lines changed: 15 additions & 0 deletions
# Camunda 8 Self-Managed - Docker Compose

## Usage

For end user usage, please check the official documentation of [Camunda 8 Self-Managed Docker Compose](https://docs.camunda.io/docs/8.6/self-managed/setup/deploy/local/docker-compose/).

## Changes

In this copy of Camunda Platform 8.6.9 the following changes have been applied:

1. All containers' restart policy is set to `no`.
2. Multi-tenancy enabled.
3. Kafka 3.7.2 added, configured to listen on localhost:9092.
4. Kafka exporter added to Zeebe.<br>
   For topic configuration see [exporter.yml](./exporters/exporter.yml).
Lines changed: 2 additions & 0 deletions
# add secrets per line in the format NAME=VALUE
# WARNING: ensure not to commit changes to this file
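The `NAME=VALUE` lines above are injected into the connectors container via its `env_file:` entry in the compose file. A rough sketch of how such a file parses, using a scratch file and a made-up secret name (`SLACK_TOKEN` is illustrative, not part of the repo):

```shell
# Sketch: parse NAME=VALUE pairs the way an env_file is consumed,
# skipping comment and blank lines. SLACK_TOKEN is a made-up example.
f=$(mktemp)
printf '# add secrets per line in the format NAME=VALUE\nSLACK_TOKEN=xoxb-example\n' > "$f"
parsed=""
while IFS='=' read -r name value; do
  case "$name" in '#'*|'') continue ;; esac   # comments and blanks
  parsed="${parsed}${name}=${value};"
done < "$f"
echo "$parsed"
```

Each surviving pair becomes an environment variable inside the container, which is why the warning above matters: anything committed here ends up in plain text in the history.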
Lines changed: 167 additions & 0 deletions
# While the Docker images themselves are supported for production usage,
# this docker-compose.yaml is designed to be used by developers to run
# an environment locally. It is not designed to be used in production.
# We recommend to use Kubernetes in production with our Helm Charts:
# https://docs.camunda.io/docs/self-managed/platform-deployment/kubernetes-helm/
# For local development, we recommend using KIND instead of `docker-compose`:
# https://docs.camunda.io/docs/self-managed/platform-deployment/helm-kubernetes/guides/local-kubernetes-cluster/

# This is a lightweight configuration with Zeebe, Operate, Tasklist, and Elasticsearch
# See docker-compose.yml for a configuration that also includes Optimize, Identity, and Keycloak.

services:

  zeebe: # https://docs.camunda.io/docs/self-managed/platform-deployment/docker/#zeebe
    image: camunda/zeebe:${CAMUNDA_ZEEBE_VERSION}
    container_name: zeebe
    ports:
      - "26500:26500"
      - "9600:9600"
      - "8088:8080"
    environment: # https://docs.camunda.io/docs/self-managed/zeebe-deployment/configuration/environment-variables/
      - ZEEBE_BROKER_EXPORTERS_ELASTICSEARCH_CLASSNAME=io.camunda.zeebe.exporter.ElasticsearchExporter
      - ZEEBE_BROKER_EXPORTERS_ELASTICSEARCH_ARGS_URL=http://elasticsearch:9200
      # default is 1000, see here: https://github.com/camunda/zeebe/blob/main/exporters/elasticsearch-exporter/src/main/java/io/camunda/zeebe/exporter/ElasticsearchExporterConfiguration.java#L259
      - ZEEBE_BROKER_EXPORTERS_ELASTICSEARCH_ARGS_BULK_SIZE=1
      # allow running with low disk space
      - ZEEBE_BROKER_DATA_DISKUSAGECOMMANDWATERMARK=0.998
      - ZEEBE_BROKER_DATA_DISKUSAGEREPLICATIONWATERMARK=0.999
      - "JAVA_TOOL_OPTIONS=-Xms512m -Xmx512m"
    restart: unless-stopped
    healthcheck:
      test: [ "CMD-SHELL", "timeout 10s bash -c ':> /dev/tcp/127.0.0.1/9600' || exit 1" ]
      interval: 30s
      timeout: 5s
      retries: 5
      start_period: 30s
    volumes:
      - zeebe:/usr/local/zeebe/data
    networks:
      - camunda-platform
    depends_on:
      - elasticsearch

  operate: # https://docs.camunda.io/docs/self-managed/platform-deployment/docker/#operate
    image: camunda/operate:${CAMUNDA_OPERATE_VERSION}
    container_name: operate
    ports:
      - "8081:8080"
    environment: # https://docs.camunda.io/docs/self-managed/operate-deployment/configuration/
      - CAMUNDA_OPERATE_ZEEBE_GATEWAYADDRESS=zeebe:26500
      - CAMUNDA_OPERATE_ELASTICSEARCH_URL=http://elasticsearch:9200
      - CAMUNDA_OPERATE_ZEEBEELASTICSEARCH_URL=http://elasticsearch:9200
      - CAMUNDA_OPERATE_CSRFPREVENTIONENABLED=false
      - management.endpoints.web.exposure.include=health
      - management.endpoint.health.probes.enabled=true
    healthcheck:
      test: [ "CMD-SHELL", "wget -O - -q 'http://localhost:9600/actuator/health/readiness'" ]
      interval: 30s
      timeout: 1s
      retries: 5
      start_period: 30s
    networks:
      - camunda-platform
    depends_on:
      - zeebe
      - elasticsearch

  tasklist: # https://docs.camunda.io/docs/self-managed/platform-deployment/docker/#tasklist
    image: camunda/tasklist:${CAMUNDA_TASKLIST_VERSION}
    container_name: tasklist
    ports:
      - "8082:8080"
    environment: # https://docs.camunda.io/docs/self-managed/tasklist-deployment/configuration/
      - CAMUNDA_TASKLIST_ZEEBE_GATEWAYADDRESS=zeebe:26500
      - CAMUNDA_TASKLIST_ZEEBE_RESTADDRESS=http://zeebe:8080
      - CAMUNDA_TASKLIST_ELASTICSEARCH_URL=http://elasticsearch:9200
      - CAMUNDA_TASKLIST_ZEEBEELASTICSEARCH_URL=http://elasticsearch:9200
      - CAMUNDA_TASKLIST_CSRFPREVENTIONENABLED=false
      - management.endpoints.web.exposure.include=health
      - management.endpoint.health.probes.enabled=true
    healthcheck:
      test: [ "CMD-SHELL", "wget -O - -q 'http://localhost:9600/actuator/health/readiness'" ]
      interval: 30s
      timeout: 1s
      retries: 5
      start_period: 30s
    networks:
      - camunda-platform
    depends_on:
      - zeebe
      - elasticsearch

  connectors: # https://docs.camunda.io/docs/components/integration-framework/connectors/out-of-the-box-connectors/available-connectors-overview/
    image: camunda/connectors-bundle:${CAMUNDA_CONNECTORS_VERSION}
    container_name: connectors
    ports:
      - "8085:8080"
    environment:
      - ZEEBE_CLIENT_BROKER_GATEWAY-ADDRESS=zeebe:26500
      - ZEEBE_CLIENT_SECURITY_PLAINTEXT=true
      - CAMUNDA_OPERATE_CLIENT_URL=http://operate:8080
      - CAMUNDA_OPERATE_CLIENT_USERNAME=demo
      - CAMUNDA_OPERATE_CLIENT_PASSWORD=demo
      - management.endpoints.web.exposure.include=health
      - management.endpoint.health.probes.enabled=true
    healthcheck:
      test: [ "CMD-SHELL", "curl -f http://localhost:8080/actuator/health/readiness" ]
      interval: 30s
      timeout: 1s
      retries: 5
      start_period: 30s
    env_file: connector-secrets.txt
    networks:
      - camunda-platform
    depends_on:
      - zeebe
      - operate

  elasticsearch: # https://hub.docker.com/_/elasticsearch
    image: docker.elastic.co/elasticsearch/elasticsearch:${ELASTIC_VERSION}
    container_name: elasticsearch
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      - bootstrap.memory_lock=true
      - discovery.type=single-node
      - xpack.security.enabled=false
      # allow running with low disk space
      - cluster.routing.allocation.disk.threshold_enabled=false
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    restart: unless-stopped
    healthcheck:
      test: [ "CMD-SHELL", "curl -f http://localhost:9200/_cat/health | grep -q green" ]
      interval: 30s
      timeout: 5s
      retries: 3
    volumes:
      - elastic:/usr/share/elasticsearch/data
    networks:
      - camunda-platform

  kibana:
    image: docker.elastic.co/kibana/kibana:${ELASTIC_VERSION}
    container_name: kibana
    ports:
      - 5601:5601
    volumes:
      - kibana:/usr/share/kibana/data
    networks:
      - camunda-platform
    depends_on:
      - elasticsearch
    profiles:
      - kibana

volumes:
  zeebe:
  elastic:
  kibana:

networks:
  camunda-platform:
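The zeebe healthcheck above uses bash's `/dev/tcp` pseudo-device instead of curl or wget, so it needs no extra tooling in the image. A standalone sketch of the same probe (the host/port call shown is illustrative; only the function itself is exercised here):

```shell
# Sketch of the TCP probe from the zeebe healthcheck: redirecting to
# /dev/tcp/HOST/PORT exits 0 only if the port accepts a TCP connection,
# so it doubles as a dependency-free readiness check.
probe() {
  timeout 5 bash -c ":> /dev/tcp/$1/$2" 2>/dev/null
}
# inside the container, the broker's monitoring port would be probed as:
probe 127.0.0.1 9600 && echo "zeebe monitoring port is up" \
                     || echo "zeebe not reachable"
```

The other services instead hit their Spring actuator readiness endpoints with wget or curl, which their images do ship.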
