This software is pre-production and should not be deployed to production servers.
This folder contains:
- dockerfiles to build workload images,
- aurora job definitions,
- an ansible playbook, run_workloads.yaml, to schedule all workloads on an Aurora cluster.
The Wrapper module provides a framework for parsing application output and sending metrics in Prometheus format to a Kafka broker.
Build the wrappers with:

    tox -e wrapper_package

The wrapper executables are produced as dist/wrapper_*.pex files.
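For illustration, a minimal sketch (not the actual wrapper code; the function name and signature are assumptions) of rendering one metric sample in the Prometheus text exposition format that the wrapper sends to Kafka:

```python
def format_prometheus_metric(name, value, labels=None, timestamp=None):
    """Render one sample in Prometheus text exposition format,
    e.g. cassandra_qps{host="node37"} 2000"""
    label_str = ''
    if labels:
        pairs = ','.join('{}="{}"'.format(k, v)
                         for k, v in sorted(labels.items()))
        label_str = '{' + pairs + '}'
    line = '{}{} {}'.format(name, label_str, value)
    if timestamp is not None:
        # optional millisecond timestamp suffix
        line += ' {}'.format(timestamp)
    return line

print(format_prometheus_metric('cassandra_qps', 2000, {'host': 'node37'}))
# → cassandra_qps{host="node37"} 2000
```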
Note that:
- most workload images require the wrapper pex files (see Building wrappers),
- workload images are built from the repository top level (to have access to the wrapper pex files),
- the convention is to prefix image names with the string owca/, e.g. owca/cassandra_stress,
- subdirectories here, e.g. cassandra_stress, match the image names.
To build an image:

    # in the repository top-level directory
    docker build -f workloads/stress_ng/Dockerfile -t owca/stress_ng .

Aurora files (*.aurora) contain definitions of Aurora jobs.
Each aurora file includes common.aurora, which defines the required common parameters (see that file for the full list).
Workload-specific variables are documented in the workload aurora files.
Use the run_workloads.yaml playbook to run workloads on an Aurora cluster.
The playbook requires the Aurora client to be installed on the ansible host machine (please follow the official instructions to install and configure the client properly).
The run_workloads.yaml playbook requires an inventory based on run_workloads_inventory.template.yaml. The template is an example of how to configure a composition of workloads.
To run a workload instance on a specific cluster node we use the aurora constraints mechanism.
In our solution this requires marking Mesos nodes with an attribute named own_ip.
Then, to assign a job to a specific node, the value of the own_ip parameter needs to match
the value of the mesos attribute set on that node.
More information about aurora constraints and mesos attributes can be found in the
official aurora documentation.
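For illustration, a hypothetical fragment of a *.aurora job definition using this mechanism (the job name and attribute value below are assumptions, not taken from the actual job files):

```python
# Schedule this job only on the Mesos agent that was started with
#   --attributes="own_ip:10.10.10.1"
job = Job(
  name = 'cassandra',
  role = 'root',
  # aurora constraint: the node's 'own_ip' attribute must equal this value
  constraints = {'own_ip': '10.10.10.1'},
  # ... remaining parameters (task, resources, etc.)
)
```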
As noted above, the reference for creating an inventory is the file run_workloads_inventory.template.yaml. The template file contains comments that help explain its structure.
Below is a resource allocation definition for a workload; it will be applied to all hosts:
    application_hosts:
      hosts:
        # ....
      vars:
        # ....
        workloads:
          cassandra_ycsb:  # workload_name
            default:  # workload_version_name
              cassandra:  # job_id
                resources:
                  cpu: 8
                  disk: 4
              ycsb:  # job_id
                resources:
                  cpu: 1.5

We can override the values set above for a chosen host (we also need to set hash_behaviour to merge; please refer to the
Ansible documentation).
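For reference, a minimal ansible.cfg fragment enabling deep merging of dictionary variables, so that per-host workloads entries merge with the group-level ones instead of replacing them:

```ini
# ansible.cfg
[defaults]
hash_behaviour = merge
```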
To achieve this we create a workloads dictionary under the chosen host:
    application_hosts:
      hosts:
        10.10.10.9.4:
          env_uniq_id: 4
          workloads:  # overriding for a chosen host
            default:
              cassandra_ycsb:  #
                resources:  #
                  cpu: 4  #
      vars:
        # ....
        workloads:
          cassandra_ycsb:  # workload_name
            default:
              cassandra:  # job_id
                resources:
                  cpu: 8
                  disk: 4
              ycsb:
                resources:
                  cpu: 1.5

Below we include an example configuration of a workload, with comments marking the values which translate into common.aurora parameters:
    docker_registry: 10.10.10.99:80
    # other params go here ...
    workloads:
      cassandra_ycsb:  # workload_name
        default:  # workload_version_name
          count: 2  # two instances of the same workload
          slo: 2500  # slo
          communication_port: 3333  # communication_port
          cassandra:
            image_name: cassandra  # image_name
            image_tag: 3.11.3  # image_tag
            resources:
              cpu: 8  # cpu
              disk: 4  # disk
          ycsb:
            count: 2  # two load generators stress the same cassandra instance
            env:  # any value passed here is passed directly to the aurora job (using environment variables)
              ycsb_target: 2000  # check the ycsb.aurora file for a description of available parameters
              ycsb_thread_count: 8
            resources:
              cpu: 1.5  # cpu
        big:  # workload_version_name
          ...

The rule for building an aurora job_key (a string identifying an aurora job; a required argument of the aurora job create command) is:
{{cluster}}/{{role}}/staging{{env_uniq_id}}/{{workload_name}}.{{workload_version_name}}--{{job_id}}--{{job_uniq_id}}.{{job_replica_index}}.
The shell commands which ansible will execute as a result are as follows:
# first instance of the workload
# two replicas of load generators
aurora job create example/root/staging127/cassandra_ycsb.default--ycsb--3333.0
aurora job create example/root/staging127/cassandra_ycsb.default--ycsb--3333.1
aurora job create example/root/staging127/cassandra_ycsb.default--cassandra--3333.0
# second instance of the workload
# two replicas of load generators
aurora job create example/root/staging127/cassandra_ycsb.default--ycsb--3334.0
aurora job create example/root/staging127/cassandra_ycsb.default--ycsb--3334.1
aurora job create example/root/staging127/cassandra_ycsb.default--cassandra--3334.0
# commands for the 'big' workload version go here
aurora job create example/root/staging127/cassandra_ycsb.big--ycsb--3333.0
# ...
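The expansion above can be sketched in Python (a hypothetical illustration of what the playbook computes; the function and parameter names are assumptions, not the actual playbook code):

```python
def expand_jobs(cluster, role, env_uniq_id, workload_name, version,
                workload_count, base_port, jobs):
    """Expand one workload version into `aurora job create` commands.

    jobs maps job_id -> replica count. Each workload instance gets its
    own job_uniq_id, derived from the communication port."""
    commands = []
    for instance in range(workload_count):
        job_uniq_id = base_port + instance  # one port per workload instance
        for job_id, replicas in jobs.items():
            for replica in range(replicas):
                key = "{}/{}/staging{}/{}.{}--{}--{}.{}".format(
                    cluster, role, env_uniq_id, workload_name, version,
                    job_id, job_uniq_id, replica)
                commands.append("aurora job create " + key)
    return commands

for cmd in expand_jobs('example', 'root', 127, 'cassandra_ycsb', 'default',
                       2, 3333, {'ycsb': 2, 'cassandra': 1}):
    print(cmd)  # prints the six `aurora job create` commands shown above
```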