Some experiences with weave in Docker Swarm and Kubernetes #2841
Description
I installed weave in Kubernetes via the CNI plugin and in Docker Swarm integrated mode via the V2 Docker plugin bboreham/weave2.
In both setups I experienced problems getting weave installed correctly. Moreover, the weave Docker plugin exhibits lower performance than other networks when running a YCSB benchmark workload against a single mongo service.
Finally, node ports in Docker Swarm are always routed via the default ingress network (type overlay), regardless of the network chosen by the application deployer.
I hope the following findings and conclusions are helpful for improving your ongoing work.
1) Kubernetes deployment problems
I first set up a Kubernetes cluster on Ubuntu Xenial VMs using the kubeadm tool and the flannel CNI plugin.
Thereafter I removed the flannel CNI plugin and installed the weave DaemonSet using its CNI YAML file. The deployment completed without errors. However, opening a connection to the mongo service via its cluster IP address did not work: iptables --list
showed that rules for flannel were still active. After rebooting each VM, the connection to the mongo service via weave worked correctly.
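Presumably the reboot is what cleared the stale flannel state. A possible manual alternative, sketched below, is to check for and remove the leftovers on each node; the exact name of the flannel CNI config file is an assumption and this was not verified on my setup:

# Check for leftover flannel state on each node
sudo iptables-save | grep -i flannel      # stale NAT/forward rules
ip link show flannel.1                    # leftover VXLAN interface
ls /etc/cni/net.d/                        # leftover flannel CNI config file

# Clean up and restart the kubelet so weave takes over (file name assumed)
sudo ip link delete flannel.1
sudo rm -f /etc/cni/net.d/10-flannel.conf
sudo systemctl restart kubelet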
2) Docker Swarm deployment problems.
I set up a Docker Swarm cluster by installing docker-engine version 17.03.0-ce, build 60ccb22. I installed the weave plugin as follows:
s/kube-mongo/experiment-1-bis/swarm$ more deploy_weave.sh
# $1 = number of hosts, $2 = host1, $3 = host2, ...
# first at local host install weave and docker plugin
# then for each remote host install weave and docker plugin
# then invoke weave connect with host1 host2 ...
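The script body is not shown above; a minimal sketch of what it does, assuming root access, ssh connectivity to the remote hosts, and the bboreham/weave2 plugin from Docker Hub, would be:

# Sketch only: exact plugin tag and install options are assumptions
# $1 = number of hosts, $2..$n = host names
curl -L git.io/weave -o /usr/local/bin/weave && chmod a+x /usr/local/bin/weave
docker plugin install --grant-all-permissions bboreham/weave2
for host in "${@:2}"; do
  ssh "$host" 'curl -L git.io/weave -o /usr/local/bin/weave && chmod a+x /usr/local/bin/weave'
  ssh "$host" 'docker plugin install --grant-all-permissions bboreham/weave2'
done
weave connect "${@:2}"    # host1 host2 ...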
I was able to ping the mongo-service.
However, similar to the Kubernetes setup, I was not able to connect to a remote mongo service using the cluster IP address from another mongo container (e.g. mongo --host <cluster IP> did not work).
The key to solving this problem is to run weave expose on each node.
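For example (a sketch; host names are hypothetical, and the command can equally be run locally on each node):

# weave expose gives the host itself an interface on the weave network; in this
# setup it was needed before the cluster IP of the mongo-service became reachable
for host in host1 host2 host3; do
  ssh "$host" weave expose
done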
3) Performance comparison
3.1) I compared four mongo-service setups:
a) Mongo-service deployed in the above kubeadm setup without a NodePort
b) Mongo-service deployed in the above kubeadm setup with a NodePort (a sketch of how (a) and (b) were exposed is given after this list)
c) Mongo-service deployed in the above Docker Swarm setup:
docker service create --network weave \
  --constraint node.hostname==docker-swarm-worker \
  --name mongo-service \
  --mount target=/data/db,source=mongodb,type=volume,volume-driver=local \
  -p 27017 \
  mongo:3.2.12 --logappend --logpath /var/log/mongodb/mongod.log --bind_ip 0.0.0.0
c.1) Connect via the cluster IP address
c.2) Connect via the node port (30000)
d) Mongo-service deployed in Docker Swarm as in setup (c), but connected to a default overlay network
d.1) Connect via the cluster IP address
d.2) Connect via the node port (30000)
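For reference, the Kubernetes services for setups (a) and (b) can be sketched with kubectl; the deployment and service names used here are assumptions, and the actual manifests are not shown:

# Setup (a): mongo reachable only via a cluster IP
kubectl expose deployment mongo --name mongo-service --port 27017 --type ClusterIP

# Setup (b): additionally published on a NodePort
kubectl expose deployment mongo --name mongo-service --port 27017 --type NodePort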
3.2) Experiment
I ran the following simple YCSB workload from https://github.com/brianfrankcooper/YCSB:
./bin/ycsb load mongodb -P workloads/workloada -threads 1 -p recordcount=100000 -p operationcount=1000 -p mongodb.url=mongodb://172.17.13.144:31629/ycsb
All setups were deployed on three OpenStack VMs in an OpenStack private cloud. All three VMs are deployed on the same set of three physical machines, connected by a high-speed network. Each VM has 2 virtual CPUs pinned to exclusive physical cores of the same socket (hyperthreading enabled) and 4 GB of RAM.
To ensure uniformity, nodeSelectors in Kubernetes and placement constraints in Docker Swarm were used so that all setups run on the exact same node topology (a nodeSelector sketch follows the list below):
- Node 1: Master/Manager
- Node 2: Worker running the mongo:3.2.12 container
- Node 3: Worker running the decomads/ycsb container. The command to deploy the ycsb container is:
docker service create --constraint node.hostname==docker-swarm-worker-near-ycsb \
  --network weave --name ycsb decomads/ycsb start.sh
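On the Docker Swarm side the placement is handled by the --constraint flags above; on the Kubernetes side a roughly equivalent nodeSelector can be added as sketched below (label key/value, node name, and deployment name are hypothetical):

# Label the worker that should run mongo, then pin the deployment to it
kubectl label nodes <mongo-node-name> role=mongo
kubectl patch deployment mongo -p '{"spec":{"template":{"spec":{"nodeSelector":{"role":"mongo"}}}}}'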
3.3) Results: average, 95th, and 99th percentile latencies (ms):

             average   95th   99th
Setup a:        598    1012   1580
Setup b:        734    1062   1648
Setup c.1:     1220    1562   2205
Setup c.2:      673     932   1416
Setup d.1:      634     936   1348
Setup d.2:      697     914   1370
3.4) Findings
Setup c.1 (Docker Swarm + weave, mongo-service accessed via its cluster IP) shows clearly degraded performance.
All other setups perform consistently with one another.
3.5) Threats to validity
- All tests were run only 2 or 3 times, until consistent results were observed.
- Only a single YCSB workload type (workload A) was used, and the workload volume was small.
- All tests were conducted on an OpenStack private cloud. Specific interference between the OpenStack Neutron service and weave could also explain the discrepancy in setup c.1.
4) Conclusion
The weave Docker plugin has lower performance than the weave Kubernetes CNI plugin for cluster IP connections.
Node ports in the Docker Swarm + weave setup are connected to the default ingress network of type overlay. This is confirmed by the following observation: when I inspected the mongo-service in setup c, it showed that the service has 2 endpoints, one connected to the ingress network and one connected to the weave network.
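A quick way to see this (sketch; service name as used above) is to print the service's virtual IPs, which list one address on the ingress network and one on the weave network:

# Each entry contains a NetworkID and an Addr; two entries = two endpoints
docker service inspect --format '{{json .Endpoint.VirtualIPs}}' mongo-service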