
Commit 430763d

Merge branch 'release/4.0.0'

2 parents 60a8898 + 6ca1be4


49 files changed: +395 −60 lines

CHANGELOG.md (+8 −1)

@@ -1,3 +1,10 @@
 # AlviStack-Ansible
 
-## 0.0.1 - TBC
+## 4.1.0 - TBC
+
+### Major Changes
+
+## 4.0.0 - 2019-11-05
+
+- Initial release for Ansible 2.9 or higher
+- Support Ubuntu 16.04/18.04, RHEL/CentOS 7, and openSUSE Leap 15.1

README.md (+131 −2)
@@ -6,11 +6,140 @@
 
 Ansible playbooks for deploying AlviStack.
 
+AlviStack-Ansible provides Ansible playbooks and roles for the deployment and configuration of a [Kubernetes](https://github.com/kubernetes/kubernetes) environment.
+
 ## Requirements
 
-This playbook require Ansible 2.8 or higher.
+This playbook requires Ansible 2.9 or higher.
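
The helper script in the Quick Start below bootstraps this toolchain for you; if you prefer to install it by hand, a rough Ubuntu equivalent might look like the following (an assumption about the script's effect, not its actual contents):

    # Hypothetical manual bootstrap; the supported path is ./scripts/bootstrap-ansible.sh
    sudo apt-get update && sudo apt-get install -y python3 python3-pip
    pip3 install --user "ansible>=2.9,<2.10"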
+
+This playbook was designed for Ubuntu 16.04/18.04, RHEL/CentOS 7, or openSUSE Leap 15.1.
+
+## Quick Start
+
+### Bootstrap Ansible and Roles
+
+Start by cloning the AlviStack-Ansible repository, check out the corresponding branch, and initialize it with `git submodule`, then bootstrap Python 3 + Ansible with the provided helper script:
+
+    # GIT clone the development branch
+    git clone --branch develop https://github.com/alvistack/alvistack-ansible
+    cd alvistack-ansible
+
+    # Set up roles with GIT submodule
+    git submodule init
+    git submodule sync
+    git submodule update
+
+    # Bootstrap Ansible
+    ./scripts/bootstrap-ansible.sh
+
+    # Confirm the versions of Python3, PIP3 and Ansible
+    python3 --version
+    pip3 --version
+    ansible --version
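
Since the roles are pulled in as Git submodules, it can help to confirm that each one is checked out at the commit pinned by the superproject before bootstrapping; standard `git` offers this check (not part of the README itself):

    # A "-" prefix means uninitialized; "+" means checked out at a different commit
    git submodule status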
+### AIO
+
+An all-in-one (AIO) build is a great way to perform a Kubernetes build for:
+
+- A development environment
+- An overview of how all the Kubernetes services fit together
+- A simple lab deployment
+
+This deployment will set up the following components:
+
+- [Flannel](https://github.com/coreos/flannel)
+- [Dashboard](https://github.com/kubernetes/dashboard)
+- [NGINX Ingress Controller](https://github.com/kubernetes/ingress-nginx)
+- [cert-manager](https://github.com/jetstack/cert-manager)
+- [Local Path Provisioner](https://github.com/rancher/local-path-provisioner)
+
+Simply run the playbooks with the sample AIO inventory:
+
+    # Run playbooks
+    ansible-playbook -i inventory/aio/hosts playbooks/setup-aio.yml
+
+    # Confirm the version and status of Kubernetes
+    kubectl version
+    kubectl get node
+    kubectl get pod --all-namespaces
+
+### Production
+
+For a production environment, we should back [Kubernetes Persistent Volumes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/) with the [Ceph File System](https://docs.ceph.com/docs/master/cephfs/).
+
+Moreover, we use [Weave Net](https://github.com/weaveworks/weave) as the network plugin so we can support [Kubernetes Network Policies](https://kubernetes.io/docs/concepts/services-networking/network-policies/).
+
+Finally, in order to avoid a [Single Point of Failure](https://en.wikipedia.org/wiki/Single_point_of_failure), at least 3 instances for CephFS and 3 instances for Kubernetes are recommended.
+
+This deployment will set up the following components:
+
+- [Ceph](https://ceph.io/)
+- [Weave Net](https://github.com/weaveworks/weave)
+- [Dashboard](https://github.com/kubernetes/dashboard)
+- [NGINX Ingress Controller](https://github.com/kubernetes/ingress-nginx)
+- [cert-manager](https://github.com/jetstack/cert-manager)
+- [CephFS Volume Provisioner](https://github.com/kubernetes-incubator/external-storage/tree/master/ceph/cephfs)
+
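Once the CephFS Volume Provisioner is running, dynamic provisioning is typically wired up through a StorageClass along the lines of the sketch below; the secret name, namespace, and claim details are illustrative assumptions, not values from this commit (the monitor addresses follow the sample inventory further down):

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: cephfs
    provisioner: ceph.com/cephfs          # provisioner name used by external-storage/ceph/cephfs
    parameters:
      monitors: "172.24.1.11:6789,172.24.1.12:6789,172.24.1.13:6789"
      adminId: admin                      # Ceph client allowed to create volumes
      adminSecretName: ceph-secret-admin  # hypothetical Secret holding that client's key
      adminSecretNamespace: kube-system   # hypothetical namespace of the Secret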
+Start by copying the sample inventory for customization:
+
+    # Copy sample inventory
+    cp -rfp inventory/sample inventory/myinventory
+
+Once you have updated `inventory/myinventory/hosts` to match your production environment, run the playbooks:
+
+    # Run playbooks
+    ansible-playbook -i inventory/myinventory/hosts playbooks/setup-everything.yml
+    ansible-playbook -i inventory/myinventory/hosts playbooks/setup-ansible.yml
+
+    # Confirm the version and status of Ceph
+    ceph --version
+    ceph --status
+    ceph health detail
+
+    # Confirm the version and status of Kubernetes
+    kubectl version
+    kubectl get node
+    kubectl get pod --all-namespaces
+
+Moreover, we don't set up the Ceph OSD and Ceph MDS for you; you should set them up once manually according to your production environment, e.g.:
+
+    # Initialize individual OSDs
+    ceph-volume lvm create --bluestore --data /dev/sdb
+    ceph-volume lvm create --bluestore --data /dev/sdc
+    ceph-volume lvm create --bluestore --data /dev/sdd
+    ceph-volume lvm activate --all
+
+    # Create OSD pool for RBD
+    ceph osd pool create rbd 8 8
+    ceph osd pool set rbd size 3
+    ceph osd pool set rbd min_size 2
+
+    # Create OSD pool for CephFS metadata
+    ceph osd pool create cephfs_metadata 32 32
+    ceph osd pool set cephfs_metadata size 3
+    ceph osd pool set cephfs_metadata min_size 2
+
+    # Create OSD pool for CephFS data
+    ceph osd pool create cephfs_data 128 128
+    ceph osd pool set cephfs_data size 3
+    ceph osd pool set cephfs_data min_size 2
+
+    # Create CephFS
+    ceph fs new cephfs cephfs_metadata cephfs_data
+    ceph fs set cephfs standby_count_wanted 0
+    ceph fs set cephfs max_mds 1
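
For reference, the PG counts above are roughly what the common rule of thumb suggests (an editor's assumption, not something this commit states): target about (OSDs × 100) / replica_count placement groups, so (3 × 100) / 3 = 100, and rounding up to the next power of two gives 128 for the busy cephfs_data pool, with smaller powers of two (32 and 8) for the lighter cephfs_metadata and rbd pools. The resulting layout can be inspected with:

    # Shows size, min_size and pg_num for every pool
    ceph osd pool ls detail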
+### Molecule
+
+You could also run our [Molecule](https://molecule.readthedocs.io/en/stable/) test cases if you have [LXD](https://lxd.readthedocs.io/en/latest/) or [Vagrant](https://www.vagrantup.com/) installed, e.g.:
+
+    # Run Molecule on Ubuntu 18.04 with LXD
+    molecule converge -s ubuntu-18.04
+
+    # Run Molecule on Ubuntu 18.04 with Vagrant
+    molecule converge -s ubuntu-18.04-vagrant
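
Note that `molecule converge` only builds and configures the scenario instance; Molecule also provides a full cycle (create, converge, verify, destroy), which is presumably what CI runs:

    # Run the complete Molecule test sequence for one scenario
    molecule test -s ubuntu-18.04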

-This playbook was designed for Ubuntu 16.04/18.04 or RHEL/CentOS 7 or openSUSE Leap 15.0.
+Please refer to [.travis.yml](.travis.yml) for more information on running Molecule and LXD.
 
 ## License
 

inventory/aio/group_vars/all/all.yml (new file, +95)

---

# ceph-common
ceph_fsid: "{{ hostvars[groups['ceph-mon'][0]].ansible_machine_id | to_uuid }}"
ceph_mon_initial_members: >-
  {%- for host in groups['ceph-mon'] -%}
  {{ host }},
  {%- endfor -%}
ceph_mon_host: >-
  {%- for host in groups['ceph-mon'] -%}
  {{ hostvars[host].ansible_default_ipv4.address }},
  {%- endfor -%}
ceph_public_network: "{{ (ansible_default_ipv4.network + '/' + ansible_default_ipv4.netmask) | ipaddr('net') }}"
ceph_cluster_network: "{{ (ansible_default_ipv4.network + '/' + ansible_default_ipv4.netmask) | ipaddr('net') }}"

# etcd
etcd_cert_file: "/etc/kubernetes/pki/etcd/server.crt"
etcd_key_file: "/etc/kubernetes/pki/etcd/server.key"
etcd_trusted_ca_file: "/etc/kubernetes/pki/etcd/ca.crt"
etcd_peer_cert_file: "/etc/kubernetes/pki/etcd/peer.crt"
etcd_peer_key_file: "/etc/kubernetes/pki/etcd/peer.key"
etcd_peer_trusted_ca_file: "/etc/kubernetes/pki/etcd/ca.crt"
etcd_listen_peer_urls: "https://{{ ansible_default_ipv4.address }}:2380,https://127.0.0.1:2380"
etcd_listen_client_urls: "https://{{ ansible_default_ipv4.address }}:2379,https://127.0.0.1:2379"
etcd_initial_advertise_peer_urls: "https://{{ ansible_default_ipv4.address }}:2380"
etcd_initial_cluster: >-
  {%- for host in ansible_play_hosts -%}
  {{ host }}=https://{{ hostvars[host].ansible_default_ipv4.address }}:2380,
  {%- endfor -%}
etcd_advertise_client_urls: "https://{{ ansible_default_ipv4.address }}:2379"
etcd_csr_subject_alt_name: >-
  {%- for host in ansible_play_hosts -%}
  DNS:{{ host }},
  {%- endfor -%}
  DNS:localhost,
  {%- for host in ansible_play_hosts -%}
  IP:{{ hostvars[host].ansible_default_ipv4.address }},
  {%- endfor -%}
  IP:127.0.0.1
etcd_peer_csr_subject_alt_name: >-
  {%- for host in ansible_play_hosts -%}
  DNS:{{ host }},
  {%- endfor -%}
  DNS:localhost,
  {%- for host in ansible_play_hosts -%}
  IP:{{ hostvars[host].ansible_default_ipv4.address }},
  {%- endfor -%}
  IP:127.0.0.1

# kubernetes
kubernetes_cluster_name: "{{ hostvars[groups['kube-master'][0]].ansible_machine_id | to_uuid }}"
kubernetes_etcd_external_endpoints: >-
  {%- for host in ansible_play_hosts -%}
  https://{{ hostvars[host].ansible_default_ipv4.address }}:2379,
  {%- endfor -%}
kubelet_node_ip: "{{ ansible_default_ipv4.address }}"
kube_apiserver_advertise_address: "{{ ansible_default_ipv4.address }}"
kube_apiserver_bind_port: "6443"
kube_apiserver_endpoint: "{{ hostvars[groups['kube-master'][0]].ansible_default_ipv4.address }}:{{ kube_apiserver_bind_port }}"
kube_apiserver_csr_subject_alt_name: >-
  {%- for host in ansible_play_hosts -%}
  DNS:{{ host }},
  {%- endfor -%}
  DNS:localhost,
  {%- for host in ansible_play_hosts -%}
  IP:{{ hostvars[host].ansible_default_ipv4.address }},
  {%- endfor -%}
  IP:127.0.0.1
kube_proxy_conntrack:
  maxPerCore: 32768
  min: 131072

# kubernetes-cephfs-provisioner
cephfs_provisioner_monitors: >-
  {%- for host in groups['ceph-mon'] -%}
  {{ hostvars[host].ansible_default_ipv4.address }}:6789,
  {%- endfor -%}

# kubernetes-rbd-provisioner
rbd_provisioner_monitors: >-
  {%- for host in groups['ceph-mon'] -%}
  {{ hostvars[host].ansible_default_ipv4.address }}:6789,
  {%- endfor -%}

# kubernetes-csi-cephfs
csi_cephfs_monitors: >-
  {%- for host in groups['ceph-mon'] -%}
  {{ hostvars[host].ansible_default_ipv4.address }}:6789,
  {%- endfor -%}

# kubernetes-csi-rbd
csi_rbd_monitors: >-
  {%- for host in groups['ceph-mon'] -%}
  {{ hostvars[host].ansible_default_ipv4.address }}:6789,
  {%- endfor -%}
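
The `>-` folded scalars combined with the whitespace-trimming `{%- ... -%}` tags collapse each loop into a single comma-joined string. As an illustration (using the [ceph-mon] hosts from the sample inventory below; the trailing comma comes from the loop, and the consuming roles presumably tolerate or strip it), cephfs_provisioner_monitors would render to:

    172.24.1.11:6789,172.24.1.12:6789,172.24.1.13:6789,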

inventory/aio/host_vars/localhost.yml (new file, +1)

---

inventory/aio/hosts (new file, +26)

[all]
localhost ansible_connection=local

[ansible]
localhost

[ceph-mon]
localhost

[ceph-mgr]
localhost

[ceph-osd]
localhost

[ceph-mds]
localhost

[ceph-rgw]
localhost

[kube-master]
localhost

[kube-node]
localhost
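
As a quick sanity check that this AIO inventory parses and the local connection works (standard Ansible usage, not part of the commit):

    ansible -i inventory/aio/hosts all -m ping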

inventory/sample/hosts (+34 −8)

@@ -1,23 +1,49 @@
+[all]
+ansible ansible_host=172.24.0.11
+ceph11 ansible_host=172.24.1.11
+ceph12 ansible_host=172.24.1.12
+ceph13 ansible_host=172.24.1.13
+kube11 ansible_host=172.24.2.11
+kube12 ansible_host=172.24.2.12
+kube13 ansible_host=172.24.2.13
+kube14 ansible_host=172.24.2.14
+kube15 ansible_host=172.24.2.15
+kube16 ansible_host=172.24.2.16
+
 [ansible]
-localhost
+ansible
 
 [ceph-mon]
-localhost
+ceph11
+ceph12
+ceph13
 
 [ceph-mgr]
-localhost
+ceph11
+ceph12
+ceph13
 
 [ceph-osd]
-localhost
+ceph11
+ceph12
+ceph13
 
 [ceph-mds]
-localhost
+ceph11
+ceph12
+ceph13
 
 [ceph-rgw]
-localhost
+ceph11
+ceph12
+ceph13
 
 [kube-master]
-localhost
+kube11
+kube12
+kube13
 
 [kube-node]
-localhost
+kube14
+kube15
+kube16
