Commit f24b2f5

0.5.1+1.6.3 (#14)

* update Longhorn to v1.6.3
* update .yamllint
* fix ansible-lint issues
* update Molecule test
* fix Molecule verify

1 parent 8170558 commit f24b2f5

26 files changed: +396 −106 lines

.yamllint

Lines changed: 8 additions & 1 deletion
@@ -8,5 +8,12 @@ rules:
   line-length:
     max: 300
     level: warning
-
   comments-indentation: disable
+  comments:
+    min-spaces-from-content: 1
+  braces:
+    min-spaces-inside: 0
+    max-spaces-inside: 1
+  octal-values:
+    forbid-implicit-octal: true
+    forbid-explicit-octal: true
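A note on the new `octal-values` rule: in YAML 1.1 an unquoted `0644` is parsed as the octal integer `420`, which can silently change file modes in Ansible tasks. A minimal sketch of what the two options catch (the task itself is hypothetical):

```yaml
- name: Hypothetical task showing what octal-values flags
  ansible.builtin.file:
    path: /tmp/example
    state: touch
    mode: 0644       # forbid-implicit-octal flags this (YAML 1.1 reads it as 420)
    # mode: 0o644    # forbid-explicit-octal would flag this form too
    # mode: "0644"   # quoting the value keeps both rules happy
```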

CHANGELOG.md

Lines changed: 13 additions & 0 deletions
@@ -5,13 +5,26 @@ SPDX-License-Identifier: GPL-3.0-or-later
 
 # Changelog
 
+## 0.5.1+1.6.3
+
+- update Longhorn to `v1.6.3`
+- update `.yamllint`
+- update Molecule test
+
+Further reading:
+
+- [Longhorn v1.6.3 Release Notes](https://github.com/longhorn/longhorn/releases/tag/v1.6.3)
+- [Upgrading from v1.6.x (< v1.6.3) or v1.5.x](https://longhorn.io/docs/1.6.3/deploy/upgrade/longhorn-manager/)
+- [Node Maintenance and Kubernetes Upgrade Guide](https://longhorn.io/docs/1.6.3/maintenance/maintenance/)
+
 ## 0.5.0+1.6.1
 
 This is a major release update of Longhorn. Please read [Longhorn Important Notes](https://longhorn.io/docs/1.6.1/deploy/important-notes/) before upgrading!
 
 Further reading:
 
 - [Longhorn v1.6.1 Release Notes](https://github.com/longhorn/longhorn/releases/tag/v1.6.1)
+- [Upgrading from v1.6.x (< v1.6.3) or v1.5.x](https://longhorn.io/docs/1.6.3/deploy/upgrade/longhorn-manager/)
 - [Node Maintenance and Kubernetes Upgrade Guide](https://longhorn.io/docs/1.6.1/maintenance/maintenance/)
 
 Other updates:

README.md

Lines changed: 11 additions & 11 deletions
@@ -9,7 +9,7 @@ This Ansible role is used in my blog series [Kubernetes the not so hard way with
 
 ## Versions
 
-I tag every release and try to stay with [semantic versioning](http://semver.org). If you want to use the role I recommend to checkout the latest tag. The master branch is basically development while the tags mark stable releases. But in general I try to keep master in good shape too. A tag `0.5.0+1.6.1` means this is release `0.5.0` of this role and it contains Longhorn chart version `1.6.1` (which normally is the same as the Longhorn version itself). If the role itself changes `X.Y.Z` before `+` will increase. If the Longhorn chart version changes `X.Y.Z` after `+` will increase too. This allows to tag bugfixes and new major versions of the role while it's still developed for a specific Longhorn release.
+I tag every release and try to stay with [semantic versioning](http://semver.org). If you want to use the role I recommend to checkout the latest tag. The master branch is basically development while the tags mark stable releases. But in general I try to keep master in good shape too. A tag `0.5.1+1.6.3` means this is release `0.5.1` of this role and it contains Longhorn chart version `1.6.3` (which normally is the same as the Longhorn version itself). If the role itself changes `X.Y.Z` before `+` will increase. If the Longhorn chart version changes `X.Y.Z` after `+` will increase too. This allows to tag bugfixes and new major versions of the role while it's still developed for a specific Longhorn release.
 
 ## Requirements
 
@@ -37,7 +37,7 @@ See [CHANGELOG.md](https://github.com/githubixx/ansible-role-longhorn-kubernetes
 
 ```yaml
 # Helm chart version
-longhorn_chart_version: "1.6.1"
+longhorn_chart_version: "1.6.3"
 
 # Helm release name
 longhorn_release_name: "longhorn"
@@ -176,13 +176,13 @@ longhorn_label_nodes: false
 
 ### Longhorn documentation
 
-Before you start installing Longhorn you REALLY want to read the [The Longhorn Documentation](https://longhorn.io/docs/1.6.1/)! As data is the most valuable thing you can have you should understand how Longhorn works and don't forget to add backups later ;-). Esp. have a look at the [best practices](https://longhorn.io/docs/1.6.1/best-practices/).
+Before you start installing Longhorn you REALLY want to read the [The Longhorn Documentation](https://longhorn.io/docs/1.6.3/)! As data is the most valuable thing you can have you should understand how Longhorn works and don't forget to add backups later ;-). Esp. have a look at the [best practices](https://longhorn.io/docs/1.6.3/best-practices/).
 
 ### Helm chart values
 
-That said: The first thing to do is to check `templates/longhorn_values_default.yml.j2`. This file contains the values/settings for the Longhorn Helm chart that are partly default anyways (just to avoid that someone changes the defaults) or different to the default ones which are located [here](https://github.com/longhorn/longhorn/blob/v1.6.1/chart/values.yaml). All settings can be found in the [Settings Reference](https://longhorn.io/docs/1.6.1/references/settings/).
+That said: The first thing to do is to check `templates/longhorn_values_default.yml.j2`. This file contains the values/settings for the Longhorn Helm chart that are partly default anyways (just to avoid that someone changes the defaults) or different to the default ones which are located [here](https://github.com/longhorn/longhorn/blob/v1.6.3/chart/values.yaml). All settings can be found in the [Settings Reference](https://longhorn.io/docs/1.6.3/references/settings/).
 
-To use your own values just create a file called `longhorn_values_user.yml.j2` and put it into the `templates` directory. Then this Longhorn role will use that file to render the Helm values. You can use `templates/longhorn_values_default.yml.j2` as a template or just start from scratch. As mentioned above you can modify all settings for the Longhorn Helm chart that are different to the default ones which are located [here](https://github.com/longhorn/longhorn/blob/v1.6.1/chart/values.yaml).
+To use your own values just create a file called `longhorn_values_user.yml.j2` and put it into the `templates` directory. Then this Longhorn role will use that file to render the Helm values. You can use `templates/longhorn_values_default.yml.j2` as a template or just start from scratch. As mentioned above you can modify all settings for the Longhorn Helm chart that are different to the default ones which are located [here](https://github.com/longhorn/longhorn/blob/v1.6.3/chart/values.yaml).
 
 ### Render and verify deployment manifests
 
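To make the `longhorn_values_user.yml.j2` workflow in this hunk concrete, a minimal sketch of such a file may help; the two keys are illustrative picks from the upstream chart's `values.yaml`, not values this role requires:

```yaml
# templates/longhorn_values_user.yml.j2 -- rendered by the role as Helm chart values.
# Keys below are examples only; see the chart's values.yaml for the full reference.
defaultSettings:
  defaultReplicaCount: 2
persistence:
  defaultClassReplicaCount: 2
```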
@@ -218,15 +218,15 @@ To check if everything was deployed use the usual `kubectl` commands like `kubec
 
 ### Update/upgrade
 
-As Longhorn gets updates/upgrades every few weeks/months the role also can do upgrades. For updates/upgrades (esp. major upgrades) have a look at `tasks/upgrade.yml` to see what's happening before, during and after the update. In general this Ansible role does what's described in [Upgrade with Helm](https://longhorn.io/docs/1.6.1/deploy/upgrade/longhorn-manager/#upgrade-with-helm) in the [Upgrading Longhorn Manager](https://longhorn.io/docs/1.6.1/deploy/upgrade/longhorn-manager/) documentation. So in this step only the Longhorn Manager gets updated. After Ansible applied the update/upgrade wait till all Longhorn Pods are ready again.
+As Longhorn gets updates/upgrades every few weeks/months the role also can do upgrades. For updates/upgrades (esp. major upgrades) have a look at `tasks/upgrade.yml` to see what's happening before, during and after the update. In general this Ansible role does what's described in [Upgrade with Helm](https://longhorn.io/docs/1.6.3/deploy/upgrade/longhorn-manager/#upgrade-with-helm) in the [Upgrading Longhorn Manager](https://longhorn.io/docs/1.6.3/deploy/upgrade/longhorn-manager/) documentation. So in this step only the Longhorn Manager gets updated. After Ansible applied the update/upgrade wait till all Longhorn Pods are ready again.
 
-By default the Longhorn volume engines are NOT upgraded automatically. That means in the `Volumes` overview of the Longhorn UI one needs to click on the burger menu in the `Operation` column and run `Upgrade Engine`. To make yourself live easier, **make sure** that all volumes are in `Healthy` state before you upgrade anything. If you want to avoid this manual task of upgrading the volumes to the latest engine version you can set `Concurrent Automatic Engine Upgrade Per Node Limit` to `1` e.g. in `Settings / General` in the Longhorn UI. This setting controls how Longhorn automatically upgrades volumes' engines after upgrading Longhorn manager. The value of this setting specifies the maximum number of engines per node that are allowed to upgrade to the default engine image at the same time. If the value is `0`, Longhorn will not automatically upgrade volumes' engines to default version. For further information see [Automatically Upgrading Longhorn Engine](https://longhorn.io/docs/1.6.1/deploy/upgrade/auto-upgrade-engine/). If you want to do the volume engine upgrade manually have a look at [Manually Upgrading Longhorn Engine](https://longhorn.io/docs/1.6.1/deploy/upgrade/upgrade-engine/)
+By default the Longhorn volume engines are NOT upgraded automatically. That means in the `Volumes` overview of the Longhorn UI one needs to click on the burger menu in the `Operation` column and run `Upgrade Engine`. To make yourself live easier, **make sure** that all volumes are in `Healthy` state before you upgrade anything. If you want to avoid this manual task of upgrading the volumes to the latest engine version you can set `Concurrent Automatic Engine Upgrade Per Node Limit` to `1` e.g. in `Settings / General` in the Longhorn UI. This setting controls how Longhorn automatically upgrades volumes' engines after upgrading Longhorn manager. The value of this setting specifies the maximum number of engines per node that are allowed to upgrade to the default engine image at the same time. If the value is `0`, Longhorn will not automatically upgrade volumes' engines to default version. For further information see [Automatically Upgrading Longhorn Engine](https://longhorn.io/docs/1.6.3/deploy/upgrade/auto-upgrade-engine/). If you want to do the volume engine upgrade manually have a look at [Manually Upgrading Longhorn Engine](https://longhorn.io/docs/1.6.3/deploy/upgrade/upgrade-engine/)
 
-Of course you should consult Longhorn's [upgrade guide](https://longhorn.io/docs/1.6.1/deploy/upgrade/) (the link is for upgrading to Longhorn `v1.6.1`) to check for major changes and stuff like that before upgrading. Now is also a good time to check if the backups are in place and if the backups are actually valid ;-)
+Of course you should consult Longhorn's [upgrade guide](https://longhorn.io/docs/1.6.3/deploy/upgrade/) (the link is for upgrading to Longhorn `v1.6.3`) to check for major changes and stuff like that before upgrading. Now is also a good time to check if the backups are in place and if the backups are actually valid ;-)
 
-After consulting Longhorn's [upgrade guide](https://longhorn.io/docs/1.6.1/deploy/upgrade/) you basically only need to change `longhorn_chart_version` variable e.g. from `1.5.4` to `1.5.5` for a patch release or from `1.5.5` to `1.6.1` for a major upgrade. And of course the Helm values need to be adjusted for potential breaking changes (if any are mentioned in the upgrade guide e.g.). But please remember that MOST settings shouldn't be changed anymore via the Helm values file but via the Longhorn UI in `Settings / General` e.g.
+After consulting Longhorn's [upgrade guide](https://longhorn.io/docs/1.6.3/deploy/upgrade/) you basically only need to change `longhorn_chart_version` variable e.g. from `1.5.4` to `1.5.5` for a patch release or from `1.5.5` to `1.6.3` for a major upgrade. And of course the Helm values need to be adjusted for potential breaking changes (if any are mentioned in the upgrade guide e.g.). But please remember that MOST settings shouldn't be changed anymore via the Helm values file but via the Longhorn UI in `Settings / General` e.g.
 
-You can also use the upgrade method if you keep the version number and just want to change some Helm values or other settings. But please be aware that changing some of settings might have some serious consequences if you already have volumes deployed! Not all Longhorn settings can be changed just by changing a number or a string. So you really want to consult the [Settings reference](https://longhorn.io/docs/1.6.1/references/settings/) to figure out what might happen if you change this or that setting or what you need to do before you apply a changed setting!
+You can also use the upgrade method if you keep the version number and just want to change some Helm values or other settings. But please be aware that changing some of settings might have some serious consequences if you already have volumes deployed! Not all Longhorn settings can be changed just by changing a number or a string. So you really want to consult the [Settings reference](https://longhorn.io/docs/1.6.3/references/settings/) to figure out what might happen if you change this or that setting or what you need to do before you apply a changed setting!
 
 That said to actually do the update/upgrade run
 
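In practice the version bump described above is a one-line change wherever the role's defaults are overridden, for example (the file location is just an example):

```yaml
# group_vars or host_vars -- example location for the override
longhorn_chart_version: "1.6.3"  # was "1.6.1"
```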
@@ -246,7 +246,7 @@ ansible-playbook \
   k8s.yml
 ```
 
-Longhorn has a [Deleting Confirmation Flag](https://longhorn.io/docs/1.6.1/references/settings/#deleting-confirmation-flag) which is set to `false` by default. In this case Longhorn refuses to be uninstalled. By setting `--extra-vars longhorn_delete=true` the Ansible role will set this flag to `true` and afterwards the Longhorn resources can be deleted by the role. Without `longhorn_delete` variable the role will refuse to finish uninstallation.
+Longhorn has a [Deleting Confirmation Flag](https://longhorn.io/docs/1.6.3/references/settings/#deleting-confirmation-flag) which is set to `false` by default. In this case Longhorn refuses to be uninstalled. By setting `--extra-vars longhorn_delete=true` the Ansible role will set this flag to `true` and afterwards the Longhorn resources can be deleted by the role. Without `longhorn_delete` variable the role will refuse to finish uninstallation.
 
 ## Setting/removing node labels
 
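For illustration, the uninstall invocation this paragraph describes could look like the following, assuming the same `k8s.yml` playbook shown in the README's command block above (a sketch, not the role's documented command):

```bash
ansible-playbook \
  --extra-vars longhorn_delete=true \
  k8s.yml
```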
defaults/main.yml

Lines changed: 1 addition & 1 deletion
@@ -3,7 +3,7 @@
 # SPDX-License-Identifier: GPL-3.0-or-later
 
 # Helm chart version
-longhorn_chart_version: "1.6.1"
+longhorn_chart_version: "1.6.3"
 
 # Helm release name
 longhorn_release_name: "longhorn"

molecule/default/converge.yml

Lines changed: 8 additions & 0 deletions
@@ -2,6 +2,14 @@
 # Copyright (C) 2023 Robert Wimmer
 # SPDX-License-Identifier: GPL-3.0-or-later
 
+- name: Gather facts
+  hosts: all
+  become: true
+  gather_facts: true
+  tasks:
+    - name: Populate Ansible hostVars
+      ansible.builtin.setup:
+
 - name: Setup Longhorn
   hosts: all
   environment:
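The new fact-gathering play runs `ansible.builtin.setup` for all hosts up front, presumably so that fact-based expressions in the group vars below (e.g. `hostvars[inventory_hostname]['ansible_' + k8s_interface]` in the new `k8s_worker_kubelet_settings`) resolve for every host, not only those targeted by a given play.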

molecule/default/group_vars/all.yml

Lines changed: 50 additions & 33 deletions
@@ -7,9 +7,9 @@ harden_linux_ntp: "systemd-timesyncd"
 
 # Password for user "root" and "vagrant" is "vagrant" in both cases. As
 # "vagrant" user is available in every Vagrant Ubuntu Box just use it.
-harden_linux_root_password: "$6$rounds=656000$mysecretsalt$fpyQ9hMON6iKKuM0Rz10WZKNJR4OkQTVfBmd4SPrJPsU9XmgQGRtogcUnFB5FLRIitswuQFr6Tr8Mos9l7Ojm0"
+harden_linux_root_password: "$6$ec6PmcEygP6do8Ls$847Pqqo1fXJFeMvPkmP3ipLQ9vhny1PYtwnnIptpZ1Sc8KXUuPGu29aUTOdNdgIfxR3Bix5SUkNfSMMCetej41"
 harden_linux_deploy_user: "vagrant"
-harden_linux_deploy_user_password: "$6$rounds=656000$mysecretsalt$fpyQ9hMON6iKKuM0Rz10WZKNJR4OkQTVfBmd4SPrJPsU9XmgQGRtogcUnFB5FLRIitswuQFr6Tr8Mos9l7Ojm0"
+harden_linux_deploy_user_password: "$6$ec6PmcEygP6do8Ls$847Pqqo1fXJFeMvPkmP3ipLQ9vhny1PYtwnnIptpZ1Sc8KXUuPGu29aUTOdNdgIfxR3Bix5SUkNfSMMCetej41"
 harden_linux_deploy_user_home: "/home/vagrant"
 harden_linux_deploy_user_uid: "1000"
 harden_linux_deploy_user_shell: "/bin/bash"
@@ -28,30 +28,6 @@ harden_linux_sshd_settings_user:
   "^PasswordAuthentication": "PasswordAuthentication yes"
   "^PermitRootLogin": "PermitRootLogin yes"
 
-# Open a few ports for ssh, Wireguard, HTTP, HTTPS and SMTP.
-harden_linux_ufw_rules:
-  - rule: "allow"
-    to_port: "22"
-    protocol: "tcp"
-  - rule: "allow"
-    to_port: "51820"
-    protocol: "udp"
-  - rule: "allow"
-    to_port: "80"
-    protocol: "tcp"
-  - rule: "allow"
-    to_port: "443"
-    protocol: "tcp"
-  - rule: "allow"
-    to_port: "25"
-    protocol: "tcp"
-
-# Allow all traffic from the following networks.
-harden_linux_ufw_allow_networks:
-  - "10.0.0.0/8"
-  - "172.16.0.0/12"
-  - "192.168.0.0/16"
-
 # Enable logging for UFW.
 harden_linux_ufw_logging: 'on'
 
@@ -68,6 +44,15 @@ harden_linux_sshguard_whitelist:
   - "172.16.0.0/12"
   - "192.168.0.0/16"
 
+# DNS
+harden_linux_systemd_resolved_settings:
+  - DNS=
+  - DNS=8.8.8.8 1.1.1.1 2606:4700:4700::1111 2620:fe::fe
+  - FallbackDNS=
+  - FallbackDNS=149.112.112.112 1.0.0.1 2620:fe::9 2606:4700:4700::1001
+  - DNSOverTLS=
+  - DNSOverTLS=opportunistic
+
 # Directory where the etcd certificates are stored on the Ansible controller
 # host. Certificate files for etcd will be copied from this directory to
 # the etcd nodes.
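The paired empty assignments above (`DNS=` before `DNS=...`, and likewise for `FallbackDNS=` and `DNSOverTLS=`) follow systemd's configuration convention: an empty assignment resets the option, clearing any previously configured entries, before the new value is applied.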
@@ -80,9 +65,14 @@ etcd_interface: "{{ k8s_interface }}"
 etcd_settings_user:
   "heartbeat-interval": "250"
   "election-timeout": "2500"
-
-# Generate certificates for Cilium to be able to connect to "etcd"
-# cluster to store its state.
+# Host names and IP addresses in the etcd certificates.
+etcd_cert_hosts:
+  - localhost
+  - 127.0.0.1
+# This list should contain all etcd clients that wants to connect to the etcd
+# cluster. The most important client is "kube-apiserver" of course. Also
+# "cilium" should connect. So we add this here too to generate the needed
+# certificates.
 etcd_additional_clients:
   - cilium
   - k8s-apiserver-etcd
@@ -142,9 +132,40 @@ k8s_encryption_config_key: "Y29uZmlndXJhdGlvbjIyCg=="
 k8s_apiserver_settings_user:
   "enable-aggregator-routing": "true"
 
+k8s_worker_kubelet_settings:
+  "config": "{{ k8s_worker_kubelet_conf_dir }}/kubelet-config.yaml"
+  "node-ip": "{{ hostvars[inventory_hostname]['ansible_' + k8s_interface].ipv4.address }}"
+  "kubeconfig": "{{ k8s_worker_kubelet_conf_dir }}/kubeconfig"
+  "seccomp-default": ""
+
 # Directory for the "runc" binaries
 runc_bin_directory: "/usr/local/sbin"
 
+# Directory to store the "containerd" archive after download
+containerd_tmp_directory: "/tmp"
+
+# Use "etcd" for Cilium
+cilium_etcd_enabled: "true"
+# Delegate Cilium tasks that needs to communicate with the Kubernetes API
+# server to the following host.
+cilium_delegate_to: "test-assets"
+# Template directory for custom "values.yml.j2"
+cilium_chart_values_directory: "templates"
+# Show debug output for Cilium Helm commands.
+cilium_helm_show_commands: true
+cilium_etcd_interface: "{{ k8s_interface }}"
+cilium_etcd_client_port: 2379
+cilium_etcd_nodes_group: "k8s_etcd"
+
+cilium_etcd_secrets_name: "cilium-etcd-secrets"
+cilium_etcd_cert_directory: "{{ k8s_ca_conf_directory }}"
+cilium_etcd_cafile: "ca-etcd.pem"
+cilium_etcd_certfile: "cert-cilium.pem"
+cilium_etcd_keyfile: "cert-cilium-key.pem"
+
+# Delegate tasks to create CoreDNS K8s resources to this host.
+coredns_delegate_to: "test-assets"
+
 # Common name for "etcd" certificate authority certificates.
 ca_etcd_csr_cn: "etcd"
 ca_etcd_csr_key_algo: "ecdsa"
@@ -193,10 +214,6 @@ k8s_controller_manager_sa_csr_key_size: "384"
 k8s_kube_proxy_csr_key_algo: "ecdsa"
 k8s_kube_proxy_csr_key_size: "384"
 
-# Cilium
-cilium_delegate_to: "test-assets"
-cilium_etcd_cert_directory: "{{ k8s_ca_conf_directory }}"
-
 # Longhorn settings
 longhorn_template_output_directory: "/tmp/longhorn"
 longhorn_delegate_to: "test-assets"

Lines changed: 10 additions & 0 deletions
@@ -0,0 +1,10 @@
+---
+# Copyright (C) 2024 Robert Wimmer
+# SPDX-License-Identifier: GPL-3.0-or-later
+
+# Allow all traffic from the following networks.
+harden_linux_ufw_allow_networks:
+  - "10.32.0.0/16"   # Server Cluster IP range
+  - "10.200.0.0/16"  # Pod IP range
+  - "10.10.10.0/24"  # Wireguard IP range
+  - "172.16.10.0/24" # VM IP range

Lines changed: 18 additions & 0 deletions
@@ -0,0 +1,18 @@
+---
+# Copyright (C) 2024 Robert Wimmer
+# SPDX-License-Identifier: GPL-3.0-or-later
+
+# Open a few ports for ssh, Wireguard and etcd
+harden_linux_ufw_rules:
+  - rule: "allow"
+    to_port: "22"
+    protocol: "tcp"
+  - rule: "allow"
+    to_port: "51820"
+    protocol: "udp"
+  - rule: "allow"
+    to_port: "2379"
+    protocol: "tcp"
+  - rule: "allow"
+    to_port: "2380"
+    protocol: "tcp"
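Ports `2379` and `2380` are etcd's conventional client and peer ports, which lines up with the `cilium_etcd_client_port: 2379` setting added to the group vars above.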

molecule/default/host_vars/test-assets.yml

Lines changed: 2 additions & 0 deletions
@@ -2,6 +2,8 @@
 # Copyright (C) 2023 Robert Wimmer
 # SPDX-License-Identifier: GPL-3.0-or-later
 
+ansible_python_interpreter: "/usr/bin/python3"
+
 wireguard_address: "10.10.10.5/24"
 wireguard_port: 51820
 wireguard_persistent_keepalive: "30"

molecule/default/host_vars/test-controller1.yml

Lines changed: 9 additions & 0 deletions
@@ -6,3 +6,12 @@ wireguard_address: "10.10.10.10/24"
 wireguard_port: 51820
 wireguard_persistent_keepalive: "30"
 wireguard_endpoint: "172.16.10.10"
+
+ha_proxy_frontend_bind_address: "127.0.0.1"
+ha_proxy_frontend_port: "16443"
+
+k8s_ctl_api_endpoint_host: "127.0.0.1"
+k8s_ctl_api_endpoint_port: "16443"
+
+k8s_worker_api_endpoint_host: "{{ k8s_ctl_api_endpoint_host }}"
+k8s_worker_api_endpoint_port: "{{ k8s_ctl_api_endpoint_port }}"
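The `16443` endpoint points the control-plane and worker clients at the local `haproxy` frontend defined here, which presumably load-balances to the `kube-apiserver` instances instead of pinning clients to a single API server.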
