Commit 8778aa6

Merge tag 'v1.8.0' of github.com:opencrvs/opencrvs-countryconfig into develop
2 parents: 4fa6277 + 424ebd0

File tree

8 files changed (+47, -12 lines)

CHANGELOG.md

Lines changed: 13 additions & 10 deletions
@@ -19,21 +19,25 @@
 
 ## 1.6.4
 
-### Bug fixes
-
-- Query the location tree directly from the config service to improve performance for large datasets
+- Added a local virtual machine setup for testing Ansible playbooks locally (on macOS and Ubuntu). Check [provision.ipynb](infrastructure/local-development/provision.ipynb) for more details.
 
-## 1.6.3
+### Improvements
 
-### Breaking changes
+- **Upgrade ELK stack** to an AGPLv3-licensed version 8.16.4 [#8749](https://github.com/opencrvs/opencrvs-core/issues/8749)
+- Added a build summary and refactored the deployment workflow to be clearer [#6984](https://github.com/opencrvs/opencrvs-core/issues/6984)
+- Build OpenCRVS release images for ARM devices [#9455](https://github.com/opencrvs/opencrvs-core/issues/9455)
+- **Introduced `single_node` variable in inventory files** to define whether single-node clusters are allowed; it is set to false in production to enforce use of at least a two-node cluster. [#6918](https://github.com/opencrvs/opencrvs-core/issues/6918)
+- **GitHub runners upgraded** to the latest Ubuntu LTS release 24.04 [#7045](https://github.com/opencrvs/opencrvs-core/issues/7045) and now apply the Node version pinned in `.nvmrc` [#423](https://github.com/opencrvs/opencrvs-countryconfig/pull/423)
+- Updated the `seed-data.yml` GitHub Actions workflow to use the new `data-seeder` Docker image instead of cloning the entire `opencrvs-core` repository. This improves CI performance and simplifies the data seeding process. [#8976](https://github.com/opencrvs/opencrvs-core/issues/8976)
 
-- Add constant.humanName to allow countries to customise the format of the full name in the sytem for `sytem users` and `citizens` e.g `{LastName} {MiddleName} {Firstname}`, in any case where one of the name is not provided e.g no `MiddleName`, we'll simply render e.g `{LastName} {FirstName}` without any extra spaces if that's the order set in `country-config`. [#6830](https://github.com/opencrvs/opencrvs-core/issues/6830)
+### Bug Fixes
 
-## 1.6.2
+- Added the `swarm` tag to all tasks within the `swarm.yml` playbook; previously it was missing. [#9252](https://github.com/opencrvs/opencrvs-core/issues/9252)
+- Restrict supported key exchange, cipher and MAC algorithms for SSH configuration [#7542](https://github.com/opencrvs/opencrvs-core/issues/7542)
 
-### New features
+## 1.7.3
 
-- Added a local virtual machine setup for testing Ansible playbooks locally (on MacOS and Ubuntu ). Check [provision.ipynb](infrastructure/local-development/provision.ipynb) for more details.
+No changes
 
 ## 1.7.2
 

@@ -121,7 +125,6 @@ In order to make the upgrade easier, there are a couple of steps that need to be
 - We make sure that the automatic cleanup job only runs before deployment (instead of cron schedule cleanup).
 - Previously it was possible MongoDB replica set and users were left randomly uninitialised after a deployment. MongoDB initialisation container now retries on failure.
 - On some machines 'file' utility was not preinstalled causing provision to fail. We now install the utility if it doesn't exist.
-- Restrict supported key exchange, cipher and MAC algorithms for SSH configuration [#7542](https://github.com/opencrvs/opencrvs-core/issues/7542)
 
 ### Infrastructure breaking changes
 
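The SSH entry above (#7542) restricts which key exchange, cipher and MAC algorithms the servers accept. As a rough sketch of that kind of hardening expressed as an Ansible task — the algorithm lists and the drop-in file path are illustrative assumptions, not the exact set this repository applies, and a `restart sshd` handler is assumed to exist:

```yaml
# Illustrative hardening task; the real algorithm lists in
# opencrvs-countryconfig may differ from these values.
- name: Restrict SSH key exchange, cipher and MAC algorithms
  blockinfile:
    path: /etc/ssh/sshd_config.d/hardening.conf
    create: yes
    block: |
      KexAlgorithms curve25519-sha256,curve25519-sha256@libssh.org
      Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com
      MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com
  notify: restart sshd
```

Restricting these lists disables legacy algorithms that older SSH clients negotiate by default, which is the usual motivation for this class of change.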

infrastructure/server-setup/group_vars/all.yml

Lines changed: 1 addition & 0 deletions
@@ -14,3 +14,4 @@ backup_server_user: 'backup'
 backup_server_user_home: '/home/backup'
 crontab_user: root
 provisioning_user: provision
+single_node: false
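The `single_node: false` added to `group_vars/all.yml` acts as a repository-wide default, while the inventory files below set per-environment values. A condensed sketch of the two sources side by side (file paths are from this commit; comments are mine):

```yaml
# infrastructure/server-setup/group_vars/all.yml — repo-wide default
single_node: false

# infrastructure/server-setup/inventory/development.yml — per-environment value
all:
  vars:
    single_node: true
```

Which source wins for a given host follows Ansible's variable-precedence rules, so it is worth confirming the effective value, e.g. with `ansible-inventory -i infrastructure/server-setup/inventory/development.yml --host <hostname>` (command shown for illustration).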

infrastructure/server-setup/inventory/backup.yml

Lines changed: 1 addition & 0 deletions
@@ -18,6 +18,7 @@ all:
 # @todo how many days to store backups for?
 amount_of_backups_to_keep: 7
 backup_server_remote_target_directory: /home/backup/backups
+single_node: true
 users:
 # @todo this is where you define which development team members have access to the server.
 # If you need to remove access from someone, do not remove them from this list, but instead set their state: absent

infrastructure/server-setup/inventory/development.yml

Lines changed: 1 addition & 0 deletions
@@ -8,6 +8,7 @@
 # Copyright (C) The OpenCRVS Authors located at https://github.com/opencrvs/opencrvs-core/blob/master/AUTHORS.
 all:
 vars:
+single_node: true
 users:
 # @todo this is where you define which development team members have access to the server.
 # If you need to remove access from someone, do not remove them from this list, but instead set their state: absent

infrastructure/server-setup/inventory/production.yml

Lines changed: 1 addition & 0 deletions
@@ -27,6 +27,7 @@ all:
 #- 55.55.55.55 # example jump server IP address
 []
 backup_server_remote_target_directory: /home/backup/backups
+single_node: false # At least 2 nodes are recommended for a production environment. Set "single_node: true" at your own discretion.
 users:
 # @todo this is where you define which development team members have access to the server.
 # If you need to remove access from someone, do not remove them from this list, but instead set their state: absent
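Per the inventory comment above, `single_node: false` is meant to enforce at least a two-node cluster in production. A hypothetical guard task sketching how a provisioning playbook could enforce that policy — the `single_node` variable comes from this commit, but the task itself is illustrative, not the repository's actual check:

```yaml
# Illustrative only: fail early when the policy and the inventory disagree.
- name: Enforce the single_node policy
  assert:
    that:
      - single_node | bool or (groups['all'] | length) >= 2
    fail_msg: >-
      single_node is false but the inventory defines fewer than two nodes;
      add a node or set single_node: true at your own discretion.
  run_once: true
```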

infrastructure/server-setup/inventory/qa.yml

Lines changed: 1 addition & 0 deletions
@@ -8,6 +8,7 @@
 # Copyright (C) The OpenCRVS Authors located at https://github.com/opencrvs/opencrvs-core/blob/master/AUTHORS.
 all:
 vars:
+single_node: true
 users:
 # @todo this is where you define which development team members have access to the server.
 # If you need to remove access from someone, do not remove them from this list, but instead set their state: absent

infrastructure/server-setup/inventory/staging.yml

Lines changed: 1 addition & 0 deletions
@@ -29,6 +29,7 @@ all:
 # For this you need to first setup a backup environment
 periodic_restore_from_backup: false
 backup_server_remote_source_directory: /home/backup/backups
+single_node: true
 users:
 # @todo this is where you define which development team members have access to the server.
 # If you need to remove access from someone, do not remove them from this list, but instead set their state: absent

infrastructure/server-setup/swarm.yml

Lines changed: 28 additions & 2 deletions
@@ -16,19 +16,33 @@
 rule: allow
 port: 2377
 proto: tcp
+tags:
+- swarm
 
+- name: 'Get docker info'
+shell: docker info
+register: docker_info
+changed_when: False
+tags:
+- swarm
+
 - name: 'Create primary swarm manager'
 shell: docker swarm init --advertise-addr {{ ansible_default_ipv4.address }}
 when: "docker_info.stdout.find('Swarm: inactive') != -1"
+tags:
+- swarm
 
 - name: 'Get docker swarm manager token'
 shell: docker swarm join-token -q manager
 register: manager_token
+tags:
+- swarm
 
 - name: 'Get docker swarm worker token'
 shell: docker swarm join-token -q worker
 register: worker_token
-
+tags:
+- swarm
 
 - hosts: docker-workers
 become: yes
@@ -40,30 +54,40 @@
 shell: docker info --format "{% raw %}{{.Swarm.LocalNodeState}}{% endraw %}"
 register: worker_swarm_status
 changed_when: false
+tags:
+- swarm
 
 - name: Get NodeID of the worker (if part of a swarm)
 shell: docker info --format "{% raw %}{{.Swarm.NodeID}}{% endraw %}"
 register: worker_node_id
 when: worker_swarm_status.stdout == 'active'
 changed_when: false
 failed_when: false
+tags:
+- swarm
 
 - name: Get list of nodes in the manager's swarm
 shell: docker node ls --format '{% raw %}{{.ID}}{% endraw %}'
 delegate_to: "{{ manager_hostname }}"
 register: manager_node_ids
 changed_when: false
+tags:
+- swarm
 
 - name: Fail if the worker is in a different swarm
 fail:
 msg: "You are trying to attach a worker to a Swarm that is already part of another Swarm. Please make the node leave the current Swarm first, then run the playbook again."
 when: worker_swarm_status.stdout == 'active' and worker_node_id.stdout not in manager_node_ids.stdout_lines
+tags:
+- swarm
 
 - name: Join as a worker
 shell: "docker swarm join --token {{ hostvars[manager_hostname]['worker_token']['stdout'] }} {{ hostvars[manager_hostname]['ansible_default_ipv4']['address'] }}:2377"
 when: worker_swarm_status.stdout != 'active'
 retries: 3
 delay: 20
+tags:
+- swarm
 
 - hosts: docker-manager-first
 become: yes
@@ -74,4 +98,6 @@
 loop: "{{ groups['docker-manager-first'] + groups['docker-workers'] }}"
 loop_control:
 loop_var: hostname
-when: hostvars[hostname]['data_label'] is defined
+when: hostvars[hostname]['data_label'] is defined
+tags:
+- swarm
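The diff above is shown without YAML indentation; in the playbook itself, each added `tags:` block nests under its task, along these lines (indentation reconstructed, not copied from the file):

```yaml
- name: 'Get docker info'
  shell: docker info
  register: docker_info
  changed_when: False
  tags:
    - swarm
```

With every task tagged, a tag-filtered run such as `ansible-playbook swarm.yml --tags swarm` (inventory flags omitted) no longer skips tasks — the gap that #9252 fixed.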
