# Magic Castle 7.3
This release introduces three main features:
- Add support for Slurm 20
- Add support for CentOS 8. Tested functional on GCP and OpenStack. AWS and Azure do not provide an official CentOS 8 image with cloud-init support at the time of this release.
- Add support for Compute Canada Arbutus Cloud NVIDIA vGPUs (flavor `vgpu-...`).
## Changed
- Improved main documentation.
- [AWS] Most resources, if not all, now have the cluster name as a prefix in their name
- [OpenStack] Simplified volume attachment count computation
- [puppet] Slurm plugin spank-cc-tmpfs_mounts is now installed from copr yumrepo
- [puppet] Fixed installation order of Slurm packages
- [puppet] Exec resource in charge of creating the slurm cluster in slurmdbd now returns 0 if the cluster already exists
- [puppet] `consul-template` class initialization is now entirely in hieradata file `common.yaml`.
- [puppet] CentOS 8 support: replaced notification of `nfs-idmap.service` by notification of `nfs-server.service`.
- [puppet] CentOS 8 support: replaced `pdsh` by `clustershell`
- [puppet] CentOS 8 support: `rpc_nfs_args` is now only defined if the OS is CentOS 7.
- [puppet] CentOS 8 support: `ipa_create_user.py` now uses `/usr/libexec/platform-python` instead of `/usr/bin/env python`.
- [puppet] CentOS 8 support: Replaced Python 2 `unicode` calls in `ipa_create_user.py` by six's `text_type`
- [puppet] CentOS 8 support: Moved list of nvidia package names from class profile::gpu to hieradata. List now depends on CentOS version.
- [puppet] CentOS 8 support: Moved FreeIPA `regen_cert_cmd` value to hieradata. Command now depends on CentOS version.
- [puppet] Bumped puppet-jupyterhub version to 3.3.2
- [puppet] Updated nvidia driver fact to make sure at most one version is in the output
- [puppet] Changed logic of `nvidia_grid_vgpu` fact to just check if the instance flavor includes `vgpu` in its name
- [puppet] CentOS 8 support: Moved default loaded CVMFS modules to hieradata. Module list now depends on CentOS version
- [puppet] CentOS 8 support: Fixed NFS clean rbind ExecStop warning
- [puppet] Replaced the tcp_con_validator checking if slurmdbd is running by a `wait_for` resource on a `slurmdbd.log` regex
- [puppet] CentOS 8 support: Fixed package name in nvidia-driver-version fact.
- [cloud-init] Replaced `reboot -n` in `runcmd` by `power_state` with an immediate reboot. This makes sure the final stage of cloud-init is applied before the reboot.
- [gcp] CentOS 8 support: rewrote `install_cloudinit.sh` to avoid a network issue at boot and install cloud-init only for the time needed. (issue #85)
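The `power_state` change above can be sketched as a cloud-init user-data fragment. This is a minimal illustration of cloud-init's standard `power_state` module, not the exact configuration shipped in this release; the `message` and `timeout` values here are assumptions.

```yaml
#cloud-config
# Reboot via cloud-init's power_state module instead of `reboot -n` in runcmd,
# so cloud-init's final stage finishes before the machine restarts.
power_state:
  mode: reboot
  message: Rebooting after initial provisioning   # assumed value, for illustration
  timeout: 30                                     # assumed value, for illustration
  condition: true
```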
## Added
- [puppet] Added support for CentOS 8 when selecting Slurm yumrepo
- [puppet] Slurm 20 support: Added `slurm_version` variable to hieradata. It can be either 19 or 20.
- [puppet] Slurm 20 support: Added `PlugStackConfig` parameter to `slurm.conf`
- [puppet] Added slurm-perlapi package to `profile::base::slurm`
- [puppet] Added exec to initialize CVMFS `default.local` with consul-template.
- [puppet] Added a default `node1` in `slurm.conf` when no slurmd has been registered yet in Consul
- [puppet] Added a require on the EPEL yumrepo for package fail2ban-server
- [puppet] Added class `profile::fail2ban::install`
- [puppet] CentOS 8 support: Added dependency on puppet-epel to install epel yumrepo
- [puppet] CentOS 8 support: Enabled powertools repo
- [puppet] CentOS 8 support: Enabled idm:DL1 stream
- [puppet] CentOS 8 support: Added network-scripts package when os is CentOS 8
- [puppet] CentOS 8 support: Added munge_socket selinux policy to allow confined user to submit jobs
- [puppet] Added class `profile::gpu::install`
- [puppet] Added a requirement on epel yumrepo for singularity package.
- [puppet] Added a requirement for slurm exec `create_account` on slurm exec `add_cluster`
- [puppet] CentOS 8 support: added class `profile::mail::server`
- [puppet] Added a requirement on the EPEL yumrepo for class `jupyterhub` in `profile::jupyterhub::hub`
- [puppet] Added support for Compute Canada Arbutus Cloud VGPUs
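As an illustration of the Slurm 20 additions above: `PlugStackConfig` is a standard `slurm.conf` parameter pointing Slurm at its SPANK plugin stack file, which is how a plugin such as spank-cc-tmpfs_mounts gets loaded. The path below is the conventional default, not taken from this release.

```ini
# slurm.conf (fragment) — sketch only; the actual path used by Magic Castle
# is managed through Puppet/hieradata and may differ.
PlugStackConfig=/etc/slurm/plugstack.conf
```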