Describe the bug
Docs state: "NOTE: auto_delete_disk is set to true in this example, which means that running terraform destroy also deletes the disk. To retain the disk after terraform destroy either set this to false or don't include the settings so it defaults to false. Note that with auto_delete_disk: false, you will need to manually delete the disk after destroying a deployment group with nfs-server."
However, the shared volume is deleted regardless of auto_delete_disk: false being set in the blueprint.
I do note that the docs also state "WARNING: This module has only been tested against the HPC centos7 OS disk image (the default). Using other images may work, but have not been verified.", and I have tested using an older CentOS 7 image with the same result (the disk is deleted).
Steps to reproduce
Steps to reproduce the behavior:
- Create a new cluster with a share created by the nfs-server module (example blueprint below)
- Destroy the cluster (a sketch of the commands used is included after this list)
- Once destroyed, the volume is gone as well, despite auto_delete_disk being explicitly set to false
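Roughly the commands used to reproduce (assuming the blueprint below is saved as hpc-slurm.yaml; the exact flags in my local invocation may have differed slightly):

# create the deployment folder from the blueprint
./gcluster create hpc-slurm.yaml

# deploy the cluster, which creates the nfs-server VM and its attached data disk
./gcluster deploy hpc-slurm

# tear everything down; the shared disk is deleted here even with auto_delete_disk: false
./gcluster destroy hpc-slurm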
Expected behavior
The volume should remain for future use.
Actual behavior
The volume is deleted.
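One quick way to confirm this (assuming the default deployment_name of hpc-slurm; filtering on the deployment name is my assumption about how the disk is named):

# list any remaining disks whose names contain the deployment name
gcloud compute disks list --project your-project-here --filter="name~hpc-slurm"
# after gcluster destroy this returns nothing, i.e. the NFS data disk no longer exists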
Version (gcluster --version)
Built from the 'develop' branch.
Commit info: v1.45.0-25-gf5349cac
Terraform version: 1.9.8
Blueprint
If applicable, attach or paste the blueprint YAML used to produce the bug.
---
blueprint_name: hpc-slurm

vars:
  project_id: your-project-here
  deployment_name: hpc-slurm
  region: us-central1
  zone: us-central1-a

deployment_groups:
- group: primary
  modules:
  - id: network
    source: modules/network/vpc

  - id: homefs
    source: community/modules/file-system/nfs-server
    use: [network]
    settings:
      auto_delete_disk: false
      local_mounts: [/home]
      # instance_image:
      #   family: schedmd-slurm-21-08-8-hpc-centos-7
      #   project: schedmd-slurm-public

  - id: debug_nodeset
    source: community/modules/compute/schedmd-slurm-gcp-v6-nodeset
    use: [network]
    settings:
      node_count_dynamic_max: 4
      machine_type: n2-standard-2
      enable_placement: false
      allow_automatic_updates: false
      instance_image_custom: false
      instance_image:
        project: schedmd-slurm-public
        family: slurm-gcp-6-8-hpc-rocky-linux-8

  - id: debug_partition
    source: community/modules/compute/schedmd-slurm-gcp-v6-partition
    use:
    - debug_nodeset
    settings:
      partition_name: debug
      exclusive: false
      is_default: true

  - id: slurm_login
    source: community/modules/scheduler/schedmd-slurm-gcp-v6-login
    use: [network]
    settings:
      machine_type: n2-standard-4
      enable_login_public_ips: true

  - id: slurm_controller
    source: community/modules/scheduler/schedmd-slurm-gcp-v6-controller
    use:
    - network
    - debug_partition
    - homefs
    - slurm_login
    settings:
      enable_controller_public_ips: true
Output and logs
N/A
Screenshots
N/A
Execution environment
- OS: macOS
- Shell: zsh
- go version: go version go1.23.4 darwin/arm64
If there are any questions, clarifications, or further testing I can provide, please let me know. Many thanks.