
allow passing arguments to storage pool creation#930

Open
WanzenBug wants to merge 1 commit into v2 from storage-pool-create-arguments

Conversation

@WanzenBug
Member

LINSTOR 1.33.0 added the option to pass arguments to the pv/vg/lv/zpool create calls. Expose this in the storage pool settings.

Since this option is only useful when the storage pool is first created, specifically when it is being created from empty devices, the admission webhook has been extended to emit warnings when:

  • changing the arguments for an existing storage pool.
  • adding arguments when no source is set, making the arguments useless.

During testing, I discovered that the arguments to ValidateUpdate() were the wrong way around: the "old" and "new" arguments were swapped, leading to confusing error messages.

Signed-off-by: Moritz Wanzenböck <moritz.wanzenboeck@linbit.com>
@WanzenBug WanzenBug force-pushed the storage-pool-create-arguments branch from f1cdac6 to 5ea6346 Compare March 5, 2026 11:58
@WanzenBug WanzenBug requested a review from JoelColledge March 5, 2026 11:58
@JoelColledge
Collaborator

I tried this on a cluster with LINSTOR that is too old (v1.32.3). It caused rather cryptic errors:

$ k linstor sp l
...
┊ thinpool             ┊ k8s-1-34-30.test ┊ LVM_THIN ┊ linstor_thinpool/thinpool ┊        0 KiB ┊         0 KiB ┊ True         ┊ Error ...
ERROR:
Description:
    Node: 'k8s-1-34-30.test', storage pool: 'thinpool' - Failed to query free space from storage pool
Cause:
    Volume group 'linstor_thinpool' not found

There are related error reports:

Error context:
        Registration of storage pool 'thinpool' on node 'k8s-1-34-31.test' failed due to an unhandled exception of type DelayedApiRcException. Exceptions have been converted to responses
Asynchronous stage backtrace:
        (k8s-1-34-31.test) Volume group 'linstor_thinpool' not found

My storage config:

apiVersion: piraeus.io/v1
kind: LinstorSatelliteConfiguration
metadata:
  name: storage-satellites
spec:
  storagePools:
    - name: thinpool
      lvmThinPool:
        volumeGroupCreateArguments: ["-s", "512KiB"]
      source:
        hostDevices:
          - /dev/vdb

I didn't see any errors in the operator logs. It looks like it silently skipped the create-device-pool command.

With LINSTOR v1.32.3, but without volumeGroupCreateArguments, the storage pools are created correctly.

With LINSTOR v1.33.1, the above config with volumeGroupCreateArguments works fine.

I would expect an error related to failing to create the LVM VG somewhere and not to have broken storage pools added to LINSTOR. Can we improve the failure behavior in this scenario?

@WanzenBug
Member Author

I would expect an error related to failing to create the LVM VG somewhere and not to have broken storage pools added to LINSTOR. Can we improve the failure behavior in this scenario?

I expect the trouble is that in cases where the create-device-pool command fails, we assume it is because the pool is already set up. And then we try to run a regular storage-pool create ..., which leads to the above error.

Ideally, we would get all the necessary information directly from LINSTOR. The reason for the approach we have right now is that the physical-storage list command does not always show the disks people actually want to use, either because instead of /dev/sdb they use some SCSI ID path from udev, or because they want to use a partition, which does not show up at all.

Interestingly, while these paths do not show up, they can still be used in the cdp command. Perhaps that is something that can be improved in LINSTOR. Another idea would be to let LINSTOR actively list the available VGs and zpools, even those not configured for storage pools.

$ k linstor ps l
╭───────────────────────────╮
┊ Size ┊ Rotational ┊ Nodes ┊
╞═══════════════════════════╡
╰───────────────────────────╯
$ k linstor ps cdp --pool-name vg1/thin --storage-pool thin1 lvmthin k8s-10.test /dev/vdb1
SUCCESS:
    (k8s-10.test) PV for device '/dev/vdb1' created.
SUCCESS:
    (k8s-10.test) VG for devices [/dev/vdb1] with name 'vg1' created.
SUCCESS:
    (k8s-10.test) Thin-pool 'thin' in LVM-pool 'vg1' created.
SUCCESS:
....

I don't know how to properly work around all of these limitations for now. In the past, there was actually an idea to move this configuration out of LINSTOR completely, perhaps making use of a separate component to configure storage pools altogether 🤔

@JoelColledge
Collaborator

I expect the trouble is that in cases where the create-device-pool command fails, we assume it is because the pool is already set up.

I don't know how to properly work around all of these limitations for now. In the past, there was actually an idea to move this configuration out of LINSTOR completely, perhaps making use of a separate component to configure storage pools altogether 🤔

Tricky. Perhaps the best we can do for now is to catch the misconfigurations that we can in advance. In this case, raise an error if volumeGroupCreateArguments is used but the Controller version is too low.

Or we parse the error from LINSTOR. If it matches a "device pool already set up" pattern then we continue creating the storage pool, otherwise raise an error.

Or, better than parsing the message, we could check the ret_code. Perhaps that provides enough information to differentiate between errors on the controller side and the satellite side?
