
v2.10.0

@pchandra19 released this 19 Nov 17:35

Release v2.10.0

Release Date: 20th November, 2025

Summary

OpenEBS Replicated PV Mayastor version 2.10.0 introduces new features and several critical functional fixes.

What's New

DiskPool Expansion

It's now possible to expand a DiskPool's capacity by expanding the underlying storage device.

NOTE: As a precondition, you must create the DiskPool with sufficient metadata to accommodate future growth. Please read more about this here.
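For context, a DiskPool is declared with a manifest along these lines; the names, namespace, and apiVersion below are illustrative, so check the CRD version shipped with your installation. Expansion is then driven by growing the backing device (for example, resizing a cloud disk), subject to the metadata precondition above.

```shell
# Minimal sketch: create a DiskPool whose backing device can later be grown.
# Names, namespace, and apiVersion are illustrative; verify against your install.
kubectl apply -f - <<'EOF'
apiVersion: openebs.io/v1beta2
kind: DiskPool
metadata:
  name: pool-on-node-1
  namespace: mayastor
spec:
  node: worker-node-1
  disks: ["/dev/sdb"]
EOF
```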

Configurable ClusterSize

You can now configure the cluster size when creating a pool; larger cluster sizes may be beneficial when using very large storage devices.

NOTE: As an initial limitation, volumes may not be placed across pools with different cluster sizes.
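A minimal sketch of what this could look like on the DiskPool spec. The `clusterSize` field name and value shown here are assumptions, not confirmed API; consult the v2.10.0 DiskPool CRD and documentation for the actual parameter.

```shell
# Sketch only: 'clusterSize' is an assumed field name and unit; verify it
# against the v2.10.0 DiskPool CRD before applying.
kubectl apply -f - <<'EOF'
apiVersion: openebs.io/v1beta2
kind: DiskPool
metadata:
  name: pool-large-device
  namespace: mayastor
spec:
  node: worker-node-2
  disks: ["/dev/sdc"]
  clusterSize: 32MiB   # assumed field: larger clusters for very large devices
EOF
```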

Pool Cordon

Cordoning functionality has been extended to pools. Cordoning a pool prevents new replicas from being created on it, and can also be used to migrate volume replicas off the pool via scale-up/scale-down operations.
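A sketch of what cordoning a pool might look like with the kubectl-mayastor plugin, assuming it mirrors the existing node cordon syntax; the pool name and label are placeholders, so confirm the exact sub-command with `kubectl mayastor cordon --help`.

```shell
# Assumed syntax, mirroring the node cordon sub-commands; verify with --help.
kubectl mayastor cordon pool pool-on-node-1 maintenance    # block new replicas on this pool
kubectl mayastor uncordon pool pool-on-node-1 maintenance  # lift the cordon when done
```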

Orphaned Retain Snapshot Delete

As with volumes, when snapshots with a Retain deletion policy are deleted, the underlying storage is kept by the provisioner and must be removed with provisioner-specific commands.
We've added a plugin sub-command to delete these orphaned snapshots safely.
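A hypothetical invocation of the new sub-command; the command name and arguments are not spelled out in these notes, so treat the line below as a placeholder and check `kubectl mayastor --help` for the actual syntax.

```shell
# Placeholder invocation: the exact sub-command and flags are assumptions.
kubectl mayastor delete snapshot <orphaned-snapshot-uuid>
```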

Node Spread

Node spread topology may now be used when placing volume replicas.
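A sketch of how node spread could be requested from a StorageClass. The `nodeSpreadTopologyKey` parameter name is an assumption modeled on the existing topology parameters; verify it against the v2.10.0 StorageClass documentation.

```shell
# Sketch: spread volume replicas across nodes by a topology key.
# 'nodeSpreadTopologyKey' is an assumed parameter name; verify before use.
kubectl apply -f - <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: mayastor-node-spread
provisioner: io.openebs.csi-mayastor
parameters:
  repl: "3"
  protocol: nvmf
  nodeSpreadTopologyKey: topology.kubernetes.io/zone   # assumed parameter
EOF
```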

Affinity Group ScaleDown

Affinity group volumes may now be scaled down to 1 replica, provided the anti-affinity across nodes is not violated.
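A sketch using the plugin's volume scale sub-command, assuming that is the path for affinity group scale-down; the volume UUID is a placeholder. The request is expected to be rejected if it would break node anti-affinity within the group.

```shell
# Assumed path: scale an affinity-group volume down to a single replica.
# The UUID is a placeholder; the command fails if anti-affinity would be violated.
kubectl mayastor scale volume 0c08667c-8b59-4d11-9192-b54e27e0ce0f 1
```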

Enhancements

  • Update replica health as an atomic etcd transaction
  • Exit io-engine with error if the gRPC port is busy
  • Set PR_SET_IO_FLUSHER for io-engine to resolve potential deadlock
  • Don't let one bad nexus lock up the entire nexus subsystem
  • Clean up uuid from DISKS output uri
  • Honor stsAffinity on backup restores via external tools
  • Validate K8s secret in diskpool operator ahead of pool creation request
  • Allow pool creation with zfs volume paths
  • Added support for kubeconfig context switching (kubectl-mayastor plugin)
  • Fixed creating pools on very-slow/very-large storage devices
  • Use udev kernel monitor
  • Fixed a race condition where udev events were lost, leading to failures when connecting NVMe devices
  • Fixed HA enablement on the latest RHEL and derivatives
  • Fixed open permissions on call-home encryption dir
  • Configurable ports of services with hostNetwork
  • Add support for 1GiB hugepages
  • etcd dependency updated to 12.0.14
  • Use normalized etcdUrl in default etcd-probe init containers
  • Use correct grpc port in metrics exporter
  • Fix volume mkfs stuck on very large pools/volumes
  • Fix agent-core panic when scheduling replicas
  • Add default priority class to the daemon sets

Testing

Mayastor is subject to extensive unit, component and system-level testing throughout the development and release cycle. Resources for system-level (E2E) testing are currently provided by DataCore Software.

At this time, personnel and hardware resource limitations constrain testing by the maintainers to linux builds on x86. This reflects the primary use-case which the maintainers are currently targeting with the OpenEBS Mayastor project. Therefore, the use of Mayastor with other operating systems and/or architectures, if even possible, should be considered serendipitous and wholly experimental.

This release has been subject to end-to-end testing under Ubuntu 20.04.5 LTS (kernel: ubuntu-5.15.0-50-generic).

  • Tested k8s versions
    • 1.23.7
    • 1.24.14
    • 1.25.10
    • 1.29.6-1.1

Known Behavioural Limitations

  • The IO engine fully utilizes all allocated CPU cores regardless of the actual I/O load, as it runs a poller at full speed.
  • Each DiskPool is limited to a single block device and cannot span across multiple devices.
  • The data-at-rest encryption feature does not support rotation of Data Encryption Keys (DEKs).
  • Volume rebuilds are only performed on published volumes.

Known Issues

  • If a node hosting a pod reboots and the pod lacks a controller (such as a Deployment), the volume unpublish operation may not be triggered. This causes the control plane to assume the volume is still in use, which leads to fsfreeze failures during snapshots.
    Workaround: Recreate or rebind the pod to ensure the volume is properly mounted (see the example after this list).
  • If a disk backing a DiskPool fails or is removed (Example: A cloud disk detaches), the failure is not clearly reflected in the system. As a result, the volume may remain in a degraded state for an extended period.
  • Large pools (Example: 10–20TiB) may take a while during recovery after a dirty shutdown of the node hosting the io-engine.
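A minimal example of the workaround for the first issue above; the pod name, namespace, and manifest path are placeholders.

```shell
# Placeholder names: recreate the standalone pod so the volume is re-published
# and mounted cleanly before the next snapshot.
kubectl delete pod my-app-pod -n my-namespace
kubectl apply -f my-app-pod.yaml -n my-namespace
```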
