12 changes: 8 additions & 4 deletions docs/docusaurus.config.js
@@ -15,8 +15,8 @@ module.exports = {
onBrokenLinks: "throw",
onBrokenMarkdownLinks: "warn",
favicon: "/docs/img/favicon.ico",
organizationName: "openebs", // Usually your GitHub org/user name.
projectName: "website", // Usually your repo name.
organizationName: "openebs",
projectName: "website",
i18n: {
defaultLocale: 'en',
locales: ['en'],
@@ -161,12 +161,16 @@ module.exports = {
label: 'main',
path: 'main'
}
}
},

// ✅ Global TOC settings
tocMinHeadingLevel: 2,
tocMaxHeadingLevel: 2,
},
theme: {
customCss: require.resolve("./src/scss/custom.scss"),
},
include: ["**/*.md", "**/*.mdx"], // Extensions to include.
include: ["**/*.md", "**/*.mdx"],
},
],
[
2 changes: 1 addition & 1 deletion docs/main/quickstart-guide/installation.md
@@ -118,7 +118,7 @@ helm ls -n openebs

```
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
openebs openebs 1 2025-05-25 09:13:00.903321318 +0000 UTC deployed openebs-4.3.2 4.3.2
openebs openebs 1 2025-11-21 08:11:00.903321318 +0000 UTC deployed openebs-4.4.0 4.4.0
```

## Verifying OpenEBS Installation
113 changes: 91 additions & 22 deletions docs/main/releases.md
@@ -1,6 +1,6 @@
---
id: releases
title: OpenEBS Releases
title: OpenEBS Release Notes
keywords:
- OpenEBS Releases
- OpenEBS Release Notes
@@ -9,33 +9,30 @@ keywords:
description: This page contains the list of supported OpenEBS releases.
---

**Release Date: DD MONTH YYYY**
**Release Date: 21 November 2025**

OpenEBS is a collection of data engines and operators to create different types of replicated and local persistent volumes for Kubernetes Stateful workloads. Kubernetes volumes can be provisioned via CSI Drivers or using Out-of-tree Provisioners.
The status of the various components as of v4.4 is as follows:

- Local Storage (a.k.a Local Engine)
- [Local PV Hostpath 4.3.0](https://github.com/openebs/dynamic-localpv-provisioner) (stable)
- [Local PV LVM 1.7.0](https://github.com/openebs/lvm-localpv) (stable)
- [Local PV ZFS 2.8.0](https://github.com/openebs/zfs-localpv) (stable)

- Replicated Storage (a.k.a Replicated Engine)
- [Replicated PV Mayastor 2.9.0](https://github.com/openebs/mayastor) (stable)

- Out-of-tree (External Storage) Provisioners
- [Local PV Hostpath 4.3.0](https://github.com/openebs/dynamic-localpv-provisioner) (stable)

- Other Components
- [CLI 4.3.0](https://github.com/openebs/openebs/tree/release/4.3/plugin)
| Component Type | Component | Version | Status |
| :--- | :--- | :--- | :--- |
| Replicated Storage | Replicated PV Mayastor | 2.10.0 | Stable |
| Local Storage | Local PV Hostpath | 4.4.0 | Stable |
| Local Storage | Local PV LVM | 1.8.0 | Stable |
| Local Storage | Local PV ZFS | 2.9.0 | Stable |
| External Provisioners | Local PV Hostpath | 4.4.0 | Stable |
| Other Components | CLI | 4.4.0 | — |

## What’s New

OpenEBS is delighted to introduce the following new features with OpenEBS 4.4:
### General

- **Support for Installing Both Replicated PV Mayastor and Local PV LVM on OpenShift**

You can now install both Replicated PV Mayastor and Local PV LVM on OpenShift using a unified Helm-based deployment process. In earlier releases, only Replicated PV Mayastor installation was supported on OpenShift.
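
A minimal sketch of the unified Helm-based install; the chart repository and chart name follow the standard OpenEBS Helm instructions, and any OpenShift-specific values (for example, security context or SCC settings) are intentionally omitted here and should be taken from the OpenShift installation guide:

```
# Add the OpenEBS Helm repository and install the unified chart,
# which ships both Replicated PV Mayastor and Local PV LVM.
helm repo add openebs https://openebs.github.io/openebs
helm repo update
helm install openebs openebs/openebs --namespace openebs --create-namespace
```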

### Replicated Storage

- **DiskPool Expansion**

You can now expand existing Replicated PV Mayastor DiskPools using the `maxExpansion` parameter. This feature allows controlled, on-demand capacity increases while preventing ENOSPC errors and ensuring uninterrupted application availability.
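
A hedged sketch of a DiskPool resource with the new parameter; the `node` and `disks` fields follow the usual DiskPool examples, while the exact placement and value format of `maxExpansion` are assumptions to be confirmed against the DiskPool reference:

```
apiVersion: openebs.io/v1beta2   # assumption: the DiskPool CRD version on your cluster may differ
kind: DiskPool
metadata:
  name: pool-on-node-1
  namespace: openebs
spec:
  node: worker-node-1
  disks: ["/dev/sdb"]              # placeholder: the disk backing this pool
  # assumption: reserves headroom so the pool can later be grown on demand up to this size
  maxExpansion: 20GiB
```
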
@@ -48,44 +45,116 @@ OpenEBS is delighted to introduce the following new features with OpenEBS 4.4:

You can now configure the SPDK blobstore cluster size when creating Replicated PV Mayastor DiskPools. This option lets you fine-tune on-disk layout and performance for your workloads—using smaller clusters for efficiency or larger clusters for faster pool creation, imports, and sequential I/O operations.

- **Kubeconfig Context Switching for `kubectl-mayastor`**

The `kubectl-mayastor` plugin now supports kubeconfig context switching, making it easier for administrators to manage multi-cluster environments.

- **Support for 1GiB HugePages**

1 GiB HugePages are now supported, enabling improved performance for memory-intensive workloads and providing greater flexibility when tuning systems for high-performance environments.
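
A sketch of reserving 1 GiB pages on a worker node; the kernel parameters shown are standard Linux settings, while the page count and any io-engine-side HugePage configuration are assumptions to be sized according to the Replicated PV Mayastor prerequisites:

```
# Reserve four 1 GiB HugePages at boot by appending to the kernel command line (for example, via GRUB):
#   default_hugepagesz=1G hugepagesz=1G hugepages=4
# After a reboot, verify the reservation:
grep Huge /proc/meminfo
cat /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages
```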

### Local Storage

- **Local PV LVM Snapshot Restore**

Snapshot Restore is now supported for Local PV LVM. This brings Local PV LVM to parity with Replicated PV Mayastor and Local PV ZFS, which already supported snapshot-based volume restoration.
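
A minimal restore sketch using the standard Kubernetes CSI `dataSource` mechanism; the StorageClass and VolumeSnapshot names are placeholders, and the requested size must be at least the size of the snapshotted volume:

```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restored-pvc
spec:
  storageClassName: openebs-lvmpv          # placeholder: your Local PV LVM StorageClass
  dataSource:
    name: lvm-pvc-snapshot                 # placeholder: an existing VolumeSnapshot of the source PVC
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
```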

## Enhancements

### Replicated Storage

-
- **Improved Replica Health Management**

Replica health updates are now performed as an atomic etcd transaction, significantly improving consistency and reliability during replica state changes.

- **Enhanced Nexus Subsystem Stability**

The system now ensures that a single unhealthy nexus cannot impact or block the entire nexus subsystem, improving overall storage resiliency and workload stability.

- **Pre-Validation of Kubernetes Secrets for DiskPools**

The diskpool operator now validates Kubernetes secrets before creating a pool, providing earlier error detection and faster troubleshooting.

- **Improved Device Event Handling via udev Kernel Monitor**

Device detection has been improved by using the udev kernel monitor, providing faster and more reliable NVMe device event handling.

### Local Storage

-
- **ThinPool Space Reclamation Improvements**

Local PV LVM now automatically cleans up the thinpool Logical Volume (LV) when the last thin volume associated with the thinpool is deleted. This optimization helps reclaim storage space efficiently.

- **Configurable Resource Requests and Limits for Local PV ZFS Components**

You can now configure CPU and memory requests and limits for all `zfs-node` and `zfs-controller` containers through the `values.yaml` file. This enhancement provides greater control over resource allocation and improves deployment flexibility across diverse cluster environments.
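
A hypothetical `values.yaml` fragment illustrating the idea; the key paths below are assumptions, so take the authoritative names from the chart's own `values.yaml`:

```
# assumption: key paths are illustrative only; verify against the zfs-localpv chart
zfs-localpv:
  zfsNode:
    resources:
      requests:
        cpu: 50m
        memory: 64Mi
      limits:
        cpu: 200m
        memory: 256Mi
  zfsController:
    resources:
      requests:
        cpu: 50m
        memory: 64Mi
      limits:
        cpu: 200m
        memory: 256Mi
```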

## Fixes

### Replicated Storage

- **Resolved Lost udev Events Affecting NVMe Devices**

Fixed a race condition where missing udev events caused NVMe devices to fail to connect. Device discovery is now more reliable.

- **Improved Pool Creation on Slow or Large Storage Devices**

Fixed an issue where pool creation could fail or time out on very slow or very large storage devices.

- **Correct gRPC Port Usage in Metrics Exporter**

Resolved an issue where the metrics exporter could use an incorrect gRPC port, ensuring accurate metrics collection.

- **Fix for mkfs Hanging on Large Pools/Volumes**

Resolved an issue where filesystem creation could hang on very large pools or volumes, improving provisioning reliability.

- **Agent-Core Panic During Replica Scheduling**

Fixed a panic in agent-core when scheduling replicas, improving system stability during heavy provisioning operations.

### Local Storage

-
- **PVC Provisioning Failure with Empty Selector**

Resolved an issue where PersistentVolumeClaim (PVC) provisioning for Local PV Hostpath volumes could fail when the `.spec.selector` field was left empty. PVCs without a selector now provision successfully as expected.

- **Corrected Scheduling Behavior for Local PV LVM**

Scheduling logic for Local PV LVM has been corrected to ensure reliable provisioning. Thinpool statistics are now properly recorded, thinpool free space is considered during scheduling, and CreateVolume requests for thick PVCs now fail early when insufficient capacity is available.

- **Correct Encryption Handling for Local PV ZFS Clone Operations**

Resolved an issue where Local PV ZFS clone creation attempted to set a read-only encryption property. Clone volumes now correctly inherit encryption from their parent snapshots without passing unsupported parameters.

## Known Issues

### Replicated Storage

- DiskPool capacity expansion is not supported in releases earlier than v2.10.0.
- If a node hosting a pod reboots and the pod lacks a controller (like a Deployment), the volume unpublish operation may not trigger. This causes the control plane to assume the volume is still in use, which leads to `fsfreeze` operation failure during snapshots.

**Workaround:** Recreate or rebind the pod to ensure proper volume mounting.

- If a disk backing a DiskPool fails or is removed (Example: A cloud disk detaches), the failure is not clearly reflected in the system. As a result, the volume may remain in a degraded state for an extended period.

- Large pools (Example: 10–20TiB) may hang during recovery after a dirty shutdown of the node hosting the io-engine.

- Provisioning very large filesystem volumes (Example: More than 15TiB) may fail due to filesystem formatting timeouts or hangs.

- When using Replicated PV Mayastor on Oracle Linux 9 (kernel 5.14.x), servers may unexpectedly reboot during volume detach operations due to a kernel bug (CVE-2024-53170) in the block layer.
This issue is not caused by Mayastor but is triggered more frequently because of its NVMe-TCP connection lifecycle.

**Workaround:** Upgrade to kernel 6.11.11, 6.12.2, or later, which includes the fix.

### Local Storage

- For Local PV LVM and Local PV ZFS, you may face issues on single-node setups post-upgrade where the controller pod does not enter the `Running` state due to changes in the manifest and missing affinity rules.

**Workaround:** Delete the old controller pod to allow scheduling of the new one. This does not occur when upgrading from the previous release.

- For Local PV LVM, thin pool capacity is not unmapped or reclaimed and is also not tracked in the `lvmnode` custom resource. This may result in unexpected behavior.

## Limitations (If any)
## Limitations

### Replicated Storage

@@ -95,7 +164,7 @@ This issue is not caused by Mayastor but is triggered more frequently because of

## Related Information

OpenEBS Release notes are maintained in the GitHub repositories alongside the code and releases. For summary of what changes across all components in each release and to view the full Release Notes, see [OpenEBS Release 4.4](https://github.com/openebs/openebs/releases/tag/v4.4).
OpenEBS Release notes are maintained in the GitHub repositories alongside the code and releases. For release summaries and full version-level notes, see [OpenEBS Release 4.4](https://github.com/openebs/openebs/releases/tag/v4.4).

See the version-specific Releases to view the legacy OpenEBS Releases.

@@ -8,7 +8,11 @@ description: This guide explains about the Snapshot Restore feature.

## Overview

Volume restore from an existing snapshot will create a storage volume captured at a specific point in time. They serve as an essential tool for data protection, recovery, and efficient management in Kubernetes environments. This document provides step-by-step instructions to restore a volume from a previously created snapshot using Local PV LVM.
Volume Restore from an existing snapshot creates a storage volume captured at a specific point in time. Restored volumes serve as an essential tool for data protection, recovery, and efficient management in Kubernetes environments. This document provides step-by-step instructions to restore a volume from a previously created snapshot using Local PV LVM.

:::important
Volume Restore is supported only for thin volumes created from snapshots using OpenEBS v4.4.0 or later.
:::

## Requirements

@@ -99,7 +99,6 @@ In this case, the volume replicas will be provisioned on any two of the three no
- `worker-node-2` and `worker-node-3`
as the storage class has `rack` as the value for `nodeHasTopologyKey` that matches the label key of the node.

<!--
## "nodeSpreadTopologyKey"

The parameter `nodeSpreadTopologyKey` will allow the placement of replicas on the node that has label keys that are identical to the keys specified in the storage class but have different values.
@@ -144,8 +143,6 @@ In this case, the volume replicas will be provisioned on the below given nodes i
- `worker-node-2` and `worker-node-3`
as the storage class has `zone` as the value for `nodeSpreadTopologyKey` that matches the label key of the node but has a different value.

-->

## "poolAffinityTopologyLabel"

The parameter `poolAffinityTopologyLabel` will allow the placement of replicas on the pool that exactly match the labels defined in the storage class.
Expand Down Expand Up @@ -267,7 +264,7 @@ In this case, the volume replicas will be provisioned on any two of the three po
- `pool-on-node-2` and `pool-on-node-3`
as the storage class has `zone` as the value for `poolHasTopologyKey` that matches with the label key of the pool.

## "stsAffinityGroup"
## "stsAffinityGroup"

`stsAffinityGroup` represents a collection of volumes that belong to instances of Kubernetes StatefulSet. When a StatefulSet is deployed, each instance within the StatefulSet creates its own individual volume, which collectively forms the `stsAffinityGroup`. Each volume within the `stsAffinityGroup` corresponds to a pod of the StatefulSet.

@@ -287,6 +284,16 @@ The `stsAffinityGroup` ensures that in such cases, the targets are distributed o

By default, the `stsAffinityGroup` feature is disabled. To enable it, modify the storage class YAML by setting the `parameters.stsAffinityGroup` parameter to true.
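
A minimal StorageClass sketch with the feature enabled; the provisioner and the `repl`/`protocol` parameters follow the usual Replicated PV Mayastor storage class examples and should be adjusted for your cluster:

```
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: mayastor-sts-affinity
parameters:
  protocol: nvmf
  repl: "3"
  stsAffinityGroup: "true"   # enables affinity-group placement for StatefulSet volumes
provisioner: io.openebs.csi-mayastor
```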

### Volume Affinity Group Scale-Down Restrictions

When using `stsAffinityGroup`, replicas of volumes belonging to the same StatefulSet are distributed across different nodes to avoid a single point of failure. Due to these anti-affinity rules, scaling a volume down to 1 replica may be restricted if doing so would cause the last remaining replica to reside on a node that already hosts another single-replica volume from the same affinity group.

Scale-down to 1 replica is allowed only when the current replicas are already placed on different nodes. If the replicas end up on the same node (for example, after scaling from 3 replicas to 2), the system may block the scale-down until the placement is improved.

If a scale-down is blocked, you can resolve it by temporarily scaling the volume up to add a replica (allowing the system to place it on a different node) and then scaling down again. This reshuffles the replicas to meet the affinity group’s placement rules.
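
For example, a hedged sequence using the kubectl-mayastor plugin; the `scale volume` subcommand and its argument order are assumptions to be checked against your plugin version, and the volume UUID is a placeholder:

```
# Temporarily add a replica so a new copy can land on a different node
kubectl mayastor scale volume <volume-uuid> 3
# Once the replicas are spread across nodes, retry the scale-down
kubectl mayastor scale volume <volume-uuid> 1
```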

These restrictions ensure that a single node failure does not impact multiple StatefulSet instances, preserving fault isolation and reliability for applications using affinity-grouped volumes.

## "cloneFsIdAsVolumeId"

`cloneFsIdAsVolumeId` is a setting for volume clones/restores with two options: `true` and `false`. By default, it is set to `false`.
12 changes: 6 additions & 6 deletions docs/main/user-guides/upgrades.md
@@ -24,15 +24,15 @@ Refer to the [Migration documentation](../user-guides/data-migration/migration-o

## Overview

This upgrade process allows you to upgrade to the latest OpenEBS version 4.3 which is a unified installer for three Local Storages (a.k.a Local Engines):
This upgrade process allows you to upgrade to the latest OpenEBS version 4.4 which is a unified installer for three Local Storages (a.k.a Local Engines):
- Local PV HostPath
- Local PV LVM
- Local PV ZFS

and one Replicated Storage (a.k.a Replicated Engine):
- Replicated PV Mayastor

As a part of the upgrade to OpenEBS 4.3, the Helm chart will install all four engines regardless of the engine you used before the upgrade.
As a part of the upgrade to OpenEBS 4.4, the Helm chart will install all four engines regardless of the engine you used before the upgrade.

:::info
During the upgrade, if you are only interested in Local PV Storage, you can disable Replicated PV Mayastor by using the below option:
@@ -46,9 +46,9 @@ During the upgrade, if you are only interested in Local PV Storage, you can disa
Downgrades are not supported.
:::

## Upgrade from 3.x to 4.3
## Upgrade from 3.x to 4.4

Follow these steps to upgrade OpenEBS from version 3.x to 4.3:
Follow these steps to upgrade OpenEBS from version 3.x to 4.4:

1. Update the Helm repository: The OpenEBS Helm chart repository URL has changed, so the repository target URL needs to be updated.

Expand Down Expand Up @@ -83,9 +83,9 @@ helm repo update

:::

## Upgrade from 4.x to 4.3
## Upgrade from 4.x to 4.4

Follow these steps to upgrade OpenEBS from version 4.x to 4.3:
Follow these steps to upgrade OpenEBS from version 4.x to 4.4:

1. Download the `kubectl openebs` binary from the [OpenEBS Release repository](https://github.com/openebs/openebs/releases) on GitHub.

2 changes: 1 addition & 1 deletion docs/sidebars.js
@@ -830,7 +830,7 @@ module.exports = {
{
type: "doc",
id: "releases",
label: "Releases",
label: "Release Notes",
customProps: {
icon: "File"
},
1 change: 1 addition & 0 deletions docs/versions.json
@@ -1,4 +1,5 @@
[
"4.4.x",
"4.3.x",
"4.2.x",
"4.1.x",