docs: OpenEBS 4.2 Documentation Release Branch with the Latest 4.2 Release Notes #521

Merged · 5 commits · Feb 18, 2025
2 changes: 1 addition & 1 deletion docs/main/quickstart-guide/installation.md
@@ -134,7 +134,7 @@ helm ls -n openebs

```
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
openebs openebs 1 2024-03-25 09:13:00.903321318 +0000 UTC deployed openebs-4.1.0 4.1.0
openebs openebs 1 2024-03-25 09:13:00.903321318 +0000 UTC deployed openebs-4.2.0 4.2.0
```

As a next step, [verify](#verifying-openebs-installation) your installation and complete the [post-installation](#post-installation-considerations) steps.
126 changes: 96 additions & 30 deletions docs/main/releases.md
@@ -9,21 +9,21 @@ keywords:
description: This page contains list of supported OpenEBS releases.
---

**Release Date: 08 July 2024**
**Release Date: 12 February 2025**

OpenEBS is a collection of data engines and operators to create different types of replicated and local persistent volumes for Kubernetes Stateful workloads. Kubernetes volumes can be provisioned via CSI Drivers or using Out-of-tree Provisioners.
The status of the various components as of v4.1.1 are as follows:
The status of the various components as of v4.2.0 are as follows:

- Local Storage (a.k.a Local Engine)
- [Local PV Hostpath 4.1.1](https://github.com/openebs/dynamic-localpv-provisioner) (stable)
- [Local PV LVM 1.6.1](https://github.com/openebs/lvm-localpv) (stable)
- [Local PV ZFS 2.6.2](https://github.com/openebs/zfs-localpv) (stable)
- [Local PV Hostpath 4.2.0](https://github.com/openebs/dynamic-localpv-provisioner) (stable)
- [Local PV LVM 1.6.2](https://github.com/openebs/lvm-localpv) (stable)
- [Local PV ZFS 2.7.1](https://github.com/openebs/zfs-localpv) (stable)

- Replicated Storage (a.k.a Replicated Engine)
- [Replicated PV Mayastor 2.7.1](https://github.com/openebs/mayastor) (stable)
- [Replicated PV Mayastor 2.8.0](https://github.com/openebs/mayastor) (stable)

- Out-of-tree (External Storage) Provisioners
- [Local PV Hostpath 4.1.1](https://github.com/openebs/dynamic-localpv-provisioner) (stable)
- [Local PV Hostpath 4.2.0](https://github.com/openebs/dynamic-localpv-provisioner) (stable)

- Other Components
- [CLI 0.6.0](https://github.com/openebs/openebsctl) (beta)
@@ -32,62 +32,128 @@ The status of the various components as of v4.1.1 are as follows:

OpenEBS is delighted to introduce the following new features:

### What’s New - Local Storage

- **Configurable Quota Options for ZFS Volumes**

You can now select between using `refquota` and `quota` for ZFS volumes, providing greater flexibility in managing resource limits.

- **Enhanced Compression Support with `zstd-fast` Algorithm**

Support for the `zstd-fast` compression algorithm has been introduced, offering improved performance when compression is enabled on ZFS volumes.

- **Merged CAS Config from PVC in Local PV Provisioner**

Enables merging CAS configuration from PersistentVolumeClaim to improve flexibility in volume provisioning.

- **Analytics ID and KEY Environment Variables**

Introduces support for specifying analytics ID and KEY as environment variables in the provisioner deployment.

- **Eviction Tolerations to the Provisioner Deployment**

Allows the provisioner deployment to tolerate eviction conditions, enhancing stability in resource-constrained environments.

- **Support for Incremental Builds and Helm charts in CI**

Added support for incremental builds and Helm charts in CI.
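The ZFS options above can be sketched together in a StorageClass. The `compression: "zstd-fast"` value follows this release note, but the quota-selection parameter name used below (`quotatype`) is an assumption for illustration only; check the Local PV ZFS StorageClass reference for the exact key.

```
# Hedged sketch of a Local PV ZFS StorageClass using the new options.
# NOTE: "quotatype" is an assumed parameter name, not confirmed by this page.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-zfspv-zstd
provisioner: zfs.csi.openebs.io
parameters:
  poolname: "zfspv-pool"      # existing ZFS pool on the node
  fstype: "zfs"
  compression: "zstd-fast"    # new compression algorithm from this release
  quotatype: "refquota"       # assumed key: refquota excludes snapshots/descendants from the limit
```

A PersistentVolumeClaim referencing this class would then get a dataset with `zstd-fast` compression and a `refquota`-based size limit.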

### What’s New - Replicated Storage

- **Snapshot across Multiple Replicas**
- **NVMeoF-RDMA Support for Replicated PV Mayastor Volume Targets**

Replicated PV Mayastor has enhanced its snapshot capabilities to ensure file-system consistency across multiple replicas before taking snapshots. This ensures that snapshots are consistent and reliable across multiple replicas.
Replicated PV Mayastor volume targets can now be shared over RDMA transport, allowing application hosts to achieve high throughput and reduced latency. This feature is enabled via a Helm chart option, which must be used alongside the existing network interface name to provide an RDMA-capable interface name. This enables NVMe hosts to leverage high-performance RDMA network infrastructure when communicating with storage targets.

- **Restore across Multiple Replicas**
- **CSAL FTL bdev Support**

The capability to restore from snapshots across multiple replicas has been introduced in recent releases, enhancing data recovery options​.
SPDK FTL bdev (Cloud Storage Acceleration Layer - CSAL) support is now available, enabling the creation of layered devices with a fast cache device for buffering writes, which are eventually flushed sequentially to a base device. This allows the use of emerging storage interfaces such as Zoned Namespace (ZNS) and Flexible Data Placement (FDP)-capable NVMe devices.

- **Expansion of Volumes with Snapshots**
- **Persistent Store Transaction API in IO-Engine**

This release includes support for volume expansion even when snapshots are present.
Introduces a persistent store transaction API to improve data consistency and reliability.

- **Placement of Replica Volumes across different Nodes/Pools**
- **Allowed HA Node to Listen on IPv6 Pod IPs**

Replicated PV Mayastor now uses topology parameters defined in the storage class to determine the placement of volume replicas. This allows replicas to be controlled via labels from the storage class.
Adds support for HA nodes to listen on IPv6 Pod IPs.

- **Grafana Dashboards**
- **Made CSI Driver Operations Asynchronous**

Grafana Dashboards for Replicated PV Mayastor has been added in this releases.
Converts mount, unmount, and NVMe operations to use spawn_blocking. It also removes `async_stream` for gRPC over UDS in controller and node.

- **Eviction Tolerations**

Added eviction tolerations to the DSP operator deployment and CSI controller, updated LocalPV provisioner chart to 4.2, and renamed `tolerations_with_early_eviction` to `_tolerations_with_early_eviction_two` to avoid conflicts with the `LocalPV-provisioner _helpers.tpl` function.
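As a rough illustration of the RDMA feature above, a Helm values fragment might look like the following. Every key name here is an assumption inferred from the release note (an RDMA toggle used alongside the existing network interface option); consult the Replicated PV Mayastor Helm chart values for the real keys.

```
# Hypothetical values.yaml fragment: enable NVMeoF-RDMA for Mayastor volume targets.
# All keys below are assumptions based on the release note, not confirmed chart values.
io_engine:
  target:
    nvmf:
      iface: "ens3"        # existing network interface name option
      rdma:
        enabled: true      # assumed toggle exposed via the Helm chart
        iface: "rdma0"     # assumed key for an RDMA-capable interface name
```

With such a configuration in place, NVMe hosts with RDMA-capable NICs would connect to volume targets over RDMA transport rather than TCP.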

## Fixes

### Fixed Issues - Local Storage

- **Metrics Collection Loop**
- **Environment Variable Handling**

This fix ensures the environment variable setting to disable event analytics reporting is properly honored.

- **Volume Provisioning Error for Existing ZFS volumes**

Adds an anonymous metrics collection loop which periodically pushes OpenEBS usage metrics. ([#188](https://github.com/openebs/dynamic-localpv-provisioner/pull/188),[#318](https://github.com/openebs/lvm-localpv/pull/318), and[#548](https://github.com/openebs/zfs-localpv/pull/548))
This fix ensures that if a ZFS volume already exists, the controller will provision the volume without error.

- **Indentation Issues in VolumeSnapshot CRDs**

VolumeSnapshot CRDs now have proper indentation formatting.

- **Introduced Per-Volume Mutex**

A per-volume mutex was introduced to prevent simultaneous CSI controller calls that might cause the volume CR to be inadvertently deleted.

- **Reservation Logic Bug during Volume Expansion**

A bug in the reservation logic during volume expansion (with refquota settings) has been resolved.

- **Removed Caching**

Removed caching for the openebs-ndm dependency to ensure fresh builds.

- **Fixed Trigger for `build_and_push` Workflow in CI**

Corrected the trigger configuration for the `build_and_push` workflow to ensure proper execution.

### Fixed Issues - Replicated Storage

- **Plugin changes for Snapshot Operation**
- **Prevent Persistence of Faulty Child during Nexus Creation**

Fixed an issue where a child faulting before the nexus is open would be persisted as unhealthy, preventing future volume attachment.

- **Propagate Child I/O Error for Split I/O in SPDK**

Ensures proper error propagation when a child encounters an I/O error during split I/O operations.

- **Use Transport Info from NVMe Connect Response**

Fixed an issue where transport information from the NVMe connect response was not being used correctly.

- **Fixed Regression Causing Pool Creation Timeout Retry Issues**

This plugin will give detailed information about volume snapshot operation. ([#500](https://github.com/openebs/mayastor-extensions/pull/500))
Fixed a regression where pool creation retries were not handled properly due to timeout issues.

- **Deserialize Failures with Helm v3.13+ Installation**
- **Handle Devices for Existing Subsystems in CSI Node**

With Helm v3.13 or higher, helm chart values deserialize fails when loki-stack or jaeger-operator are disabled. This modification includes default deserialize options, which enable the essential options even when the dependent charts are disabled. ([#512](https://github.com/openebs/mayastor-extensions/pull/512))
This fix ensures proper handling of devices when dealing with existing subsystems.

- **Scale of Volume**
- **Use Auto-Detected Sector Size for Block Devices**

Earlier, the scale of volume was not allowed when the volume already has a snapshot. Now, Scale volume with snapshot can be used for replica rebuild. ([#826](https://github.com/openebs/mayastor-control-plane/pull/826))
Automatically detects and applies the correct sector size for block devices, improving compatibility and performance.

## Watch Items and Known Issues
## Known Issues

### Watch Items and Known Issues - Local Storage
### Known Issues - Local Storage

Local PV ZFS / Local PV LVM on a single worker node encounters issues after upgrading to the latest versions. The issue is specifically associated with the change of the controller manifest to a Deployment type, which results in the failure of new controller pods to join the Running state. The issue appears to be due to the affinity rules set in the old pod, which are not present in the new pods. As a result, since both the old and new pods have relevant labels, the scheduler cannot place the new pod on the same node, leading to scheduling failures when there's only a single node.
The workaround is to delete the old pod so the new pod can get scheduled. Refer to the issue [#3751](https://github.com/openebs/openebs/issues/3751) for more details.

### Watch Items and Known Issues - Replicated Storage
### Known Issues - Replicated Storage

- When a pod-based workload is scheduled on a node that reboots, and the pod lacks a controller, the volume unpublish operation is not triggered. This causes the control plane to incorrectly assume the volume is published, even though it is not mounted. As a result, FIFREEZE fails during a snapshot operation, preventing the snapshot from being taken. To resolve this, reinstate or recreate the pod to ensure the volume is properly mounted.

- Replicated PV Mayastor does not support the capacity expansion of DiskPools as of v2.7.0.
- Replicated PV Mayastor does not support the capacity expansion of DiskPools as of v2.8.0.

- The IO engine pod has been observed to restart occasionally in response to heavy IO and the constant scaling up and down of volume replicas.

@@ -101,7 +167,7 @@ The workaround is to delete the old pod so the new pod can get scheduled. Refer

## Related Information

OpenEBS Release notes are maintained in the GitHub repositories alongside the code and releases. For summary of what changes across all components in each release and to view the full Release Notes, see [OpenEBS Release 4.1](https://github.com/openebs/openebs/releases/tag/v4.1.1).
OpenEBS Release notes are maintained in the GitHub repositories alongside the code and releases. For a summary of what changed across all components in each release, and to view the full Release Notes, see [OpenEBS Release 4.2.0](https://github.com/openebs/openebs/releases/tag/v4.2.0).

See version specific Releases to view the legacy OpenEBS Releases.

12 changes: 6 additions & 6 deletions docs/main/user-guides/upgrades.md
@@ -144,7 +144,7 @@ The `--reuse-values` option should not be used with `helm upgrade`, as it may ca

2. Verify that the CRDs, Volumes, Snapshots, and StoragePools are not affected by the upgrade process.

3. Start the Replicated Storage upgrade process by using the kubectl mayastor plugin v2.7.1.
3. Start the Replicated Storage upgrade process by using the kubectl mayastor plugin v2.8.0.

```
kubectl mayastor upgrade -n openebs --set 'mayastor.agents.core.rebuild.partial.enabled=false'
```
@@ -156,15 +156,15 @@
kubectl get jobs -n openebs

NAME COMPLETIONS DURATION AGE
openebs-upgrade-v2-7-1 1/1 4m49s 6m11s
openebs-upgrade-v2-8-0 1/1 4m49s 6m11s
```

- Wait for the upgrade job to complete.

```
kubectl get pods -n openebs

openebs-upgrade-v2-7-1-s58xl 0/1 Completed 0 7m4s
openebs-upgrade-v2-8-0-s58xl 0/1 Completed 0 7m4s
```

4. Once the upgrade process is completed, all the volumes and pools should be online.
@@ -188,7 +188,7 @@ helm upgrade openebs openebs/openebs -n openebs -f old-values.yaml --version 4.2

2. Verify that the CRDs, Volumes, Snapshots and StoragePools are unaffected by the upgrade process.

3. Start the Replicated Storage upgrade process by using the kubectl mayastor plugin v2.7.1.
3. Start the Replicated Storage upgrade process by using the kubectl mayastor plugin v2.8.0.

```
kubectl mayastor upgrade -n openebs
```
@@ -200,15 +200,15 @@
kubectl get jobs -n openebs

NAME COMPLETIONS DURATION AGE
openebs-upgrade-v2-7-1 1/1 4m49s 6m11s
openebs-upgrade-v2-8-0 1/1 4m49s 6m11s
```

- Wait for the upgrade job to complete.

```
kubectl get pods -n openebs

openebs-upgrade-v2-7-1-s58xl 0/1 Completed 0 7m4s
openebs-upgrade-v2-8-0-s58xl 0/1 Completed 0 7m4s
```

4. Once the upgrade process is completed, all the volumes and pools should be online.