docs: modified the local engine name for folders, topics to local storage #405

Merged · 5 commits · Apr 8, 2024
4 changes: 4 additions & 0 deletions docs/i18n/en/docusaurus-plugin-content-docs/current.json
@@ -186,5 +186,9 @@
"sidebar.docs.category.Local PV ZFS": {
"message": "Local PV ZFS",
"description": "The label for category Local PV ZFS in sidebar docs"
},
"sidebar.docs.category.Local Storage User Guide": {
"message": "Local Storage User Guide",
"description": "The label for category Local Storage User Guide in sidebar docs"
}
}
18 changes: 9 additions & 9 deletions docs/main/concepts/architecture.md
@@ -52,20 +52,20 @@ The implementation pattern used by data engines to provide high availability is
Using a single controller to implement synchronous replication of data to a fixed set of nodes (instead of distributing it via multiple metadata controllers) reduces the overhead in managing the metadata. It also reduces the blast radius related to a node failure and to the other nodes participating in the rebuild of the failed node.

The OpenEBS volume services layer exposes the volumes as:
- Device or Directory paths in case of Local Engine
- NVMe Target in case of Replicated Engine
- Device or Directory paths in case of Local Storage (a.k.a Local Engine)
- NVMe Target in case of Replicated Storage (a.k.a Replicated Engine and f.k.a Mayastor)

### Volume Data Layer

OpenEBS Data Engines create Volume Replicas on top of the storage layer. Volume Replicas are pinned to the node on which they are created. The replica can be any of the following:

- Sub-directory - in case the storage layer used is a filesystem directory
- Full Device or Partitioned Device - in case the storage layer used is block devices
- Logical Volume - in case the storage layer used is a device pool coming from local engine
- Sub-directory - in case the storage layer used is a filesystem directory.
- Full Device or Partitioned Device - in case the storage layer used is block devices.
- Logical Volume - in case the storage layer used is a device pool coming from Local Storage.

If the applications require only local storage, the persistent volume is created using one of the above: a sub-directory, a device (or partition), or a logical volume. The OpenEBS [control plane](#control-plane) is then used to provision the chosen replica.
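As a concrete illustration of the directory-backed case, the sketch below shows a minimal hostpath StorageClass of the kind used to provision such replicas; the class name and `BasePath` are illustrative, and the device- and LVM-backed flavours follow the same pattern with their own provisioners:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-hostpath-example          # illustrative name
  annotations:
    openebs.io/cas-type: local
    cas.openebs.io/config: |
      - name: StorageType
        value: hostpath
      - name: BasePath
        value: /var/openebs/local       # sub-directories for the volumes are created under this path
provisioner: openebs.io/local
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer # bind only after the consuming pod is scheduled to a node
```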

OpenEBS can add the layer of high availability on top of the local storage using the replicated engine. In this case, OpenEBS uses a light-weight storage defined storage controller software that can receive the read/write operations over a network end-point and then be passed on to the underlying storage layer. OpenEBS then uses this Replica network end-points to maintain a synchronous copy of the volume across nodes.
OpenEBS can add a layer of high availability on top of the locally attached storage using Replicated Storage. In this case, OpenEBS uses a lightweight storage controller that receives read/write operations over a network endpoint and passes them on to the underlying storage layer. OpenEBS then uses these replica network endpoints to maintain a synchronous copy of the volume across nodes.

OpenEBS Volume Replicas typically go through the following states:
- Initializing, during initial provisioning, while the replica is being registered to its volume
@@ -156,7 +156,7 @@ In addition, OpenEBS also has released as alpha version `kubectl plugin` to help

The Kubernetes CSI (provisioning layer) will intercept the requests for Persistent Volumes and forward them to the OpenEBS Control Plane components for servicing. The information provided in the StorageClass, combined with the requests from PVCs, will determine the right OpenEBS control component to receive the request.

OpenEBS control plane will then process the request and create the Persistent Volumes using the specified local or replicated engines. The data engine services like target and replica are deployed as Kubernetes applications as well. The containers provide storage for the containers. The new containers launched for serving the applications will be available in the `openebs` namespace.
OpenEBS control plane will then process the request and create the Persistent Volumes using the specified Local or Replicated Storage. The data engine services such as target and replica are deployed as Kubernetes applications as well. These containers provide storage to the application containers. The new containers launched for serving the applications will be available in the `openebs` namespace.
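For example, a claim such as the hedged sketch below is routed to the hostpath provisioner purely by its `storageClassName` (the claim name is illustrative; `openebs-hostpath` is the default hostpath class a typical installation provides):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-local-claim              # illustrative name
spec:
  storageClassName: openebs-hostpath  # selects which OpenEBS control component services the request
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```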

With the magic of OpenEBS and Kubernetes, the volumes should be provisioned, pods scheduled, and applications ready to serve. For this magic to happen, the prerequisites should be met.

@@ -165,5 +165,5 @@ Check out our [troubleshooting section](../troubleshooting/) for some of the com
## See Also

- [Data Engines](../concepts/data-engines/data-engines.md)
- [OpenEBS Local Engine](../concepts/data-engines/local-engine.md)
- [OpenEBS Replicated Engine](../concepts/data-engines/replicated-engine.md)
- [OpenEBS Local Storage](../concepts/data-engines/local-storage.md)
- [OpenEBS Replicated Storage](../concepts/data-engines/replicated-engine.md)
8 changes: 4 additions & 4 deletions docs/main/concepts/data-engines/data-engines.md
@@ -102,7 +102,7 @@ OpenEBS Data Engines can be classified into two categories.

### Local Storage

OpenEBS Local Storage or Local Engines can create Persistent Volumes (PVs) out of local disks or hostpaths or use the volume managers on the Kubernetes worker nodes. Local Storage are well suited for cloud native applications that have the availability, scalability features built into them. Local Storage are also well suited for stateful workloads that are short lived like Machine Learning (ML) jobs or edge cases where there is a single node Kubernetes cluster.
OpenEBS Local Storage (a.k.a Local Engines) can create Persistent Volumes (PVs) out of local disks or hostpaths, or use the volume managers on the Kubernetes worker nodes. Local Storage is well suited for cloud native applications that have availability and scalability features built into them. It is also well suited for short-lived stateful workloads such as Machine Learning (ML) jobs, or edge cases where there is a single-node Kubernetes cluster.
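As a hedged sketch of the volume-manager case, a Local PV LVM StorageClass typically names the CSI provisioner and an existing volume group; the class name and `volgroup` value below are illustrative, and the volume group must already exist on the worker node:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-lvm-example        # illustrative name
provisioner: local.csi.openebs.io
parameters:
  storage: "lvm"
  volgroup: "lvmvg"              # pre-created LVM volume group on the worker node
```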

:::note
Local Storage is only available from the node on which the persistent volume is created. If that node fails, the application pod will not be re-scheduled to another node.
@@ -126,10 +126,10 @@ The below table identifies few differences among the different OpenEBS Local Sto

### Replicated Storage

Replicated Storage or Replicated Engine (f.k.a Mayastor) are those that can synchronously replicate the data to multiple nodes. These engines provide protection against node failures, by allowing the volume to be accessible from one of the other nodes where the data was replicated to. The replication can also be setup across availability zones helping applications move across availability zones. Replicated Volumes are also capable of enterprise storage features like snapshots, clone, volume expansion, and so forth.
Replicated Storage (a.k.a Replicated Engine and f.k.a Mayastor) can synchronously replicate the data to multiple nodes. These engines provide protection against node failures by allowing the volume to be accessed from one of the other nodes to which the data was replicated. The replication can also be set up across availability zones, helping applications move across availability zones. Replicated Volumes are also capable of enterprise storage features such as snapshots, clones, volume expansion, and so forth.
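For illustration, a Replicated Storage class usually expresses the replica count and transport protocol as StorageClass parameters; the sketch below assumes at least three storage nodes and uses an illustrative name:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: replicated-3-copies      # illustrative name
provisioner: io.openebs.csi-mayastor
parameters:
  repl: "3"                      # data is synchronously mirrored across three replicas
  protocol: nvmf                 # volumes are exposed to applications over NVMe-oF
```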

:::tip
Depending on the type of storage attached to your Kubernetes worker nodes and application performance requirements, you can select from [Local Storage](local-engine.md) or[Replicated Storage](replicated-engine.md).
Depending on the type of storage attached to your Kubernetes worker nodes and application performance requirements, you can select from [Local Storage](local-storage.md) or [Replicated Storage](replicated-engine.md).
:::

:::note
@@ -165,5 +165,5 @@ A short summary is provided below.
## See Also

[User Guides](../../user-guides/)
[Local Storage User Guide](../../user-guides/local-engine-user-guide/)
[Local Storage User Guide](../../user-guides/local-storage-user-guide/)
[Replicated Storage User Guide](../../user-guides/replicated-engine-user-guide/)
@@ -1,5 +1,5 @@
---
id: localengine
id: localstorage
title: OpenEBS Local Storage
keywords:
- Local Storage
@@ -9,7 +9,7 @@ description: This document provides you with a brief explanation of OpenEBS Loca

## Local Storage Overview

OpenEBS provides Dynamic PV provisioners for [Kubernetes Local Volumes](https://kubernetes.io/docs/concepts/storage/volumes/#local). A Local Storage (aka Local Volume) implies that storage is available only from a single node. A local volume represents a mounted local storage device such as a disk, partition, or directory.
OpenEBS provides Dynamic PV provisioners for [Kubernetes Local Volumes](https://kubernetes.io/docs/concepts/storage/volumes/#local). Local Storage (a.k.a Local Engine) implies that storage is available only from a single node. A local volume represents a locally mounted storage device such as a disk, partition, or directory.
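To make the single-node constraint concrete, a plain Kubernetes local PersistentVolume pins the volume to one node through `nodeAffinity`; the path, capacity, class, and node name in the sketch below are illustrative:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-local-pv          # illustrative name
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1         # a disk, partition, or directory on the node
  nodeAffinity:                   # the volume is usable only from this node
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - worker-node-1
```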

As the local volume is accessible only from a single node, local volumes are subject to the availability of the underlying node and are not suitable for all applications. If a node becomes unhealthy, then the local volume will also become inaccessible and a Pod using it will not be able to run. Applications using local volumes must be able to tolerate this reduced availability, as well as potential data loss, depending on the durability characteristics of the underlying disk.

@@ -85,17 +85,18 @@ A quick summary of the steps to restore include:
velero restore create rbb-01 --from-backup bbb-01 -l app=test-velero-backup
```

## Limitations (or Roadmap items) of OpenEBS Local Storage
## Limitations of OpenEBS Local Storage

- Size of the Local Storage cannot be increased dynamically.
- Disk quotas are not enforced by Local Storage. An underlying device or hostpath can have more data than requested by a PVC or storage class. Enforcing the capacity is a roadmap feature.
- Enforce capacity and PVC resource quotas on the local disks or host paths.
- SMART statistics of the managed disks is also a potential feature in the roadmap.
- OpenEBS Local Storage is not highly available and cannot sustain node failure.
- OpenEBS Local PV Hostpath does not support snapshots and clones.

## See Also

[OpenEBS Architecture](../architecture.md)
[Local Storage Prerequisites](../../user-guides/local-engine-user-guide/prerequisites.mdx)
[Local PV Hostpath Prerequisites](../../user-guides/local-storage-user-guide/local-pv-hostpath/hostpath-installation.md)
[Local PV LVM Prerequisites](../../user-guides/local-storage-user-guide/local-pv-lvm/lvm-installation.md)
[Local PV ZFS Prerequisites](../../user-guides/local-storage-user-guide/local-pv-zfs/zfs-installation.md)
[Installation](../../quickstart-guide/installation.md)
[Local Storage User Guide](../../user-guides/local-engine-user-guide/)
[Local Storage User Guide](../../user-guides/local-storage-user-guide/)

17 changes: 10 additions & 7 deletions docs/main/faqs/faqs.md
@@ -46,7 +46,7 @@ The default retention is the same used by K8s. For dynamically provisioned Persi

### Is OpenShift supported? {#openebs-in-openshift}

Yes. See the [detailed installation instructions for OpenShift](../user-guides/local-engine-user-guide/additional-information/kb.md#how-to-install-openebs-in-openshift-4x-openshift-install) for more information.
Yes. See the [detailed installation instructions for OpenShift](../user-guides/local-storage-user-guide/additional-information/kb.md#how-to-install-openebs-in-openshift-4x-openshift-install) for more information.

[Go to top](#top)

@@ -58,13 +58,16 @@ While creating a StorageClass, if user mention replica count as 2 in a single no

### How backup and restore is working with OpenEBS volumes? {#backup-restore-openebs-volumes}

OpenEBS (provide snapshots and restore links to all 3 engines - Internal reference)
Refer to the following links for more information on the backup and restore functionality for OpenEBS volumes; a minimal snapshot manifest is sketched after this list:
- [Backup and Restore](../user-guides/local-storage-user-guide/additional-information/backupandrestore.md)
- [Snapshot](../user-guides/local-storage-user-guide/local-pv-lvm/advanced-operations/lvm-snapshot.md)
- [Backup and Restore for Local PV ZFS Volumes](../user-guides/local-storage-user-guide/local-pv-zfs/advanced-operations/zfs-backup-restore.md)
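The snapshot flow referenced above boils down to a VolumeSnapshotClass plus a VolumeSnapshot; the minimal sketch below assumes the Local PV LVM CSI driver name `local.csi.openebs.io` and uses illustrative class, snapshot, and claim names:

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: lvmpv-snapclass           # illustrative name
driver: local.csi.openebs.io      # assumed Local PV LVM CSI driver name
deletionPolicy: Delete
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: lvm-pvc-snapshot          # illustrative name
spec:
  volumeSnapshotClassName: lvmpv-snapclass
  source:
    persistentVolumeClaimName: lvm-pvc   # the PVC to snapshot
```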

[Go to top](#top)

### How is data protected in replicated storage? What happens when a host, client workload, or a data center fails?
### How is data protected in Replicated Storage (a.k.a Replicated Engine and f.k.a Mayastor)? What happens when a host, client workload, or a data center fails?

The OpenEBS replicated storage ensures resilience with built-in highly available architecture. It supports on-demand switch over of the NVMe controller to ensure IO continuity in case of host failure. The data is synchronously replicated as per the congigured replication factor to ensure no single point of failure.
The OpenEBS Replicated Storage ensures resilience with a built-in highly available architecture. It supports on-demand switchover of the NVMe controller to ensure IO continuity in case of host failure. The data is synchronously replicated as per the configured replication factor to ensure no single point of failure.
Faulted replicas are automatically rebuilt in the background without IO disruption to maintain the replication factor.

[Go to top](#top)
@@ -111,7 +114,7 @@ It is recommended to use unpartitioned raw block devices for best results.

### How does it help to keep my data safe?

Replicated storage engine supports synchronous mirroring to enhance the durability of data at rest within whatever physical persistence layer is in use. When volumes are provisioned which are configured for replication \(a user can control the count of active replicas which should be maintained, on a per StorageClass basis\), write I/O operations issued by an application to that volume are amplified by its controller ("nexus") and dispatched to all its active replicas. Only if every replica completes the write successfully on its own underlying block device will the I/O completion be acknowledged to the controller. Otherwise, the I/O is failed and the caller must make its own decision as to whether it should be retried. If a replica is determined to have faulted \(I/O cannot be serviced within the configured timeout period, or not without error\), the control plane will automatically take corrective action and remove it from the volume. If spare capacity is available within a replicated engine pool, a new replica will be created as a replacement and automatically brought into synchronisation with the existing replicas. The data path for a replicated volume is described in more detail [here](../user-guides/replicated-engine-user-guide/additional-information/i-o-path-description.md#replicated-volume-io-path)
The Replicated Storage engine supports synchronous mirroring to enhance the durability of data at rest within whatever physical persistence layer is in use. When volumes configured for replication are provisioned \(a user can control the count of active replicas to be maintained, on a per-StorageClass basis\), write I/O operations issued by an application to that volume are amplified by its controller ("nexus") and dispatched to all of its active replicas. Only if every replica completes the write successfully on its own underlying block device will the I/O completion be acknowledged to the controller. Otherwise, the I/O is failed and the caller must make its own decision as to whether it should be retried. If a replica is determined to have faulted \(I/O cannot be serviced within the configured timeout period, or not without error\), the control plane will automatically take corrective action and remove it from the volume. If spare capacity is available within a replicated engine pool, a new replica will be created as a replacement and automatically brought into synchronisation with the existing replicas. The data path for a replicated volume is described in more detail [here](../user-guides/replicated-engine-user-guide/additional-information/i-o-path-description.md#replicated-volume-io-path).

[Go to top](#top)

@@ -144,7 +147,7 @@ Since the replicas \(data copies\) of replicated volumes are held entirely withi

### Can the size / capacity of a Disk Pool be changed?

The size of a replicated storage pool is fixed at the time of creation and is immutable. A single pool may have only one block device as a member. These constraints may be removed in later versions.
The size of a Replicated Storage pool is fixed at the time of creation and is immutable. A single pool may have only one block device as a member. These constraints may be removed in later versions.
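For context, a Replicated Storage pool is declared against exactly one block device on one node; the sketch below is illustrative, and the CRD apiVersion and namespace depend on the installed release:

```yaml
apiVersion: openebs.io/v1beta2    # version varies with the installed release
kind: DiskPool
metadata:
  name: pool-on-worker-1          # illustrative name
  namespace: openebs              # the namespace used by your installation
spec:
  node: worker-node-1             # the node that owns the pool
  disks:
    - /dev/sdb                    # exactly one block device per pool
```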

### How can I ensure that replicas aren't scheduled onto the same node? How about onto nodes in the same rack / availability zone?

@@ -172,7 +175,7 @@ Replicated engine does not perform asynchronous replication.

### Does replicated engine support RAID?

Replicated storage pools do not implement any form of RAID, erasure coding or striping. If higher levels of data redundancy are required, replicated volumes can be provisioned with a replication factor of greater than one, which will result in synchronously mirrored copies of their data being stored in multiple Disk Pools across multiple Storage Nodes. If the block device on which a Disk Pool is created is actually a logical unit backed by its own RAID implementation \(e.g. a Fibre Channel attached LUN from an external SAN\) it can still be used within a replicated disk pool whilst providing protection against physical disk device failures.
Replicated Storage pools do not implement any form of RAID, erasure coding or striping. If higher levels of data redundancy are required, replicated volumes can be provisioned with a replication factor of greater than one, which will result in synchronously mirrored copies of their data being stored in multiple Disk Pools across multiple Storage Nodes. If the block device on which a Disk Pool is created is actually a logical unit backed by its own RAID implementation \(e.g. a Fibre Channel attached LUN from an external SAN\) it can still be used within a replicated disk pool whilst providing protection against physical disk device failures.

[Go to top](#top)
