diff --git a/docs/i18n/en/docusaurus-plugin-content-docs/current.json b/docs/i18n/en/docusaurus-plugin-content-docs/current.json index f36fc3dd0..5c68fc4ec 100644 --- a/docs/i18n/en/docusaurus-plugin-content-docs/current.json +++ b/docs/i18n/en/docusaurus-plugin-content-docs/current.json @@ -186,5 +186,9 @@ "sidebar.docs.category.Local PV ZFS": { "message": "Local PV ZFS", "description": "The label for category Local PV ZFS in sidebar docs" + }, + "sidebar.docs.category.Local Storage User Guide": { + "message": "Local Storage User Guide", + "description": "The label for category Local Storage User Guide in sidebar docs" } } \ No newline at end of file diff --git a/docs/main/concepts/architecture.md b/docs/main/concepts/architecture.md index 899bd8cb6..424e4d022 100644 --- a/docs/main/concepts/architecture.md +++ b/docs/main/concepts/architecture.md @@ -52,20 +52,20 @@ The implementation pattern used by data engines to provide high availability is Using a single controller to implement synchronous replication of data to fixed set of nodes (instead of distribution via multiple metadata controller), reduces the overhead in managing the metadata and also reduces the blast radius related to a node failure and other nodes participating in the rebuild of the failed node. The OpenEBS volume services layer exposes the volumes as: -- Device or Directory paths in case of Local Engine -- NVMe Target in case of Replicated Engine +- Device or Directory paths in case of Local Storage (a.k.a Local Engine) +- NVMe Target in case of Replicated Storage (a.k.a Replicated Engine and f.k.a Mayastor) ### Volume Data Layer OpenEBS Data Engines create a Volume Replica on top of the storage layer. Volume Replicas are pinned to a node and are created on top of the storage layer. The replica can be any of the following: -- Sub-directory - in case the storage layer used is a filesystem directory -- Full Device or Partitioned Device - in case the storage layer used is block devices -- Logical Volume - in case the storage layer used is a device pool coming from local engine +- Sub-directory - in case the storage layer used is a filesystem directory. +- Full Device or Partitioned Device - in case the storage layer used is block devices. +- Logical Volume - in case the storage layer used is a device pool coming from Local Storage. In case the applications require only local storage, then the persistent volume will be created using one of the above directories, device (or partition) or logical volume. OpenEBS [control plane](#control-plane) will be used to provision one of the above replicas. -OpenEBS can add the layer of high availability on top of the local storage using the replicated engine. In this case, OpenEBS uses a light-weight storage defined storage controller software that can receive the read/write operations over a network end-point and then be passed on to the underlying storage layer. OpenEBS then uses this Replica network end-points to maintain a synchronous copy of the volume across nodes. +OpenEBS can add the layer of high availability on top of the locally attached storage using the Replicated Storage. In this case, OpenEBS uses a light-weight storage controller software that can receive the read/write operations over a network end-point and then be passed on to the underlying storage layer. OpenEBS then uses this Replica network end-points to maintain a synchronous copy of the volume across nodes. 
OpenEBS Volume Replicas typically go through the following states: - Initializing, during initial provisioning and is being registered to its volume @@ -156,7 +156,7 @@ In addition, OpenEBS also has released as alpha version `kubectl plugin` to help The Kubernetes CSI (provisioning layer) will intercept the requests for the Persistent Volumes and forward the requests to the OpenEBS Control Plane components to service the requests. The information provided in the StorageClass combined with requests from PVCs will determine the right OpenEBS control component to receive the request. -OpenEBS control plane will then process the request and create the Persistent Volumes using the specified local or replicated engines. The data engine services like target and replica are deployed as Kubernetes applications as well. The containers provide storage for the containers. The new containers launched for serving the applications will be available in the `openebs` namespace. +OpenEBS control plane will then process the request and create the Persistent Volumes using the specified Local or Replicated Storage. The data engine services like target and replica are deployed as Kubernetes applications as well. The containers provide storage for the containers. The new containers launched for serving the applications will be available in the `openebs` namespace. With the magic of OpenEBS and Kubernetes, the volumes should be provisioned, pods scheduled and application ready to serve. For this magic to happen, the prerequisites should be met. @@ -165,5 +165,5 @@ Check out our [troubleshooting section](../troubleshooting/) for some of the com ## See Also - [Data Engines](../concepts/data-engines/data-engines.md) -- [OpenEBS Local Engine](../concepts/data-engines/local-engine.md) -- [OpenEBS Replicated Engine](../concepts/data-engines/replicated-engine.md) +- [OpenEBS Local Storage](../concepts/data-engines/local-storage.md) +- [OpenEBS Replicated Storage](../concepts/data-engines/replicated-engine.md) diff --git a/docs/main/concepts/data-engines/data-engines.md b/docs/main/concepts/data-engines/data-engines.md index 452e90863..813dc457c 100644 --- a/docs/main/concepts/data-engines/data-engines.md +++ b/docs/main/concepts/data-engines/data-engines.md @@ -102,7 +102,7 @@ OpenEBS Data Engines can be classified into two categories. ### Local Storage -OpenEBS Local Storage or Local Engines can create Persistent Volumes (PVs) out of local disks or hostpaths or use the volume managers on the Kubernetes worker nodes. Local Storage are well suited for cloud native applications that have the availability, scalability features built into them. Local Storage are also well suited for stateful workloads that are short lived like Machine Learning (ML) jobs or edge cases where there is a single node Kubernetes cluster. +OpenEBS Local Storage (a.k.a Local Engines) can create Persistent Volumes (PVs) out of local disks or hostpaths or use the volume managers on the Kubernetes worker nodes. Local Storage are well suited for cloud native applications that have the availability, scalability features built into them. Local Storage are also well suited for stateful workloads that are short lived like Machine Learning (ML) jobs or edge cases where there is a single node Kubernetes cluster. :::note Local Storage are only available from the the node on which the persistent volume is created. If that node fails, the application pod will not be re-scheduled to another node. 
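For example, a minimal PersistentVolumeClaim against the default `openebs-hostpath` StorageClass is a quick way to try Local Storage; this is only a sketch, and the claim name and size below are illustrative placeholders:

```
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: local-hostpath-pvc            # illustrative name
spec:
  storageClassName: openebs-hostpath  # default hostpath StorageClass shipped with OpenEBS
  accessModes:
    - ReadWriteOnce                   # local volumes are accessible from a single node only
  resources:
    requests:
      storage: 5G                     # illustrative size
```

Such a claim typically stays Pending until a Pod that uses it is scheduled, because the hostpath volume is provisioned on the node where the consuming Pod lands.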
@@ -126,10 +126,10 @@ The below table identifies few differences among the different OpenEBS Local Sto ### Replicated Storage -Replicated Storage or Replicated Engine (f.k.a Mayastor) are those that can synchronously replicate the data to multiple nodes. These engines provide protection against node failures, by allowing the volume to be accessible from one of the other nodes where the data was replicated to. The replication can also be setup across availability zones helping applications move across availability zones. Replicated Volumes are also capable of enterprise storage features like snapshots, clone, volume expansion, and so forth. +Replicated Storage (a.k.a Replicated Engine and f.k.a Mayastor) can synchronously replicate the data to multiple nodes. These engines provide protection against node failures, by allowing the volume to be accessible from one of the other nodes where the data was replicated to. The replication can also be setup across availability zones helping applications move across availability zones. Replicated Volumes are also capable of enterprise storage features like snapshots, clone, volume expansion, and so forth. :::tip -Depending on the type of storage attached to your Kubernetes worker nodes and application performance requirements, you can select from [Local Storage](local-engine.md) or[Replicated Storage](replicated-engine.md). +Depending on the type of storage attached to your Kubernetes worker nodes and application performance requirements, you can select from [Local Storage](local-storage.md) or [Replicated Storage](replicated-engine.md). ::: :::note @@ -165,5 +165,5 @@ A short summary is provided below. ## See Also [User Guides](../../user-guides/) -[Local Storage User Guide](../../user-guides/local-engine-user-guide/) +[Local Storage User Guide](../../user-guides/local-storage-user-guide/) [Replicated Storage User Guide](../../user-guides/replicated-engine-user-guide/) diff --git a/docs/main/concepts/data-engines/local-engine.md b/docs/main/concepts/data-engines/local-storage.md similarity index 87% rename from docs/main/concepts/data-engines/local-engine.md rename to docs/main/concepts/data-engines/local-storage.md index 2c1408469..3dd73c16b 100644 --- a/docs/main/concepts/data-engines/local-engine.md +++ b/docs/main/concepts/data-engines/local-storage.md @@ -1,5 +1,5 @@ --- -id: localengine +id: localstorage title: OpenEBS Local Storage keywords: - Local Storage @@ -9,7 +9,7 @@ description: This document provides you with a brief explanation of OpenEBS Loca ## Local Storage Overview -OpenEBS provides Dynamic PV provisioners for [Kubernetes Local Volumes](https://kubernetes.io/docs/concepts/storage/volumes/#local). A Local Storage (aka Local Volume) implies that storage is available only from a single node. A local volume represents a mounted local storage device such as a disk, partition, or directory. +OpenEBS provides Dynamic PV provisioners for [Kubernetes Local Volumes](https://kubernetes.io/docs/concepts/storage/volumes/#local). A Local Storage (a.k.a Local Engine) implies that storage is available only from a single node. A local volume represents a locally mounted storage device such as a disk, partition, or directory. As the local volume is accessible only from a single node, local volumes are subject to the availability of the underlying node and are not suitable for all applications. If a node becomes unhealthy, then the local volume will also become inaccessible and a Pod using it will not be able to run. 
Applications using local volumes must be able to tolerate this reduced availability, as well as potential data loss, depending on the durability characteristics of the underlying disk. @@ -85,17 +85,18 @@ A quick summary of the steps to restore include: velero restore create rbb-01 --from-backup bbb-01 -l app=test-velero-backup ``` -## Limitations (or Roadmap items) of OpenEBS Local Storage +## Limitations of OpenEBS Local Storage - Size of the Local Storage cannot be increased dynamically. -- Disk quotas are not enforced by Local Storage. An underlying device or hostpath can have more data than requested by a PVC or storage class. Enforcing the capacity is a roadmap feature. -- Enforce capacity and PVC resource quotas on the local disks or host paths. -- SMART statistics of the managed disks is also a potential feature in the roadmap. +- OpenEBS Local Storage is not highly available and cannot sustain node failure. +- OpenEBS Local PV Hostpath does not support snapshots and clones. ## See Also [OpenEBS Architecture](../architecture.md) -[Local Storage Prerequisites](../../user-guides/local-engine-user-guide/prerequisites.mdx) +[Local PV Hostpath Prerequisites](../../user-guides/local-storage-user-guide/local-pv-hostpath/hostpath-installation.md) +[Local PV LVM Prerequisites](../../user-guides/local-storage-user-guide/local-pv-lvm/lvm-installation.md) +[Local PV ZFS Prerequisites](../../user-guides/local-storage-user-guide/local-pv-zfs/zfs-installation.md) [Installation](../../quickstart-guide/installation.md) -[Local Storage User Guide](../../user-guides/local-engine-user-guide/) +[Local Storage User Guide](../../user-guides/local-storage-user-guide/) diff --git a/docs/main/faqs/faqs.md b/docs/main/faqs/faqs.md index 01090330d..a4f896a5f 100644 --- a/docs/main/faqs/faqs.md +++ b/docs/main/faqs/faqs.md @@ -46,7 +46,7 @@ The default retention is the same used by K8s. For dynamically provisioned Persi ### Is OpenShift supported? {#openebs-in-openshift} -Yes. See the [detailed installation instructions for OpenShift](../user-guides/local-engine-user-guide/additional-information/kb.md#how-to-install-openebs-in-openshift-4x-openshift-install) for more information. +Yes. See the [detailed installation instructions for OpenShift](../user-guides/local-storage-user-guide/additional-information/kb.md#how-to-install-openebs-in-openshift-4x-openshift-install) for more information. [Go to top](#top) @@ -58,13 +58,16 @@ While creating a StorageClass, if user mention replica count as 2 in a single no ### How backup and restore is working with OpenEBS volumes? {#backup-restore-openebs-volumes} -OpenEBS (provide snapshots and restore links to all 3 engines - Internal reference) +Refer to the following links for more information on the backup and restore functionality for OpenEBS volumes: +- [Backup and Restore](../user-guides/local-storage-user-guide/additional-information/backupandrestore.md) +- [Snapshot](../user-guides/local-storage-user-guide/local-pv-lvm/advanced-operations/lvm-snapshot.md) +- [Backup and Restore for Local PV ZFS Volumes](../user-guides/local-storage-user-guide/local-pv-zfs/advanced-operations/zfs-backup-restore.md) [Go to top](#top) -### How is data protected in replicated storage? What happens when a host, client workload, or a data center fails? +### How is data protected in Replicated Storage (a.k.a Replicated Engine and f.k.a Mayastor)? What happens when a host, client workload, or a data center fails? 
-The OpenEBS replicated storage ensures resilience with built-in highly available architecture. It supports on-demand switch over of the NVMe controller to ensure IO continuity in case of host failure. The data is synchronously replicated as per the congigured replication factor to ensure no single point of failure. +The OpenEBS Replicated Storage ensures resilience with a built-in highly available architecture. It supports on-demand switchover of the NVMe controller to ensure IO continuity in case of host failure. The data is synchronously replicated as per the configured replication factor to ensure no single point of failure. Faulted replicas are automatically rebuilt in the background without IO disruption to maintain the replication factor. [Go to top](#top) @@ -111,7 +114,7 @@ It is recommended to use unpartitioned raw block devices for best results. ### How does it help to keep my data safe? -Replicated storage engine supports synchronous mirroring to enhance the durability of data at rest within whatever physical persistence layer is in use. When volumes are provisioned which are configured for replication \(a user can control the count of active replicas which should be maintained, on a per StorageClass basis\), write I/O operations issued by an application to that volume are amplified by its controller ("nexus") and dispatched to all its active replicas. Only if every replica completes the write successfully on its own underlying block device will the I/O completion be acknowledged to the controller. Otherwise, the I/O is failed and the caller must make its own decision as to whether it should be retried. If a replica is determined to have faulted \(I/O cannot be serviced within the configured timeout period, or not without error\), the control plane will automatically take corrective action and remove it from the volume. If spare capacity is available within a replicated engine pool, a new replica will be created as a replacement and automatically brought into synchronisation with the existing replicas. The data path for a replicated volume is described in more detail [here](../user-guides/replicated-engine-user-guide/additional-information/i-o-path-description.md#replicated-volume-io-path) +The Replicated Storage engine supports synchronous mirroring to enhance the durability of data at rest within whatever physical persistence layer is in use. When volumes are provisioned which are configured for replication \(a user can control the count of active replicas which should be maintained, on a per StorageClass basis\), write I/O operations issued by an application to that volume are amplified by its controller ("nexus") and dispatched to all its active replicas. Only if every replica completes the write successfully on its own underlying block device will the I/O completion be acknowledged to the controller. Otherwise, the I/O is failed and the caller must make its own decision as to whether it should be retried. If a replica is determined to have faulted \(I/O cannot be serviced within the configured timeout period, or not without error\), the control plane will automatically take corrective action and remove it from the volume. If spare capacity is available within a replicated engine pool, a new replica will be created as a replacement and automatically brought into synchronisation with the existing replicas.
The data path for a replicated volume is described in more detail [here](../user-guides/replicated-engine-user-guide/additional-information/i-o-path-description.md#replicated-volume-io-path) [Go to top](#top) @@ -144,7 +147,7 @@ Since the replicas \(data copies\) of replicated volumes are held entirely withi ### Can the size / capacity of a Disk Pool be changed? -The size of a replicated storage pool is fixed at the time of creation and is immutable. A single pool may have only one block device as a member. These constraints may be removed in later versions. +The size of a Replicated Storage pool is fixed at the time of creation and is immutable. A single pool may have only one block device as a member. These constraints may be removed in later versions. ### How can I ensure that replicas aren't scheduled onto the same node? How about onto nodes in the same rack / availability zone? @@ -172,7 +175,7 @@ Replicated engine does not peform asynchronous replication. ### Does replicated engine support RAID? -Replicated storage pools do not implement any form of RAID, erasure coding or striping. If higher levels of data redundancy are required, replicated volumes can be provisioned with a replication factor of greater than one, which will result in synchronously mirrored copies of their data being stored in multiple Disk Pools across multiple Storage Nodes. If the block device on which a Disk Pool is created is actually a logical unit backed by its own RAID implementation \(e.g. a Fibre Channel attached LUN from an external SAN\) it can still be used within a replicated disk pool whilst providing protection against physical disk device failures. +Replicated Storage pools do not implement any form of RAID, erasure coding or striping. If higher levels of data redundancy are required, replicated volumes can be provisioned with a replication factor of greater than one, which will result in synchronously mirrored copies of their data being stored in multiple Disk Pools across multiple Storage Nodes. If the block device on which a Disk Pool is created is actually a logical unit backed by its own RAID implementation \(e.g. a Fibre Channel attached LUN from an external SAN\) it can still be used within a replicated disk pool whilst providing protection against physical disk device failures. [Go to top](#top) diff --git a/docs/main/introduction-to-openebs/benefits.mdx b/docs/main/introduction-to-openebs/benefits.mdx index 89799cfd5..e85adad40 100644 --- a/docs/main/introduction-to-openebs/benefits.mdx +++ b/docs/main/introduction-to-openebs/benefits.mdx @@ -41,7 +41,7 @@ Some key aspects that make OpenEBS different compared to other traditional stora - Built using the micro-services architecture like the applications it serves. OpenEBS is itself deployed as a set of containers on Kubernetes worker nodes. Uses Kubernetes itself to orchestrate and manage OpenEBS components. - Built completely in userspace making it highly portable to run across any OS/platform. - Completely intent-driven, inheriting the same principles that drive the ease of use with Kubernetes. -- OpenEBS supports a range of storage engines so that developers can deploy the storage technology appropriate to their application design objectives. Distributed applications like Cassandra can use a local engine for lowest latency writes. Monolithic applications like MongoDB and PostgreSQL can use replicated storage for resilience. 
Streaming applications like Kafka can use the Replicated Storage for best performance in edge environments or, again, a Local Storage option. +- OpenEBS supports a range of storage engines so that developers can deploy the storage technology appropriate to their application design objectives. Distributed applications like Cassandra can use a Local Storage for lowest latency writes. Monolithic applications like MongoDB and PostgreSQL can use Replicated Storage for resilience. Streaming applications like Kafka can use the Replicated Storage for best performance in edge environments or, again, a Local Storage option. ### Avoid Cloud Lock-in @@ -122,5 +122,5 @@ Some key aspects that make OpenEBS different compared to other traditional stora - [Use Cases and Examples](use-cases-and-examples.mdx) - [OpenEBS Features](features.mdx) - [OpenEBS Architecture](../concepts/architecture.md) -- [OpenEBS Local Storage](../concepts/data-engines/local-engine.md) +- [OpenEBS Local Storage](../concepts/data-engines/local-storage.md) - [OpenEBS Replicated Storage](../concepts/data-engines/replicated-engine.md) diff --git a/docs/main/introduction-to-openebs/features.mdx b/docs/main/introduction-to-openebs/features.mdx index 9c3dd3022..9c7deac6a 100644 --- a/docs/main/introduction-to-openebs/features.mdx +++ b/docs/main/introduction-to-openebs/features.mdx @@ -43,7 +43,7 @@ OpenEBS Features, like any storage solution, can be broadly classified into foll

- Synchronous Replication is an optional and popular feature of OpenEBS. When used with the replicated engine, OpenEBS can synchronously replicate the data volumes for high availability. The replication happens across Kubernetes zones resulting in high availability for cross AZ setups. This feature is especially useful to build highly available stateful applications using local disks on cloud providers services such as GKE, EKS and AKS. + Synchronous Replication is an optional and popular feature of OpenEBS. When used with the Replicated Storage, OpenEBS can synchronously replicate the data volumes for high availability. The replication happens across Kubernetes zones, resulting in high availability for cross-AZ setups. This feature is especially useful for building highly available stateful applications using local disks on cloud provider services such as GKE, EKS, and AKS.
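As a rough sketch of how this is expressed, a StorageClass for Replicated Storage can request the desired number of synchronous replicas; the class name below is illustrative, and the parameters assume the Mayastor CSI provisioner that ships with OpenEBS:

```
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: mayastor-3-replicas          # illustrative name
provisioner: io.openebs.csi-mayastor
parameters:
  protocol: nvmf                     # volumes are exposed as NVMe-oF (TCP) targets
  repl: "3"                          # number of synchronous data copies to maintain
```

PVCs created against such a class get a volume whose writes are acknowledged only after all healthy replicas have persisted them.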

![Synchronous Replication Icon](../assets/f-replication.svg) @@ -54,7 +54,7 @@ OpenEBS Features, like any storage solution, can be broadly classified into foll

- Copy-on-write snapshots are another optional and popular feature of OpenEBS. When using the replicated engine, snapshots are created instantaneously and there is no limit on the number of snapshots. The incremental snapshot capability enhances data migration and portability across Kubernetes clusters and across different cloud providers or data centers. Operations on snapshots and clones are performed in completely Kubernetes native method using the standard kubectl commands. Common use cases include efficient replication for back-ups and the use of clones for troubleshooting or development against a read only copy of data. + Copy-on-write snapshots are another optional and popular feature of OpenEBS. When using the Replicated Storage, snapshots are created instantaneously. The incremental snapshot capability enhances data migration and portability across Kubernetes clusters and across different cloud providers or data centers. Operations on snapshots and clones are performed in a completely Kubernetes-native way using the standard kubectl commands. Common use cases include efficient replication for backups and the use of clones for troubleshooting or development against a read-only copy of data.
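For instance, taking a snapshot is just another Kubernetes object applied with kubectl; in this sketch the snapshot class and PVC names are placeholders for whatever is installed and provisioned in your cluster:

```
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: data-snapshot-1                                  # illustrative name
spec:
  volumeSnapshotClassName: csi-mayastor-snapshotclass    # assumed snapshot class name
  source:
    persistentVolumeClaimName: data-pvc                  # PVC whose data should be snapshotted
```

Applying this manifest with `kubectl apply -f` creates the snapshot, and `kubectl get volumesnapshot` reports when it is ready to use.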

![Snapshots and Clones Icon](../assets/f-snapshots.svg) @@ -88,5 +88,5 @@ OpenEBS Features, like any storage solution, can be broadly classified into foll - [Use Cases and Examples](use-cases-and-examples.mdx) - [OpenEBS Benefits](benefits.mdx) - [OpenEBS Architecture](../concepts/architecture.md) -- [OpenEBS Local Engine](../concepts/data-engines/local-engine.md) -- [OpenEBS Replicated Engine](../concepts/data-engines/replicated-engine.md) +- [OpenEBS Local Storage](../concepts/data-engines/local-storage.md) +- [OpenEBS Replicated Storage](../concepts/data-engines/replicated-engine.md) diff --git a/docs/main/introduction-to-openebs/introduction-to-openebs.md b/docs/main/introduction-to-openebs/introduction-to-openebs.md index 7022e1641..4841921be 100644 --- a/docs/main/introduction-to-openebs/introduction-to-openebs.md +++ b/docs/main/introduction-to-openebs/introduction-to-openebs.md @@ -5,12 +5,12 @@ slug: / keywords: - OpenEBS - OpenEBS overview -description: OpenEBS builds on Kubernetes to enable Stateful applications to easily access Dynamic Local or Distributed Container Attached Kubernetes Persistent Volumes. By using the Container Native Storage pattern users report lower costs, easier management, and more control for their teams. +description: OpenEBS builds on Kubernetes to enable Stateful applications to easily access Dynamic Local or Replicated Container Attached Kubernetes Persistent Volumes. By using the Container Native Storage pattern users report lower costs, easier management, and more control for their teams. --- ## What is OpenEBS? -OpenEBS turns any storage available to Kubernetes worker nodes into Local or Distributed Kubernetes Persistent Volumes. OpenEBS helps application and platform teams easily deploy Kubernetes stateful workloads that require fast and highly durable, reliable, and scalable [Container Native Storage](../concepts/container-native-storage.md). +OpenEBS turns any storage available to Kubernetes worker nodes into Local or Replicated Kubernetes Persistent Volumes. OpenEBS helps application and platform teams easily deploy Kubernetes stateful workloads that require fast and highly durable, reliable, and scalable [Container Native Storage](../concepts/container-native-storage.md). OpenEBS is also a leading choice for NVMe based storage deployments. @@ -28,25 +28,24 @@ The [OpenEBS Adoption stories](https://github.com/openebs/openebs/blob/master/AD ## What does OpenEBS do? -OpenEBS manages the storage available on each of the Kubernetes nodes and uses that storage to provide [Local](#local-volumes) or [Distributed(aka Replicated)](#replicated-volumes) Persistent Volumes to Stateful workloads. +OpenEBS manages the storage available on each of the Kubernetes nodes and uses that storage to provide [Local](#local-volumes) or [Replicated](#replicated-volumes) Persistent Volumes to Stateful workloads. ![data-engines-comparision](../assets/data-engines-comparision.svg) In case of [Local Volumes](#local-volumes): -- OpenEBS can create persistent volumes using raw block devices or partitions, or using sub-directories on Hostpaths or by using local engine or sparse files. +- OpenEBS can create persistent volumes using sub-directories on Hostpaths, locally attached storage or sparse files, or an existing LVM or ZFS stack. - The local volumes are directly mounted into the Stateful Pod, without any added overhead from OpenEBS in the data path, decreasing latency. 
-- OpenEBS provides additional tooling for local volumes for monitoring, backup/restore, disaster recovery, snapshots when backed by local engine, capacity based scheduling, and more. +- OpenEBS provides additional tooling for local volumes for monitoring, backup/restore, disaster recovery, snapshots when backed by LVM or ZFS stack, capacity based scheduling, and more. -In case of [Distributed (aka Replicated) Volumes](#replicated-volumes): +In case of [Replicated Volumes](#replicated-volumes): -- OpenEBS creates a Micro-service for each Distributed Persistent Volume using the replicated engine. -- The Stateful Pod writes the data to the OpenEBS engine that synchronously replicates the data to multiple nodes in the cluster. The OpenEBS engine itself is deployed as a pod and orchestrated by Kubernetes. When the node running the Stateful pod fails, the pod will be rescheduled to another node in the cluster and OpenEBS provides access to the data using the available data copies on other nodes. -- The Stateful Pods connect to the OpenEBS distributed persistent volume using the NVMeoF (replicated engine). -- OpenEBS replicated engine is developed with durability and performance as design goals. It efficiently manages the compute (hugepages and cores) and storage (NVMe Drives) to provide fast distributed block storage. +- OpenEBS Replicated Storage creates an NVMe target accessible over TCP, for each persistent volume. +- The Stateful Pod writes the data to the NVMe-TCP target that synchronously replicates the data to multiple nodes in the cluster. The OpenEBS engine itself is deployed as a pod and orchestrated by Kubernetes. When the node running the Stateful pod fails, the pod will be rescheduled to another node in the cluster and OpenEBS provides access to the data using the available data copies on other nodes. +- OpenEBS Replicated Storage is developed with durability and performance as design goals. It efficiently manages the compute (hugepages and cores) and storage (NVMe Drives) to provide fast block storage. :::note -OpenEBS contributors prefer to call the Distributed Block Storage volumes as **Replicated Volumes**, to avoid confusion with traditional distributed block storage for the following reasons: +OpenEBS contributors prefer to call the Distributed Block Storage volumes as **Replicated Volumes**, to avoid confusion with traditional block storage for the following reasons: * Distributed block storage tends to shard the data blocks of a volume across many nodes in the cluster. Replicated volumes persist all the data blocks of a volume on a node and for durability replicate the entire data to other nodes in the cluster. * While accessing a volume data, distributed block storage depends on metadata hashing algorithms to locate the node where the block resides, whereas replicated volumes can access the data from any of the nodes where data is persisted (aka replica nodes). * Replicated volumes have a lower blast radius compared to traditional distributed block storage. @@ -66,12 +65,12 @@ Replicated Volumes, as the name suggests, are those that have their data synchro Replicated Volumes also are capable of enterprise storage features like snapshots, clone, volume expansion and so forth. Replicated Volumes are a preferred choice for Stateful workloads like Percona/MongoDB, Jira, GitLab, etc. :::info -Depending on the type of storage attached to your Kubernetes worker nodes and the requirements of your workloads, you can select from local engine or replicated engine. 
+Depending on the type of storage attached to your Kubernetes worker nodes and the requirements of your workloads, you can select from Local Storage or Replicated Storage. ::: ## Quickstart Guides -Installing OpenEBS in your cluster is as simple as running a few `kubectl` or `helm` commands. Refer to our [Quickstart guide](../quickstart-guide/quickstart.md) for more information. +Installing OpenEBS in your cluster is as simple as running a few `kubectl` or `helm` commands. Refer to our [Quickstart guide](../quickstart-guide) for more information. ## Community Support via Slack @@ -79,11 +78,11 @@ OpenEBS has a vibrant community that can help you get started. If you have furth ## See Also -- [Quickstart](../quickstart-guide/quickstart.md) +- [Quickstart](../quickstart-guide) - [Installation](../quickstart-guide/installation.md) - [Deployment](../quickstart-guide/deploy-a-test-application.md) - [Use Cases and Examples](use-cases-and-examples.mdx) - [Container Native Storage (CNS)](../concepts/container-native-storage.md) - [OpenEBS Architecture](../concepts/architecture.md) -- [OpenEBS Local Engine](../concepts/data-engines/local-engine.md) -- [OpenEBS Replicated Engine](../concepts/data-engines/replicated-engine.md) +- [OpenEBS Local Storage](../concepts/data-engines/local-storage.md) +- [OpenEBS Replicated Storage](../concepts/data-engines/replicated-engine.md) diff --git a/docs/main/introduction-to-openebs/use-cases-and-examples.mdx b/docs/main/introduction-to-openebs/use-cases-and-examples.mdx index 7771141ee..18b049021 100644 --- a/docs/main/introduction-to-openebs/use-cases-and-examples.mdx +++ b/docs/main/introduction-to-openebs/use-cases-and-examples.mdx @@ -113,7 +113,7 @@ Examples: ### Self-managed Object Storage Service -Use OpenEBS and MinIO on Kubernetes to build cross AZ cloud native object storage solution. Kubernetes PVCs are used by MinIO to seamlessly scale MinIO nodes. OpenEBS provides easily scalable and manageable storage pools including local engine. Scalability of MinIO is directly complimented by OpenEBS's feature of cloud-native scalable architecture. +Use OpenEBS and MinIO on Kubernetes to build cross AZ cloud native object storage solution. Kubernetes PVCs are used by MinIO to seamlessly scale MinIO nodes. OpenEBS provides easily scalable and manageable storage pools including Local Storage. Scalability of MinIO is directly complimented by OpenEBS's feature of cloud-native scalable architecture. Examples: @@ -148,5 +148,5 @@ Examples: - [Use Cases and Examples](use-cases-and-examples.mdx) - [OpenEBS Benefits](benefits.mdx) - [OpenEBS Architecture](../concepts/architecture.md) -- [OpenEBS Local Engine](../concepts/data-engines/local-engine.md) -- [OpenEBS Replicated Engine](../concepts/data-engines/replicated-engine.md) +- [OpenEBS Local Storage](../concepts/data-engines/local-storage.md) +- [OpenEBS Replicated Storage](../concepts/data-engines/replicated-engine.md) diff --git a/docs/main/quickstart-guide/deploy-a-test-application.md b/docs/main/quickstart-guide/deploy-a-test-application.md index c96f1842a..f032fe9cc 100644 --- a/docs/main/quickstart-guide/deploy-a-test-application.md +++ b/docs/main/quickstart-guide/deploy-a-test-application.md @@ -9,9 +9,9 @@ description: This section will help you to deploy a test application. --- :::info -- See [Local PV LVM User Guide](../user-guides/local-engine-user-guide/lvm-localpv.md) to deploy Local PV LVM. -- See [Local PV ZFS User Guide](../user-guides/local-engine-user-guide/zfs-localpv.md) to deploy Local PV ZFS. 
-- See [Replicated Engine Deployment](../user-guides/replicated-engine-user-guide/replicated-engine-deployment.md) to deploy Replicated Engine (fka Mayastor). +- See [Local PV LVM User Guide](../user-guides/local-storage-user-guide/lvm-localpv.md) to deploy Local PV LVM. +- See [Local PV ZFS User Guide](../user-guides/local-storage-user-guide/zfs-localpv.md) to deploy Local PV ZFS. +- See [Replicated Storage Deployment](../user-guides/replicated-engine-user-guide/replicated-engine-deployment.md) to deploy Replicated Storage (a.k.a Replicated Engine and f.k.a Mayastor). ::: # Deploy an Application @@ -217,8 +217,8 @@ Once the workloads are up and running, the platform or the operations team can o ## See Also -- [Installation](../../quickstart-guide/installation.md) -- [Local PV Hostpath](../user-guides/local-engine-user-guide/localpv-hostpath.md) -- [Local PV LVM](../user-guides/local-engine-user-guide/lvm-localpv.md) -- [Local PV ZFS](../user-guides/local-engine-user-guide/zfs-localpv.md) -- [Replicated Engine](../user-guides/replicated-engine-user-guide/) \ No newline at end of file +- [Installation](installation.md) +- [Local PV Hostpath](../user-guides/local-storage-user-guide/localpv-hostpath.md) +- [Local PV LVM](../user-guides/local-storage-user-guide/lvm-localpv.md) +- [Local PV ZFS](../user-guides/local-storage-user-guide/zfs-localpv.md) +- [Replicated Storage](../user-guides/replicated-engine-user-guide/) \ No newline at end of file diff --git a/docs/main/quickstart-guide/installation.md b/docs/main/quickstart-guide/installation.md index c8d3da2d1..5a9babf90 100644 --- a/docs/main/quickstart-guide/installation.md +++ b/docs/main/quickstart-guide/installation.md @@ -19,9 +19,9 @@ The OpenEBS workflow fits nicely into the reconcilation pattern introduced by Ku ## Prerequisites -If this is your first time installing OpenEBS Local Engine, make sure that your Kubernetes nodes meet the [required prerequisites](../user-guides/local-engine-user-guide/prerequisites.mdx). +If this is your first time installing OpenEBS Local Storage (a.k.a Local Engines), make sure that your Kubernetes nodes meet the [required prerequisites](../user-guides/local-storage-user-guide). -For OpenEBS Replicated Engine, make sure that your Kubernetes nodes meet the [required prerequisites](../user-guides/replicated-engine-user-guide/prerequisites.md). +For OpenEBS Replicated Storage (a.k.a Replicated Engine and f.k.a Mayastor), make sure that your Kubernetes nodes meet the [required prerequisites](../user-guides/replicated-engine-user-guide/prerequisites.md). At a high level OpenEBS requires: @@ -54,7 +54,7 @@ OpenEBS provides several options that you can customize during install like: - specifying the directory where hostpath volume data is stored or - specifying the nodes on which OpenEBS components should be deployed and so forth. -The default OpenEBS helm chart will install both local engines and replicated engine. Refer to [OpenEBS helm chart documentation](https://github.com/openebs/charts/tree/master/charts/openebs) for full list of customizable options and using other flavors of OpenEBS data engines by setting the correct helm values. +The default OpenEBS helm chart will install both Local Storage and Replicated Storage. Refer to [OpenEBS helm chart documentation](https://github.com/openebs/charts/tree/master/charts/openebs) for full list of customizable options and using other flavors of OpenEBS data engines by setting the correct helm values. Install OpenEBS helm chart with default values. 
@@ -62,10 +62,10 @@ Install OpenEBS helm chart with default values. helm install openebs --namespace openebs openebs/openebs --create-namespace ``` -The above commands will install OpenEBS LocalPV Hostpath, OpenEBS LocalPV LVM, OpenEBS LocalPV ZFS, and OpenEBS Replicated Engine components in `openebs` namespace and chart name as `openebs`. +The above commands will install OpenEBS LocalPV Hostpath, OpenEBS LocalPV LVM, OpenEBS LocalPV ZFS, and OpenEBS Replicated Storage components in `openebs` namespace and chart name as `openebs`. :::note -If you do not want to install OpenEBS Replicated Engine, use the following command: +If you do not want to install OpenEBS Replicated Storage, use the following command: ``` helm install openebs --namespace openebs openebs/openebs --set mayastor.enabled=false --create-namespace @@ -140,7 +140,7 @@ openebs-zfs-localpv-node-svfgq 2/2 Running 0 1 openebs-zfs-localpv-node-wm9ks 2/2 Running 0 11m ``` -#### Installation with Replicated Engine Disabled +#### Installation with Replicated Storage Disabled List the pods in `` namespace @@ -183,11 +183,11 @@ openebs-single-replica io.openebs.csi-mayastor Delete Immediate ## Post-Installation Considerations -For testing your OpenEBS installation, you can use the `openebs-hostpath` mentioned in the [Local Engine User Guide](../user-guides/local-engine-user-guide/) for provisioning Local PV on hostpath. +For testing your OpenEBS installation, you can use the `openebs-hostpath` mentioned in the [Local Storage User Guide](../user-guides/local-storage-user-guide/) for provisioning Local PV on hostpath. You can follow through the below user guides for each of the engines to use storage devices available on the nodes instead of the `/var/openebs` directory to save the data. -- [Local Engine User Guide](../user-guides/local-engine-user-guide/) -- [Replicated Engine User Guide](../user-guides/replicated-engine-user-guide/) +- [Local Storage User Guide](../user-guides/local-storage-user-guide/) +- [Replicated Storage User Guide](../user-guides/replicated-engine-user-guide/) ## See Also diff --git a/docs/main/troubleshooting/troubleshooting-local-engine.md b/docs/main/troubleshooting/troubleshooting-local-storage.md similarity index 99% rename from docs/main/troubleshooting/troubleshooting-local-engine.md rename to docs/main/troubleshooting/troubleshooting-local-storage.md index b317e6d04..429e5491d 100644 --- a/docs/main/troubleshooting/troubleshooting-local-engine.md +++ b/docs/main/troubleshooting/troubleshooting-local-storage.md @@ -1,11 +1,11 @@ --- id: troubleshooting -title: Troubleshooting - Local Engine +title: Troubleshooting - Local Storage slug: /troubleshooting keywords: - OpenEBS - OpenEBS troubleshooting -description: This page contains a list of OpenEBS related troubleshooting which contains information like troubleshooting installation, troubleshooting uninstallation, and troubleshooting local engines. +description: This page contains a list of OpenEBS related troubleshooting which contains information like troubleshooting installation, troubleshooting uninstallation, and troubleshooting local storage. --- General Troubleshooting @@ -188,7 +188,7 @@ Error: release openebs failed: clusterroles.rbac.authorization.k8s.io "openebs" **Troubleshooting** -You must enable RBAC on Azure before OpenEBS installation. For more details, see [Prerequisites](../user-guides/local-engine-user-guide/prerequisites.mdx). +You must enable RBAC on Azure before OpenEBS installation. 
For more details, see [Prerequisites](../quickstart-guide/installation.md). ### A multipath.conf file claims all SCSI devices in OpenShift {#multipath-conf-claims-all-scsi-devices-openshift} diff --git a/docs/main/user-guides/data-migration/migration-using-pv-migrate.md b/docs/main/user-guides/data-migration/migration-using-pv-migrate.md index 9aaf995e6..5e4387000 100644 --- a/docs/main/user-guides/data-migration/migration-using-pv-migrate.md +++ b/docs/main/user-guides/data-migration/migration-using-pv-migrate.md @@ -22,8 +22,8 @@ This section describes the process of migrating the legacy storage to latest sto Data migration is the process of moving data from a source storage to a destination storage. In OpenEBS context, the users can migrate the data from legacy OpenEBS storage to the latest OpenEBS storage. There are different techniques/methodologies for performing data migration. Users can perform data migration within the same Kubernetes cluster or across Kubernetes clusters. The following guides outline several methodologies for migrating from legacy OpenEBS storage to latest OpenEBS storage: -- [Migration using pv-migrate Utility](#migration-using-pv-migrate) -- [Migration using velero Utility](../migration/migration-using-velero/) +- [Migration using pv-migrate](#migration-using-pv-migrate) +- [Migration using Velero](../../user-guides/data-migration/migration-using-velero/overview.md) :::info Users of non-OpenEBS storage solutions can also use these approaches described below to migrate their data to OpenEBS storage. @@ -52,7 +52,7 @@ The binary can be used as specified in the migrate flows. ## Migration from Local PV Device to Local PV LVM :::info -.The following example describes the steps to migrate data from legacy OpenEBS Local PV Device storage to OpenEBS Local PV LVM storage. Legacy OpenEBS Local PV ZFS storage users can also use the below steps to migrate to OpenEBS Local PV LVM storage. +The following example describes the steps to migrate data from legacy OpenEBS Local PV Device storage to OpenEBS Local PV LVM storage. Legacy OpenEBS Local PV ZFS storage users can also use the below steps to migrate to OpenEBS Local PV LVM storage. ::: ### Assumptions @@ -88,9 +88,9 @@ db.admin.insertMany([{name: "Max"}, {name:"Alex"}]) Follow the steps below to migrate OpenEBS Local PV Device to OpenEBS Local PV LVM. -1. [Install Local Engine](../../../quickstart-guide/installation.md) on your cluster. +1. [Install Local Storage](../../quickstart-guide/installation.md) on your cluster. -2. Create a LVM PVC of the same [configuration](../../../user-guides/local-engine-user-guide/lvm-localpv.md#configuration). +2. Create a LVM PVC of the same [configuration](../../user-guides/local-storage-user-guide/local-pv-lvm/lvm-configuration.md). :::info For the LVM volume to be created, the node (where the application was deployed) needs to be same as that of where Volume Group (VG) is created. @@ -190,7 +190,7 @@ The Local PV Device volumes and pools can now be removed and Local PV Device can ## Migration from cStor to Replicated :::info -The following example describes the steps to migrate data from legacy OpenEBS CStor storage to OpenEBS Replicated (f.k.a Mayastor) storage. Legacy OpenEBS Jiva storage users can also use the below steps to migrate to OpenEBS Replicated. +The following example describes the steps to migrate data from legacy OpenEBS CStor storage to OpenEBS Replicated Storage (a.k.a Replicated Engine and f.k.a Mayastor). 
Legacy OpenEBS Jiva storage users can also use the below steps to migrate to OpenEBS Replicated. ::: ### Assumptions @@ -226,9 +226,9 @@ db.admin.insertMany([{name: "Max"}, {name:"Alex"}]) Follow the steps below to migrate OpenEBS cStor to OpenEBS Replicated (fka Mayastor). -1. [Install Replicated Engine](../../../quickstart-guide/installation.md) on your cluster. +1. [Install Replicated Storage](../../quickstart-guide/installation.md) on your cluster. -2. Create a replicated PVC of the same [configuration](../../../user-guides/replicated-engine-user-guide/replicated-engine-deployment.md). See the example below: +2. Create a replicated PVC of the same [configuration](../../user-guides/replicated-engine-user-guide/replicated-engine-deployment.md). See the example below: ``` apiVersion: v1 diff --git a/docs/main/user-guides/data-migration/migration-using-velero/migration-for-distributed-db/distributeddb-restore.md b/docs/main/user-guides/data-migration/migration-using-velero/migration-for-distributed-db/distributeddb-restore.md index 1d24cfbf0..94cd13ca5 100644 --- a/docs/main/user-guides/data-migration/migration-using-velero/migration-for-distributed-db/distributeddb-restore.md +++ b/docs/main/user-guides/data-migration/migration-using-velero/migration-for-distributed-db/distributeddb-restore.md @@ -6,9 +6,9 @@ keywords: - Restoring to Replicated Storage description: This section explains how to Restore from cStor Backup to Replicated Storage for Distributed DBs. --- -# Steps to Restore from cStor Backup to Replicated Storage for Distributed DBs (Cassandra) +## Steps to Restore from cStor Backup to Replicated Storage (a.k.a Replicated Engine and f.k.a Mayastor) for Distributed DBs (Cassandra) -Cassandra is a popular NoSQL database used for handling large amounts of data with high availability and scalability. In Kubernetes environments, managing and restoring Cassandra backups efficiently is crucial. In this article, we will walk you through the process of restoring a Cassandra database in a Kubernetes cluster using Velero, and we will change the storage class to Replicated Storage (f.k.a Mayastor) for improved performance. +Cassandra is a popular NoSQL database used for handling large amounts of data with high availability and scalability. In Kubernetes environments, managing and restoring Cassandra backups efficiently is crucial. In this article, we will walk you through the process of restoring a Cassandra database in a Kubernetes cluster using Velero, and we will change the storage class to Replicated Storage for improved performance. :::info Before you begin, make sure you have the following: @@ -17,7 +17,7 @@ Before you begin, make sure you have the following: - Replicated Storage configured in your Kubernetes environment. ::: -## Step 1: Set Up Kubernetes Credentials and Install Velero +### Step 1: Set Up Kubernetes Credentials and Install Velero Set up your Kubernetes cluster credentials for the target cluster where you want to restore your Cassandra database. Use the same values for the BUCKET-NAME and SECRET-FILENAME placeholders that you used during the initial Velero installation. This ensures that Velero has the correct credentials to access the previously saved backups. 
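For example, if the target cluster is already registered in your kubeconfig, pointing kubectl (and therefore Velero) at it can be as simple as the following; the context name is an illustrative placeholder:

```
kubectl config use-context target-cluster   # illustrative context name
kubectl get nodes                           # sanity-check that this is the intended cluster
```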
@@ -33,7 +33,7 @@ Install Velero with the necessary plugins, specifying your backup bucket, secret velero get backup | grep YOUR_BACKUP_NAME ``` -## Step 2: Verify Backup Availability and Check BackupStorageLocation Status +### Step 2: Verify Backup Availability and Check BackupStorageLocation Status Confirm that your Cassandra backup is available in Velero. This step ensures that there are no credentials or bucket mismatches: @@ -47,7 +47,7 @@ Check the status of the BackupStorageLocation to ensure it is available: kubectl get backupstoragelocation -n velero ``` -## Step 3: Create a Restore Request +### Step 3: Create a Restore Request Create a Velero restore request for your Cassandra backup: @@ -55,7 +55,7 @@ Create a Velero restore request for your Cassandra backup: velero restore create RESTORE_NAME --from-backup YOUR_BACKUP_NAME ``` -## Step 4: Monitor Restore Progress +### Step 4: Monitor Restore Progress Monitor the progress of the restore operation using the below commands: @@ -81,7 +81,7 @@ kubectl get pvc -n cassandra kubectl get pods -n cassandra ``` -## Step 5: Back Up PVC YAML +### Step 5: Back Up PVC YAML Create a backup of the Persistent Volume Claims (PVCs) and then modify their storage class to `mayastor-single-replica`. @@ -167,7 +167,7 @@ metadata: resourceVersion: "" ``` -## Step 6: Delete and Recreate PVCs +### Step 6: Delete and Recreate PVCs Delete the pending PVCs and apply the modified PVC YAML to recreate them with the new storage class: @@ -179,7 +179,7 @@ kubectl delete pvc PVC_NAMES -n cassandra kubectl apply -f cassandra_pvc.yaml -n cassandra ``` -## Step 7: Observe Velero Init Container and Confirm Restore +### Step 7: Observe Velero Init Container and Confirm Restore Observe the Velero init container as it restores the volumes for the Cassandra pods. This process ensures that your data is correctly recovered. @@ -214,9 +214,9 @@ Run this command to check if all the pods are running: kubectl get pods -n cassandra ``` -## Step 8: Verify Cassandra Data and StatefulSet +### Step 8: Verify Cassandra Data and StatefulSet -### Access a Cassandra Pod using cqlsh and Check the Data +#### Access a Cassandra Pod using cqlsh and Check the Data - You can use the following command to access the Cassandra pods. This command establishes a connection to the Cassandra database running on pod `cassandra-1`: @@ -243,7 +243,7 @@ cassandra@cqlsh:openebs> select * from openebs.data; - After verifying the data, you can exit the Cassandra shell by typing `exit`. 
-### Modify your Cassandra StatefulSet YAML to use the Replicated Storage-Single-Replica Storage Class +#### Modify your Cassandra StatefulSet YAML to use the Replicated Storage-Single-Replica Storage Class - Before making changes to the Cassandra StatefulSet YAML, create a backup to preserve the existing configuration by running the following command: @@ -307,7 +307,7 @@ spec: kubectl apply -f cassandra_sts_modified.yaml ``` -### Delete the Cassandra StatefulSet with the --cascade=orphan Flag +#### Delete the Cassandra StatefulSet with the --cascade=orphan Flag Delete the Cassandra StatefulSet while keeping the pods running without controller management: @@ -315,7 +315,7 @@ Delete the Cassandra StatefulSet while keeping the pods running without controll kubectl delete sts cassandra -n cassandra --cascade=orphan ``` -### Recreate the Cassandra StatefulSet using the Updated YAML +#### Recreate the Cassandra StatefulSet using the Updated YAML - Use the kubectl apply command to apply the modified StatefulSet YAML configuration file, ensuring you are in the correct namespace where your Cassandra deployment resides. Replace with the actual path to your YAML file. diff --git a/docs/main/user-guides/local-engine-user-guide/additional-information/alphafeatures.md b/docs/main/user-guides/local-storage-user-guide/additional-information/alphafeatures.md similarity index 96% rename from docs/main/user-guides/local-engine-user-guide/additional-information/alphafeatures.md rename to docs/main/user-guides/local-storage-user-guide/additional-information/alphafeatures.md index 1ff0547e5..4fd4e858f 100644 --- a/docs/main/user-guides/local-engine-user-guide/additional-information/alphafeatures.md +++ b/docs/main/user-guides/local-storage-user-guide/additional-information/alphafeatures.md @@ -40,7 +40,7 @@ The Data populator can be used to load seed data into a Kubernetes persistent vo ### Use Cases -1. Decommissioning of a node in the cluster: In scenarios where a Kubernetes node needs to be decommissioned whether for upgrade or maintenance, a data populator can be used to migrate the data saved in the local storage of the node, that has to be decommissioned. +1. Decommissioning of a node in the cluster: In scenarios where a Kubernetes node needs to be decommissioned whether for upgrade or maintenance, a data populator can be used to migrate the data saved in the Local Storage (a.k.a Local Engine) of the node, that has to be decommissioned. 2. Loading seed data to Kubernetes volumes: Data populator can be used to scale applications without using read-write many operation. The application can be pre-populated with the static content available in an existing PV. To get more details about Data Populator, see [here](https://github.com/openebs/data-populator#data-populator). diff --git a/docs/main/user-guides/local-engine-user-guide/additional-information/backupandrestore.md b/docs/main/user-guides/local-storage-user-guide/additional-information/backupandrestore.md similarity index 95% rename from docs/main/user-guides/local-engine-user-guide/additional-information/backupandrestore.md rename to docs/main/user-guides/local-storage-user-guide/additional-information/backupandrestore.md index 3baece4ab..a4589d191 100644 --- a/docs/main/user-guides/local-engine-user-guide/additional-information/backupandrestore.md +++ b/docs/main/user-guides/local-storage-user-guide/additional-information/backupandrestore.md @@ -8,9 +8,9 @@ keywords: description: This section explains how to backup and restore local engines. 
--- -## Backup and Restore +# Backup and Restore -OpenEBS Local Volumes can be backed up and restored along with the application using [Velero](https://velero.io). +OpenEBS Local Storage (a.k.a Local Engines or Local Volumes) can be backed up and restored along with the application using [Velero](https://velero.io). :::note The following steps assume that you already have Velero with Restic integration is configured. If not, follow the [Velero Documentation](https://velero.io/docs/) to proceed with install and setup of Velero. If you encounter any issues or have questions, talk to us on the [#openebs channel on the Kubernetes Slack server](https://kubernetes.slack.com/messages/openebs/). diff --git a/docs/main/user-guides/local-engine-user-guide/additional-information/k8supgrades.md b/docs/main/user-guides/local-storage-user-guide/additional-information/k8supgrades.md similarity index 100% rename from docs/main/user-guides/local-engine-user-guide/additional-information/k8supgrades.md rename to docs/main/user-guides/local-storage-user-guide/additional-information/k8supgrades.md diff --git a/docs/main/user-guides/local-engine-user-guide/additional-information/kb.md b/docs/main/user-guides/local-storage-user-guide/additional-information/kb.md similarity index 66% rename from docs/main/user-guides/local-engine-user-guide/additional-information/kb.md rename to docs/main/user-guides/local-storage-user-guide/additional-information/kb.md index 9817e767d..e70b0cb7a 100644 --- a/docs/main/user-guides/local-engine-user-guide/additional-information/kb.md +++ b/docs/main/user-guides/local-storage-user-guide/additional-information/kb.md @@ -123,227 +123,6 @@ There are some cases where it had to delete the StatefulSet and re-install a new [Go to top](#top) -### How to install OpenEBS in OpenShift 4.x {#openshift-install} - -#### Tested versions - -OpenEBS has been tested in the following configurations; - -| OpenShift Version | OS | Status | -|-------------------|----------------------------------------------|--------| -| 4.2 | [RHEL7.6](../prerequisites.mdx), CoreOS 4.2 | Tested | -| 3.10 | [RHEL7.6](../prerequisites.mdx), CoreOS 4.2 | Tested | - -#### Notes on security - -**Note:** Earlier documentation for installing OpenEBS on OpenShift required disabling SELinux. This is no longer necessary - SELinux does not need to be disabled now. - -**Note:** However, the OpenEBS operator, and some projects that use OpenEBS -volumes do require privileged Security Context Constraints. This is described below. - -#### Installation option: via the OperatorHub - -The easiest way to install OpenEBS is by using the operator in the OperatorHub; - -![OpenShift in OperatorHub](../../../assets/openshift-operatorhub.png) - -This guide recommends installing the operator into an empty `openebs` -namespace. - -![OpenShift in OperatorHub](../../../assets/openshift-operator-installnamespace.png) - -#### Installation option: via "manual" install - -1. Find the latest OpenEBS release version from [here](/introduction/releases) and download the latest OpenEBS operator YAML in your master node. The latest openebs-operator YAML file can be downloaded using the following way. - - ``` - wget https://openebs.github.io/charts/openebs-operator-1.2.0.yaml - ``` - -2. Apply the modified the YAML using the following command. 
- - ``` - kubectl apply -f openebs-operator-1.2.0.yaml - ``` - -#### Adding `privileged` SCC to the `openebs-maya-operator` service account - -The examples below assume you have installed OpenEBS in the `OpenEBS` project. -If you have used another namespace, change `-n` accordingly. - - -Add the `privileged` SecurityContextConstraint (SCC) to the OpenEBS service account; - - ``` - oc adm policy add-scc-to-user privileged -z openebs-maya-operator -n openebs - ``` - -Example output: - - ```shell hideCopy - securitycontextconstraints.security.openshift.io/privileged added to: ["system:serviceaccount:openebs:openebs-maya-operator"] - ``` - -#### Quickly verifying the installation - -Verify OpenEBS pod status by using `kubectl get pods -n openebs`, all pods -should be "Running" after a few minutes. If pods are not running after a few -minutes, start debugging with `oc get events` and viewing these container logs. -``` -NAME READY STATUS RESTARTS AGE -maya-apiserver-594699887-4x6bj 1/1 Running 0 60m -openebs-admission-server-544d8fb47b-lxd52 1/1 Running 0 60m -openebs-localpv-provisioner-59f96b699-dpf8l 1/1 Running 0 60m -openebs-ndm-4v6kj 1/1 Running 0 60m -openebs-ndm-8g226 1/1 Running 0 60m -openebs-ndm-kkpk7 1/1 Running 0 60m -openebs-ndm-operator-74d9c78cdc-lbtqt 1/1 Running 0 60m -openebs-provisioner-5dfd95987b-nhwb9 1/1 Running 0 60m -openebs-snapshot-operator-5d58bd848b-94nnt 2/2 Running 0 60m -``` -If you are seeing errors with `hostNetwork` or similar, this is likely because -the serviceAccount for that container has not been added to the `privileged` SCC. - -**Next Steps:** - -* You may want to fully [verifying the OpenEBS installation](/user-guides/installation#verifying-openebs-installation) in more detail. -* After verification, you probably want to [select a CAS - Engine](/concepts/casengines). - -#### Adding `privileged` SCC to projects that use OpenEBS volumes - -When you create a PVC using a StorageClass for OpenEBS, a `ctrl` and `rep` -deployment will be created for that PVC. The `rep` containers also need to be -privileged. - -Switch to a project that is using OpenEVS PVs; - -``` -oc project myproject -``` - -To stop the whole project running as privileged, you could create a new serviceAccount for the project, and only run the Deployment/pvc-...-rep using that service account. - -However, an easy (and lazy, insecure) workaround is change this project's -`default` ServiceAccount to be privileged. - -``` -oc adm policy add-scc-to-user privileged -z default -n myproject -``` - -**Note:** OpenShift automatically creates a project for every namespace, and a `default` ServiceAccount for every project. - -Once these permissions have been granted, you can provision persistent volumes using OpenEBS. See [CAS Engines](/concepts/casengines) for more details. - -[Go to top](#top) - -### How to enable Admission-Controller in OpenShift 3.10 and above {#enable-admission-controller-in-openshift} - -The following procedure will help to enable admission-controller in OpenShift 3.10 and above. - -1. Update the `/etc/origin/master/master-config.yaml` file with below configuration. - - ``` - admissionConfig: - pluginConfig: - ValidatingAdmissionWebhook: - configuration: - kind: DefaultAdmissionConfig - apiVersion: v1 - disable: false - MutatingAdmissionWebhook: - configuration: - kind: DefaultAdmissionConfig - apiVersion: v1 - disable: false - ``` - -2. Restart the API and controller services using the following commands. 
- - ``` - # master-restart api - # master-restart controllers - ``` - -[Go to top](#top) - -### How to setup default PodSecurityPolicy to allow the OpenEBS pods to work with all permissions? - - -Apply the following YAML in your cluster. - -- Create a Privileged PSP - - ``` - apiVersion: extensions/v1beta1 - kind: PodSecurityPolicy - metadata: - name: privileged - annotations: - seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*' - spec: - privileged: true - allowPrivilegeEscalation: true - allowedCapabilities: - - '*' - volumes: - - '*' - hostNetwork: true - hostPorts: - - min: 0 - max: 65535 - hostIPC: true - hostPID: true - runAsUser: - rule: 'RunAsAny' - seLinux: - rule: 'RunAsAny' - supplementalGroups: - rule: 'RunAsAny' - fsGroup: - rule: 'RunAsAny' - ``` - -- Associate the above PSP to a ClusterRole - - ``` - kind: ClusterRole - apiVersion: rbac.authorization.k8s.io/v1 - metadata: - name: privilegedpsp - rules: - - apiGroups: ['extensions'] - resources: ['podsecuritypolicies'] - verbs: ['use'] - resourceNames: - - privileged - ``` - -- Associate the above Privileged ClusterRole to OpenEBS Service Account - - ``` - apiVersion: rbac.authorization.k8s.io/v1 - kind: ClusterRoleBinding - metadata: - annotations: - rbac.authorization.kubernetes.io/autoupdate: "true" - name: openebspsp - roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: privilegedpsp - subjects: - - kind: ServiceAccount - name: openebs-maya-operator - namespace: openebs - ``` - -- Proceed to install the OpenEBS. Note that the namespace and service account name used by the OpenEBS should match what is provided in the above ClusterRoleBinding. - -[Go to top](#top) - - - ### How to prevent container logs from exhausting disk space? {#enable-log-rotation-on-cluster-nodes} Container logs, if left unchecked, can eat into the underlying disk space causing `disk-pressure` conditions diff --git a/docs/main/user-guides/local-engine-user-guide/local-pv-hostpath/hostpath-configuration.md b/docs/main/user-guides/local-storage-user-guide/local-pv-hostpath/hostpath-configuration.md similarity index 95% rename from docs/main/user-guides/local-engine-user-guide/local-pv-hostpath/hostpath-configuration.md rename to docs/main/user-guides/local-storage-user-guide/local-pv-hostpath/hostpath-configuration.md index cf5b55b3b..ad8c869fd 100644 --- a/docs/main/user-guides/local-engine-user-guide/local-pv-hostpath/hostpath-configuration.md +++ b/docs/main/user-guides/local-storage-user-guide/local-pv-hostpath/hostpath-configuration.md @@ -83,5 +83,5 @@ If you encounter issues or have a question, file an [Github issue](https://githu ## See Also -- [Installation](../../quickstart-guide/installation.md) -- [Deploy an Application](../../quickstart-guide/deploy-a-test-application.md) \ No newline at end of file +- [Installation](../../../quickstart-guide/installation.md) +- [Deploy an Application](../../../quickstart-guide/deploy-a-test-application.md) \ No newline at end of file diff --git a/docs/main/user-guides/local-engine-user-guide/local-pv-hostpath/hostpath-deployment.md b/docs/main/user-guides/local-storage-user-guide/local-pv-hostpath/hostpath-deployment.md similarity index 81% rename from docs/main/user-guides/local-engine-user-guide/local-pv-hostpath/hostpath-deployment.md rename to docs/main/user-guides/local-storage-user-guide/local-pv-hostpath/hostpath-deployment.md index 63df4877d..8bc25c1e2 100644 --- a/docs/main/user-guides/local-engine-user-guide/local-pv-hostpath/hostpath-deployment.md +++ 
b/docs/main/user-guides/local-storage-user-guide/local-pv-hostpath/hostpath-deployment.md @@ -10,7 +10,7 @@ description: This section explains the instructions to deploy an application for This section explains the instructions to deploy an application for the OpenEBS Local Persistent Volumes (PV) backed by Hostpath. -For deployment instructions, see [here](../../quickstart-guide/deploy-a-test-application.md). +For deployment instructions, see [here](../../../quickstart-guide/deploy-a-test-application.md). ## Cleanup @@ -33,5 +33,5 @@ If you encounter issues or have a question, file an [Github issue](https://githu ## See Also -- [Installation](../../quickstart-guide/installation.md) -- [Deploy an Application](../../quickstart-guide/deploy-a-test-application.md) \ No newline at end of file +- [Installation](hostpath-installation.md) +- [Configuration](hostpath-configuration.md) \ No newline at end of file diff --git a/docs/main/user-guides/local-engine-user-guide/local-pv-hostpath/hostpath-installation.md b/docs/main/user-guides/local-storage-user-guide/local-pv-hostpath/hostpath-installation.md similarity index 95% rename from docs/main/user-guides/local-engine-user-guide/local-pv-hostpath/hostpath-installation.md rename to docs/main/user-guides/local-storage-user-guide/local-pv-hostpath/hostpath-installation.md index 2c4c71788..96458ddd1 100644 --- a/docs/main/user-guides/local-engine-user-guide/local-pv-hostpath/hostpath-installation.md +++ b/docs/main/user-guides/local-storage-user-guide/local-pv-hostpath/hostpath-installation.md @@ -57,5 +57,5 @@ If you encounter issues or have a question, file an [Github issue](https://githu ## See Also -- [Installation](../../quickstart-guide/installation.md) -- [Deploy an Application](../../quickstart-guide/deploy-a-test-application.md) \ No newline at end of file +- [Installation](../../../quickstart-guide/installation.md) +- [Deploy an Application](../../../quickstart-guide/deploy-a-test-application.md) \ No newline at end of file diff --git a/docs/main/user-guides/local-engine-user-guide/local-pv-lvm/advanced-operations/lvm-fs-group.md b/docs/main/user-guides/local-storage-user-guide/local-pv-lvm/advanced-operations/lvm-fs-group.md similarity index 100% rename from docs/main/user-guides/local-engine-user-guide/local-pv-lvm/advanced-operations/lvm-fs-group.md rename to docs/main/user-guides/local-storage-user-guide/local-pv-lvm/advanced-operations/lvm-fs-group.md diff --git a/docs/main/user-guides/local-engine-user-guide/local-pv-lvm/advanced-operations/lvm-raw-block-volume.md b/docs/main/user-guides/local-storage-user-guide/local-pv-lvm/advanced-operations/lvm-raw-block-volume.md similarity index 100% rename from docs/main/user-guides/local-engine-user-guide/local-pv-lvm/advanced-operations/lvm-raw-block-volume.md rename to docs/main/user-guides/local-storage-user-guide/local-pv-lvm/advanced-operations/lvm-raw-block-volume.md diff --git a/docs/main/user-guides/local-engine-user-guide/local-pv-lvm/advanced-operations/lvm-resize.md b/docs/main/user-guides/local-storage-user-guide/local-pv-lvm/advanced-operations/lvm-resize.md similarity index 100% rename from docs/main/user-guides/local-engine-user-guide/local-pv-lvm/advanced-operations/lvm-resize.md rename to docs/main/user-guides/local-storage-user-guide/local-pv-lvm/advanced-operations/lvm-resize.md diff --git a/docs/main/user-guides/local-engine-user-guide/local-pv-lvm/advanced-operations/lvm-snapshot.md b/docs/main/user-guides/local-storage-user-guide/local-pv-lvm/advanced-operations/lvm-snapshot.md
similarity index 100% rename from docs/main/user-guides/local-engine-user-guide/local-pv-lvm/advanced-operations/lvm-snapshot.md rename to docs/main/user-guides/local-storage-user-guide/local-pv-lvm/advanced-operations/lvm-snapshot.md diff --git a/docs/main/user-guides/local-engine-user-guide/local-pv-lvm/advanced-operations/lvm-thin-provisioning.md b/docs/main/user-guides/local-storage-user-guide/local-pv-lvm/advanced-operations/lvm-thin-provisioning.md similarity index 100% rename from docs/main/user-guides/local-engine-user-guide/local-pv-lvm/advanced-operations/lvm-thin-provisioning.md rename to docs/main/user-guides/local-storage-user-guide/local-pv-lvm/advanced-operations/lvm-thin-provisioning.md diff --git a/docs/main/user-guides/local-engine-user-guide/local-pv-lvm/lvm-configuration.md b/docs/main/user-guides/local-storage-user-guide/local-pv-lvm/lvm-configuration.md similarity index 99% rename from docs/main/user-guides/local-engine-user-guide/local-pv-lvm/lvm-configuration.md rename to docs/main/user-guides/local-storage-user-guide/local-pv-lvm/lvm-configuration.md index 6d61d0e73..4fb4851b3 100644 --- a/docs/main/user-guides/local-engine-user-guide/local-pv-lvm/lvm-configuration.md +++ b/docs/main/user-guides/local-storage-user-guide/local-pv-lvm/lvm-configuration.md @@ -10,7 +10,7 @@ keywords: description: This section explains the configuration requirements to set up OpenEBS Local Persistent Volumes (PV) backed by the LVM Storage. --- -## Configuration +# Configuration This section will help you to configure Local PV LVM. @@ -493,5 +493,5 @@ If you encounter issues or have a question, file an [Github issue](https://githu ## See Also -- [Installation](../../quickstart-guide/installation.md) -- [Deploy an Application](../../quickstart-guide/deploy-a-test-application.md) +- [Installation](../../../quickstart-guide/installation.md) +- [Deploy an Application](../../../quickstart-guide/deploy-a-test-application.md) diff --git a/docs/main/user-guides/local-engine-user-guide/local-pv-lvm/lvm-deployment.md b/docs/main/user-guides/local-storage-user-guide/local-pv-lvm/lvm-deployment.md similarity index 98% rename from docs/main/user-guides/local-engine-user-guide/local-pv-lvm/lvm-deployment.md rename to docs/main/user-guides/local-storage-user-guide/local-pv-lvm/lvm-deployment.md index 0c80502e1..b1f298709 100644 --- a/docs/main/user-guides/local-engine-user-guide/local-pv-lvm/lvm-deployment.md +++ b/docs/main/user-guides/local-storage-user-guide/local-pv-lvm/lvm-deployment.md @@ -238,5 +238,5 @@ If you encounter issues or have a question, file an [Github issue](https://githu ## See Also -- [Installation](../../quickstart-guide/installation.md) -- [Deploy an Application](../../quickstart-guide/deploy-a-test-application.md) \ No newline at end of file +- [Installation](../../../quickstart-guide/installation.md) +- [Deploy an Application](../../../quickstart-guide/deploy-a-test-application.md) \ No newline at end of file diff --git a/docs/main/user-guides/local-engine-user-guide/local-pv-lvm/lvm-installation.md b/docs/main/user-guides/local-storage-user-guide/local-pv-lvm/lvm-installation.md similarity index 100% rename from docs/main/user-guides/local-engine-user-guide/local-pv-lvm/lvm-installation.md rename to docs/main/user-guides/local-storage-user-guide/local-pv-lvm/lvm-installation.md diff --git a/docs/main/user-guides/local-engine-user-guide/local-pv-zfs/advanced-operations/zfs-backup-restore.md 
b/docs/main/user-guides/local-storage-user-guide/local-pv-zfs/advanced-operations/zfs-backup-restore.md similarity index 97% rename from docs/main/user-guides/local-engine-user-guide/local-pv-zfs/advanced-operations/zfs-backup-restore.md rename to docs/main/user-guides/local-storage-user-guide/local-pv-zfs/advanced-operations/zfs-backup-restore.md index f18f74e34..9717c0e31 100644 --- a/docs/main/user-guides/local-engine-user-guide/local-pv-zfs/advanced-operations/zfs-backup-restore.md +++ b/docs/main/user-guides/local-storage-user-guide/local-pv-zfs/advanced-operations/zfs-backup-restore.md @@ -10,9 +10,9 @@ keywords: description: This section talks about the advanced operations that can be performed in the OpenEBS Local Persistent Volumes (PV) backed by the ZFS Storage. --- -# Backup and Restore for Local PV ZFS Volumes +## Backup and Restore for Local PV ZFS Volumes -## Prerequisites +### Prerequisites We should have installed the ZFS-LocalPV 1.0.0 or later version for the Backup and Restore, see [readme](../README.md) for the steps to install the ZFS-LocalPV driver. @@ -27,13 +27,13 @@ We should have installed the ZFS-LocalPV 1.0.0 or later version for the Backup a - With velero version v1.5.2 and v1.5.3, there is an [issue](https://github.com/vmware-tanzu/velero/issues/3470) where PVs are not getting cleaned up for restored volume. ::: -## Setup +### Setup -### a. Install Velero Binary +#### a. Install Velero Binary Follow the steps mentioned [here](https://velero.io/docs/v1.5/basic-install/) to install velero CLI. -### b. Install Velero +#### b. Install Velero Setup the credential file. @@ -60,7 +60,7 @@ velero install --provider aws --bucket --secret-file <./aws-iam-cr Install the velero 1.5 or later version for ZFS-LocalPV. -### c. Deploy MinIO +#### c. Deploy MinIO Deploy the minIO to store the backup: @@ -83,7 +83,7 @@ restic-k7k4s 1/1 Running 0 69s velero-7d9c448bc5-j424s 1/1 Running 3 69s ``` -### d. Setup ZFS-LocalPV Plugin +#### d. Setup ZFS-LocalPV Plugin Install the Velero Plugin for Local PV ZFS using the command below: @@ -93,11 +93,11 @@ velero plugin add openebs/velero-plugin:2.2.0 Install the velero-plugin 2.2.0 or later version which has the support for ZFS-LocalPV. Once the setup is done, create the backup/restore. -## Create Backup +### Create Backup Three kinds of backups for Local PV ZFS can be created. Let us go through them one by one: -### 1. Create the *Full* Backup +#### 1. Create the *Full* Backup To take the full backup, create the Volume Snapshot Location as below: @@ -141,7 +141,7 @@ my-backup InProgress 2020-09-14 21:09:06 +0530 IST 29d default Once Status is `Completed`, the backup has been taken successfully. -### 2. Create the scheduled *Full* Backup +#### 2. Create the scheduled *Full* Backup To create the scheduled full backup, we can create the Volume Snapshot Location same as above to create the full backup: @@ -188,7 +188,7 @@ schd-20201012122706 InProgress 2020-10-12 17:57:06 +0530 IST 29d de The scheduled backup will have `-` format. Once Status is `Completed`, the backup has been taken successfully and then velero will take the next backup after 5 min and periodically keep doing that. -### 3. Create the scheduled *Incremental* Backup +#### 3. Create the scheduled *Incremental* Backup Incremental backup works for scheduled backup only. 
We can create the VolumeSnapshotLocation as below to create the incremental backup schedule :- @@ -244,7 +244,7 @@ schd-20201012132516 Completed 2020-10-12 18:55:18 +0530 IST 29d defa schd-20201012132115 Completed 2020-10-12 18:51:15 +0530 IST 29d default ``` -#### Explanation: +##### Explanation: Since we have used incrBackupCount as 3 in the volume snapshot location and created the backup. So first backup will be full backup and next 3 backup will be incremental @@ -268,7 +268,7 @@ It will stop at 3rd as we want to restore till schd-20201012133010. For us, it w Suppose we want to restore schd-20201012134010(5th backup), the plugin will restore schd-20201012134010 only as it is full backup and we want to restore till that point only. -## Restore +### Restore We can restore the backup using below command, we can provide the namespace mapping if we want to restore in different namespace. If namespace mapping is not provided, then it will restore in the source namespace in which the backup was present. @@ -285,7 +285,7 @@ my-backup-20200914211331 my-backup InProgress 0 0 2020-09- Once the Status is `Completed` we can check the pods in the destination namespace and verify that everything is up and running. We can also verify the data has been restored. -### Restore on a Different Node +#### Restore on a Different Node We have the node affinity set on the PV and the ZFSVolume object has the original node name as the owner of the Volume. While doing the restore if original node is not present, the Pod will not come into running state. We can use velero [RestoreItemAction](https://velero.io/docs/v1.5/restore-reference/#changing-pvc-selected-node) for this and create a config map which will have the node mapping like below: @@ -317,13 +317,13 @@ data: While doing the restore the ZFS-LocalPV plugin will set the affinity on the PV as per the node mapping provided in the config map. Here in the above case the PV created on nodes `pawan-old-node1` and `pawan-old-node2` will be moved to `pawan-new-node1` and `pawan-new-node2` respectively. -## Things to Consider: +### Things to Consider - Once VolumeSnapshotLocation has been created, we should never modify it, we should always create a new VolumeSnapshotLocation and use that. If we want to modify it, we should cleanup old backups/schedule first and then modify it and then create the backup/schedule. Also we should not switch the volumesnapshot location for the given scheduled backup, we should always create a new schedule if backups for the old schedule is present. - For the incremental backup, the higher the value of `incrBackupCount` the more time it will take to restore the volumes. So, we should not have very high number of incremental backup. -## UnInstall Velero +### Uninstall Velero We can delete the velero installation by using this command @@ -332,6 +332,6 @@ $ kubectl delete namespace/velero clusterrolebinding/velero $ kubectl delete crds -l component=velero ``` -## Reference +### Reference Check the [velero doc](https://velero.io/docs/) to find all the supported commands and options for the backup and restore. 
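As a quick recap of the restore flow described above, and assuming the `my-backup` backup created earlier in this guide, a restore with an optional namespace mapping can be run as follows. The namespace names here are placeholders, not fixed values:

```
# Restore the backup "my-backup"; map the source namespace to a different
# destination namespace only if you want to restore somewhere else.
velero restore create --from-backup my-backup --namespace-mappings source-ns:destination-ns

# Watch the restore status until it reports Completed.
velero restore get
```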
\ No newline at end of file diff --git a/docs/main/user-guides/local-engine-user-guide/local-pv-zfs/advanced-operations/zfs-clone.md b/docs/main/user-guides/local-storage-user-guide/local-pv-zfs/advanced-operations/zfs-clone.md similarity index 100% rename from docs/main/user-guides/local-engine-user-guide/local-pv-zfs/advanced-operations/zfs-clone.md rename to docs/main/user-guides/local-storage-user-guide/local-pv-zfs/advanced-operations/zfs-clone.md diff --git a/docs/main/user-guides/local-engine-user-guide/local-pv-zfs/advanced-operations/zfs-raw-block-volume.md b/docs/main/user-guides/local-storage-user-guide/local-pv-zfs/advanced-operations/zfs-raw-block-volume.md similarity index 100% rename from docs/main/user-guides/local-engine-user-guide/local-pv-zfs/advanced-operations/zfs-raw-block-volume.md rename to docs/main/user-guides/local-storage-user-guide/local-pv-zfs/advanced-operations/zfs-raw-block-volume.md diff --git a/docs/main/user-guides/local-engine-user-guide/local-pv-zfs/advanced-operations/zfs-resize.md b/docs/main/user-guides/local-storage-user-guide/local-pv-zfs/advanced-operations/zfs-resize.md similarity index 100% rename from docs/main/user-guides/local-engine-user-guide/local-pv-zfs/advanced-operations/zfs-resize.md rename to docs/main/user-guides/local-storage-user-guide/local-pv-zfs/advanced-operations/zfs-resize.md diff --git a/docs/main/user-guides/local-engine-user-guide/local-pv-zfs/advanced-operations/zfs-snapshot.md b/docs/main/user-guides/local-storage-user-guide/local-pv-zfs/advanced-operations/zfs-snapshot.md similarity index 100% rename from docs/main/user-guides/local-engine-user-guide/local-pv-zfs/advanced-operations/zfs-snapshot.md rename to docs/main/user-guides/local-storage-user-guide/local-pv-zfs/advanced-operations/zfs-snapshot.md diff --git a/docs/main/user-guides/local-engine-user-guide/local-pv-zfs/zfs-configuration.md b/docs/main/user-guides/local-storage-user-guide/local-pv-zfs/zfs-configuration.md similarity index 99% rename from docs/main/user-guides/local-engine-user-guide/local-pv-zfs/zfs-configuration.md rename to docs/main/user-guides/local-storage-user-guide/local-pv-zfs/zfs-configuration.md index 5f6f3a05e..8376beaa8 100644 --- a/docs/main/user-guides/local-engine-user-guide/local-pv-zfs/zfs-configuration.md +++ b/docs/main/user-guides/local-storage-user-guide/local-pv-zfs/zfs-configuration.md @@ -10,7 +10,7 @@ keywords: description: This section explains the configuration requirements to set up OpenEBS Local Persistent Volumes (PV) backed by the ZFS Storage. --- -## Configuration +# Configuration This section will help you to configure Local PV ZFS. @@ -39,7 +39,7 @@ poolname: "zfspv-pool" poolname: "zfspv-pool/child" ``` -Also the dataset provided under `poolname` must exist on all the nodes with the name given in the storage class. Check the doc on storageclasses to know all the supported parameters for ZFS-LocalPV +Also the dataset provided under `poolname` must exist on all the nodes with the name given in the storage class. Check the doc on storageclasses to know all the supported parameters for Local PV ZFS. **ext2/3/4 or xfs or btrfs as FsType** If we provide fstype as one of ext2/3/4 or xfs or btrfs, the driver will create a ZVOL, which is a blockdevice carved out of ZFS Pool. This blockdevice will be formatted with corresponding filesystem before it's used by the driver. 
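As a minimal sketch of the dataset requirement above, assuming a spare disk `/dev/sdb` and the example pool name `zfspv-pool` used in this doc, the pool (and the optional child dataset) can be created and verified on each node as follows:

```
# Run on every node that will host Local PV ZFS volumes.
# /dev/sdb and the pool/dataset names below are only examples.
zpool create zfspv-pool /dev/sdb   # create the ZFS pool backing the volumes
zfs create zfspv-pool/child        # optional: child dataset, if poolname is "zfspv-pool/child"
zfs list -o name                   # verify the name matches the poolname in the StorageClass
```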
@@ -509,5 +509,5 @@ If you encounter issues or have a question, file an [Github issue](https://githu ## See Also -- [Installation](../../quickstart-guide/installation.md) -- [Deploy an Application](../../quickstart-guide/deploy-a-test-application.md) \ No newline at end of file +- [Installation](../../../quickstart-guide/installation.md) +- [Deploy an Application](../../../quickstart-guide/deploy-a-test-application.md) \ No newline at end of file diff --git a/docs/main/user-guides/local-engine-user-guide/local-pv-zfs/zfs-deployment.md b/docs/main/user-guides/local-storage-user-guide/local-pv-zfs/zfs-deployment.md similarity index 95% rename from docs/main/user-guides/local-engine-user-guide/local-pv-zfs/zfs-deployment.md rename to docs/main/user-guides/local-storage-user-guide/local-pv-zfs/zfs-deployment.md index c65d8797e..ac795b46b 100644 --- a/docs/main/user-guides/local-engine-user-guide/local-pv-zfs/zfs-deployment.md +++ b/docs/main/user-guides/local-storage-user-guide/local-pv-zfs/zfs-deployment.md @@ -81,5 +81,5 @@ If you encounter issues or have a question, file an [Github issue](https://githu ## See Also -- [Installation](../../quickstart-guide/installation.md) -- [Deploy an Application](../../quickstart-guide/deploy-a-test-application.md) \ No newline at end of file +- [Installation](../../../quickstart-guide/installation.md) +- [Deploy an Application](../../../quickstart-guide/deploy-a-test-application.md) \ No newline at end of file diff --git a/docs/main/user-guides/local-engine-user-guide/local-pv-zfs/zfs-installation.md b/docs/main/user-guides/local-storage-user-guide/local-pv-zfs/zfs-installation.md similarity index 95% rename from docs/main/user-guides/local-engine-user-guide/local-pv-zfs/zfs-installation.md rename to docs/main/user-guides/local-storage-user-guide/local-pv-zfs/zfs-installation.md index 8abf3ef97..169cdd67c 100644 --- a/docs/main/user-guides/local-engine-user-guide/local-pv-zfs/zfs-installation.md +++ b/docs/main/user-guides/local-storage-user-guide/local-pv-zfs/zfs-installation.md @@ -74,5 +74,5 @@ If you encounter issues or have a question, file an [Github issue](https://githu ## See Also -- [Installation](../../quickstart-guide/installation.md) -- [Deploy an Application](../../quickstart-guide/deploy-a-test-application.md) +- [Installation](../../../quickstart-guide/installation.md) +- [Deploy an Application](../../../quickstart-guide/deploy-a-test-application.md) diff --git a/docs/main/user-guides/local-engine-user-guide/localpv-hostpath.md b/docs/main/user-guides/local-storage-user-guide/localpv-hostpath.md similarity index 100% rename from docs/main/user-guides/local-engine-user-guide/localpv-hostpath.md rename to docs/main/user-guides/local-storage-user-guide/localpv-hostpath.md diff --git a/docs/main/user-guides/local-engine-user-guide/lvm-localpv.md b/docs/main/user-guides/local-storage-user-guide/lvm-localpv.md similarity index 100% rename from docs/main/user-guides/local-engine-user-guide/lvm-localpv.md rename to docs/main/user-guides/local-storage-user-guide/lvm-localpv.md diff --git a/docs/main/user-guides/local-engine-user-guide/zfs-localpv.md b/docs/main/user-guides/local-storage-user-guide/zfs-localpv.md similarity index 100% rename from docs/main/user-guides/local-engine-user-guide/zfs-localpv.md rename to docs/main/user-guides/local-storage-user-guide/zfs-localpv.md diff --git a/docs/sidebars.js b/docs/sidebars.js index b5137c81e..c8ef6c2ce 100644 --- a/docs/sidebars.js +++ b/docs/sidebars.js @@ -69,8 +69,8 @@ module.exports = { }, { type: 
"doc", - id: "concepts/data-engines/localengine", - label: "Local Engine" + id: "concepts/data-engines/localstorage", + label: "Local Storage" }, { type: "doc", @@ -112,7 +112,7 @@ module.exports = { { collapsed: true, type: "category", - label: "Local Engine User Guide", + label: "Local Storage User Guide", customProps: { icon: "Book" }, @@ -127,17 +127,17 @@ module.exports = { items: [ { type: "doc", - id: "user-guides/local-engine-user-guide/local-pv-hostpath/hostpath-installation", + id: "user-guides/local-storage-user-guide/local-pv-hostpath/hostpath-installation", label: "Installation" }, { type: "doc", - id: "user-guides/local-engine-user-guide/local-pv-hostpath/hostpath-configuration", + id: "user-guides/local-storage-user-guide/local-pv-hostpath/hostpath-configuration", label: "Configuration" }, { type: "doc", - id: "user-guides/local-engine-user-guide/local-pv-hostpath/hostpath-deployment", + id: "user-guides/local-storage-user-guide/local-pv-hostpath/hostpath-deployment", label: "Deploy an Application" } ] @@ -152,17 +152,17 @@ module.exports = { items: [ { type: "doc", - id: "user-guides/local-engine-user-guide/local-pv-lvm/lvm-installation", + id: "user-guides/local-storage-user-guide/local-pv-lvm/lvm-installation", label: "Installation" }, { type: "doc", - id: "user-guides/local-engine-user-guide/local-pv-lvm/lvm-configuration", + id: "user-guides/local-storage-user-guide/local-pv-lvm/lvm-configuration", label: "Configuration" }, { type: "doc", - id: "user-guides/local-engine-user-guide/local-pv-lvm/lvm-deployment", + id: "user-guides/local-storage-user-guide/local-pv-lvm/lvm-deployment", label: "Deploy an Application" }, { @@ -175,27 +175,27 @@ module.exports = { items: [ { type: "doc", - id: "user-guides/local-engine-user-guide/local-pv-lvm/advanced-operations/lvm-fs-group", + id: "user-guides/local-storage-user-guide/local-pv-lvm/advanced-operations/lvm-fs-group", label: "FSGroup" }, { type: "doc", - id: "user-guides/local-engine-user-guide/local-pv-lvm/advanced-operations/lvm-raw-block-volume", + id: "user-guides/local-storage-user-guide/local-pv-lvm/advanced-operations/lvm-raw-block-volume", label: "Raw Block Volume" }, { type: "doc", - id: "user-guides/local-engine-user-guide/local-pv-lvm/advanced-operations/lvm-resize", + id: "user-guides/local-storage-user-guide/local-pv-lvm/advanced-operations/lvm-resize", label: "Resize" }, { type: "doc", - id: "user-guides/local-engine-user-guide/local-pv-lvm/advanced-operations/lvm-snapshot", + id: "user-guides/local-storage-user-guide/local-pv-lvm/advanced-operations/lvm-snapshot", label: "Snapshot" }, { type: "doc", - id: "user-guides/local-engine-user-guide/local-pv-lvm/advanced-operations/lvm-thin-provisioning", + id: "user-guides/local-storage-user-guide/local-pv-lvm/advanced-operations/lvm-thin-provisioning", label: "Thin Provisioning" } ] @@ -212,17 +212,17 @@ module.exports = { items: [ { type: "doc", - id: "user-guides/local-engine-user-guide/local-pv-zfs/zfs-installation", + id: "user-guides/local-storage-user-guide/local-pv-zfs/zfs-installation", label: "Installation" }, { type: "doc", - id: "user-guides/local-engine-user-guide/local-pv-zfs/zfs-configuration", + id: "user-guides/local-storage-user-guide/local-pv-zfs/zfs-configuration", label: "Configuration" }, { type: "doc", - id: "user-guides/local-engine-user-guide/local-pv-zfs/zfs-deployment", + id: "user-guides/local-storage-user-guide/local-pv-zfs/zfs-deployment", label: "Deploy an Application" }, { @@ -235,27 +235,27 @@ module.exports = { items: [ { type: "doc", 
- id: "user-guides/local-engine-user-guide/local-pv-zfs/advanced-operations/zfs-backup-restore", + id: "user-guides/local-storage-user-guide/local-pv-zfs/advanced-operations/zfs-backup-restore", label: "Backup and Restore" }, { type: "doc", - id: "user-guides/local-engine-user-guide/local-pv-zfs/advanced-operations/zfs-clone", + id: "user-guides/local-storage-user-guide/local-pv-zfs/advanced-operations/zfs-clone", label: "Clone Volume" }, { type: "doc", - id: "user-guides/local-engine-user-guide/local-pv-zfs/advanced-operations/zfs-resize", + id: "user-guides/local-storage-user-guide/local-pv-zfs/advanced-operations/zfs-resize", label: "Volume Resize" }, { type: "doc", - id: "user-guides/local-engine-user-guide/local-pv-zfs/advanced-operations/zfs-snapshot", + id: "user-guides/local-storage-user-guide/local-pv-zfs/advanced-operations/zfs-snapshot", label: "Snapshot Volume" }, { type: "doc", - id: "user-guides/local-engine-user-guide/local-pv-zfs/advanced-operations/zfs-raw-block-volume", + id: "user-guides/local-storage-user-guide/local-pv-zfs/advanced-operations/zfs-raw-block-volume", label: "Raw Block Volume" } ] @@ -272,22 +272,22 @@ module.exports = { items: [ { type: "doc", - id: "user-guides/local-engine-user-guide/additional-information/alphafeatures", + id: "user-guides/local-storage-user-guide/additional-information/alphafeatures", label: "Alpha Features" }, { type: "doc", - id: "user-guides/local-engine-user-guide/additional-information/k8supgrades", + id: "user-guides/local-storage-user-guide/additional-information/k8supgrades", label: "Kubernetes Upgrades - Best Practices" }, { type: "doc", - id: "user-guides/local-engine-user-guide/additional-information/kb", + id: "user-guides/local-storage-user-guide/additional-information/kb", label: "Knowledge Base" }, { type: "doc", - id: "user-guides/local-engine-user-guide/additional-information/backupandrestore", + id: "user-guides/local-storage-user-guide/additional-information/backupandrestore", label: "Backup and Restore" } ] @@ -532,7 +532,7 @@ module.exports = { { type: "doc", id: "troubleshooting/troubleshooting", - label: "Troubleshooting - Local Engine" + label: "Troubleshooting - Local Storage" }, { type: "doc",