diff --git a/docs/main/community.md b/docs/main/community.md index 21b06dce..6942eed4 100644 --- a/docs/main/community.md +++ b/docs/main/community.md @@ -9,7 +9,7 @@ description: You can reach out to OpenEBS contributors and maintainers through S ## GitHub -Raise a [GitHub issue](https://github.com/openebs/openebs/issues/new) +Raise a [GitHub issue](https://github.com/openebs/openebs/issues/new). ## Slack @@ -26,8 +26,8 @@ Community blogs are available at [https://openebs.io/blog/](https://openebs.io/b Join our OpenEBS CNCF Mailing lists -- For OpenEBS project updates, subscribe to [OpenEBS Announcements](https://lists.cncf.io/g/cncf-openebs-announcements) -- For interacting with other OpenEBS users, subscribe to [OpenEBS Users](https://lists.cncf.io/g/cncf-openebs-users) +- For OpenEBS project updates, subscribe to [OpenEBS Announcements](https://lists.cncf.io/g/cncf-openebs-announcements). +- For interacting with other OpenEBS users, subscribe to [OpenEBS Users](https://lists.cncf.io/g/cncf-openebs-users). ## Community Meetings diff --git a/docs/main/concepts/architecture.md b/docs/main/concepts/architecture.md index f75aa27b..fce82f6c 100644 --- a/docs/main/concepts/architecture.md +++ b/docs/main/concepts/architecture.md @@ -25,7 +25,7 @@ The data engines are at the core of OpenEBS and are responsible for performing t The data engines are responsible for: - Aggregating the capacity available in the block devices allocated to them and then carving out volumes for applications. -- Provide standard system or network transport interfaces(NVMe) for connecting to local or remote volumes +- Provide standard system or network transport interfaces (NVMe) for connecting to local or remote volumes - Provide volume services like - synchronous replication, compression, encryption, maintaining snapshots, access to the incremental or full snapshots of data and so forth - Provide strong consistency while persisting the data to the underlying storage devices diff --git a/docs/main/concepts/data-engines/data-engines.md b/docs/main/concepts/data-engines/data-engines.md index 6120924c..705916fc 100644 --- a/docs/main/concepts/data-engines/data-engines.md +++ b/docs/main/concepts/data-engines/data-engines.md @@ -148,7 +148,7 @@ An important aspect of the OpenEBS Data Layer is that each volume replica is a f ### Use-cases for OpenEBS Replicated Storage -- When you need high performance storage using NVMe SSDs the cluster is capable of NVMeoF. +- When you need high performance storage using NVMe SSDs the cluster is capable of NVMe-oF. - When you need replication or availability features to protect against node failures. - Replicated Storage is designed for the next-gen compute and storage technology and is under active development. diff --git a/docs/main/concepts/data-engines/local-storage.md b/docs/main/concepts/data-engines/local-storage.md index 8d043f0c..da9a65a1 100644 --- a/docs/main/concepts/data-engines/local-storage.md +++ b/docs/main/concepts/data-engines/local-storage.md @@ -1,6 +1,6 @@ --- id: localstorage -title: OpenEBS Local Storage +title: Local Storage keywords: - Local Storage - OpenEBS Local Storage @@ -33,7 +33,7 @@ OpenEBS helps users to take local volumes into production by providing features ## Quickstart Guides -OpenEBS provides Local Volume that can be used to provide locally mounted storage to Kubernetes Stateful workloads. Refer to the [Quickstart Guide](../../quickstart-guide/) for more information. 
+OpenEBS provides Local Volume that can be used to provide locally mounted storage to Kubernetes Stateful workloads. Refer to the [Quickstart Guide](../../quickstart-guide/installation.md) for more information. ## When to use OpenEBS Local Storage? diff --git a/docs/main/concepts/data-engines/replicated-storage.md b/docs/main/concepts/data-engines/replicated-storage.md index 95d6e9ec..95357f87 100644 --- a/docs/main/concepts/data-engines/replicated-storage.md +++ b/docs/main/concepts/data-engines/replicated-storage.md @@ -3,6 +3,7 @@ id: replicated-storage title: Replicated Storage keywords: - Replicated Storage + - OpenEBS Replicated Storage description: In this document you will learn about Replicated Storage and its design goals. --- @@ -44,6 +45,6 @@ Join the vibrant [OpenEBS community on Kubernetes Slack](https://kubernetes.slac ## See Also - [OpenEBS Architecture](../architecture.md) -- [Replicated Storage Prerequisites](../../user-guides/replicated-storage-user-guide/prerequisites.md) +- [Replicated Storage Prerequisites](../../user-guides/replicated-storage-user-guide/rs-installation.md#prerequisites) - [Installation](../../quickstart-guide/installation.md) - [Replicated Storage User Guide](../../user-guides/replicated-storage-user-guide/rs-installation.md) diff --git a/docs/main/faqs/faqs.md b/docs/main/faqs/faqs.md index 77990d05..4513f44f 100644 --- a/docs/main/faqs/faqs.md +++ b/docs/main/faqs/faqs.md @@ -22,7 +22,7 @@ To determine exactly where your data is physically stored, you can run the follo * Run `kubectl get pvc` to fetch the volume name. The volume name looks like: *pvc-ee171da3-07d5-11e8-a5be-42010a8001be*. -* For each volume, you will notice one I/O controller pod and one or more replicas (as per the storage class configuration). You can use the volume ID (ee171da3-07d5-11e8-a5be-42010a8001be) to view information about the volume and replicas using the Replicated Storage [kubectl plugin](../user-guides/replicated-storage-user-guide/advanced-operations/kubectl-plugin.md) +* For each volume, you will notice one I/O controller pod and one or more replicas (as per the storage class configuration). You can use the volume ID (ee171da3-07d5-11e8-a5be-42010a8001be) to view information about the volume and replicas using the [kubectl plugin](../user-guides/replicated-storage-user-guide/advanced-operations/kubectl-plugin.md) [Go to top](#top) @@ -34,7 +34,7 @@ One of the major differences of OpenEBS versus other similar approaches is that ### How do you get started and what is the typical trial deployment? {#get-started} -To get started, you can follow the steps in the [quickstart guide](../quickstart-guide/installation.md) +To get started, you can follow the steps in the [quickstart guide](../quickstart-guide/installation.md). [Go to top](#top) @@ -97,7 +97,7 @@ env: ``` It is recommended is to label all the nodes with the same key, they can have different values for the given keys, but all keys should be present on all the worker node. -Once we have labeled the node, we can install the lvm driver. The driver will pick the keys from env "ALLOWED_TOPOLOGIES" and add that as the supported topology key. If the driver is already installed and you want to add a new topology information, you can edit the LVM-LocalPV CSI driver daemon sets (openebs-lvm-node). +Once we have labeled the node, we can install the lvm driver. The driver will pick the keys from env "ALLOWED_TOPOLOGIES" and add that as the supported topology key. 
If the driver is already installed and you want to add a new topology information, you can edit the Local PV LVM CSI driver daemon sets (openebs-lvm-node). ```sh @@ -110,7 +110,7 @@ openebs-lvm-node-gssh8 2/2 Running 0 5h28m openebs-lvm-node-twmx8 2/2 Running 0 5h28m ``` -We can verify that key has been registered successfully with the LVM LocalPV CSI Driver by checking the CSI node object yaml :- +We can verify that key has been registered successfully with the Local PV LVM CSI Driver by checking the CSI node object yaml: ```yaml $ kubectl get csinodes pawan-node-1 -oyaml @@ -136,7 +136,7 @@ spec: - openebs.io/rack ``` -We can see that "openebs.io/rack" is listed as topology key. Now we can create a storageclass with the topology key created : +We can see that "openebs.io/rack" is listed as topology key. Now we can create a storageclass with the topology key created: ```yaml apiVersion: storage.k8s.io/v1 @@ -237,7 +237,7 @@ spec: To add custom topology key: * Label the nodes with the required key and value. -* Set env variables in the ZFS driver daemonset yaml(openebs-zfs-node), if already deployed, you can edit the daemonSet directly. By default the env is set to `All` which will take the node label keys as allowed topologies. +* Set env variables in the ZFS driver daemonset yaml (openebs-zfs-node), if already deployed, you can edit the daemonSet directly. By default the env is set to `All` which will take the node label keys as allowed topologies. * "openebs.io/nodename" and "openebs.io/nodeid" are added as default topology key. * Create storageclass with above specific labels keys. @@ -268,7 +268,7 @@ env: ``` It is recommended is to label all the nodes with the same key, they can have different values for the given keys, but all keys should be present on all the worker node. -Once we have labeled the node, we can install the zfs driver. The driver will pick the keys from env "ALLOWED_TOPOLOGIES" and add that as the supported topology key. If the driver is already installed and you want to add a new topology information, you can edit the ZFS-LocalPV CSI driver daemon sets (openebs-zfs-node). +Once we have labeled the node, we can install the zfs driver. The driver will pick the keys from env "ALLOWED_TOPOLOGIES" and add that as the supported topology key. If the driver is already installed and you want to add a new topology information, you can edit the LocalPV ZFS CSI driver daemon sets (openebs-zfs-node). ```sh $ kubectl get pods -n kube-system -l role=openebs-zfs @@ -346,7 +346,7 @@ The driver uses below logic to roundoff the capacity: allocated = ((size + 1Gi - 1) / Gi) * Gi -For example if the PVC is requesting 4G storage space :- +For example if the PVC is requesting 4G storage space: ``` kind: PersistentVolumeClaim @@ -368,7 +368,7 @@ Then driver will find the nearest size in Gi, the size allocated will be ((4G + allocated = ((size + 1Mi - 1) / Mi) * Mi -For example if the PVC is requesting 1G (1000 * 1000 * 1000) storage space which is less than 1Gi (1024 * 1024 * 1024):- +For example if the PVC is requesting 1G (1000 * 1000 * 1000) storage space which is less than 1Gi (1024 * 1024 * 1024): ``` kind: PersistentVolumeClaim @@ -386,7 +386,7 @@ spec: Then driver will find the nearest size in Mi, the size allocated will be ((1G + 1Mi - 1) / Mi) * Mi, which will be 954Mi. -PVC size as zero in not a valid capacity. The minimum allocatable size for the ZFS-LocalPV driver is 1Mi, which means that if we are requesting 1 byte of storage space then 1Mi will be allocated for the volume. 
+PVC size as zero in not a valid capacity. The minimum allocatable size for the Local PV ZFS driver is 1Mi, which means that if we are requesting 1 byte of storage space then 1Mi will be allocated for the volume. [Go to top](#top) @@ -394,12 +394,12 @@ PVC size as zero in not a valid capacity. The minimum allocatable size for the Z The Local PV ZFS driver will set affinity on the PV to make the volume stick to the node so that pod gets scheduled to that node only where the volume is present. Now, the problem here is, when that node is not accesible due to some reason and we move the disks to a new node and import the pool there, the pods will not be scheduled to this node as k8s scheduler will be looking for that node only to schedule the pod. -From release 1.7.0 of the Local PV ZFS, the driver has the ability to use the user defined affinity for creating the PV. While deploying the ZFS-LocalPV driver, first we should label all the nodes using the key `openebs.io/nodeid` with some unique value. +From release 1.7.0 of the Local PV ZFS, the driver has the ability to use the user defined affinity for creating the PV. While deploying the Local PV ZFS driver, first we should label all the nodes using the key `openebs.io/nodeid` with some unique value. ``` $ kubectl label node node-1 openebs.io/nodeid=custom-value-1 ``` -In the above command, we have labelled the node `node-1` using the key `openebs.io/nodeid` and the value we have used here is `custom-value-1`. You can pick your own value, just make sure that the value is unique for all the nodes. We have to label all the nodes in the cluster with the unique value. For example, `node-2` and `node-3` can be labelled as below: +In the above command, we have labeled the node `node-1` using the key `openebs.io/nodeid` and the value we have used here is `custom-value-1`. You can pick your own value, just make sure that the value is unique for all the nodes. We have to label all the nodes in the cluster with the unique value. For example, `node-2` and `node-3` can be labeled as below: ``` $ kubectl label node node-2 openebs.io/nodeid=custom-value-2 @@ -408,13 +408,13 @@ $ kubectl label node node-3 openebs.io/nodeid=custom-value-3 Now, the Driver will use `openebs.io/nodeid` as the key and the corresponding value to set the affinity on the PV and k8s scheduler will consider this affinity label while scheduling the pods. -Now, when a node is not accesible, we need to do below steps +When a node is not accesible, follow the steps below: -1. remove the old node from the cluster or we can just remove the above node label from the node which we want to remove. -2. add a new node in the cluster -3. move the disks to this new node -4. import the zfs pools on the new nodes -5. label the new node with same key and value. For example, if we have removed the node-3 from the cluster and added node-4 as new node, we have to label the node `node-4` and set the value to `custom-value-3` as shown below: +1. Remove the old node from the cluster or we can just remove the above node label from the node which we want to remove. +2. Add a new node in the cluster +3. Move the disks to this new node +4. Import the zfs pools on the new nodes +5. Label the new node with same key and value. 
For example, if we have removed the node-3 from the cluster and added node-4 as new node, we have to label the node `node-4` and set the value to `custom-value-3` as shown below: ``` $ kubectl label node node-4 openebs.io/nodeid=custom-value-3 @@ -424,9 +424,9 @@ Once the above steps are done, the pod should be able to run on this new node wi [Go to top](#top) -### How is data protected in Replicated Storage (a.k.a Replicated Engine or Mayastor)? What happens when a host, client workload, or a data center fails? +### How is data protected in Replicated Storage? What happens when a host, client workload, or a data center fails? -The OpenEBS Replicated Storage ensures resilience with built-in highly available architecture. It supports on-demand switch over of the NVMe controller to ensure IO continuity in case of host failure. The data is synchronously replicated as per the congigured replication factor to ensure no single point of failure. +The OpenEBS Replicated Storage (a.k.a Replicated Engine or Mayastor) ensures resilience with built-in highly available architecture. It supports on-demand switch over of the NVMe controller to ensure IO continuity in case of host failure. The data is synchronously replicated as per the congigured replication factor to ensure no single point of failure. Faulted replicas are automatically rebuilt in the background without IO disruption to maintain the replication factor. [Go to top](#top) @@ -508,9 +508,9 @@ Since the replicas \(data copies\) of replicated volumes are held entirely withi The size of a Replicated Storage pool is fixed at the time of creation and is immutable. A single pool may have only one block device as a member. These constraints may be removed in later versions. -### How can I ensure that replicas aren't scheduled onto the same node? How about onto nodes in the same rack/availability zone? +### How can I ensure that replicas are not scheduled onto the same node? How about onto nodes in the same rack/availability zone? -The replica placement logic of Replicated Storage's control plane doesn't permit replicas of the same volume to be placed onto the same node, even if it were to be within different Disk Pools. For example, if a volume with replication factor 3 is to be provisioned, then there must be three healthy Disk Pools available, each with sufficient free capacity and each located on its own replicated node. Further enhancements to topology awareness are under consideration by the maintainers. +The replica placement logic of Replicated Storage's control plane does not permit replicas of the same volume to be placed onto the same node, even if it were to be within different Disk Pools. For example, if a volume with replication factor 3 is to be provisioned, then there must be three healthy Disk Pools available, each with sufficient free capacity and each located on its own replicated node. Further enhancements to topology awareness are under consideration by the maintainers. 
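For illustration, a minimal sketch of a StorageClass that requests three replicas is shown below. The class name is a placeholder; `repl`, `protocol`, and the provisioner name are the commonly documented parameters for the Replicated Storage (Mayastor) CSI driver, so verify them against your installed version:

```
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: mayastor-3-replicas        # placeholder name
parameters:
  protocol: nvmf                   # expose the volume over NVMe-oF
  repl: "3"                        # requires three healthy Disk Pools on three different nodes
provisioner: io.openebs.csi-mayastor
```

With `repl: "3"`, the placement logic described above will only provision the volume if each replica can land on a distinct node.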
[Go to top](#top) diff --git a/docs/main/glossary.md b/docs/main/glossary.md new file mode 100644 index 00000000..cb86b201 --- /dev/null +++ b/docs/main/glossary.md @@ -0,0 +1,41 @@ +--- +id: glossary +title: Glossary of Terms +keywords: + - Community + - OpenEBS community +description: This section lists the abbreviations used thorughout the OpenEBS documentation +--- + +| Abbreviations | Definition | +| :--- | :--- | +| AKS | Azure Kubernetes Service | +| CLI | Command Line Interface | +| CNCF | Cloud Native Computing Foundation | +| CNS | Container Native Storage | +| COS | Container Orchestration Systems | +| COW | Copy-On-Write | +| CR | Custom Resource | +| CRDs | Custom Resource Definitions | +| CSI | Container Storage Interface | +| EKS | Elastic Kubernetes Service | +| FIO | Flexible IO Tester | +| FSB | File System Backup | +| GCS | Google Cloud Storage | +| GKE | Google Kubernetes Engine | +| HA | High Availability | +| LVM | Logical Volume Management | +| NATS | Neural Autonomic Transport System | +| NFS | Network File System | +| NVMe | Non-Volatile Memory Express | +| NVMe-oF | Non-Volatile Memory Express over Fabrics | +| OpenEBS | Open Elastic Block Store | +| PV | Persistent Volume | +| PVC | Persistent Volume Claim | +| RBAC | Role-Based Access Control | +| SPDK | Storage Performance Development Kit | +| SRE | Site Reliability Engineering | +| TCP | Transmission Control Protocol | +| VG | Volume Group | +| YAML | Yet Another Markup Language | +| ZFS | Zettabyte File System | diff --git a/docs/main/introduction-to-openebs/features.mdx b/docs/main/introduction-to-openebs/features.mdx index 3cbc1802..e612af3e 100644 --- a/docs/main/introduction-to-openebs/features.mdx +++ b/docs/main/introduction-to-openebs/features.mdx @@ -65,7 +65,7 @@ OpenEBS Features, like any storage solution, can be broadly classified into the

- The backup and restore of OpenEBS volumes works with Kubernetes backup and restore solutions such as Velero (formerly Heptio Ark) via open source OpenEBS Velero-plugins. Data backup to object storage targets such as AWS S3, GCP Object Storage or MinIO are frequently deployed using the OpenEBS incremental snapshot capability. This storage level snapshot and backup saves a significant amount of bandwidth and storage space as only incremental data is used for backup. + The backup and restore of OpenEBS volumes works with Kubernetes backup and restore solutions such as Velero via open source OpenEBS Velero-plugins. Data backup to object storage targets such as AWS S3, GCP Object Storage, or MinIO is frequently performed using the OpenEBS incremental snapshot capability. This storage-level snapshot and backup saves a significant amount of bandwidth and storage space as only incremental data is used for backup.

![Backup and Restore Icon](../assets/f-backup.svg) diff --git a/docs/main/introduction-to-openebs/introduction-to-openebs.md b/docs/main/introduction-to-openebs/introduction-to-openebs.md index 4c6c81ce..b18d6751 100644 --- a/docs/main/introduction-to-openebs/introduction-to-openebs.md +++ b/docs/main/introduction-to-openebs/introduction-to-openebs.md @@ -22,7 +22,7 @@ The [OpenEBS Adoption stories](https://github.com/openebs/openebs/blob/master/AD - OpenEBS provides consistency across all Kubernetes distributions - On-premise and Cloud. - OpenEBS with Kubernetes increases Developer and Platform SRE Productivity. -- OpenEBS is Easy to use compared to other solutions, for eg trivial to install & enabling entirely dynamic provisioning. +- OpenEBS scores in its ease of use over other solutions. It is trivial to setup, install and configure. - OpenEBS has Excellent Community Support. - OpenEBS is completely Open Source and Free. diff --git a/docs/main/quickstart-guide/deploy-a-test-application.md b/docs/main/quickstart-guide/deploy-a-test-application.md index f1239d81..4602dfec 100644 --- a/docs/main/quickstart-guide/deploy-a-test-application.md +++ b/docs/main/quickstart-guide/deploy-a-test-application.md @@ -9,8 +9,8 @@ description: This section will help you to deploy a test application. --- :::info -- See [Local PV LVM User Guide](../user-guides/local-storage-user-guide/lvm-localpv.md) to deploy Local PV LVM. -- See [Local PV ZFS User Guide](../user-guides/local-storage-user-guide/zfs-localpv.md) to deploy Local PV ZFS. +- See [Local PV LVM Deployment](../user-guides/local-storage-user-guide/local-pv-lvm/lvm-deployment.md) to deploy Local PV LVM. +- See [Local PV ZFS Deployment](../user-guides/local-storage-user-guide/local-pv-zfs/zfs-deployment.md) to deploy Local PV ZFS. - See [Replicated Storage Deployment](../user-guides/replicated-storage-user-guide/rs-deployment.md) to deploy Replicated Storage (a.k.a Replicated Engine or Mayastor). ::: diff --git a/docs/main/quickstart-guide/installation.md b/docs/main/quickstart-guide/installation.md index fee7c729..efb2f487 100644 --- a/docs/main/quickstart-guide/installation.md +++ b/docs/main/quickstart-guide/installation.md @@ -28,10 +28,10 @@ For OpenEBS Replicated Storage (a.k.a Replicated Engine or Mayastor), make sure At a high-level, OpenEBS requires: -- Verify that you have the admin context. If you do not have admin permissions to your cluster, check with your Kubernetes cluster administrator to help with installing OpenEBS or if you are the owner of the cluster, check out the [steps to create a new admin context](#set-cluster-admin-user-context) and use it for installing OpenEBS. +- Verify that you have the admin context. If you do not have admin permissions to your cluster, check with your Kubernetes cluster administrator to help with installing OpenEBS or if you are the owner of the cluster, check out the [steps to create a new admin context](../troubleshooting/troubleshooting-local-storage.md#set-cluster-admin-user-context) and use it for installing OpenEBS. - Each storage engine may have a few additional requirements as follows: - - Depending on the managed Kubernetes platform like Rancher or MicroK8s - set up the right bind mounts - - Decide which of the devices on the nodes should be used by OpenEBS or if you need to create LVM Volume Groups or ZFS Pools + - Depending on the managed Kubernetes platform like Rancher or MicroK8s - set up the right bind mounts. 
+ - Decide which of the devices on the nodes should be used by OpenEBS or if you need to create LVM Volume Groups or ZFS Pools. ## Supported Versions @@ -45,20 +45,20 @@ At a high-level, OpenEBS requires: Verify helm is installed and helm repo is updated. You need helm 3.2 or more. -Setup helm repository +1. Setup helm repository. ``` helm repo add openebs https://openebs.github.io/openebs helm repo update ``` -OpenEBS provides several options that you can customize during installation like: -- specifying the directory where hostpath volume data is stored or -- specifying the nodes on which OpenEBS components should be deployed and so forth. +OpenEBS provides several options to customize during installation such as: +- Specifying the directory where hostpath volume data is stored or +- Specifying the nodes on which OpenEBS components should be deployed and so forth. The default OpenEBS helm chart will install both Local Storage and Replicated Storage. Refer to [OpenEBS helm chart documentation](https://github.com/openebs/charts/tree/master/charts/openebs) for a full list of customizable options and use other flavors of OpenEBS Data Engines by setting the correct helm values. -Install the OpenEBS helm chart with default values. +2. Install the OpenEBS helm chart with default values. ``` helm install openebs --namespace openebs openebs/openebs --create-namespace @@ -73,7 +73,7 @@ If you do not want to install OpenEBS Replicated Storage, use the following comm helm install openebs --namespace openebs openebs/openebs --set mayastor.enabled=false --create-namespace ``` -To view the chart and get the output, use the following commands: +3. To view the chart and get the output, use the following commands: **Command** @@ -89,7 +89,7 @@ openebs openebs 1 2024-03-25 09:13:00.903321318 +0000 UTC deploye ``` ::: -As a next step [verify](#verifying-openebs-installation) your installation and do the[post installation](#post-installation-considerations) steps. +As a next step [verify](#verifying-openebs-installation) your installation and do the [post installation](#post-installation-considerations) steps. 
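As a quick check that the installation went through (assuming the default `openebs` namespace used in the commands above), list the pods deployed by the chart; all of them should eventually reach the `Running` state:

```
kubectl get pods -n openebs
```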
## Verifying OpenEBS Installation diff --git a/docs/main/releases.md b/docs/main/releases.md index 97451ef0..b1fa71db 100644 --- a/docs/main/releases.md +++ b/docs/main/releases.md @@ -38,11 +38,10 @@ See [here](../versioned_docs/version-3.10.x/introduction/releases.md) for legacy ## See Also -- [Quickstart](../quickstart-guide/installation.md) -- [Installation](../quickstart-guide/installation.md) -- [Deployment](../quickstart-guide/deploy-a-test-application.md) -- [OpenEBS Architecture](../concepts/architecture.md) -- [OpenEBS Local Storage](../concepts/data-engines/local-storage.md) -- [OpenEBS Replicated Storage](../concepts/data-engines/replicated-storage.md) +- [Quickstart](../main/quickstart-guide/installation.md) +- [Deployment](../main/quickstart-guide/deploy-a-test-application.md) +- [OpenEBS Architecture](../main/concepts/architecture.md) +- [OpenEBS Local Storage](../main/concepts/data-engines/local-storage.md) +- [OpenEBS Replicated Storage](../main/concepts/data-engines/replicated-storage.md) - [Community](community.md) - [Commercial Support](commercial-support.md) \ No newline at end of file diff --git a/docs/main/troubleshooting/troubleshooting-local-storage.md b/docs/main/troubleshooting/troubleshooting-local-storage.md index 0ab116f5..00de4d14 100644 --- a/docs/main/troubleshooting/troubleshooting-local-storage.md +++ b/docs/main/troubleshooting/troubleshooting-local-storage.md @@ -20,96 +20,6 @@ The default localpv storage classes from openebs have `volumeBindingMode: WaitFo **Resolution:** Deploy an application that uses the PVC and the PV will be created and application will start using the PV. -### Stale BDC in pending state after PVC is deleted {#stale-bdc-after-pvc-deletion} - -``` -kubectl get bdc -n openebs -``` - -shows stale `Pending` BDCs created by localpv provisioner, even after the corresponding PVC has been deleted. - -**Resolution:** -LocalPV provisioner currently does not delete BDCs in Pending state if the corresponding PVCs are deleted. To remove the stale BDC entries, - -1. Edit the BDC and remove the `- local.openebs.io/finalizer` finalizer - -``` -kubectl edit bdc -n openebs -``` - -2. Delete the BDC - -``` -kubectl delete bdc -n openebs -``` - -### BDC created by localPV in pending state {#bdc-by-localpv-pending-state} - -The BDC created by localpv provisioner (bdc-pvc-xxxx) remains in pending state and PVC does not get Bound - -**Troubleshooting:** -Describe the BDC to check the events recorded on the resource - -``` -kubectl describe bdc bdc-pvc-xxxx -n openebs -``` - -The following are different types of messages shown when the node on which localpv application pod is scheduled, does not have a blockdevice available. - -1. No blockdevices found - -```shell hideCopy -Warning SelectionFailed 14m (x25 over 16m) blockdeviceclaim-operator no blockdevices found -``` - -It means that there were no matching blockdevices after listing based on the labels. Check if there is any `block-device-tag` on the storage class and corresponding tags are available on the blockdevices also - -2. No devices with matching criteria - -```shell hideCopy -Warning SelectionFailed 6m25s (x18 over 11m) blockdeviceclaim-operator no devices found matching the criteria -``` - -It means that the there are no devices for claiming after filtering based on filesystem type and node name. Make sure the blockdevices on the node -have the correct filesystem as mentioned in the storage class (default is `ext4`) - -3. 
No devices with matching resource requirements - -```shell hideCopy -Warning SelectionFailed 85s (x74 over 11m) blockdeviceclaim-operator could not find a device with matching resource requirements -``` - -It means that there are no devices available on the node with a matching capacity requirement. - -**Resolution** - -To schedule the application pod to a node, which has the blockdevices available, a node selector can be used on the application pod. Here the node with hostname `svc1` has blockdevices available, so a node selector is used to schedule the pod to that node. - -Example: - -``` -apiVersion: v1 -kind: Pod -metadata: - name: pod1 -spec: - volumes: - - name: local-storage - persistentVolumeClaim: - claimName: pvc1 - containers: - - name: hello-container - image: busybox - command: - - sh - - -c - - 'while true; do echo "`date` [`hostname`] Hello from OpenEBS Local PV." >> /mnt/store/greet.txt; sleep $(($RANDOM % 5 + 300)); done' - volumeMounts: - - mountPath: /mnt/store - name: local-storage - nodeSelector: - kubernetes.io/hostname: svc1 -``` Installation Related @@ -328,13 +238,13 @@ This is the preferred approach. Approach2: -Set the reboot timer schedule at different time i.e staggered at various interval of the day, so that only one nodes get rebooted at a time. +Set the reboot timer schedule at different time i.e. staggered at various interval of the day, so that only one nodes get rebooted at a time. ### How to fetch the OpenEBS Dynamic Local Provisioner logs? **Workaround:** -Review the logs of the OpenEBS Local PV provisioner. OpenEBS Dynamic Local Provisioner logs can be fetched using. +Review the logs of the OpenEBS Local PV provisioner. OpenEBS Dynamic Local Provisioner logs can be fetched using: ``` kubectl logs -n openebs -l openebs.io/component-name=openebs-localpv-provisioner diff --git a/docs/main/user-guides/data-migration/migration-using-pv-migrate.md b/docs/main/user-guides/data-migration/migration-using-pv-migrate.md index 047adb65..2e93dea9 100644 --- a/docs/main/user-guides/data-migration/migration-using-pv-migrate.md +++ b/docs/main/user-guides/data-migration/migration-using-pv-migrate.md @@ -224,7 +224,7 @@ db.admin.insertMany([{name: "Max"}, {name:"Alex"}]) ``` ### Steps to migrate cStor to Replicated -Follow the steps below to migrate OpenEBS cStor to OpenEBS Replicated (fka Mayastor). +Follow the steps below to migrate OpenEBS cStor to OpenEBS Replicated (a.k.a Replicated Engine or Mayastor). 1. [Install Replicated Storage](../../quickstart-guide/installation.md) on your cluster. diff --git a/docs/main/user-guides/data-migration/migration-using-velero/migration-for-distributed-db/distributeddb-restore.md b/docs/main/user-guides/data-migration/migration-using-velero/migration-for-distributed-db/distributeddb-restore.md index 778ff8e1..c5f0e50c 100644 --- a/docs/main/user-guides/data-migration/migration-using-velero/migration-for-distributed-db/distributeddb-restore.md +++ b/docs/main/user-guides/data-migration/migration-using-velero/migration-for-distributed-db/distributeddb-restore.md @@ -6,9 +6,9 @@ keywords: - Restoring to Replicated Storage description: This section explains how to Restore from cStor Backup to Replicated Storage for Distributed DBs. 
--- -## Steps to Restore from cStor Backup to Replicated Storage (a.k.a Replicated Engine or Mayastor) for Distributed DBs (Cassandra) +## Steps to Restore from cStor Backup to Replicated Storage for Distributed DBs (Cassandra) -Cassandra is a popular NoSQL database used for handling large amounts of data with high availability and scalability. In Kubernetes environments, managing and restoring Cassandra backups efficiently is crucial. In this article, we will walk you through the process of restoring a Cassandra database in a Kubernetes cluster using Velero, and we will change the storage class to Replicated Storage for improved performance. +Cassandra is a popular NoSQL database used for handling large amounts of data with high availability and scalability. In Kubernetes environments, managing and restoring Cassandra backups efficiently is crucial. In this article, we will walk you through the process of restoring a Cassandra database in a Kubernetes cluster using Velero, and we will change the storage class to Replicated Storage (a.k.a Replicated Engine or Mayastor) for improved performance. :::info Before you begin, make sure you have the following: diff --git a/docs/main/user-guides/data-migration/migration-using-velero/migration-for-replicated-db/replicateddb-backup.md b/docs/main/user-guides/data-migration/migration-using-velero/migration-for-replicated-db/replicateddb-backup.md index 7f88a844..152e31b5 100644 --- a/docs/main/user-guides/data-migration/migration-using-velero/migration-for-replicated-db/replicateddb-backup.md +++ b/docs/main/user-guides/data-migration/migration-using-velero/migration-for-replicated-db/replicateddb-backup.md @@ -65,7 +65,7 @@ pvc-fc1f7ed7-d99e-40c7-a9b7-8d6244403a3e 3Gi Bound 50m ## Step 2: Install Velero :::info -For the prerequisites, see to the [overview](replicateddb-overview.md) section. +For the prerequisites, see to the [overview](../overview.md) section. ::: Run the following command to install Velero: diff --git a/docs/main/user-guides/local-storage-user-guide/local-pv-hostpath/hostpath-configuration.md b/docs/main/user-guides/local-storage-user-guide/local-pv-hostpath/hostpath-configuration.md index 1689d3fe..b7dbf5ab 100644 --- a/docs/main/user-guides/local-storage-user-guide/local-pv-hostpath/hostpath-configuration.md +++ b/docs/main/user-guides/local-storage-user-guide/local-pv-hostpath/hostpath-configuration.md @@ -64,7 +64,7 @@ The default Storage Class is called `openebs-hostpath` and its `BasePath` is con 2. Edit `local-hostpath-sc.yaml` and update with your desired values for `metadata.name` and `cas.openebs.io/config.BasePath`. :::note - If the `BasePath` does not exist on the node, *OpenEBS Dynamic Local PV Provisioner* will attempt to create the directory, when the first Local Volume is scheduled on to that node. You MUST ensure that the value provided for `BasePath` is a valid absolute path. + If the `BasePath` does not exist on the node, *OpenEBS Dynamic Local PV Provisioner* will attempt to create the directory, when the first Local Volume is scheduled on to that node. You must ensure that the value provided for `BasePath` is a valid absolute path. ::: 3. Create OpenEBS Local PV Hostpath Storage Class. 
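A sketch of that step, assuming the file name `local-hostpath-sc.yaml` used while editing the Storage Class above:

```
kubectl apply -f local-hostpath-sc.yaml
```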
@@ -79,7 +79,7 @@ The default Storage Class is called `openebs-hostpath` and its `BasePath` is con ## Support -If you encounter issues or have a question, file an [Github issue](https://github.com/openebs/openebs/issues/new), or talk to us on the [#openebs channel on the Kubernetes Slack server](https://kubernetes.slack.com/messages/openebs/). +If you encounter issues or have a question, file a [Github issue](https://github.com/openebs/openebs/issues/new), or talk to us on the [#openebs channel on the Kubernetes Slack server](https://kubernetes.slack.com/messages/openebs/). ## See Also diff --git a/docs/main/user-guides/local-storage-user-guide/local-pv-hostpath/hostpath-deployment.md b/docs/main/user-guides/local-storage-user-guide/local-pv-hostpath/hostpath-deployment.md index cbd75d89..8e84bf62 100644 --- a/docs/main/user-guides/local-storage-user-guide/local-pv-hostpath/hostpath-deployment.md +++ b/docs/main/user-guides/local-storage-user-guide/local-pv-hostpath/hostpath-deployment.md @@ -29,7 +29,7 @@ kubectl get pv ## Support -If you encounter issues or have a question, file an [Github issue](https://github.com/openebs/openebs/issues/new), or talk to us on the [#openebs channel on the Kubernetes Slack server](https://kubernetes.slack.com/messages/openebs/). +If you encounter issues or have a question, file a [Github issue](https://github.com/openebs/openebs/issues/new), or talk to us on the [#openebs channel on the Kubernetes Slack server](https://kubernetes.slack.com/messages/openebs/). ## See Also diff --git a/docs/main/user-guides/local-storage-user-guide/local-pv-hostpath/hostpath-installation.md b/docs/main/user-guides/local-storage-user-guide/local-pv-hostpath/hostpath-installation.md index 0bff9b21..c4ca89c2 100644 --- a/docs/main/user-guides/local-storage-user-guide/local-pv-hostpath/hostpath-installation.md +++ b/docs/main/user-guides/local-storage-user-guide/local-pv-hostpath/hostpath-installation.md @@ -18,7 +18,7 @@ This section explains the prerequisites and installation requirements to set up - Data protection using the Velero Backup and Restore. - Protect against hostpath security vulnerabilities by masking the hostpath completely from the application YAML and pod. -OpenEBS Local PV uses volume topology aware pod scheduling enhancements introduced by [Kubernetes Local Volumes](https://kubernetes.io/docs/concepts/storage/volumes/#local) +OpenEBS Local PV uses volume topology aware pod scheduling enhancements introduced by [Kubernetes Local Volumes](https://kubernetes.io/docs/concepts/storage/volumes/#local). ## Prerequisites @@ -49,11 +49,11 @@ services: ## Installation -For installation instructions, see [here](../../quickstart-guide/installation.md). +For installation instructions, see [here](../../../quickstart-guide/installation.md). ## Support -If you encounter issues or have a question, file an [Github issue](https://github.com/openebs/openebs/issues/new), or talk to us on the [#openebs channel on the Kubernetes Slack server](https://kubernetes.slack.com/messages/openebs/). +If you encounter issues or have a question, file a [Github issue](https://github.com/openebs/openebs/issues/new), or talk to us on the [#openebs channel on the Kubernetes Slack server](https://kubernetes.slack.com/messages/openebs/). 
## See Also diff --git a/docs/main/user-guides/local-storage-user-guide/local-pv-lvm/advanced-operations/lvm-resize.md b/docs/main/user-guides/local-storage-user-guide/local-pv-lvm/advanced-operations/lvm-resize.md index 662644db..3d37985e 100644 --- a/docs/main/user-guides/local-storage-user-guide/local-pv-lvm/advanced-operations/lvm-resize.md +++ b/docs/main/user-guides/local-storage-user-guide/local-pv-lvm/advanced-operations/lvm-resize.md @@ -12,10 +12,10 @@ description: This section talks about the advanced operations that can be perfor We can resize the volume by updating the PVC yaml to the desired size and applying it. The LVM Driver will take care of expanding the volume via lvextend command using "-r" option which will resize the filesystem. :::note -Online Volume Expansion for `Block` mode and `btrfs` Filesystem mode is supported only from **K8s 1.19+** version +Online Volume Expansion for `Block` mode and `btrfs` Filesystem mode is supported only from **K8s 1.23+** version. ::: -For resize, storageclass that provisions the PVC must support resize. We should have allowVolumeExpansion as true in storageclass +For resize, storageclass that provisions the PVC must support resize. We should have allowVolumeExpansion as true in storageclass. ``` $ cat sc.yaml @@ -35,7 +35,7 @@ $ kubectl apply -f sc.yaml storageclass.storage.k8s.io/openebs-lvmpv created ``` -Create the PVC using the above storage class +Create the PVC using the above storage class. ``` $ cat pvc.yaml diff --git a/docs/main/user-guides/local-storage-user-guide/local-pv-lvm/advanced-operations/lvm-snapshot.md b/docs/main/user-guides/local-storage-user-guide/local-pv-lvm/advanced-operations/lvm-snapshot.md index a8d9fe00..a23123d4 100644 --- a/docs/main/user-guides/local-storage-user-guide/local-pv-lvm/advanced-operations/lvm-snapshot.md +++ b/docs/main/user-guides/local-storage-user-guide/local-pv-lvm/advanced-operations/lvm-snapshot.md @@ -11,8 +11,8 @@ description: This section talks about the advanced operations that can be perfor The LVM driver supports creating snapshots of the LVM volumes. This requires the `dm-snapshot` kernel module to be loaded. Certain settings are applied by the LVM driver which modifies the default behaviour of the LVM snapshot: -- Snapshots created by the LVM driver are ReadOnly by default as opposed to the ReadWrite snapshots created by default by `lvcreate` command -- The size of snapshot will be set to the size of the origin volume +- Snapshots created by the LVM driver are ReadOnly by default as opposed to the ReadWrite snapshots created by default by `lvcreate` command. +- The size of snapshot will be set to the size of the origin volume. ## Default SnapshotClass without SnapSize Parameter @@ -72,20 +72,23 @@ parameters: A SnapshotClass needs to be created. A sample SnapshotClass can be found [here](https://github.com/openebs/lvm-localpv/blob/HEAD/deploy/sample/lvmsnapclass.yaml). -1. Apply the SnapshotClass YAML: +1. Apply the SnapshotClass YAML. + ```bash $ kubectl apply -f snapshotclass.yaml volumesnapshotclass.snapshot.storage.k8s.io/lvmpv-snapclass created ``` 2. Find a PVC for which snapshot has to be created. + ```bash $ kubectl get pvc NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE csi-lvmpvc Bound pvc-c7f42430-f2bb-4459-9182-f76b8896c532 4Gi RWO openebs-lvmsc 53s ``` -3. Create the snapshot using the created SnapshotClass for the selected PVC +3. Create the snapshot using the created SnapshotClass for the selected PVC. 
+ ```yaml apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshot @@ -98,6 +101,7 @@ spec: ``` 4. Apply the Snapshot YAML. + ```bash $ kubectl apply -f lvmsnapshot.yaml volumesnapshot.snapshot.storage.k8s.io/lvm-localpv-snap created @@ -113,6 +117,7 @@ lvm-localpv-snap true csi-lvmpvc 0 ::: 5. Check the OpenEBS resource for the created snapshot and make sure the status is `Ready`. + ```bash $ kubectl get lvmsnapshot -n openebs NAME AGE @@ -145,7 +150,8 @@ status: state: Ready ``` -6. To confirm that snapshot has been created, ssh into the node and check for lvm volumes. +6. To confirm that snapshot has been created, ssh into the node and check for LVM volumes. + ```bash $ lvs LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert diff --git a/docs/main/user-guides/local-storage-user-guide/local-pv-lvm/lvm-configuration.md b/docs/main/user-guides/local-storage-user-guide/local-pv-lvm/lvm-configuration.md index 5371cbce..aa5b2792 100644 --- a/docs/main/user-guides/local-storage-user-guide/local-pv-lvm/lvm-configuration.md +++ b/docs/main/user-guides/local-storage-user-guide/local-pv-lvm/lvm-configuration.md @@ -486,7 +486,7 @@ spec: ## Support -If you encounter issues or have a question, file an [Github issue](https://github.com/openebs/openebs/issues/new), or talk to us on the [#openebs channel on the Kubernetes Slack server](https://kubernetes.slack.com/messages/openebs/). +If you encounter issues or have a question, file a [Github issue](https://github.com/openebs/openebs/issues/new), or talk to us on the [#openebs channel on the Kubernetes Slack server](https://kubernetes.slack.com/messages/openebs/). ## See Also diff --git a/docs/main/user-guides/local-storage-user-guide/local-pv-lvm/lvm-deployment.md b/docs/main/user-guides/local-storage-user-guide/local-pv-lvm/lvm-deployment.md index 5f76cacf..c911f040 100644 --- a/docs/main/user-guides/local-storage-user-guide/local-pv-lvm/lvm-deployment.md +++ b/docs/main/user-guides/local-storage-user-guide/local-pv-lvm/lvm-deployment.md @@ -38,25 +38,80 @@ spec: After the deployment of the application, we can go to the node and see that the LVM volume is being used by the application for reading/writing the data and space is consumed from the LVM. - :::note - Check the provisioned volumes on the node, we need to run pvscan --cache command to update the LVM cache and then we can use lvdisplay and all other LVM commands on the node. +:::note +Check the provisioned volumes on the node, we need to run pvscan --cache command to update the LVM cache and then we can use lvdisplay and all other LVM commands on the node. ::: + ## PersistentVolumeClaim Conformance Matrix The following matrix shows supported PersistentVolumeClaim parameters for localpv-lvm. - Parameter Values Development Status E2E Coverage Status - AccessMode ReadWriteOnce Supported Yes - ReadWriteMany Not Supported - ReadOnlyMany Not Supported - Storageclass StorageClassName Supported Yes - Capacity Resource Number along with size unit Supported Yes - VolumeMode Block Supported Yes - *Test cases available for Filesystem mode* - Filesystem Supported - Selectors Equality & Set based selections Supported Pending - VolumeName Available PV name Supported Pending - DataSource - Not Supported Pending + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
| Parameter | Values | Development Status | E2E Coverage Status |
| :--- | :--- | :--- | :--- |
| AccessMode | ReadWriteOnce | Supported | Yes |
| | ReadWriteMany | Not Supported | |
| | ReadOnlyMany | Not Supported | |
| Storageclass | StorageClassName | Supported | Yes |
| Capacity Resource | Number along with size unit | Supported | Yes |
| VolumeMode | Block | Supported | Yes (*Test cases available for Filesystem mode*) |
| | Filesystem | Supported | |
| Selectors | Equality & Set based selections | Supported | Pending |
| VolumeName | Available PV name | Supported | Pending |
| DataSource | - | Not Supported | Pending |
## PersistentVolumeClaim Parameters @@ -98,7 +153,7 @@ spec: **Capacity Resource** -Admin/User can specify the desired capacity for LVM volume. CSI-Driver will provision a volume if the underlying volume group has requested capacity available else provisioning volume will be errored. StorageClassName is a required field, if the field is unspecified then it will lead to provisioning errors. See [here]https://github.com/openebs/lvm-localpv/blob/develop/design/lvm/persistent-volume-claim/capacity_resource.md for more information about the workflows. +Admin/User can specify the desired capacity for LVM volume. CSI-Driver will provision a volume if the underlying volume group has requested capacity available else provisioning volume will be errored. StorageClassName is a required field, if the field is unspecified then it will lead to provisioning errors. See [here](https://github.com/openebs/lvm-localpv/blob/develop/design/lvm/persistent-volume-claim/capacity_resource.md) for more information about the workflows. ``` kind: PersistentVolumeClaim diff --git a/docs/main/user-guides/local-storage-user-guide/local-pv-lvm/lvm-installation.md b/docs/main/user-guides/local-storage-user-guide/local-pv-lvm/lvm-installation.md index 2a9ad13e..92accb4a 100644 --- a/docs/main/user-guides/local-storage-user-guide/local-pv-lvm/lvm-installation.md +++ b/docs/main/user-guides/local-storage-user-guide/local-pv-lvm/lvm-installation.md @@ -20,14 +20,14 @@ Before installing the LVM driver, make sure your Kubernetes Cluster must meet th ## Setup Volume Group -Find the disk that you want to use for the LVM, for testing you can use the loopback device +Find the disk that you want to use for the LVM, for testing you can use the loopback device. ``` truncate -s 1024G /tmp/disk.img sudo losetup -f /tmp/disk.img --show ``` -Create the Volume group on all the nodes, which will be used by the LVM Driver for provisioning the volumes +Create the Volume group on all the nodes, which will be used by the LVM Driver for provisioning the volumes. ``` sudo pvcreate /dev/loop0 @@ -36,13 +36,13 @@ sudo vgcreate lvmvg /dev/loop0 ## here lvmvg is the volume group name to b ## Installation -For installation instructions, see [here](../../quickstart-guide/installation.md). +For installation instructions, see [here](../../../quickstart-guide/installation.md). ## Support -If you encounter issues or have a question, file an [Github issue](https://github.com/openebs/openebs/issues/new), or talk to us on the [#openebs channel on the Kubernetes Slack server](https://kubernetes.slack.com/messages/openebs/). +If you encounter issues or have a question, file a [Github issue](https://github.com/openebs/openebs/issues/new), or talk to us on the [#openebs channel on the Kubernetes Slack server](https://kubernetes.slack.com/messages/openebs/). 
## See Also -- [Installation](../../quickstart-guide/installation.md) -- [Deploy an Application](../../quickstart-guide/deploy-a-test-application.md) \ No newline at end of file +- [Installation](../../../quickstart-guide/installation.md) +- [Deploy an Application](../../../quickstart-guide/deploy-a-test-application.md) \ No newline at end of file diff --git a/docs/main/user-guides/local-storage-user-guide/local-pv-zfs/advanced-operations/zfs-backup-restore.md b/docs/main/user-guides/local-storage-user-guide/local-pv-zfs/advanced-operations/zfs-backup-restore.md index 3825f108..125a445e 100644 --- a/docs/main/user-guides/local-storage-user-guide/local-pv-zfs/advanced-operations/zfs-backup-restore.md +++ b/docs/main/user-guides/local-storage-user-guide/local-pv-zfs/advanced-operations/zfs-backup-restore.md @@ -2,8 +2,8 @@ id: zfs-backup-restore title: Backup and Restore keywords: - - OpenEBS ZFS Local PV - - ZFS Local PV + - OpenEBS Local PV ZFS + - Local PV ZFS - Backup and Restore - Backup - Restore @@ -12,16 +12,16 @@ description: This section talks about the advanced operations that can be perfor ## Prerequisites -You should have installed the ZFS-LocalPV 1.0.0 or later version for the Backup and Restore, see [readme](../README.md) for the steps to install the ZFS-LocalPV driver. +You should have installed the Local PV ZFS 1.0.0 or later version for the Backup and Restore, see [here](https://github.com/openebs/zfs-localpv/blob/develop/README.md) for the steps to install the Local PV ZFS driver. | Project | Minimum Version | | :--- | :--- | -| ZFS-LocalPV | 1.0.0+ | +| Local PV ZFS | 1.0.0+ | | Velero | 1.5+ | | Velero-Plugin | 2.2.0+ | :::note -- To work with velero-plugin version 2.7.0 (adding support for restore on encrypted zpools) and above we have to update zfs-localpv driver version to at least 1.5.0. +- To work with velero-plugin version 2.7.0 (adding support for restore on encrypted zpools) and above we have to update Local PV ZFS driver version to at least 1.5.0. - With velero version v1.5.2 and v1.5.3, there is an [issue](https://github.com/vmware-tanzu/velero/issues/3470) where PVs are not getting cleaned up for restored volume. ::: @@ -56,7 +56,7 @@ If you want to use cloud storage like AWS-S3 buckets for storing backups, use a velero install --provider aws --bucket --secret-file <./aws-iam-creds> --plugins velero/velero-plugin-for-aws:v1.0.0-beta.1 --backup-location-config region=,s3ForcePathStyle="true" --use-volume-snapshots=true --use-restic ``` -Install the velero 1.5 or later version for ZFS-LocalPV. +Install the velero 1.5 or later version for Local PV ZFS. ### Deploy MinIO @@ -89,7 +89,7 @@ Install the Velero Plugin for Local PV ZFS using the command below: velero plugin add openebs/velero-plugin:2.2.0 ``` -Install the velero-plugin 2.2.0 or later version which has the support for ZFS-LocalPV. Once the setup is done, create the backup/restore. +Install the velero-plugin 2.2.0 or later version which has the support for Local PV ZFS. Once the setup is done, create the backup/restore. ## Create Backup @@ -117,13 +117,13 @@ spec: s3Url: http://minio.velero.svc:9000 ``` -The volume snapshot location has the information about where the snapshot should be stored. Here we have to provide the namespace which we have used as OPENEBS_NAMESPACE env while deploying the ZFS-LocalPV. The ZFS-LocalPV Operator yamls uses "openebs" as default value for OPENEBS_NAMESPACE env. 
Verify the volumesnapshot location: +The volume snapshot location has the information about where the snapshot should be stored. Here we have to provide the namespace which we have used as OPENEBS_NAMESPACE env while deploying the Local PV ZFS. The Local PV ZFS Operator yamls uses "openebs" as default value for OPENEBS_NAMESPACE env. Verify the volumesnapshot location: ``` kubectl get volumesnapshotlocations.velero.io -n velero ``` -Now, we can execute velero backup command using the above VolumeSnapshotLocation and the ZFS-LocalPV plugin will take the full backup. We can use the below velero command to create the full backup, we can add all the namespaces we want to be backed up in a comma separated format in --include-namespaces parameter. +Now, we can execute velero backup command using the above VolumeSnapshotLocation and the Local PV ZFS plugin will take the full backup. We can use the below velero command to create the full backup, we can add all the namespaces we want to be backed up in a comma separated format in --include-namespaces parameter. ``` velero backup create my-backup --snapshot-volumes --include-namespaces= --volume-snapshot-locations=zfspv-full --storage-location=default @@ -167,7 +167,7 @@ Update the above VolumeSnapshotLocation with namespace and other fields accordin kubectl get volumesnapshotlocations.velero.io -n velero ``` -Now, we can create a backup schedule using the above VolumeSnapshotLocation and the ZFS-LocalPV plugin will take the full backup of the resources periodically. For example, to take the full backup at every 5 min, we can create the below schedule : +Now, we can create a backup schedule using the above VolumeSnapshotLocation and the Local PV ZFS plugin will take the full backup of the resources periodically. For example, to take the full backup at every 5 min, we can create the below schedule : ``` velero create schedule schd --schedule="*/5 * * * *" --snapshot-volumes --include-namespaces=, --volume-snapshot-locations=zfspv-full --storage-location=default @@ -215,13 +215,13 @@ Update the above VolumeSnapshotLocation with namespace and other fields accordin kubectl get volumesnapshotlocations.velero.io -n velero ``` -If we have created a backup schedule using the above VolumeSnapshotLocation, the ZFS-LocalPV plugin will start taking the incremental backups. Here, we have to provide `incrBackupCount` parameter which indicates that how many incremental backups we should keep before taking the next full backup. So, in the above case the ZFS-LocalPV plugin will create full backup first and then it will create three incremental backups and after that it will again create a full backup followed by three incremental backups and so on. +If we have created a backup schedule using the above VolumeSnapshotLocation, the Local PV ZFS plugin will start taking the incremental backups. Here, we have to provide `incrBackupCount` parameter which indicates that how many incremental backups we should keep before taking the next full backup. So, in the above case the Local PV ZFS plugin will create full backup first and then it will create three incremental backups and after that it will again create a full backup followed by three incremental backups and so on. For Restore, we need to have the full backup and all the in between the incremental backups available. All the incremental backups are linked to its previous backup, so this link should not be broken otherwise restore will fail. 
One thing to note here is `incrBackupCount` parameter defines how many incremental backups we want, it does not include the first full backup.

While doing the restore, we just need to give the backup name which we want to restore. The plugin is capable of identifying the incremental backup group and will restore from the full backup and keep restoring the incremental backup till the backup name provided in the restore command.

-Now we can create a backup schedule using the above VolumeSnapshotLocation and the ZFS-LocalPV plugin will take care of taking the backup of the resources periodically. For example, to take the incremental backup at every 5 min, we can create the below schedule :
+Now we can create a backup schedule using the above VolumeSnapshotLocation, and the Local PV ZFS plugin will take care of backing up the resources periodically. For example, to take an incremental backup every 5 minutes, we can create the below schedule:

```
velero create schedule schd --schedule="*/5 * * * *" --snapshot-volumes --include-namespaces=, --volume-snapshot-locations=zfspv-incr --storage-location=default --ttl 60m
@@ -313,7 +313,7 @@ data:
pawan-old-node2: pawan-new-node2
```

-While doing the restore the ZFS-LocalPV plugin will set the affinity on the PV as per the node mapping provided in the config map. Here in the above case the PV created on nodes `pawan-old-node1` and `pawan-old-node2` will be moved to `pawan-new-node1` and `pawan-new-node2` respectively.
+While doing the restore, the Local PV ZFS plugin will set the affinity on the PV as per the node mapping provided in the config map. In the above case, the PVs created on nodes `pawan-old-node1` and `pawan-old-node2` will be moved to `pawan-new-node1` and `pawan-new-node2` respectively.

## Things to Consider

diff --git a/docs/main/user-guides/local-storage-user-guide/local-pv-zfs/zfs-configuration.md b/docs/main/user-guides/local-storage-user-guide/local-pv-zfs/zfs-configuration.md
index b8c3d31f..5b6c79b6 100644
--- a/docs/main/user-guides/local-storage-user-guide/local-pv-zfs/zfs-configuration.md
+++ b/docs/main/user-guides/local-storage-user-guide/local-pv-zfs/zfs-configuration.md
@@ -125,7 +125,7 @@ The provisioner name for ZFS driver is "zfs.csi.openebs.io", we have to use this
**Scheduler**

-The ZFS driver has its own scheduler which will try to distribute the PV across the nodes so that one node should not be loaded with all the volumes. Currently the driver supports two scheduling algorithms: VolumeWeighted and CapacityWeighted, in which it will try to find a ZFS pool which has less number of volumes provisioned in it or less capacity of volume provisioned out of a pool respectively, from all the nodes where the ZFS pools are available. To know about how to select scheduler via storage-class See [here](https://github.com/openebs/zfs-localpv/blob/HEAD/docs/storageclasses.md#storageclass-with-k8s-scheduler). Once it can find the node, it will create a PV for that node and also create a ZFSVolume custom resource for the volume with the NODE information. The watcher for this ZFSVolume CR will get all the information for this object and creates a ZFS dataset(zvol) with the given ZFS property on the mentioned node.
+The ZFS driver has its own scheduler, which will try to distribute the PVs across the nodes so that no single node is loaded with all the volumes. Currently, the driver supports two scheduling algorithms: VolumeWeighted and CapacityWeighted, in which it tries to find a ZFS pool that has the fewest volumes provisioned in it or the least capacity provisioned out of it, respectively, across all the nodes where ZFS pools are available. See [here](https://github.com/openebs/zfs-localpv/blob/HEAD/docs/storageclasses.md#storageclass-with-k8s-scheduler) to know how to select a scheduler via the storage class. Once it finds a node, it will create a PV for that node and also create a ZFSVolume custom resource for the volume with the NODE information. The watcher for this ZFSVolume CR will get all the information for this object and create a ZFS dataset (zvol) with the given ZFS property on the mentioned node.

The scheduling algorithm currently only accounts for either the number of ZFS volumes or total capacity occupied from a zpool and does not account for other factors like available cpu or memory while making scheduling decisions.

@@ -173,7 +173,7 @@ spec:
storage: 4Gi
```

-Create a PVC using the storage class created for the ZFS driver. Here, the allocated volume size will be rounded off to the nearest Mi or Gi notation, see [FAQs](../../faqs/faqs.md) for more details.
+Create a PVC using the storage class created for the ZFS driver. Here, the allocated volume size will be rounded off to the nearest Mi or Gi notation; see [FAQs](../../../faqs/faqs.md) for more details.

If we are using the immediate binding in the storageclass then we can check the Kubernetes resource for the corresponding ZFS volume, otherwise in late binding case, we can check the same after pod has been scheduled:

@@ -366,7 +366,7 @@ Here, we have to note that all the Pods using that volume will come to the same

### StorageClass with k8s Scheduler

-The ZFS-LocalPV Driver has two types of its own scheduling logic, VolumeWeighted and CapacityWeighted (Supported from zfs-driver:1.3.0+). To choose any one of the scheduler add scheduler parameter in storage class and give its value accordingly.
+The ZFS-LocalPV Driver has two types of its own scheduling logic, VolumeWeighted and CapacityWeighted (supported from zfs-driver: 1.3.0+). To choose one of the schedulers, add the scheduler parameter in the storage class and set its value accordingly.
```
parameters:
 scheduler: "VolumeWeighted"
```
@@ -415,7 +415,7 @@ allowedTopologies:
- node-2
```

-At the same time, you must set env variables in the ZFS-LocalPV CSI driver daemon sets (openebs-zfs-node) so that it can pick the node label as the supported topology. It adds "openebs.io/nodename" as default topology key. If the key doesn't exist in the node labels when the CSI ZFS driver register, the key will not add to the topologyKeys. Set more than one keys separated by commas.
+At the same time, you must set env variables in the ZFS-LocalPV CSI driver daemon sets (openebs-zfs-node) so that it can pick the node label as the supported topology. It adds "openebs.io/nodename" as the default topology key. If the key does not exist in the node labels when the CSI ZFS driver registers, the key will not be added to the topologyKeys. Multiple keys can be set, separated by commas.
```yaml env: @@ -433,7 +433,7 @@ env: value: "test1,test2" ``` -We can verify that key has been registered successfully with the ZFS LocalPV CSI Driver by checking the CSI node object yaml :- +We can verify that key has been registered successfully with the ZFS LocalPV CSI Driver by checking the CSI node object yaml: ```yaml $ kubectl get csinodes pawan-node-1 -oyaml @@ -460,7 +460,7 @@ spec: - test2 ``` -If you want to change topology keys, just set new env(ALLOWED_TOPOLOGIES). See [faq](./faq.md#6-how-to-add-custom-topology-key) for more details. +If you want to change topology keys, just set new env(ALLOWED_TOPOLOGIES). See [FAQs](../../../faqs/faqs.md#how-to-add-custom-topology-key-to-local-pv-zfs-driver) for more details. ``` $ kubectl edit ds -n kube-system openebs-zfs-node diff --git a/docs/main/user-guides/local-storage-user-guide/local-pv-zfs/zfs-deployment.md b/docs/main/user-guides/local-storage-user-guide/local-pv-zfs/zfs-deployment.md index dccbb757..dc4d8ad4 100644 --- a/docs/main/user-guides/local-storage-user-guide/local-pv-zfs/zfs-deployment.md +++ b/docs/main/user-guides/local-storage-user-guide/local-pv-zfs/zfs-deployment.md @@ -77,7 +77,7 @@ Always maintain upto date /etc/zfs/zpool.cache while performing operations on ZF ## Support -If you encounter issues or have a question, file an [Github issue](https://github.com/openebs/openebs/issues/new), or talk to us on the [#openebs channel on the Kubernetes Slack server](https://kubernetes.slack.com/messages/openebs/). +If you encounter issues or have a question, file a [Github issue](https://github.com/openebs/openebs/issues/new), or talk to us on the [#openebs channel on the Kubernetes Slack server](https://kubernetes.slack.com/messages/openebs/). ## See Also diff --git a/docs/main/user-guides/local-storage-user-guide/local-pv-zfs/zfs-installation.md b/docs/main/user-guides/local-storage-user-guide/local-pv-zfs/zfs-installation.md index 7011f55c..0a902439 100644 --- a/docs/main/user-guides/local-storage-user-guide/local-pv-zfs/zfs-installation.md +++ b/docs/main/user-guides/local-storage-user-guide/local-pv-zfs/zfs-installation.md @@ -64,7 +64,7 @@ Configure the [custom topology keys](../../../faqs/faqs.md#how-to-add-custom-top ## Installation -For installation instructions, see [here](../../quickstart-guide/installation.md). +For installation instructions, see [here](../../../quickstart-guide/installation.md). ## Support diff --git a/docs/main/user-guides/replicated-storage-user-guide/additional-information/resize.md b/docs/main/user-guides/replicated-storage-user-guide/additional-information/resize.md index 87857784..d9f8d6c0 100644 --- a/docs/main/user-guides/replicated-storage-user-guide/additional-information/resize.md +++ b/docs/main/user-guides/replicated-storage-user-guide/additional-information/resize.md @@ -10,7 +10,7 @@ description: This guide explains about the volume resize feature. The Volume Resize feature allows Kubernetes (or any other CO) end-users to expand a persistent volume (PV) after creation by resizing a dynamically provisioned persistent volume claim (PVC). This is allowed only if the `StorageClass` has `allowVolumeExpansion` boolean flag set to _true_. The end-users can edit the `allowVolumeExpansion` boolean flag in Kubernetes `StorageClass` to toggle the permission for PVC resizing. This is useful for users to optimise their provisioned space and not have to worry about pre-planning the future capacity requirements. 
The users can provision a volume with just about right size based on current usage and trends, and in the future if the need arises to have more capacity in the same volume, the volume can be easily expanded.

-Replicated Storage (a.k.a Replicated Enfine or Mayastor) CSI plugin provides the ability to expand volume in the _ONLINE_ and _OFFLINE_ states.
+Replicated Storage (a.k.a Replicated Engine or Mayastor) CSI plugin provides the ability to expand volumes in the _ONLINE_ and _OFFLINE_ states.

## Prerequisites

diff --git a/docs/main/user-guides/replicated-storage-user-guide/additional-information/scale-etcd.md b/docs/main/user-guides/replicated-storage-user-guide/additional-information/scale-etcd.md
index d8819943..c823cedd 100644
--- a/docs/main/user-guides/replicated-storage-user-guide/additional-information/scale-etcd.md
+++ b/docs/main/user-guides/replicated-storage-user-guide/additional-information/scale-etcd.md
@@ -83,7 +83,7 @@ mayastor-etcd-3 0/1 Pending 0 2m34s

## Step 2: Add a New Peer URL

-Before creating a PV, we need to add the new peer URL (mayastor-etcd-3=http://mayastor-etcd-3.mayastor-etcd-headless.mayastor.svc.cluster.local:2380) and change the cluster's initial state from "new" to "existing" so that the new member will be added to the existing cluster when the pod comes up after creating the PV. Since the new pod is still in a pending state, the changes will not be applied to the other pods as they will be restarted in reverse order from {N-1..0}. It is expected that all of its predecessors must be running and ready.
+Before creating a PV, we need to add the new peer URL (`mayastor-etcd-3=http://mayastor-etcd-3.mayastor-etcd-headless.mayastor.svc.cluster.local:2380`) and change the cluster's initial state from "new" to "existing" so that the new member will be added to the existing cluster when the pod comes up after creating the PV. Since the new pod is still in a pending state, the changes will not be applied to the other pods as they will be restarted in reverse order from {N-1..0}. It is expected that all of its predecessors must be running and ready.

**Command**

diff --git a/docs/main/user-guides/replicated-storage-user-guide/advanced-operations/kubectl-plugin.md b/docs/main/user-guides/replicated-storage-user-guide/advanced-operations/kubectl-plugin.md
index 1c94bbfe..d0e4f5dc 100644
--- a/docs/main/user-guides/replicated-storage-user-guide/advanced-operations/kubectl-plugin.md
+++ b/docs/main/user-guides/replicated-storage-user-guide/advanced-operations/kubectl-plugin.md
@@ -5,10 +5,11 @@ keywords:
- Kubectl
- Plugin
- Kubectl Plugin
+ - Mayastor Kubectl Plugin
- Replicated Storage kubectl plugin
description: This guide will help you to view and manage Replicated Storage resources such as nodes, pools, and volumes.
---
-# Mayastor kubectl plugin
+# Kubectl Plugin

The **Mayastor kubectl plugin** can be used to view and manage Replicated Storage (a.k.a Replicated Engine or Mayastor) resources such as nodes, pools and volumes. It is also used for operations such as scaling the replica count of volumes.

@@ -144,7 +145,7 @@ kubectl mayastor scale volume
Volume 0c08667c-8b59-4d11-9192-b54e27e0ce0f Scaled Successfully 🚀
```

-### Retrieve Resource in any of the Output Formats (Table, JSON or YAML)
+### Retrieve Resource in any of the Output Formats (Table, JSON, or YAML)

> Table is the default output format.
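As a rough illustration (assuming the plugin's `-o`/`--output` flag and reusing the volume ID from the scale example above), resources can be printed as JSON or YAML instead of the default table:

```
kubectl mayastor get volumes -o yaml
kubectl mayastor get volume 0c08667c-8b59-4d11-9192-b54e27e0ce0f -o json
```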
diff --git a/docs/main/user-guides/replicated-storage-user-guide/advanced-operations/monitoring.md b/docs/main/user-guides/replicated-storage-user-guide/advanced-operations/monitoring.md
index 50044c30..a5aef1bc 100644
--- a/docs/main/user-guides/replicated-storage-user-guide/advanced-operations/monitoring.md
+++ b/docs/main/user-guides/replicated-storage-user-guide/advanced-operations/monitoring.md
@@ -39,7 +39,7 @@ disk_pool_committed_size_bytes{node="worker-0", name="mayastor-disk-pool"} 96636

## Stats Exporter Metrics

-When [eventing](../additional-information/call-home.md) is activated, the stats exporter operates within the **obs-callhome-stats** container, located in the **callhome** pod. The statistics are made accessible through an HTTP endpoint at port `9090`, specifically using the `/stats` route.
+When [eventing](../additional-information/eventing.md) is activated, the stats exporter operates within the **obs-callhome-stats** container, located in the **callhome** pod. The statistics are made accessible through an HTTP endpoint at port `9090`, specifically using the `/stats` route.

### Supported Stats Metrics

@@ -101,4 +101,105 @@ Upon successful integration of the exporter with the Prometheus stack, the metri

| kubelet_volume_stats_inodes_used | Gauge | Integer | The total number of inodes that have been utilized to store metadata. |

+## Performance Monitoring Stack
+
+Earlier, only the pool capacity/state stats were exported; the exporter used to cache the metrics and return them when the Prometheus client queried. This did not ensure that the latest data was returned during the Prometheus poll cycle.
+
+In addition to the capacity and state metrics, the metrics exporter also exports performance statistics for pools, volumes, and replicas as Prometheus counters. The exporter does not pre-fetch or cache the metrics; it polls the IO engine inline with the Prometheus client polling cycle.
+
+:::important
+Users are recommended to set the Prometheus poll interval to not less than 5 minutes.
+:::
+
+The following sections describe the raw resource metric counters.
+
+### DiskPool IoStat Counters
+
+| Metric Name | Metric Type | Labels/Tags | Metric Unit | Description |
+| :--- | :--- | :--- | :--- | :--- |
+|diskpool_num_read_ops | Gauge | `name`=, `node`= | Integer | Number of read operations on the pool |
+|diskpool_bytes_read | Gauge | `name`=, `node`= | Integer | Total bytes read on the pool |
+|diskpool_num_write_ops | Gauge | `name`=, `node`= | Integer | Number of write operations on the pool |
+|diskpool_bytes_written | Gauge | `name`=, `node`= | Integer | Total bytes written on the pool |
+|diskpool_read_latency_us | Gauge | `name`=, `node`= | Integer | Total read latency for all IOs on the pool in usec. |
+|diskpool_write_latency_us | Gauge | `name`=, `node`= | Integer | Total write latency for all IOs on the pool in usec. |
+
+### Replica IoStat Counters
+
+| Metric Name | Metric Type | Labels/Tags | Metric Unit | Description |
+| :--- | :--- | :--- | :--- | :--- |
+|replica_num_read_ops | Gauge | `name`=, `pool_id`=, `pv_name`=, `node`= | Integer | Number of read operations on the replica |
+|replica_bytes_read | Gauge | `name`=, `pv_name`=, `node`= | Integer | Total bytes read on the replica |
+|replica_num_write_ops | Gauge | `name`=, `pv_name`=, `node`= | Integer | Number of write operations on the replica |
+|replica_bytes_written | Gauge | `name`=, `pv_name`=, `node`= | Integer | Total bytes written on the replica |
+|replica_read_latency_us | Gauge | `name`=, `pv_name`=, `node`= | Integer | Total read latency for all IOs on the replica in usec. |
+|replica_write_latency_us | Gauge | `name`=, `pv_name`=, `node`= | Integer | Total write latency for all IOs on the replica in usec. |
+
+### Target/Volume IoStat Counters
+
+| Metric Name | Metric Type | Labels/Tags | Metric Unit | Description |
+| :--- | :--- | :--- | :--- | :--- |
+|volume_num_read_ops | Gauge | `pv_name`= | Integer | Number of read operations through the volume target |
+|volume_bytes_read | Gauge | `pv_name`= | Integer | Total bytes read through the volume target |
+|volume_num_write_ops | Gauge | `pv_name`= | Integer | Number of write operations through the volume target |
+|volume_bytes_written | Gauge | `pv_name`= | Integer | Total bytes written through the volume target |
+|volume_read_latency_us | Gauge | `pv_name`= | Integer | Total read latency for all IOs through the volume target in usec. |
+|volume_write_latency_us | Gauge | `pv_name`= | Integer | Total write latency for all IOs through the volume target in usec. |
+
+:::note
+If you require IOPS, latency, and throughput in the dashboard, use the following calculations while creating the dashboard JSON config.
+:::
+
+## R/W IOPS Calculation
+
+`num_read_ops` and `num_write_ops` are available for all resources in the stats response.
+
+```
+write_iops = (num_write_ops (current poll) - num_write_ops (previous poll)) / poll period (in sec)
+```
+
+```
+read_iops = (num_read_ops (current poll) - num_read_ops (previous poll)) / poll period (in sec)
+```
+
+## R/W Latency Calculation
+
+`write_latency` (sum of all IOs' write latency) and `read_latency` (sum of all IOs' read latency) are available.
+
+```
+read_latency_avg = (read_latency (current poll) - read_latency (previous poll)) / (num_read_ops (current poll) - num_read_ops (previous poll))
+```
+
+```
+write_latency_avg = (write_latency (current poll) - write_latency (previous poll)) / (num_write_ops (current poll) - num_write_ops (previous poll))
+```
+
+## R/W Throughput Calculation
+
+`bytes_read/written` (total bytes read/written for a bdev) are available.
+
+```
+read_throughput = (bytes_read (current poll) - bytes_read (previous poll)) / poll period (in sec)
+```
+
+```
+write_throughput = (bytes_written (current poll) - bytes_written (previous poll)) / poll period (in sec)
+```
+
+### Handling Counter Reset
+
+The performance stats are not persistent across IO engine restarts; the counters are reset when the IO engine restarts. After a restart, the counters for all the resources residing on that particular IO engine will be lower than at the previous poll, so using the above logic would yield negative values because the counter at the current poll is less than the counter at the previous poll.
In this case, do the following: + +``` +iops (r/w) = num_ops (r/w) / poll cycle +``` + +``` +latency_avg(r/w) = latency (r/w) / num_ops +``` + +``` +throughput (r/w) = bytes_read/written / poll_cycle (in secs) +``` + [Learn more](https://kubernetes.io/docs/concepts/storage/volume-health-monitoring/) \ No newline at end of file diff --git a/docs/main/user-guides/replicated-storage-user-guide/rs-configuration.md b/docs/main/user-guides/replicated-storage-user-guide/rs-configuration.md index 52ba3cdd..d5ec7a63 100644 --- a/docs/main/user-guides/replicated-storage-user-guide/rs-configuration.md +++ b/docs/main/user-guides/replicated-storage-user-guide/rs-configuration.md @@ -107,7 +107,8 @@ pool-on-node-3 node-3-14944 Created Online 10724835328 0 1072 ## Create Replicated StorageClass\(s\) -Replicated Storage dynamically provisions PersistentVolumes \(PVs\) based on StorageClass definitions created by the user. Parameters of the definition are used to set the characteristics and behaviour of its associated PVs. For a detailed description of these parameters see [storage class parameter description](#storage-class-parameters). Most importantly StorageClass definition is used to control the level of data protection afforded to it \(that is, the number of synchronous data replicas which are maintained, for purposes of redundancy\). It is possible to create any number of StorageClass definitions, spanning all permitted parameter permutations. +Replicated Storage dynamically provisions PersistentVolumes \(PVs\) based on StorageClass definitions created by the user. Parameters of the definition are used to set the characteristics and behaviour of its associated PVs. See [storage class parameter description](#storage-class-parameters) for a detailed description of these parameters. +Most importantly StorageClass definition is used to control the level of data protection afforded to it (i.e.the number of synchronous data replicas which are maintained, for purposes of redundancy). It is possible to create any number of StorageClass definitions, spanning all permitted parameter permutations. We illustrate this quickstart guide with two examples of possible use cases; one which offers no data redundancy \(i.e. a single data replica\), and another having three data replicas. :::info diff --git a/docs/main/user-guides/replicated-storage-user-guide/rs-deployment.md b/docs/main/user-guides/replicated-storage-user-guide/rs-deployment.md index 680b026d..d0bb41c4 100644 --- a/docs/main/user-guides/replicated-storage-user-guide/rs-deployment.md +++ b/docs/main/user-guides/replicated-storage-user-guide/rs-deployment.md @@ -118,7 +118,7 @@ ms-volume-claim Bound pvc-fe1a5a16-ef70-4775-9eac-2f9c67b3cd5b 1Gi ### Verify the Persistent Volume :::info -Substitute the example volume name with that shown under the "VOLUME" heading of the output returned by the preceding "get pvc" command, as executed in your own cluster +Substitute the example volume name with that shown under the "VOLUME" heading of the output returned by the preceding "get pvc" command, as executed in your own cluster. ::: **Command** @@ -139,7 +139,7 @@ pvc-fe1a5a16-ef70-4775-9eac-2f9c67b3cd5b 1Gi RWO Delete The status of the volume should be "online". :::info -To verify the status of volume [Replicated Storage Kubectl plugin](../replicated-storage-user-guide/advanced-operations/kubectl-plugin.md) is used. +To verify the status of volume [Kubectl plugin](../replicated-storage-user-guide/advanced-operations/kubectl-plugin.md) is used. 
:::

**Command**

diff --git a/docs/main/user-guides/replicated-storage-user-guide/rs-installation.md b/docs/main/user-guides/replicated-storage-user-guide/rs-installation.md
index 2f14a244..c7df9f70 100644
--- a/docs/main/user-guides/replicated-storage-user-guide/rs-installation.md
+++ b/docs/main/user-guides/replicated-storage-user-guide/rs-installation.md
@@ -121,19 +121,19 @@ resources:

### RBAC Permission Requirements

-* Kubernetes core v1 API-group resources: Pod, Event, Node, Namespace, ServiceAccount, PersistentVolume, PersistentVolumeClaim, ConfigMap, Secret, Service, Endpoint, Event.
+* **Kubernetes core v1 API-group resources:** Pod, Event, Node, Namespace, ServiceAccount, PersistentVolume, PersistentVolumeClaim, ConfigMap, Secret, Service, and Endpoint.

-* Kubernetes batch API-group resources: CronJob, Job
+* **Kubernetes batch API-group resources:** CronJob and Job

-* Kubernetes apps API-group resources: Deployment, ReplicaSet, StatefulSet, DaemonSet
+* **Kubernetes apps API-group resources:** Deployment, ReplicaSet, StatefulSet, and DaemonSet

-* Kubernetes `storage.k8s.io` API-group resources: StorageClass, VolumeSnapshot, VolumeSnapshotContent, VolumeAttachment, CSI-Node
+* **Kubernetes `storage.k8s.io` API-group resources:** StorageClass, VolumeSnapshot, VolumeSnapshotContent, VolumeAttachment, and CSI-Node

-* Kubernetes `apiextensions.k8s.io` API-group resources: CustomResourceDefinition
+* **Kubernetes `apiextensions.k8s.io` API-group resources:** CustomResourceDefinition

-* Replicated Storage (a.k.a Replicated Engine or Mayastor) Custom Resources that is `openebs.io` API-group resources: DiskPool
+* **Replicated Storage (a.k.a Replicated Engine or Mayastor) Custom Resources, that is, `openebs.io` API-group resources:** DiskPool

-* Custom Resources from Helm chart dependencies of Jaeger that is helpful for debugging:
+* **Custom Resources from Helm chart dependencies of Jaeger that are helpful for debugging:**

- ConsoleLink Resource from `console.openshift.io` API group

@@ -291,7 +291,7 @@ For installation instructions, see [here](../../quickstart-guide/installation.md

## Support

-If you encounter issues or have a question, file an [Github issue](https://github.com/openebs/openebs/issues/new), or talk to us on the [#openebs channel on the Kubernetes Slack server](https://kubernetes.slack.com/messages/openebs/).
+If you encounter issues or have a question, file a [GitHub issue](https://github.com/openebs/openebs/issues/new), or talk to us on the [#openebs channel on the Kubernetes Slack server](https://kubernetes.slack.com/messages/openebs/).

## See Also

diff --git a/docs/main/user-guides/upgrades.md b/docs/main/user-guides/upgrades.md
index 843dae40..5c95c8d5 100644
--- a/docs/main/user-guides/upgrades.md
+++ b/docs/main/user-guides/upgrades.md
@@ -21,7 +21,7 @@ See the [migration documentation](../user-guides/data-migration/migration-overvi

## Overview

-This upgrade flow would allow users to upgrade to the latest OpenEBS version 4.0.0 which is a unified installer for three Local Storages (a.k.a Local Engines) Local PV HostPath, Local PV LVM, Local PV ZFS, and one Replicated Storage (a.k.a Replicated Engine or Mayastor).
+This upgrade flow allows users to upgrade to the latest OpenEBS version 4.0.0, which is a unified installer for the three Local Storages (a.k.a Local Engines): Local PV HostPath, Local PV LVM, and Local PV ZFS, and one Replicated Storage (a.k.a Replicated Engine or Mayastor).
As a part of upgrade to OpenEBS 4.0.0, the helm chart would install all four engines irrespective of the engine the user was using prior to the upgrade. :::info @@ -73,6 +73,10 @@ This section describes the Replicated Storage upgrade from OpenEBS Umbrella char 1. Start the helm upgrade process with the new chart, i.e. 4.0.0 by using the below command: +:::caution +It is highly recommended to disable the partial rebuild during the upgrade from specific versions of OpenEBS (3.7.0, 3.8.0, 3.9.0 and 3.10.0) to OpenEBS 4.0.0 to ensure data consistency during upgrade. Input the value `--set mayastor.agents.core.rebuild.partial.enabled=false` in the upgrade command. +::: + ``` helm upgrade openebs openebs/openebs -n openebs --reuse-values \ --set localpv-provisioner.release.version=4.0.0 \ @@ -96,7 +100,8 @@ helm upgrade openebs openebs/openebs -n openebs --reuse-values \ --set mayastor.csi.image.snapshotControllerTag=v6.3.3 \ --set mayastor.csi.image.registrarTag=v2.10.0 \ --set mayastor.crds.enabled=false \ - --set-json 'mayastor.loki-stack.promtail.initContainer=[]' + --set-json 'mayastor.loki-stack.promtail.initContainer=[]' \ + --set mayastor.agents.core.rebuild.partial.enabled=false ``` 2. Verify that the CRDs, Volumes, Snapshots and StoragePools are unaffected by the upgrade process. @@ -104,7 +109,7 @@ helm upgrade openebs openebs/openebs -n openebs --reuse-values \ 3. Start the Replicated Storage upgrade process by using the kubectl mayastor plugin v2.6.0. ``` -kubectl mayastor upgrade -n openebs +kubectl mayastor upgrade -n openebs --set 'mayastor.agents.core.rebuild.partial.enabled=false' ``` - This deploys an upgrade process of K8s resource type Job. @@ -126,8 +131,15 @@ openebs-upgrade-v2-6-0-s58xl 0/1 Completed 0 7m 4. Once the upgrade process completes, all the volumes and pools should be online. +5. If you have disabled the partial rebuild during the upgrade, re-enable it by adding the value `--set mayastor.agents.core.rebuild.partial.enabled=true` in the upgrade command. + +``` +helm upgrade openebs openebs/openebs -n openebs --reuse-values \ + --set mayastor.agents.core.rebuild.partial.enabled=true +``` + ## See Also - [Release Notes](../releases.md) -- [Troubleshooting](../troubleshooting/troubleshooting-local-engine.md) +- [Troubleshooting](../troubleshooting/troubleshooting-local-storage.md) - [Join our Community](../community.md) diff --git a/docs/sidebars.js b/docs/sidebars.js index c4e3fd7e..d5512db1 100644 --- a/docs/sidebars.js +++ b/docs/sidebars.js @@ -591,6 +591,14 @@ module.exports = { customProps: { icon: "HelpCircle" }, + }, + { + type: "doc", + id: "glossary", + label: "Glossary", + customProps: { + icon: "HelpCircle" + }, } ] } \ No newline at end of file diff --git a/docs/src/scss/_tables.scss b/docs/src/scss/_tables.scss index babbd264..c5f10f90 100644 --- a/docs/src/scss/_tables.scss +++ b/docs/src/scss/_tables.scss @@ -15,6 +15,7 @@ table { border-right: $table-border-width solid $table-border-color; border-left: $table-border-width solid $table-border-color; border-bottom: $table-border-width solid $table-border-color; + border-top: $table-border-width solid $table-border-color; vertical-align: middle; // &:last-child { // border: none;