
docs: added new features in the monitoring document & modified the docs that had minor changes #416

Merged: 11 commits, Apr 15, 2024
6 changes: 3 additions & 3 deletions docs/main/community.md
Original file line number Diff line number Diff line change
Expand Up @@ -9,7 +9,7 @@ description: You can reach out to OpenEBS contributors and maintainers through S

## GitHub

Raise a [GitHub issue](https://github.com/openebs/openebs/issues/new)
Raise a [GitHub issue](https://github.com/openebs/openebs/issues/new).

## Slack

Expand All @@ -26,8 +26,8 @@ Community blogs are available at [https://openebs.io/blog/](https://openebs.io/b

Join our OpenEBS CNCF Mailing lists

- For OpenEBS project updates, subscribe to [OpenEBS Announcements](https://lists.cncf.io/g/cncf-openebs-announcements)
- For interacting with other OpenEBS users, subscribe to [OpenEBS Users](https://lists.cncf.io/g/cncf-openebs-users)
- For OpenEBS project updates, subscribe to [OpenEBS Announcements](https://lists.cncf.io/g/cncf-openebs-announcements).
- For interacting with other OpenEBS users, subscribe to [OpenEBS Users](https://lists.cncf.io/g/cncf-openebs-users).

## Community Meetings

Expand Down
2 changes: 1 addition & 1 deletion docs/main/concepts/architecture.md
Original file line number Diff line number Diff line change
Expand Up @@ -25,7 +25,7 @@ The data engines are at the core of OpenEBS and are responsible for performing t

The data engines are responsible for:
- Aggregating the capacity available in the block devices allocated to them and then carving out volumes for applications.
- Provide standard system or network transport interfaces(NVMe) for connecting to local or remote volumes
- Provide standard system or network transport interfaces (NVMe) for connecting to local or remote volumes
- Provide volume services such as synchronous replication, compression, encryption, maintaining snapshots, and access to incremental or full snapshots of data
- Provide strong consistency while persisting the data to the underlying storage devices

Expand Down
2 changes: 1 addition & 1 deletion docs/main/concepts/data-engines/data-engines.md
Original file line number Diff line number Diff line change
Expand Up @@ -148,7 +148,7 @@ An important aspect of the OpenEBS Data Layer is that each volume replica is a f

### Use-cases for OpenEBS Replicated Storage

- When you need high performance storage using NVMe SSDs the cluster is capable of NVMeoF.
- When you need high-performance storage using NVMe SSDs and the cluster is capable of NVMe-oF.
- When you need replication or availability features to protect against node failures.
- Replicated Storage is designed for the next-gen compute and storage technology and is under active development.

Expand Down
4 changes: 2 additions & 2 deletions docs/main/concepts/data-engines/local-storage.md
Original file line number Diff line number Diff line change
@@ -1,6 +1,6 @@
---
id: localstorage
title: OpenEBS Local Storage
title: Local Storage
keywords:
- Local Storage
- OpenEBS Local Storage
Expand Down Expand Up @@ -33,7 +33,7 @@ OpenEBS helps users to take local volumes into production by providing features

## Quickstart Guides

OpenEBS provides Local Volume that can be used to provide locally mounted storage to Kubernetes Stateful workloads. Refer to the [Quickstart Guide](../../quickstart-guide/) for more information.
OpenEBS provides Local Volume that can be used to provide locally mounted storage to Kubernetes Stateful workloads. Refer to the [Quickstart Guide](../../quickstart-guide/installation.md) for more information.

## When to use OpenEBS Local Storage?

Expand Down
3 changes: 2 additions & 1 deletion docs/main/concepts/data-engines/replicated-storage.md
Original file line number Diff line number Diff line change
Expand Up @@ -3,6 +3,7 @@ id: replicated-storage
title: Replicated Storage
keywords:
- Replicated Storage
- OpenEBS Replicated Storage
description: In this document you will learn about Replicated Storage and its design goals.
---

Expand Down Expand Up @@ -44,6 +45,6 @@ Join the vibrant [OpenEBS community on Kubernetes Slack](https://kubernetes.slac
## See Also

- [OpenEBS Architecture](../architecture.md)
- [Replicated Storage Prerequisites](../../user-guides/replicated-storage-user-guide/prerequisites.md)
- [Replicated Storage Prerequisites](../../user-guides/replicated-storage-user-guide/rs-installation.md#prerequisites)
- [Installation](../../quickstart-guide/installation.md)
- [Replicated Storage User Guide](../../user-guides/replicated-storage-user-guide/rs-installation.md)
44 changes: 22 additions & 22 deletions docs/main/faqs/faqs.md
Original file line number Diff line number Diff line change
Expand Up @@ -22,7 +22,7 @@ To determine exactly where your data is physically stored, you can run the follo

* Run `kubectl get pvc` to fetch the volume name. The volume name looks like: *pvc-ee171da3-07d5-11e8-a5be-42010a8001be*.

* For each volume, you will notice one I/O controller pod and one or more replicas (as per the storage class configuration). You can use the volume ID (ee171da3-07d5-11e8-a5be-42010a8001be) to view information about the volume and replicas using the Replicated Storage [kubectl plugin](../user-guides/replicated-storage-user-guide/advanced-operations/kubectl-plugin.md)
* For each volume, you will notice one I/O controller pod and one or more replicas (as per the storage class configuration). You can use the volume ID (ee171da3-07d5-11e8-a5be-42010a8001be) to view information about the volume and replicas using the [kubectl plugin](../user-guides/replicated-storage-user-guide/advanced-operations/kubectl-plugin.md)
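For example, the lookup might look like the following sketch (the namespace and volume ID are illustrative, and the exact plugin subcommand can vary between releases):

```sh
# Find the PV/volume name backing your claim
kubectl get pvc -n my-app

# Inspect the volume and its replicas with the Replicated Storage kubectl plugin
kubectl mayastor get volume ee171da3-07d5-11e8-a5be-42010a8001be
```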

[Go to top](#top)

Expand All @@ -34,7 +34,7 @@ One of the major differences of OpenEBS versus other similar approaches is that

### How do you get started and what is the typical trial deployment? {#get-started}

To get started, you can follow the steps in the [quickstart guide](../quickstart-guide/installation.md)
To get started, you can follow the steps in the [quickstart guide](../quickstart-guide/installation.md).
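As a rough sketch, a typical trial deployment via Helm might look like this (the chart repository URL and chart name are assumptions based on the standard OpenEBS Helm chart; follow the quickstart guide for the exact commands):

```sh
# Add the OpenEBS Helm repository (URL assumed)
helm repo add openebs https://openebs.github.io/openebs
helm repo update

# Install OpenEBS into its own namespace
helm install openebs openebs/openebs -n openebs --create-namespace

# Verify that the control-plane and data-plane pods come up
kubectl get pods -n openebs
```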

[Go to top](#top)

Expand Down Expand Up @@ -97,7 +97,7 @@ env:
```
It is recommended to label all the nodes with the same key; they can have different values for the given keys, but all keys should be present on all the worker nodes.

Once we have labeled the node, we can install the lvm driver. The driver will pick the keys from env "ALLOWED_TOPOLOGIES" and add that as the supported topology key. If the driver is already installed and you want to add a new topology information, you can edit the LVM-LocalPV CSI driver daemon sets (openebs-lvm-node).
Once we have labeled the nodes, we can install the lvm driver. The driver will pick the keys from env "ALLOWED_TOPOLOGIES" and add that as the supported topology key. If the driver is already installed and you want to add new topology information, you can edit the Local PV LVM CSI driver daemon sets (openebs-lvm-node).
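For instance, labeling the worker nodes with a rack key before installing or editing the driver might look like this (node names and rack values are illustrative; `openebs.io/rack` is the key used in the verification output further below):

```sh
kubectl label node worker-1 openebs.io/rack=rack1
kubectl label node worker-2 openebs.io/rack=rack1
kubectl label node worker-3 openebs.io/rack=rack2
```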


```sh
Expand All @@ -110,7 +110,7 @@ openebs-lvm-node-gssh8 2/2 Running 0 5h28m
openebs-lvm-node-twmx8 2/2 Running 0 5h28m
```

We can verify that key has been registered successfully with the LVM LocalPV CSI Driver by checking the CSI node object yaml :-
We can verify that the key has been registered successfully with the Local PV LVM CSI Driver by checking the CSI node object yaml:

```yaml
$ kubectl get csinodes pawan-node-1 -oyaml
Expand All @@ -136,7 +136,7 @@ spec:
- openebs.io/rack
```

We can see that "openebs.io/rack" is listed as topology key. Now we can create a storageclass with the topology key created :
We can see that "openebs.io/rack" is listed as a topology key. Now we can create a storageclass with this topology key:

```yaml
apiVersion: storage.k8s.io/v1
Expand Down Expand Up @@ -237,7 +237,7 @@ spec:

To add a custom topology key:
* Label the nodes with the required key and value.
* Set env variables in the ZFS driver daemonset yaml(openebs-zfs-node), if already deployed, you can edit the daemonSet directly. By default the env is set to `All` which will take the node label keys as allowed topologies.
* Set env variables in the ZFS driver daemonset yaml (openebs-zfs-node); if already deployed, you can edit the daemonSet directly. By default, the env is set to `All`, which will take the node label keys as allowed topologies.
* "openebs.io/nodename" and "openebs.io/nodeid" are added as default topology keys.
* Create a storageclass with the above specific label keys.

Expand Down Expand Up @@ -268,7 +268,7 @@ env:
```
It is recommended to label all the nodes with the same key; they can have different values for the given keys, but all keys should be present on all the worker nodes.

Once we have labeled the node, we can install the zfs driver. The driver will pick the keys from env "ALLOWED_TOPOLOGIES" and add that as the supported topology key. If the driver is already installed and you want to add a new topology information, you can edit the ZFS-LocalPV CSI driver daemon sets (openebs-zfs-node).
Once we have labeled the nodes, we can install the zfs driver. The driver will pick the keys from env "ALLOWED_TOPOLOGIES" and add that as the supported topology key. If the driver is already installed and you want to add new topology information, you can edit the LocalPV ZFS CSI driver daemon sets (openebs-zfs-node).
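If the driver is already deployed, editing the DaemonSet in place to add the new topology env might look like this (the namespace is taken from the pod listing shown below):

```sh
kubectl edit daemonset openebs-zfs-node -n kube-system
```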

```sh
$ kubectl get pods -n kube-system -l role=openebs-zfs
Expand Down Expand Up @@ -346,7 +346,7 @@ The driver uses below logic to roundoff the capacity:

allocated = ((size + 1Gi - 1) / Gi) * Gi

For example if the PVC is requesting 4G storage space :-
For example, if the PVC is requesting 4G of storage space:

```
kind: PersistentVolumeClaim
Expand All @@ -368,7 +368,7 @@ Then driver will find the nearest size in Gi, the size allocated will be ((4G +

allocated = ((size + 1Mi - 1) / Mi) * Mi

For example if the PVC is requesting 1G (1000 * 1000 * 1000) storage space which is less than 1Gi (1024 * 1024 * 1024):-
For example, if the PVC is requesting 1G (1000 * 1000 * 1000) of storage space, which is less than 1Gi (1024 * 1024 * 1024):

```
kind: PersistentVolumeClaim
Expand All @@ -386,20 +386,20 @@ spec:

Then the driver will find the nearest size in Mi; the size allocated will be ((1G + 1Mi - 1) / Mi) * Mi, which is 954Mi.
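As a quick sanity check, the same roundoff arithmetic can be reproduced in shell using integer division (the 1G request is the example above):

```sh
size=1000000000                          # 1G requested by the PVC, in bytes
Mi=$((1024 * 1024))
allocated=$(( (size + Mi - 1) / Mi ))    # round up to the next whole Mi
echo "${allocated}Mi"                    # prints 954Mi
```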

PVC size as zero in not a valid capacity. The minimum allocatable size for the ZFS-LocalPV driver is 1Mi, which means that if we are requesting 1 byte of storage space then 1Mi will be allocated for the volume.
A PVC size of zero is not a valid capacity. The minimum allocatable size for the Local PV ZFS driver is 1Mi, which means that if we request 1 byte of storage space, 1Mi will be allocated for the volume.

[Go to top](#top)

### How to migrate PVs to the new node in case old node is not accessible?

The Local PV ZFS driver sets affinity on the PV to pin the volume to its node, so that the pod gets scheduled only to the node where the volume is present. The problem is that when that node becomes inaccessible for some reason and we move the disks to a new node and import the pool there, the pods will still not be scheduled to the new node, because the Kubernetes scheduler keeps looking for the original node.

From release 1.7.0 of the Local PV ZFS, the driver has the ability to use the user defined affinity for creating the PV. While deploying the ZFS-LocalPV driver, first we should label all the nodes using the key `openebs.io/nodeid` with some unique value.
From release 1.7.0 of the Local PV ZFS, the driver has the ability to use the user defined affinity for creating the PV. While deploying the Local PV ZFS driver, first we should label all the nodes using the key `openebs.io/nodeid` with some unique value.
```
$ kubectl label node node-1 openebs.io/nodeid=custom-value-1
```

In the above command, we have labelled the node `node-1` using the key `openebs.io/nodeid` and the value we have used here is `custom-value-1`. You can pick your own value, just make sure that the value is unique for all the nodes. We have to label all the nodes in the cluster with the unique value. For example, `node-2` and `node-3` can be labelled as below:
In the above command, we have labeled the node `node-1` using the key `openebs.io/nodeid` and the value we have used here is `custom-value-1`. You can pick your own value, just make sure that the value is unique for all the nodes. We have to label all the nodes in the cluster with the unique value. For example, `node-2` and `node-3` can be labeled as below:

```
$ kubectl label node node-2 openebs.io/nodeid=custom-value-2
Expand All @@ -408,13 +408,13 @@ $ kubectl label node node-3 openebs.io/nodeid=custom-value-3

Now, the driver will use `openebs.io/nodeid` as the key and the corresponding value to set the affinity on the PV, and the k8s scheduler will consider this affinity label while scheduling the pods.

Now, when a node is not accesible, we need to do below steps
When a node is not accessible, follow the steps below:

1. remove the old node from the cluster or we can just remove the above node label from the node which we want to remove.
2. add a new node in the cluster
3. move the disks to this new node
4. import the zfs pools on the new nodes
5. label the new node with same key and value. For example, if we have removed the node-3 from the cluster and added node-4 as new node, we have to label the node `node-4` and set the value to `custom-value-3` as shown below:
1. Remove the old node from the cluster, or just remove the above node label from the node that you want to remove.
2. Add a new node to the cluster.
3. Move the disks to this new node.
4. Import the ZFS pools on the new node.
5. Label the new node with the same key and value. For example, if we have removed node-3 from the cluster and added node-4 as the new node, we have to label node `node-4` and set the value to `custom-value-3` as shown below:

```
$ kubectl label node node-4 openebs.io/nodeid=custom-value-3
Expand All @@ -424,9 +424,9 @@ Once the above steps are done, the pod should be able to run on this new node wi

[Go to top](#top)

### How is data protected in Replicated Storage (a.k.a Replicated Engine or Mayastor)? What happens when a host, client workload, or a data center fails?
### How is data protected in Replicated Storage? What happens when a host, client workload, or a data center fails?

The OpenEBS Replicated Storage ensures resilience with built-in highly available architecture. It supports on-demand switch over of the NVMe controller to ensure IO continuity in case of host failure. The data is synchronously replicated as per the congigured replication factor to ensure no single point of failure.
The OpenEBS Replicated Storage (a.k.a Replicated Engine or Mayastor) ensures resilience with a built-in highly available architecture. It supports on-demand switchover of the NVMe controller to ensure IO continuity in case of host failure. The data is synchronously replicated as per the configured replication factor to ensure no single point of failure.
Faulted replicas are automatically rebuilt in the background without IO disruption to maintain the replication factor.

[Go to top](#top)
Expand Down Expand Up @@ -508,9 +508,9 @@ Since the replicas \(data copies\) of replicated volumes are held entirely withi

The size of a Replicated Storage pool is fixed at the time of creation and is immutable. A single pool may have only one block device as a member. These constraints may be removed in later versions.
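A minimal Disk Pool definition illustrating the one-device-per-pool constraint might look like the sketch below (the apiVersion, namespace, node name, and device path are assumptions that depend on your OpenEBS release and cluster):

```yaml
apiVersion: openebs.io/v1beta2
kind: DiskPool
metadata:
  name: pool-on-worker-1
  namespace: openebs
spec:
  node: worker-1          # node that owns the backing device
  disks: ["/dev/sdb"]     # exactly one block device per pool
```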

### How can I ensure that replicas aren't scheduled onto the same node? How about onto nodes in the same rack/availability zone?
### How can I ensure that replicas are not scheduled onto the same node? How about onto nodes in the same rack/availability zone?

The replica placement logic of Replicated Storage's control plane doesn't permit replicas of the same volume to be placed onto the same node, even if it were to be within different Disk Pools. For example, if a volume with replication factor 3 is to be provisioned, then there must be three healthy Disk Pools available, each with sufficient free capacity and each located on its own replicated node. Further enhancements to topology awareness are under consideration by the maintainers.
The replica placement logic of Replicated Storage's control plane does not permit replicas of the same volume to be placed onto the same node, even if it were to be within different Disk Pools. For example, if a volume with replication factor 3 is to be provisioned, then there must be three healthy Disk Pools available, each with sufficient free capacity and each located on its own replicated node. Further enhancements to topology awareness are under consideration by the maintainers.
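For illustration, a volume with replication factor 3 is typically requested through a StorageClass similar to this sketch (provisioner name and parameter keys are assumptions; consult the Replicated Storage user guide for the exact values):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: mayastor-3-replica
parameters:
  protocol: nvmf   # volumes are exposed over NVMe-oF
  repl: "3"        # three replicas, each placed on a different node/Disk Pool
provisioner: io.openebs.csi-mayastor
```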

[Go to top](#top)

Expand Down
41 changes: 41 additions & 0 deletions docs/main/glossary.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,41 @@
---
id: glossary
title: Glossary of Terms
keywords:
- Community
- OpenEBS community
description: This section lists the abbreviations used throughout the OpenEBS documentation
---

| Abbreviations | Definition |
| :--- | :--- |
| AKS | Azure Kubernetes Service |
| CLI | Command Line Interface |
| CNCF | Cloud Native Computing Foundation |
| CNS | Container Native Storage |
| COS | Container Orchestration Systems |
| COW | Copy-On-Write |
| CR | Custom Resource |
| CRDs | Custom Resource Definitions |
| CSI | Container Storage Interface |
| EKS | Elastic Kubernetes Service |
| FIO | Flexible IO Tester |
| FSB | File System Backup |
| GCS | Google Cloud Storage |
| GKE | Google Kubernetes Engine |
| HA | High Availability |
| LVM | Logical Volume Management |
| NATS | Neural Autonomic Transport System |
| NFS | Network File System |
| NVMe | Non-Volatile Memory Express |
| NVMe-oF | Non-Volatile Memory Express over Fabrics |
| OpenEBS | Open Elastic Block Store |
| PV | Persistent Volume |
| PVC | Persistent Volume Claim |
| RBAC | Role-Based Access Control |
| SPDK | Storage Performance Development Kit |
| SRE | Site Reliability Engineering |
| TCP | Transmission Control Protocol |
| VG | Volume Group |
| YAML | YAML Ain't Markup Language |
| ZFS | Zettabyte File System |
Member commented: Good

2 changes: 1 addition & 1 deletion docs/main/introduction-to-openebs/features.mdx
Original file line number Diff line number Diff line change
Expand Up @@ -65,7 +65,7 @@ OpenEBS Features, like any storage solution, can be broadly classified into the

<TwoColumn left="1fr" right="200px">
<p>
The backup and restore of OpenEBS volumes works with Kubernetes backup and restore solutions such as Velero (formerly Heptio Ark) via open source OpenEBS Velero-plugins. Data backup to object storage targets such as AWS S3, GCP Object Storage or MinIO are frequently deployed using the OpenEBS incremental snapshot capability. This storage level snapshot and backup saves a significant amount of bandwidth and storage space as only incremental data is used for backup.
The backup and restore of OpenEBS volumes works with Kubernetes backup and restore solutions such as Velero via open source OpenEBS Velero-plugins. Data backup to object storage targets such as AWS S3, GCP Object Storage or MinIO are frequently deployed using the OpenEBS incremental snapshot capability. This storage level snapshot and backup saves a significant amount of bandwidth and storage space as only incremental data is used for backup.
</p>

![Backup and Restore Icon](../assets/f-backup.svg)
Expand Down
Original file line number Diff line number Diff line change
Expand Up @@ -22,7 +22,7 @@ The [OpenEBS Adoption stories](https://github.com/openebs/openebs/blob/master/AD

- OpenEBS provides consistency across all Kubernetes distributions - On-premise and Cloud.
- OpenEBS with Kubernetes increases Developer and Platform SRE Productivity.
- OpenEBS is Easy to use compared to other solutions, for eg trivial to install & enabling entirely dynamic provisioning.
- OpenEBS scores in its ease of use over other solutions. It is trivial to set up, install, and configure.
- OpenEBS has Excellent Community Support.
- OpenEBS is completely Open Source and Free.

Expand Down
4 changes: 2 additions & 2 deletions docs/main/quickstart-guide/deploy-a-test-application.md
Original file line number Diff line number Diff line change
Expand Up @@ -9,8 +9,8 @@ description: This section will help you to deploy a test application.
---

:::info
- See [Local PV LVM User Guide](../user-guides/local-storage-user-guide/lvm-localpv.md) to deploy Local PV LVM.
- See [Local PV ZFS User Guide](../user-guides/local-storage-user-guide/zfs-localpv.md) to deploy Local PV ZFS.
- See [Local PV LVM Deployment](../user-guides/local-storage-user-guide/local-pv-lvm/lvm-deployment.md) to deploy Local PV LVM.
- See [Local PV ZFS Deployment](../user-guides/local-storage-user-guide/local-pv-zfs/zfs-deployment.md) to deploy Local PV ZFS.
- See [Replicated Storage Deployment](../user-guides/replicated-storage-user-guide/rs-deployment.md) to deploy Replicated Storage (a.k.a Replicated Engine or Mayastor).
:::
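As a minimal illustration, a test claim and pod could look like the following sketch (the `openebs-hostpath` StorageClass name assumes a default Local PV Hostpath installation; substitute the class created by the guide you followed above):

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  storageClassName: openebs-hostpath
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: test-app
spec:
  containers:
    - name: busybox
      image: busybox
      command: ["sh", "-c", "echo hello > /mnt/store/greet.txt && sleep 3600"]
      volumeMounts:
        - mountPath: /mnt/store
          name: test-vol
  volumes:
    - name: test-vol
      persistentVolumeClaim:
        claimName: test-claim
```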

Expand Down