---

copyright:
  years: 2019
lastupdated: "2019-10-03"

keywords: kubernetes, iks, local persistent storage

subcollection: containers

---
{:new_window: target="_blank"}
{:shortdesc: .shortdesc}
{:screen: .screen}
{:pre: .pre}
{:table: .aria-labeledby="caption"}
{:codeblock: .codeblock}
{:tip: .tip}
{:note: .note}
{:important: .important}
{:deprecated: .deprecated}
{:download: .download}
{:preview: .preview}
# Getting started with Portworx
{: #getting-started-with-portworx}

With your Portworx cluster set up, you can start adding highly available local persistent storage to the containerized apps that run in your Kubernetes clusters.
{: shortdesc}
## Verifying your Portworx installation
{: #px-verify-installation}

Verify that your Portworx installation completed successfully and that all your local disks were recognized and added to the Portworx storage layer.
{: shortdesc}
Before you begin:
- Make sure that you installed the latest version of the {{site.data.keyword.cloud_notm}} CLI and the {{site.data.keyword.containerlong_notm}} CLI plug-in.
- Log in to your account. If applicable, target the appropriate resource group. Set the context for your cluster.
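
For reference, a typical login sequence looks like the following sketch. The resource group and cluster name are placeholders, and the exact subcommands can vary by CLI plug-in version, so check `ibmcloud ks help` if a command differs in your environment.
```
# Log in to IBM Cloud. Add --sso if your account uses a federated ID.
ibmcloud login

# Target the resource group that your cluster belongs to, if applicable.
ibmcloud target -g <resource_group>

# Download the kubeconfig files for your cluster and set the kubectl context.
ibmcloud ks cluster config --cluster <cluster_name>
```
{: pre}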
To verify your installation:

1. List the Portworx pods in the `kube-system` namespace. The installation is successful when you see one or more `portworx`, `stork`, and `stork-scheduler` pods. The number of pods equals the number of worker nodes that are included in your Portworx cluster. All pods must be in a `Running` state.
    ```
    kubectl get pods -n kube-system | grep 'portworx\|stork'
    ```
    {: pre}

    Example output:
    ```
    portworx-594rw                    1/1       Running     0          20h
    portworx-rn6wk                    1/1       Running     0          20h
    portworx-rx9vf                    1/1       Running     0          20h
    stork-6b99cf5579-5q6x4            1/1       Running     0          20h
    stork-6b99cf5579-slqlr            1/1       Running     0          20h
    stork-6b99cf5579-vz9j4            1/1       Running     0          20h
    stork-scheduler-7dd8799cc-bl75b   1/1       Running     0          20h
    stork-scheduler-7dd8799cc-j4rc9   1/1       Running     0          20h
    stork-scheduler-7dd8799cc-knjwt   1/1       Running     0          20h
    ```
    {: screen}
2. Log in to one of your `portworx` pods and list the status of your Portworx cluster.
    ```
    kubectl exec <portworx_pod> -it -n kube-system -- /opt/pwx/bin/pxctl status
    ```
    {: pre}

    Example output:
    ```
    Status: PX is operational
    License: Trial (expires in 30 days)
    Node ID: 10.176.48.67
            IP: 10.176.48.67
            Local Storage Pool: 1 pool
            POOL    IO_PRIORITY     RAID_LEVEL      USABLE  USED    STATUS  ZONE    REGION
            0       LOW             raid0           20 GiB  3.0 GiB Online  dal10   us-south
            Local Storage Devices: 1 device
            Device  Path                                            Media Type              Size    Last-Scan
            0:1     /dev/mapper/3600a09803830445455244c4a38754c66   STORAGE_MEDIUM_MAGNETIC 20 GiB  17 Sep 18 20:36 UTC
            total                                                   -                       20 GiB
    Cluster Summary
            Cluster ID: mycluster
            Cluster UUID: a0d287ba-be82-4aac-b81c-7e22ac49faf5
            Scheduler: kubernetes
            Nodes: 2 node(s) with storage (2 online), 1 node(s) without storage (1 online)
            IP              ID              StorageNode     Used    Capacity        Status  StorageStatus   Version         Kernel                  OS
            10.184.58.11    10.184.58.11    Yes             3.0 GiB 20 GiB          Online  Up              1.5.0.0-bc1c580 4.4.0-133-generic       Ubuntu 16.04.5 LTS
            10.176.48.67    10.176.48.67    Yes             3.0 GiB 20 GiB          Online  Up (This node)  1.5.0.0-bc1c580 4.4.0-133-generic       Ubuntu 16.04.5 LTS
            10.176.48.83    10.176.48.83    No              0 B     0 B             Online  No Storage      1.5.0.0-bc1c580 4.4.0-133-generic       Ubuntu 16.04.5 LTS
    Global Storage Pool
            Total Used      :  6.0 GiB
            Total Capacity  :  40 GiB
    ```
    {: screen}
3. Verify that all the worker nodes that you want to include in your Portworx storage layer are included by reviewing the **StorageNode** column in the **Cluster Summary** section of your CLI output. Worker nodes that are included in the storage layer are displayed with `Yes` in the **StorageNode** column.

    Because Portworx runs as a daemon set in your cluster, new worker nodes that you add to your cluster are automatically inspected for raw block storage and added to the Portworx data layer.
    {: note}
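
    If you want to confirm the daemon set yourself, you can list it as follows. The daemon set is typically named `portworx`, although the exact name can differ depending on how Portworx was installed.
    ```
    kubectl get daemonset -n kube-system | grep portworx
    ```
    {: pre}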
4. Verify that each storage node is listed with the correct amount of raw block storage by reviewing the **Capacity** column in the **Cluster Summary** section of your CLI output.
5. Review the Portworx I/O classification that was assigned to the disks that are part of the Portworx cluster. During the setup of your Portworx cluster, every disk is inspected to determine the performance profile of the device. The profile classification depends on the speed of the network that your worker node is connected to and on the type of storage device that you have. Disks of SDS worker nodes are classified as `high`. If you manually attach disks to a virtual worker node, these disks are classified as `low` due to the lower network speed that comes with virtual worker nodes.
    ```
    kubectl exec -it <portworx_pod> -n kube-system -- /opt/pwx/bin/pxctl cluster provision-status
    ```
    {: pre}

    Example output:
    ```
    NODE            NODE STATUS     POOL    POOL STATUS     IO_PRIORITY     SIZE    AVAILABLE       USED    PROVISIONED     RESERVEFACTOR   ZONE    REGION          RACK
    10.184.58.11    Up              0       Online          LOW             20 GiB  17 GiB          3.0 GiB 0 B             0               dal12   us-south        default
    10.176.48.67    Up              0       Online          LOW             20 GiB  17 GiB          3.0 GiB 0 B             0               dal10   us-south        default
    10.176.48.83    Up              0       Online          HIGH            3.5 TiB 3.5 TiB         10 GiB  0 B             0               dal10   us-south        default
    ```
    {: screen}
## Creating a Portworx volume
{: #px-add-storage}

Start creating Portworx volumes by using Kubernetes dynamic provisioning.
{: shortdesc}
1. List the available storage classes in your cluster and check whether you can use an existing Portworx storage class that was set up during the Portworx installation. The predefined storage classes are optimized for database usage and for sharing data across pods.
    ```
    kubectl get storageclasses | grep portworx
    ```
    {: pre}

    To view the details of a storage class, run `kubectl describe storageclass <storageclass_name>`.
    {: tip}
2. If you don't want to use an existing storage class, create a customized storage class. For a full list of supported options that you can specify in your storage class, see Using Dynamic Provisioning. A filled-in example follows these substeps.
    1. Create a configuration file for your storage class.
        ```yaml
        kind: StorageClass
        apiVersion: storage.k8s.io/v1
        metadata:
          name: <storageclass_name>
        provisioner: kubernetes.io/portworx-volume
        parameters:
          repl: "<replication_factor>"
          secure: "<true_or_false>"
          priority_io: "<io_priority>"
          shared: "<true_or_false>"
        ```
        {: codeblock}

        | Component | Description |
        | --- | --- |
        | `parameters.repl` | The number of replicas of the data that Portworx stores across worker nodes, such as `"1"`, `"2"`, or `"3"`. |
        | `parameters.secure` | Set to `"true"` to encrypt the data in your volume. |
        | `parameters.priority_io` | The I/O priority that you want to request for your volume, such as `high`, `medium`, or `low`. |
        | `parameters.shared` | Set to `"true"` to allow the volume to be shared across multiple pods. |
        {: caption="Understanding the YAML file components" caption-side="top"}

    2. Create the storage class.
        ```
        kubectl apply -f storageclass.yaml
        ```
        {: pre}

    3. Verify that the storage class is created.
        ```
        kubectl get storageclasses
        ```
        {: pre}
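
    For example, the following storage class, with the made-up name `px-ha-shared`, keeps two replicas of each volume, requests high I/O priority, and allows volumes to be shared across pods. Treat it as a sketch and adjust the parameters to your needs.
    ```yaml
    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: px-ha-shared        # example name, choose your own
    provisioner: kubernetes.io/portworx-volume
    parameters:
      repl: "2"                 # keep 2 copies of the data on separate worker nodes
      priority_io: "high"       # place the volume on high I/O priority disks, if available
      shared: "true"            # allow multiple pods to mount the volume
    ```
    {: codeblock}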
3. Create a persistent volume claim (PVC). A filled-in example follows these substeps.
    1. Create a configuration file for your PVC.
        ```yaml
        kind: PersistentVolumeClaim
        apiVersion: v1
        metadata:
          name: mypvc
        spec:
          accessModes:
            - <access_mode>
          resources:
            requests:
              storage: <size>
          storageClassName: portworx-shared-sc
        ```
        {: codeblock}

        | Component | Description |
        | --- | --- |
        | `metadata.name` | Enter a name for your PVC, such as `mypvc`. |
        | `spec.accessModes` | Enter the [Kubernetes access mode ![External link icon](../icons/launch-glyph.svg "External link icon")](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes) that you want to use. |
        | `resources.requests.storage` | Enter the amount of storage in gigabytes that you want to assign from your Portworx cluster. For example, to assign 2 gigabytes from your Portworx cluster, enter `2Gi`. The amount of storage that you can specify is limited by the amount of storage that is available in your Portworx cluster. If you specified a replication factor in your storage class that is higher than 1, the amount of storage that you specify in your PVC is reserved on multiple worker nodes. |
        | `spec.storageClassName` | Enter the name of the storage class that you chose or created earlier and that you want to use to provision your PV. The example YAML file uses the `portworx-shared-sc` storage class. |
        {: caption="Understanding the YAML file components" caption-side="top"}

    2. Create your PVC.
        ```
        kubectl apply -f pvc.yaml
        ```
        {: pre}

    3. Verify that your PVC is created and bound to a persistent volume (PV). This process might take a few minutes.
        ```
        kubectl get pvc
        ```
        {: pre}
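
    As a concrete illustration, the following PVC requests 2 Gi of shared storage from the `portworx-shared-sc` storage class that is used in the example above. The `ReadWriteMany` access mode is an assumption that fits a shared volume; pick the mode that matches your app.
    ```yaml
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: mypvc
    spec:
      accessModes:
        - ReadWriteMany          # many pods can read and write the shared volume
      resources:
        requests:
          storage: 2Gi           # reserved from the Portworx storage layer
      storageClassName: portworx-shared-sc
    ```
    {: codeblock}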
## Mounting the PVC to your app
{: #px-mount-pvc}

To access the storage from your app, you must mount the PVC to your app.
{: shortdesc}
1. Create a configuration file for a deployment that mounts the PVC. An example with the placeholders filled in follows the table.

    For tips on how to deploy a stateful set with Portworx, see StatefulSets. The Portworx documentation also includes examples for how to deploy Cassandra, Kafka, ElasticSearch with Kibana, and WordPress with MySQL.
    {: tip}

    ```yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: <deployment_name>
      labels:
        app: <deployment_label>
    spec:
      selector:
        matchLabels:
          app: <app_name>
      template:
        metadata:
          labels:
            app: <app_name>
        spec:
          schedulerName: stork
          securityContext:
            fsGroup: <group_ID>
          containers:
          - image: <image_name>
            name: <container_name>
            volumeMounts:
            - name: <volume_name>
              mountPath: /<file_path>
          volumes:
          - name: <volume_name>
            persistentVolumeClaim:
              claimName: <pvc_name>
    ```
    {: codeblock}

    | Component | Description |
    | --- | --- |
    | `metadata.labels.app` | A label for the deployment. |
    | `spec.selector.matchLabels.app` and `spec.template.metadata.labels.app` | A label for your app. |
    | `spec.template.spec.schedulerName` | Use [Stork ![External link icon](../icons/launch-glyph.svg "External link icon")](https://docs.portworx.com/portworx-install-with-kubernetes/storage-operations/stork/) as the scheduler for your Portworx cluster. With Stork, you can co-locate pods with their data. Stork also provides seamless migration of pods in case of storage errors and makes it easier to create and restore snapshots of Portworx volumes. |
    | `spec.template.spec.securityContext.fsGroup` | Optional: To access your storage with a non-root user, specify the [security context ![External link icon](../icons/launch-glyph.svg "External link icon")](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/) for your pod and define the set of users that you want to grant access in the `fsGroup` section of your deployment YAML. Note that `fsGroup` belongs in the pod-level security context, not in the container-level one. For more information, see [Accessing Portworx volumes with a non-root user ![External link icon](../icons/launch-glyph.svg "External link icon")](https://docs.portworx.com/portworx-install-with-kubernetes/storage-operations/create-pvcs/access-via-non-root-users/). |
    | `spec.containers.image` | The name of the image that you want to use. To list available images in your {{site.data.keyword.registryshort_notm}} account, run `ibmcloud cr image-list`. |
    | `spec.containers.name` | The name of the container that you want to deploy to your cluster. |
    | `spec.containers.volumeMounts.mountPath` | The absolute path of the directory where the volume is mounted inside the container. If you want to share a volume between different apps, you can specify [volume sub paths ![External link icon](../icons/launch-glyph.svg "External link icon")](https://kubernetes.io/docs/concepts/storage/volumes/#using-subpath) for each of your apps. |
    | `spec.containers.volumeMounts.name` | The name of the volume to mount to your pod. |
    | `volumes.name` | The name of the volume to mount to your pod. Typically, this name is the same as `volumeMounts.name`. |
    | `volumes.persistentVolumeClaim.claimName` | The name of the PVC that binds the PV that you want to use. |
    {: caption="Understanding the YAML file components" caption-side="top"}
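
    To make the placeholders concrete, here is a minimal sketch that mounts the `mypvc` claim from the previous section into a generic `nginx` container at `/data`. The names `myapp`, `web`, and `myvol` are made up for this example.
    ```yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myapp
      labels:
        app: myapp
    spec:
      selector:
        matchLabels:
          app: myapp
      template:
        metadata:
          labels:
            app: myapp
        spec:
          schedulerName: stork          # let Stork place the pod near its data
          containers:
          - image: nginx
            name: web
            volumeMounts:
            - name: myvol
              mountPath: /data          # the volume is available here inside the container
          volumes:
          - name: myvol
            persistentVolumeClaim:
              claimName: mypvc          # the PVC that you created earlier
    ```
    {: codeblock}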
2. Create your deployment.
    ```
    kubectl apply -f deployment.yaml
    ```
    {: pre}
3. Verify that the PV is successfully mounted to your app.
    ```
    kubectl describe deployment <deployment_name>
    ```
    {: pre}

    The mount point is in the **Volume Mounts** field and the volume is in the **Volumes** field.
    ```
    Volume Mounts:
          /var/run/secrets/kubernetes.io/serviceaccount from default-token-tqp61 (ro)
          /volumemount from myvol (rw)
    ...
    Volumes:
      myvol:
        Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
        ClaimName:  mypvc
        ReadOnly:   false
    ```
    {: screen}
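
    You can also list the underlying PV object that dynamic provisioning created for your claim and confirm that its status is `Bound`.
    ```
    kubectl get pv
    ```
    {: pre}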
4. Verify that you can write data to your Portworx cluster.
    1. Log in to the pod that mounts your PV.
        ```
        kubectl exec -it <pod_name> -- bash
        ```
        {: pre}

    2. Navigate to the volume mount path that you defined in your app deployment.
    3. Create a text file.
        ```
        echo "This is a test" > test.txt
        ```
        {: pre}

    4. Read the file that you created.
        ```
        cat test.txt
        ```
        {: pre}
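
    If you prefer a non-interactive check, you can run roughly the same test in one line from your workstation, assuming that the container image provides `sh`. Replace `/<file_path>` with the mount path from your deployment.
    ```
    kubectl exec <pod_name> -- sh -c 'echo "This is a test" > /<file_path>/test.txt && cat /<file_path>/test.txt'
    ```
    {: pre}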