---
copyright:
  years: 2014, 2019
lastupdated: "2019-10-03"
keywords: kubernetes, iks, local persistent storage
subcollection: containers
---

{:new_window: target="_blank"} {:shortdesc: .shortdesc} {:screen: .screen} {:pre: .pre} {:table: .aria-labeledby="caption"} {:codeblock: .codeblock} {:tip: .tip} {:note: .note} {:important: .important} {:deprecated: .deprecated} {:download: .download} {:preview: .preview}

# Getting started with Portworx
{: #getting-started-with-portworx}

With your Portworx cluster all set up, hit the ground running by adding highly available local persistent storage to your containerized apps that run in Kubernetes clusters. {: shortdesc}

## Verifying your Portworx installation
{: #px-verify-installation}

Verify that your Portworx installation completed successfully and that all your local disks were recognized and added to the Portworx storage layer. {: shortdesc}

Before you begin:

To verify your installation:

  1. List the Portworx pods in the kube-system namespace. The installation is successful when you see one or more portworx, stork, and stork-scheduler pods. The number of portworx pods equals the number of worker nodes that are included in your Portworx cluster. All pods must be in a Running state.

    kubectl get pods -n kube-system | grep 'portworx\|stork'
    

    {: pre}

    Example output:

    portworx-594rw                          1/1       Running     0          20h
    portworx-rn6wk                          1/1       Running     0          20h
    portworx-rx9vf                          1/1       Running     0          20h
    stork-6b99cf5579-5q6x4                  1/1       Running     0          20h
    stork-6b99cf5579-slqlr                  1/1       Running     0          20h
    stork-6b99cf5579-vz9j4                  1/1       Running     0          20h
    stork-scheduler-7dd8799cc-bl75b         1/1       Running     0          20h
    stork-scheduler-7dd8799cc-j4rc9         1/1       Running     0          20h
    stork-scheduler-7dd8799cc-knjwt         1/1       Running     0          20h
    

    {: screen}

  2. Log in to one of your portworx pods and list the status of your Portworx cluster.

    kubectl exec <portworx_pod> -it -n kube-system -- /opt/pwx/bin/pxctl status
    

    {: pre}

    Example output:

    Status: PX is operational
    License: Trial (expires in 30 days)
    Node ID: 10.176.48.67
    IP: 10.176.48.67
        Local Storage Pool: 1 pool
        POOL	IO_PRIORITY	RAID_LEVEL	USABLE	USED	STATUS	ZONE	REGION
       	0	LOW		raid0		20 GiB	3.0 GiB	Online	dal10	us-south
     Local Storage Devices: 1 device
     Device	Path						Media Type		Size		Last-Scan
      	0:1	/dev/mapper/3600a09803830445455244c4a38754c66	STORAGE_MEDIUM_MAGNETIC	20 GiB		17 Sep 18 20:36 UTC
       	total							-			20 GiB
     Cluster Summary
        Cluster ID: mycluster
            Cluster UUID: a0d287ba-be82-4aac-b81c-7e22ac49faf5
        Scheduler: kubernetes
        Nodes: 2 node(s) with storage (2 online), 1 node(s) without storage (1 online)
           IP		ID		StorageNode	Used	Capacity	Status	StorageStatus	Version		Kernel			OS
           10.184.58.11	10.184.58.11	Yes		3.0 GiB	20 GiB		Online	Up		1.5.0.0-bc1c580	4.4.0-133-generic	Ubuntu 16.04.5 LTS
           10.176.48.67	10.176.48.67	Yes		3.0 GiB	20 GiB		Online	Up (This node)	1.5.0.0-bc1c580	4.4.0-133-generic	Ubuntu 16.04.5 LTS
           10.176.48.83	10.176.48.83	No		0 B	0 B		Online	No Storage	1.5.0.0-bc1c580	4.4.0-133-generic	Ubuntu 16.04.5 LTS
       Global Storage Pool
           Total Used    	:  6.0 GiB
           Total Capacity	:  40 GiB
    

    {: screen}

  3. Verify that all worker nodes that you wanted to include in your Portworx storage layer are included by reviewing the StorageNode column in the Cluster Summary section of your CLI output. Worker nodes that are included in the storage layer show Yes in the StorageNode column. For a scripted version of this check, see the sketch after this procedure.

    Because Portworx runs as a daemon set in your cluster, new worker nodes that you add to your cluster are automatically inspected for raw block storage and added to the Portworx data layer. {: note}

  4. Verify that each storage node is listed with the correct amount of raw block storage by reviewing the Capacity column in the Cluster Summary section of your CLI output.

  5. Review the Portworx I/O classification that was assigned to the disks that are part of the Portworx cluster. During the setup of your Portworx cluster, every disk is inspected to determine the performance profile of the device. The profile classification depends on the network bandwidth of your worker node and the type of storage device that you have. Disks of SDS worker nodes are classified as high. If you manually attach disks to a virtual worker node, these disks are classified as low because of the lower network speed that comes with virtual worker nodes.

    kubectl exec -it <portworx_pod> -n kube-system -- /opt/pwx/bin/pxctl cluster provision-status
    

    {: pre}

    Example output:

    NODE		NODE STATUS	POOL	POOL STATUS	IO_PRIORITY	SIZE	AVAILABLE	USED	PROVISIONED	RESERVEFACTOR	ZONE	REGION		RACK
    10.184.58.11	Up		0	Online		LOW		20 GiB	17 GiB		3.0 GiB	0 B		0		dal12	us-south	default
    10.176.48.67	Up		0	Online		LOW		20 GiB	17 GiB		3.0 GiB	0 B		0		dal10	us-south	default
    10.176.48.83	Up		0	Online		HIGH		3.5 TiB	3.5 TiB		10 GiB	0 B		0		dal10	us-south	default
    

    {: screen}
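If you want to script the checks in steps 3 and 4, the following sketch shows one way to filter the `pxctl status` output down to the Cluster Summary section. It assumes that the Portworx pods carry the `name=portworx` label, which the Portworx installer typically applies; if your pods are labeled differently, substitute a pod name from step 1 instead.

```sh
# Pick one Portworx pod and print only the Cluster Summary section of `pxctl status`,
# so that the StorageNode and Capacity columns can be reviewed in a single step.
PX_POD=$(kubectl get pods -n kube-system -l name=portworx -o jsonpath='{.items[0].metadata.name}')
kubectl exec "$PX_POD" -n kube-system -- /opt/pwx/bin/pxctl status | grep -A 15 "Cluster Summary"
```
{: pre}

Worker nodes that are part of the storage layer show Yes in the StorageNode column of the filtered output.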

## Creating a Portworx volume
{: #px-add-storage}

Start creating Portworx volumes by using Kubernetes dynamic provisioning. {: shortdesc}

  1. List available storage classes in your cluster and check whether you can use an existing Portworx storage class that was set up during the Portworx installation. The pre-defined storage classes are optimized for database usage and for sharing data across pods.

    kubectl get storageclasses | grep portworx
    

    {: pre}

    To view the details of a storage class, run kubectl describe storageclass <storageclass_name>. {: tip}

  2. If you don't want to use an existing storage class, create a customized storage class. For a full list of supported options that you can specify in your storage class, see Using Dynamic Provisioning in the Portworx documentation.

    1. Create a configuration file for your storage class.

      kind: StorageClass
      apiVersion: storage.k8s.io/v1
      metadata:
        name: <storageclass_name>
      provisioner: kubernetes.io/portworx-volume
      parameters:
        repl: "<replication_factor>"
        secure: "<true_or_false>"
        priority_io: "<io_priority>"
        shared: "<true_or_false>"
      

      {: codeblock}

      Understanding the YAML file components:

      | YAML component | Description |
      |----------------|-------------|
      | `metadata.name` | Enter a name for your storage class. |
      | `parameters.repl` | Enter the number of replicas of your data that you want to store across different worker nodes. Allowed values are `1`, `2`, or `3`. For example, if you enter `3`, your data is replicated across three different worker nodes in your Portworx cluster. To make your data highly available, use a multizone cluster and replicate your data across three worker nodes in different zones. **Note**: You must have enough worker nodes to fulfill your replication requirement. For example, if you have two worker nodes but specify three replicas, the creation of a PVC with this storage class fails. |
      | `parameters.secure` | Specify whether you want to encrypt the data in your volume with {{site.data.keyword.keymanagementservicelong_notm}}. Choose between the following options: <br>• `true`: Enable encryption for your Portworx volumes. To encrypt volumes, you must have an {{site.data.keyword.keymanagementservicelong_notm}} service instance and a Kubernetes secret that holds your customer root key. For more information about how to set up encryption for Portworx volumes, see [Encrypting your Portworx volumes](/docs/containers?topic=containers-portworx#encrypt_volumes). <br>• `false`: When you enter `false`, your Portworx volumes are not encrypted. <br>If you do not specify this option, your Portworx volumes are not encrypted by default. **Note**: You can enable volume encryption in your PVC, even if you disabled encryption in the storage class. The setting in the PVC takes precedence over the setting in the storage class. |
      | `parameters.priority_io` | Enter the Portworx I/O priority that you want to request for your data. Available options are `high`, `medium`, and `low`. During the setup of your Portworx cluster, every disk is inspected to determine the performance profile of the device. The profile classification depends on the network bandwidth of your worker node and the type of storage device that you have. Disks of SDS worker nodes are classified as `high`. If you manually attach disks to a virtual worker node, these disks are classified as `low` because of the lower network speed that comes with virtual worker nodes. <br>When you create a PVC with a storage class, the number of replicas that you specify in `parameters.repl` takes precedence over the I/O priority. For example, if you specify three replicas that you want to store on high-speed disks but only one worker node in your cluster has a high-speed disk, your PVC creation still succeeds and your data is replicated across both high- and low-speed disks. |
      | `parameters.shared` | Define whether you want to allow multiple pods to access the same volume. Choose between the following options: <br>• `true`: Multiple pods that are distributed across worker nodes in different zones can access the same volume. <br>• `false`: The volume can be accessed from multiple pods only if those pods are deployed onto the worker node to which the physical disk that backs the volume is attached. Pods on other worker nodes cannot access the volume. |
    2. Create the storage class.

      kubectl apply -f storageclass.yaml
      

      {: pre}

    3. Verify that the storage class is created.

      kubectl get storageclasses
      

      {: pre}

  3. Create a persistent volume claim (PVC).

    1. Create a configuration file for your PVC.

      kind: PersistentVolumeClaim
      apiVersion: v1
      metadata:
        name: mypvc
      spec:
        accessModes:
          - <access_mode>
        resources:
          requests:
            storage: <size>
        storageClassName: portworx-shared-sc
      

      {: codeblock}

      Understanding the YAML file components:

      | YAML component | Description |
      |----------------|-------------|
      | `metadata.name` | Enter a name for your PVC, such as `mypvc`. |
      | `spec.accessModes` | Enter the [Kubernetes access mode ![External link icon](../icons/launch-glyph.svg "External link icon")](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes) that you want to use. |
      | `spec.resources.requests.storage` | Enter the amount of storage in gigabytes that you want to assign from your Portworx cluster. For example, to assign 2 gigabytes from your Portworx cluster, enter `2Gi`. The amount of storage that you can specify is limited by the amount of storage that is available in your Portworx cluster. If you specified a replication factor in your storage class that is higher than 1, the amount of storage that you specify in your PVC is reserved on multiple worker nodes. |
      | `spec.storageClassName` | Enter the name of the storage class that you chose or created earlier and that you want to use to provision your PV. The example YAML file uses the `portworx-shared-sc` storage class. |

      For a filled-in storage class and matching PVC, see the sketch after this procedure.
    2. Create your PVC.

      kubectl apply -f pvc.yaml
      

      {: pre}

    3. Verify that your PVC is created and bound to a persistent volume (PV). This process might take a few minutes.

      kubectl get pvc
      

      {: pre}
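The storage class and PVC templates in the previous steps use placeholders. As a quick reference, the following sketch fills them in with example values. The class name px-ha-sc, the parameter values, and the 2Gi size are illustrative only; replace them with values that fit your cluster, and keep in mind that three replicas require at least three storage nodes.

```yaml
# Example only: a storage class that keeps three replicas on high-priority disks
# (unencrypted, not shared), and a PVC that requests 2 Gi from that class.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: px-ha-sc                     # hypothetical name
provisioner: kubernetes.io/portworx-volume
parameters:
  repl: "3"
  secure: "false"
  priority_io: "high"
  shared: "false"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mypvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  storageClassName: px-ha-sc         # must match the storage class name above
```
{: codeblock}

Save the sketch to a file and create both resources with a single kubectl apply -f command.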

## Mounting the PVC to your app
{: #px-mount-pvc}

To access the storage from your app, you must mount the PVC to your app. {: shortdesc}

  1. Create a configuration file for a deployment that mounts the PVC.

    For tips on how to deploy a StatefulSet with Portworx, see the StatefulSets section of the Portworx documentation. The Portworx documentation also includes examples for how to deploy Cassandra, Kafka, Elasticsearch with Kibana, and WordPress with MySQL. {: tip}

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: <deployment_name>
      labels:
        app: <deployment_label>
    spec:
      selector:
        matchLabels:
          app: <app_name>
      template:
        metadata:
          labels:
            app: <app_name>
        spec:
          schedulerName: stork
          securityContext:
            fsGroup: <group_ID>
          containers:
          - image: <image_name>
            name: <container_name>
            volumeMounts:
            - name: <volume_name>
              mountPath: /<file_path>
          volumes:
          - name: <volume_name>
            persistentVolumeClaim:
              claimName: <pvc_name>
    

    {: codeblock}

    Understanding the YAML file components:

    | YAML component | Description |
    |----------------|-------------|
    | `metadata.labels.app` | A label for the deployment. |
    | `spec.selector.matchLabels.app` <br> `spec.template.metadata.labels.app` | A label for your app. |
    | `spec.schedulerName` | Use [Stork ![External link icon](../icons/launch-glyph.svg "External link icon")](https://docs.portworx.com/portworx-install-with-kubernetes/storage-operations/stork/) as the scheduler for your Portworx cluster. With Stork, you can co-locate pods with their data, get seamless migration of pods in case of storage errors, and more easily create and restore snapshots of Portworx volumes. |
    | `spec.containers.image` | The name of the image that you want to use. To list available images in your {{site.data.keyword.registryshort_notm}} account, run `ibmcloud cr image-list`. |
    | `spec.containers.name` | The name of the container that you want to deploy to your cluster. |
    | `spec.securityContext.fsGroup` | Optional: To access your storage with a non-root user, specify the [security context ![External link icon](../icons/launch-glyph.svg "External link icon")](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/) for your pod and define the set of users that you want to grant access in the `fsGroup` section of your deployment YAML. For more information, see [Accessing Portworx volumes with a non-root user ![External link icon](../icons/launch-glyph.svg "External link icon")](https://docs.portworx.com/portworx-install-with-kubernetes/storage-operations/create-pvcs/access-via-non-root-users/). |
    | `spec.containers.volumeMounts.mountPath` | The absolute path of the directory where the volume is mounted inside the container. If you want to share a volume between different apps, you can specify [volume sub paths ![External link icon](../icons/launch-glyph.svg "External link icon")](https://kubernetes.io/docs/concepts/storage/volumes/#using-subpath) for each of your apps. |
    | `spec.containers.volumeMounts.name` | The name of the volume to mount to your pod. |
    | `volumes.name` | The name of the volume to mount to your pod. Typically, this name is the same as `volumeMounts.name`. |
    | `volumes.persistentVolumeClaim.claimName` | The name of the PVC that binds the PV that you want to use. |

    A filled-in sketch of this deployment follows at the end of this topic.
  2. Create your deployment.

    kubectl apply -f deployment.yaml
    

    {: pre}

  3. Verify that the PV is successfully mounted to your app.

    kubectl describe deployment <deployment_name>
    

    {: pre}

    The mount point is in the Volume Mounts field and the volume is in the Volumes field.

     Volume Mounts:
          /var/run/secrets/kubernetes.io/serviceaccount from default-token-tqp61 (ro)
          /volumemount from myvol (rw)
    ...
    Volumes:
      myvol:
        Type:	PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
        ClaimName:	mypvc
        ReadOnly:	false
    

    {: screen}

  4. Verify that you can write data to your Portworx cluster.

    1. Log in to the pod that mounts your PV.

      kubectl exec -it <pod_name> -- bash
      

      {: pre}

    2. Navigate to your volume mount path that you defined in your app deployment.

    3. Create a text file.

      echo "This is a test" > test.txt
      

      {: pre}

    4. Read the file that you created.

      cat test.txt
      

      {: pre}
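As a quick reference, the following sketch fills in the deployment template from this topic with example values. The px-test names, the nginx image, and the /data mount path are illustrative only, and the sketch assumes that the mypvc PVC from the previous section exists.

```yaml
# Example only: a Deployment that schedules with Stork and mounts the PVC "mypvc" at /data.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: px-test                      # hypothetical name
  labels:
    app: px-test
spec:
  selector:
    matchLabels:
      app: px-test
  template:
    metadata:
      labels:
        app: px-test
    spec:
      schedulerName: stork           # co-locate the pod with its Portworx volume data
      containers:
      - image: nginx                 # illustrative image
        name: nginx
        volumeMounts:
        - name: myvol
          mountPath: /data           # illustrative mount path
      volumes:
      - name: myvol
        persistentVolumeClaim:
          claimName: mypvc           # PVC created in the previous section
```
{: codeblock}

Apply the file with kubectl apply -f and repeat steps 3 and 4 of this procedure to verify that the volume is mounted and writable.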