{product-title} can be configured to access local volumes for application data.
Local volumes are PersistentVolumes (PVs) that represent locally mounted file systems. In the future, they may be extended to raw block devices.
Local volumes are different from HostPath volumes: they carry a special annotation that causes any pod that uses the PV to be scheduled on the same node where the local volume is mounted.
In addition, local volumes include a provisioner that automatically creates PVs for locally mounted devices. This provisioner is currently limited: it only scans pre-configured directories and cannot dynamically provision volumes, which may be implemented in a future release.
The local volume provisioner allows using local storage within {product-title} and supports:
- Volumes
- Persistent Volumes
Note: Local volumes are an alpha feature and may change in a future release of {product-title}.
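As an illustration, a PV for a local volume might look like the following sketch. The node name, capacity, and PV name are hypothetical, and the exact annotation key and JSON format of the alpha node-affinity annotation depend on the Kubernetes version in use:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-example
  annotations:
    # Alpha annotation that pins pods using this PV to the node that
    # holds the local volume (annotation format is version-dependent).
    "volume.alpha.kubernetes.io/node-affinity": '{
      "requiredDuringSchedulingIgnoredDuringExecution": {
        "nodeSelectorTerms": [
          {"matchExpressions": [
            {"key": "kubernetes.io/hostname",
             "operator": "In",
             "values": ["node-1"]}
          ]}
        ]
      }
    }'
spec:
  capacity:
    storage: 100Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-ssd
  local:
    path: /mnt/local-storage/ssd/disk1
```

It is this node-affinity information, not the pod specification, that restricts where consuming pods can run.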
Enable the PersistentLocalVolumes feature gate on all masters and nodes.
- Edit or create the master configuration file on all masters (/etc/origin/master/master-config.yaml by default) and add PersistentLocalVolumes=true under the apiServerArguments and controllerArguments sections:

    apiServerArguments:
      feature-gates:
      - PersistentLocalVolumes=true
    ...
    controllerArguments:
      feature-gates:
      - PersistentLocalVolumes=true
    ...
- On all nodes, edit or create the node configuration file (/etc/origin/node/node-config.yaml by default) and add the PersistentLocalVolumes=true feature gate under kubeletArguments:

    kubeletArguments:
      feature-gates:
      - PersistentLocalVolumes=true
All local volumes must be manually mounted before they can be consumed by {product-title} as persistent volumes.
All volumes must be mounted into the /mnt/local-storage/<storage-class-name>/<volume> path. Administrators must create the local devices as needed (by using any method, such as a disk partition or LVM), create suitable file systems on these devices, and mount them by a script or by /etc/fstab entries.

Example /etc/fstab entries:

    # device name  # mount point                 # FS   # options # extra
    /dev/sdb1      /mnt/local-storage/ssd/disk1  ext4   defaults 1 2
    /dev/sdb2      /mnt/local-storage/ssd/disk2  ext4   defaults 1 2
    /dev/sdb3      /mnt/local-storage/ssd/disk3  ext4   defaults 1 2
    /dev/sdc1      /mnt/local-storage/hdd/disk1  ext4   defaults 1 2
    /dev/sdc2      /mnt/local-storage/hdd/disk2  ext4   defaults 1 2
{product-title} depends on an external provisioner to create persistent volumes for local devices and to clean them up when they are not needed (to enable reuse).
This external provisioner must be configured with a ConfigMap that relates directories to StorageClasses. This configuration must be created before the provisioner is deployed.
Note: Optionally, create a standalone namespace for the local volume provisioner and its configuration, for example local-storage.

Example ConfigMap:
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: local-volume-config
    data:
      "local-ssd": | (1)
        {
          "hostDir": "/mnt/local-storage/ssd", (2)
          "mountDir": "/mnt/local-storage/ssd" (3)
        }
      "local-hdd": |
        {
          "hostDir": "/mnt/local-storage/hdd",
          "mountDir": "/mnt/local-storage/hdd"
        }
(1) Name of the StorageClass.
(2) Path to the directory on the host. It must be a subdirectory of /mnt/local-storage.
(3) Path to the directory in the provisioner pod. Using the same directory structure as on the host is recommended.
With this configuration, the provisioner creates:

- One PV with StorageClass local-ssd for every subdirectory in /mnt/local-storage/ssd.
- One PV with StorageClass local-hdd for every subdirectory in /mnt/local-storage/hdd.
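Applications then request this storage through an ordinary PersistentVolumeClaim that names one of the configured StorageClasses. The claim name and requested size below are illustrative:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-local-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: local-ssd
```

The claim binds to one of the PVs that the provisioner created for subdirectories of /mnt/local-storage/ssd.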
Note: Before starting the provisioner, mount all local devices and create a ConfigMap with storage classes and their directories.
Install the local provisioner from the local-storage-provisioner-template.yaml file.
- Create a service account that can run pods as root and use HostPath volumes:

    $ oc create serviceaccount local-storage-admin
    $ oc adm policy add-scc-to-user hostmount-anyuid -z local-storage-admin

  Root privileges are required for the provisioner pod to be able to delete content on local volumes. HostPath access is required to reach the /mnt/local-storage path on the host.
- Install the template:

    $ oc create -f https://raw.githubusercontent.com/jsafrane/origin/local-storage/examples/storage-examples/local-examples/local-storage-provisioner-template.yaml
- Instantiate the template, specifying values for the CONFIGMAP and SERVICE_ACCOUNT parameters:

    $ oc new-app -p CONFIGMAP=local-volume-config \
        -p SERVICE_ACCOUNT=local-storage-admin \
        -p NAMESPACE=local-storage \
        local-storage-provisioner
See the template for other configurable options. This template creates a DaemonSet that runs a pod on every node. The pod watches the directories specified in the ConfigMap and creates PVs for them automatically.

The provisioner runs as root so that it can clean up a directory when its PV is released and all data must be removed.
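A pod can then consume the provisioned local storage like any other persistent volume, by referencing a bound claim. The claim name example-local-claim and the container image below are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-local-pod
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest  # placeholder image
    volumeMounts:
    - name: local-data
      mountPath: /data
  volumes:
  - name: local-data
    persistentVolumeClaim:
      claimName: example-local-claim
```

Because of the node-affinity annotation on the underlying PV, this pod is scheduled onto the node where the local volume is mounted.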
Adding a new device requires several manual steps:

- Stop the DaemonSet with the provisioner.
- Create a subdirectory in the right directory on the node with the new device and mount it there.
- Start the DaemonSet with the provisioner.

Important: Omitting any of these steps may result in an incorrect PV being created.