- DirectPV installation fails in my Kubernetes. Why?
- After upgrading DirectPV to v4.x.x, I do not find `direct-csi-min-io` storage class. Why?
- In the YAML output of `discover` command, I do not find my storage drive(s). Why?
- Do you support SAN, NAS, iSCSI, network drives, etc.?
- Do you support LVM, Linux RAID, Hardware RAID, Software RAID, etc.?
- Is LUKS device supported?
- I am already using Local Persistent Volumes (Local PV) for storage. Why do I need DirectPV?
- I see `no drive found ...` error message in my Persistent Volume Claim. Why?
- I see Persistent Volume Claim is created, but respective DirectPV volume is not created. Why?
- I see volume consuming Pod still in `Pending` state. Why?
- I see `volume XXXXX is not yet staged, but requested with YYYYY` error. Why?
- I see ``unable to find device by FSUUID xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx; either device is removed or run command `sudo udevadm control --reload-rules && sudo udevadm trigger` on the host to reload`` error. Why?
You need to have the necessary privileges and permissions to perform the installation. Go through the specifications documentation. For Red Hat OpenShift, refer to the OpenShift-specific documentation.
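As a sketch, a common installation path is through the krew plugin manager (an assumption about your setup; your cluster may require additional install flags, such as node selectors or tolerations, as described in the specifications documentation):

```sh
# Install the DirectPV kubectl plugin, then install DirectPV into the cluster.
kubectl krew install directpv
kubectl directpv install
```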
Legacy DirectCSI is deprecated, including the `direct-csi-min-io` storage class, and is no longer supported. Previously created volumes continue to work normally. For new volume requests, use the `directpv-min-io` storage class.
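For new claims, a minimal sketch of a Persistent Volume Claim using the `directpv-min-io` storage class might look like the following (the claim name and requested size are placeholders, not values from this documentation):

```sh
# Create a minimal PVC bound to the directpv-min-io storage class.
# Name and size below are placeholders; adjust them for your workload.
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc
spec:
  volumeMode: Filesystem
  storageClassName: directpv-min-io
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 8Gi
EOF
```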
DirectPV installation fails with an error message like `Error; unable to get legacy drives; conversion webhook for direct.csi.min.io/v1beta3, ...` or similar. Why?
Installing DirectPV also attempts to upgrade legacy DirectCSI. Upgrading DirectCSI older than v3.1.0 requires the conversion webhook service to be running. For the appropriate upgrade process, refer to this documentation.
DirectPV ignores drives that meet any of the below conditions:
- The size of the drive is less than 512MiB.
- The drive is hidden.
- The drive is read-only.
- The drive is partitioned.
- The drive is held by other devices.
- The drive is mounted or in use by DirectPV already.
- The drive is an in-use swap partition.
- The drive is a CDROM.
Check the last column of the `discover --all` command output to see which condition(s) excluded the drive. Resolve the conditions and try again.
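For example, assuming the DirectPV kubectl plugin is installed, a run like the following lists every detected device, including excluded ones, with the exclusion reason in the last column:

```sh
# List all detected devices; excluded devices show the reason in the last column.
kubectl directpv discover --all
```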
DirectPV is meant for high-performance local volumes with Direct Attached Storage. We do not recommend remote drives, as they may lead to poor performance.
It works, but we strongly recommend using raw devices for better performance.
Yes
Local Persistent Volumes are statically created `PersistentVolume` objects, which require administrative effort, whereas DirectPV dynamically provisions volumes on demand; they are persistent through pod/node restarts. The lifecycle of DirectPV volumes is managed by the associated Persistent Volume Claims (PVCs), which simplifies volume management.
Below are the reasons and solutions:

| Reason | Solution |
|---|---|
| Volume claim is made without adding any drives into DirectPV. | Please add drives. |
| No drive has free space for the requested size. | Please add new drives, or remove stale volumes. |
| Requested topology is not met. | Please modify your Persistent Volume Claim. |
| Requested drive is not found on the requested node. | Please modify your Persistent Volume Claim. |
| Requested node is not a DirectPV node. | Please modify your Persistent Volume Claim. |
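To check which drives DirectPV currently manages and how much free capacity each has, a command along these lines can help (assuming the DirectPV kubectl plugin; the exact output columns vary by version):

```sh
# Show drives managed by DirectPV and their capacity per node.
kubectl directpv list drives
```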
DirectPV comes with the `WaitForFirstConsumer` volume binding mode, i.e. the Pod consuming the volume must be scheduled first.
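As an illustration (the names, image, and mount path are placeholders), the volume for a claim such as the `example-pvc` sketched earlier is provisioned only once a consuming Pod like this is scheduled:

```sh
# Schedule a Pod that consumes the claim; with WaitForFirstConsumer,
# the DirectPV volume is provisioned only after this Pod is scheduled.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
    - name: app
      image: busybox           # placeholder image
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: data
          mountPath: /data     # placeholder mount path
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: example-pvc # the claim created earlier
EOF
```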
- If you haven't created the respective Persistent Volume Claim, create it.
- You may be facing a Kubernetes scheduling problem. Please refer to the Kubernetes documentation on scheduling.
According to the CSI specification, the kubelet should call the `StageVolume` RPC first, then the `PublishVolume` RPC. In a rare event, the `StageVolume` RPC is not called, but the `PublishVolume` RPC is. Please restart your kubelet and report this issue to your Kubernetes provider.
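On a systemd-managed node (an assumption; the service name and init system may differ in your distribution), restarting the kubelet typically looks like:

```sh
# Restart the kubelet on the affected node (systemd assumed).
sudo systemctl restart kubelet
```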
I see ``unable to find device by FSUUID xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx; either device is removed or run command `sudo udevadm control --reload-rules && sudo udevadm trigger` on the host to reload`` error. Why?
In a rare event, udev in your system missed updating the `/dev` directory. Please run the command `sudo udevadm control --reload-rules && sudo udevadm trigger` on the host and report this issue to your OS vendor.
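For convenience, the reload command to run on the affected host:

```sh
# Reload udev rules and re-trigger device events so /dev is repopulated.
sudo udevadm control --reload-rules && sudo udevadm trigger
```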