Currently, when a node enters the NotReady state and the kube-scheduler tries to reschedule its pods onto other nodes, the carina scheduler will bind them back to the NotReady node. This is fine if only the pod failed: the newly created pod will reuse the local volume. But if the node has really failed, we should reschedule the pod to another node to give it another chance, even though the new pod will start with an empty volume.
This will fix #14.
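
Below is a minimal sketch of the intended behavior (not carina's actual scheduler code): the filter only pins a pod to the node holding its volume while that node is healthy, and lets the pod move elsewhere once that node is NotReady.

```go
package scheduler

import corev1 "k8s.io/api/core/v1"

// nodeReady reports whether the node's Ready condition is True.
func nodeReady(node *corev1.Node) bool {
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

// filterNode decides whether a candidate node is acceptable for a pod whose
// existing local volume lives on volumeNode (nil when the pod has no volume yet).
func filterNode(candidate, volumeNode *corev1.Node) bool {
	if volumeNode == nil {
		return true // no existing volume, any node works
	}
	if nodeReady(volumeNode) {
		// The volume's node is healthy: keep the pod pinned to its data.
		return candidate.Name == volumeNode.Name
	}
	// The volume's node is NotReady: allow any node, accepting that the
	// rescheduled pod will start with an empty volume.
	return true
}
```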
Use a Helm chart for easy installation, uninstallation, and upgrade.
Currently, carina groups disks by their type. However, some workloads may prefer dedicated disks, separated from the disks used by other workloads. For now, the capacity and allocatable resources will remain the same.
This will fix #10.
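
A rough sketch of what such grouping could look like, assuming a hypothetical `GroupPolicy` configuration that matches disks either by type (today's behavior) or by an explicit device list reserved for a workload:

```go
package devicegroup

// Disk is a local block device discovered on the node.
type Disk struct {
	Name string // e.g. /dev/sdb
	Type string // "ssd" or "hdd"
}

// GroupPolicy describes one device group: either every disk of a given type
// (current behavior) or an explicit list of devices set aside for a workload.
type GroupPolicy struct {
	Name    string   // group name, e.g. "carina-vg-hdd" or "group-for-mysql"
	ByType  string   // match all disks of this type, or
	Devices []string // match only these specific devices
}

// groupDisks assigns each disk to the first policy that matches it.
func groupDisks(disks []Disk, policies []GroupPolicy) map[string][]Disk {
	groups := make(map[string][]Disk)
	for _, d := range disks {
		for _, p := range policies {
			if matches(p, d) {
				groups[p.Name] = append(groups[p.Name], d)
				break
			}
		}
	}
	return groups
}

func matches(p GroupPolicy, d Disk) bool {
	if p.ByType != "" && p.ByType == d.Type {
		return true
	}
	for _, dev := range p.Devices {
		if dev == d.Name {
			return true
		}
	}
	return false
}
```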
Provide raw disks or partitions to workloads without LVM management. For example, a user may request an entire raw disk exclusively, or just a partition of a disk.
Use Velero to back up carina PVs to S3.
Use RAID to manage disks on bare metal. Users can configure the RAID level according to their needs. When a disk fails, carina can detect the failed disk and try to rebuild the RAID once a new disk is plugged in.
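
A rough sketch of the rebuild idea using `mdadm`; the device names and the degraded-state check are illustrative, not carina's implementation:

```go
package main

import (
	"log"
	"os/exec"
	"strings"
)

// arrayDegraded checks `mdadm --detail` output for a degraded array.
func arrayDegraded(md string) bool {
	out, err := exec.Command("mdadm", "--detail", md).CombinedOutput()
	if err != nil {
		log.Printf("mdadm --detail %s failed: %v", md, err)
		return false
	}
	return strings.Contains(string(out), "degraded")
}

// rebuild adds a freshly plugged-in replacement disk to the degraded array;
// mdadm then resyncs the array in the background.
func rebuild(md, newDisk string) error {
	return exec.Command("mdadm", "--manage", md, "--add", newDisk).Run()
}

func main() {
	const md = "/dev/md0"
	if arrayDegraded(md) {
		// "/dev/sdd" is a hypothetical replacement disk for illustration.
		if err := rebuild(md, "/dev/sdd"); err != nil {
			log.Fatal(err)
		}
	}
}
```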
Support NVMe disks.
Carina should collect SMART information from HDD and SSD devices, and issue a warning if bad sectors are found or an SSD is wearing out.
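
One possible approach, sketched below, is to shell out to `smartctl` (assuming smartmontools 7+ for JSON output) and read the overall health verdict; the error handling and thresholds here are illustrative only:

```go
package smart

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// smartReport captures only the overall health verdict from smartctl's JSON output.
type smartReport struct {
	SmartStatus struct {
		Passed bool `json:"passed"`
	} `json:"smart_status"`
}

// checkDevice returns an error when the device reports a failing SMART status.
func checkDevice(dev string) error {
	// smartctl sets non-zero exit bits for failing disks, so ignore the exit
	// code here and rely on the JSON report instead.
	out, _ := exec.Command("smartctl", "-j", "-H", dev).Output()
	var r smartReport
	if err := json.Unmarshal(out, &r); err != nil {
		return fmt.Errorf("parse smartctl output for %s: %w", dev, err)
	}
	if !r.SmartStatus.Passed {
		return fmt.Errorf("device %s is failing its SMART health check", dev)
	}
	return nil
}
```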
Report comprehensive metrics for raw disks and PVs, such as IOPS, bandwidth, iotop-style per-process IO, and so on.
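
A minimal sketch of how such metrics could be exposed with the Prometheus Go client; the metric names and labels are placeholders, not carina's actual metric set:

```go
package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

var (
	deviceIOPS = prometheus.NewGaugeVec(prometheus.GaugeOpts{
		Name: "carina_device_iops",
		Help: "Current IOPS per raw disk or PV.",
	}, []string{"device", "kind"}) // kind: "disk" or "pv"

	deviceBandwidth = prometheus.NewGaugeVec(prometheus.GaugeOpts{
		Name: "carina_device_bandwidth_bytes_per_second",
		Help: "Current read+write bandwidth per raw disk or PV.",
	}, []string{"device", "kind"})
)

func main() {
	prometheus.MustRegister(deviceIOPS, deviceBandwidth)

	// In practice these values would be sampled from /proc/diskstats or cgroup IO stats.
	deviceIOPS.WithLabelValues("/dev/sdb", "disk").Set(1200)
	deviceBandwidth.WithLabelValues("/dev/sdb", "disk").Set(350 * 1024 * 1024)

	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```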
Users can use an annotation to enable PVC auto-resizing, so that when a PV is 80% full, carina will automatically expand it without user intervention.
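
A sketch of the expansion rule, assuming a hypothetical 80% threshold and 1.5x growth factor:

```go
package resizer

import "k8s.io/apimachinery/pkg/api/resource"

const (
	expandThreshold = 0.8 // expand when the volume is 80% full
	expandFactor    = 1.5 // grow the request by 50%
)

// newRequest returns the new storage request, or nil when no expansion is needed.
func newRequest(usedBytes, capacityBytes int64, current resource.Quantity) *resource.Quantity {
	if capacityBytes == 0 || float64(usedBytes)/float64(capacityBytes) < expandThreshold {
		return nil
	}
	grown := int64(float64(current.Value()) * expandFactor)
	return resource.NewQuantity(grown, resource.BinarySI)
}
```

A controller loop would then update the PVC's storage request with the returned quantity and rely on the normal CSI volume-expansion path to grow the underlying volume.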
Currently, carina schedules based on each node's capacity and allocatable resources. However, a node's load may be very heavy even though it still has plenty of free disk space. Carina should be load-aware.
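
A sketch of what a load-aware score might look like, combining free capacity with recent IO utilization; the weights are made up for illustration:

```go
package scoring

// score ranks a node between 0 and 100. freeBytes and totalBytes describe the
// node's disk group; ioUtil is the group's recent IO utilization in [0, 1].
func score(freeBytes, totalBytes int64, ioUtil float64) float64 {
	if totalBytes == 0 {
		return 0
	}
	capacityScore := float64(freeBytes) / float64(totalBytes) // more free space is better
	loadScore := 1 - ioUtil                                   // less IO load is better
	return (0.6*capacityScore + 0.4*loadScore) * 100
}
```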
Carina should support cgroup v2 for disk throttling, to provide a better experience for buffered IO.
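
For reference, cgroup v2 exposes a single `io.max` file per cgroup, and its writeback integration means buffered writes can be throttled too, unlike the blkio controller in cgroup v1. A small sketch with an example cgroup path and device numbers:

```go
package main

import (
	"fmt"
	"os"
)

// setIOLimit throttles a block device (identified by major:minor) for one cgroup
// by writing a limit line into the cgroup's io.max file.
func setIOLimit(cgroupPath string, major, minor int, wbps, wiops uint64) error {
	line := fmt.Sprintf("%d:%d wbps=%d wiops=%d", major, minor, wbps, wiops)
	return os.WriteFile(cgroupPath+"/io.max", []byte(line), 0644)
}

func main() {
	// Example: limit /dev/sdb (8:16) to 100 MiB/s and 1000 write IOPS for a
	// hypothetical pod cgroup path.
	err := setIOLimit("/sys/fs/cgroup/kubepods.slice/kubepods-podexample.slice", 8, 16, 100<<20, 1000)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```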
Ensure that reads return exactly what was written.
Some workloads may prefer safety over performance.
When a node's load becomes very heavy, carina can evict workloads with lower priority. Workload priority here is the same priority that Kubernetes defines.
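
A sketch of victim selection by pod priority (the `PriorityClass`-assigned priority Kubernetes uses); the selection logic is illustrative, and the actual eviction would go through the Eviction API:

```go
package eviction

import corev1 "k8s.io/api/core/v1"

// podPriority returns the scheduler-assigned priority, defaulting to 0.
func podPriority(p *corev1.Pod) int32 {
	if p.Spec.Priority != nil {
		return *p.Spec.Priority
	}
	return 0
}

// pickVictim returns the lowest-priority pod on an overloaded node, or nil
// when there is nothing to evict.
func pickVictim(pods []*corev1.Pod) *corev1.Pod {
	var victim *corev1.Pod
	for _, p := range pods {
		if victim == nil || podPriority(p) < podPriority(victim) {
			victim = p
		}
	}
	return victim
}
```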