DirectPV discovers, formats, mounts, schedules, and monitors drives across servers. Because Kubernetes hostPath and local PVs are statically provisioned and limited in functionality, DirectPV was created to address these limitations.
Distributed data stores such as object storage, databases, and message queues are designed for direct attached storage, and they handle high availability and data durability by themselves. Running them on traditional SAN or NAS based CSI drivers (Network PV) adds yet another layer of replication/erasure coding and extra network hops in the data path. This additional layer of disaggregation results in increased complexity and poor performance.
DirectPV is designed to be lightweight and scalable to tens of thousands of drives. It is made up of three components: Controller, Node Driver, and UI.
When a volume claim is made, the controller provisions volumes uniformly from a pool of free drives. DirectPV is aware of the pod's affinity constraints and allocates volumes from drives local to the pod. Note that only one active instance of the controller runs per cluster.
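For illustration, a claim against DirectPV might look like the following PersistentVolumeClaim sketch. The storage class name `directpv-min-io` appears later in this document; the claim name and requested size are illustrative assumptions, not taken from DirectPV's documentation:

```yaml
# Hedged sketch of a PVC provisioned by DirectPV.
# Only the storage class name comes from this document;
# the metadata name and storage size are illustrative.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc          # illustrative name
spec:
  storageClassName: directpv-min-io
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 8Mi           # illustrative size
```

Applying such a claim (for example with `kubectl apply -f`) would cause the controller to carve a volume out of a free initialized drive on the node where the consuming pod is scheduled.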
The Node Driver implements volume management functions such as discovery, formatting, mounting, and monitoring of drives on the nodes. One instance of the node driver runs on each storage server.
Storage administrators can use the kubectl CLI plugin to select, manage, and monitor drives. A web-based UI is currently under development.
Install DirectPV Krew plugin.
kubectl krew install directpv
Install DirectPV in your Kubernetes cluster.
kubectl directpv install
Get information about the installation.
kubectl directpv info
Discover and add drives for volume scheduling.
# Discover drives to check the available devices in the cluster to initialize
# The following command creates an init config file (default: drives.yaml),
# which is used for initialization
kubectl directpv discover

# Review drives.yaml for drive selections, then initialize those drives
kubectl directpv init drives.yaml
(NOTE: XFS is the filesystem used for formatting the drives here)
Get the list of added drives.
kubectl directpv list drives
Deploy a demo MinIO server.
DirectPV enforces node constraints: it allocates storage from the worker node where a pod deploys. If the pod deploys to a worker node with no DirectPV-managed drives, or with insufficient capacity on them, DirectPV cannot allocate storage to that pod. DirectPV does not allocate storage from one node to a pod on another node.
Modify the YAML to reflect the node and storage distribution of your Kubernetes cluster.
# This should create MinIO pods and PVCs using the `directpv-min-io` storage class
kubectl apply -f functests/minio.yaml
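For orientation, a stateful workload in such a deployment typically requests node-local DirectPV volumes through `volumeClaimTemplates`. The following is a hedged, minimal sketch of that pattern, not the actual contents of `functests/minio.yaml`; the names, replica count, image reference, and sizes are illustrative assumptions:

```yaml
# Hedged sketch of a StatefulSet requesting DirectPV volumes;
# NOT the actual functests/minio.yaml. Names and sizes are illustrative.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: minio-demo                 # illustrative name
spec:
  serviceName: minio-demo
  replicas: 4                      # illustrative; match your node/drive layout
  selector:
    matchLabels:
      app: minio-demo
  template:
    metadata:
      labels:
        app: minio-demo
    spec:
      containers:
      - name: minio
        image: quay.io/minio/minio # illustrative image reference
        args: ["server", "/data"]
        volumeMounts:
        - name: data
          mountPath: /data
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      storageClassName: directpv-min-io
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 16Mi            # illustrative size
```

Because each replica's claim is bound on the node where that pod lands, the replica count and per-claim size should reflect the drives initialized on each worker node, as the node-constraint note above explains.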
For air-gapped setups and advanced installations, please refer to the installation guide.
First, uninstall the older version of DirectPV. Once it is uninstalled, follow the installation instructions to install the latest DirectPV. During this process, all existing drives and volumes are migrated automatically.
Refer to the following steps for upgrading DirectPV using Krew.
# Uninstall the existing DirectPV installation
kubectl directpv uninstall

# Upgrade the directpv plugin via Krew
kubectl krew upgrade directpv

# Install the latest DirectPV
kubectl directpv install
For migrating from versions older than v3.2.0, please refer to the upgrade guide.
Please review the security checklist before deploying to production.
Important: Report security issues to firstname.lastname@example.org. Please do not report security issues here.
DirectPV is a MinIO project. You can contact the authors over the Slack channel.
DirectPV is released under the GNU AGPLv3 license. Please refer to the LICENSE document for a complete copy of the license.