Expand a MinIO Tenant
This procedure documents expanding the available storage capacity of an existing MinIO tenant by deploying an additional pool of MinIO pods in the Kubernetes infrastructure.
Important
The MinIO Operator Console is deprecated and removed in Operator 6.0.0.
See Modify a MinIO Tenant for instructions on migrating Tenants installed via the Operator Console to Kustomization.
Prerequisites
MinIO Kubernetes Operator
The procedure on this page requires a valid installation of the MinIO Kubernetes Operator on the Kubernetes cluster. This procedure assumes the latest stable Operator, version 6.0.4.
See Deploy the MinIO Operator for complete documentation on deploying the MinIO Operator.
Available Worker Nodes
MinIO deploys additional minio server pods as part of the new Tenant pool.
The Kubernetes cluster must have sufficient available worker nodes on which to schedule the new pods.
The MinIO Operator provides configurations for controlling pod affinity and anti-affinity to direct scheduling to specific workers.
Persistent Volumes
Exclusive access to drives
MinIO requires exclusive access to the drives or volumes provided for object storage. No other processes, software, scripts, or persons should perform any actions directly on the drives or volumes provided to MinIO or the objects or files MinIO places on them.
Unless directed by MinIO Engineering, do not use scripts or tools to directly modify, delete, or move any of the data shards, parity shards, or metadata files on the provided drives, including from one drive or node to another. Such operations are very likely to result in widespread corruption and data loss beyond MinIO’s ability to heal.
MinIO can use any Kubernetes Persistent Volume (PV) that supports the ReadWriteOnce access mode.
MinIO’s consistency guarantees require the exclusive storage access that ReadWriteOnce provides.
For Kubernetes clusters where nodes have Direct Attached Storage, MinIO strongly recommends using the DirectPV CSI driver. DirectPV provides a distributed persistent volume manager that can discover, format, mount, schedule, and monitor drives across Kubernetes nodes. DirectPV addresses the limitations of manually provisioning and monitoring local persistent volumes.
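As a sketch of the DirectPV workflow, the driver installs as a kubectl plugin through the krew plugin manager; exact flags may vary between DirectPV releases, so consult the DirectPV documentation before running these against a production cluster:

```shell
# Install the DirectPV plugin (assumes krew is already configured)
kubectl krew install directpv

# Install the DirectPV driver into the cluster
kubectl directpv install

# Discover candidate drives and write them to a drives.yaml init file
kubectl directpv discover

# Initialize the discovered drives for use by DirectPV
# WARNING: this erases all data on the listed drives
kubectl directpv init drives.yaml --dangerous
```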
Procedure
The MinIO Operator supports expanding a MinIO Tenant by adding additional pools.
Review the Kustomization object which describes the Tenant object (tenant.yaml). The spec.pools array describes the current pool topology.
Add a new entry to the spec.pools array. The new pool must reflect your intended combination of Worker nodes, volumes per server, storage class, and affinity/scheduler settings. See MinIO Custom Resource Definition for more complete documentation on Pool-related configuration settings.
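For illustration only, a new pool entry might resemble the following fragment; the pool names, server counts, volume sizes, and storage class here are assumptions to adapt to your own topology:

```yaml
spec:
  pools:
    # Existing pool - leave unchanged
    - name: pool-0
      servers: 4
      volumesPerServer: 4
      volumeClaimTemplate:
        metadata:
          name: data
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 1Ti
          storageClassName: directpv-min-io
    # New pool added for the expansion
    - name: pool-1
      servers: 4
      volumesPerServer: 4
      volumeClaimTemplate:
        metadata:
          name: data
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 1Ti
          storageClassName: directpv-min-io
```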
Apply the updated Tenant configuration.
Use the kubectl apply command to update the Tenant:

kubectl apply -k ~/kustomization/TENANT-NAME

Modify the path to the Kustomization directory to match your local configuration.
Review the Helm values.yaml file. The tenant.pools array describes the current pool topology.
Add a new entry to the tenant.pools array. The new pool must reflect your intended combination of Worker nodes, volumes per server, storage class, and affinity/scheduler settings. See Tenant Helm Charts for more complete documentation on Pool-related configuration settings.
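For illustration only, a new tenant.pools entry in values.yaml might resemble the following; the pool names, server counts, sizes, and storage class are assumptions to adapt to your own topology:

```yaml
tenant:
  pools:
    # Existing pool - leave unchanged
    - name: pool-0
      servers: 4
      volumesPerServer: 4
      size: 1Ti
      storageClassName: directpv-min-io
    # New pool added for the expansion
    - name: pool-1
      servers: 4
      volumesPerServer: 4
      size: 1Ti
      storageClassName: directpv-min-io
```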
Apply the updated Tenant configuration.
Use the helm upgrade command to update the Tenant:

helm upgrade TENANT-NAME minio-operator/tenant -f values.yaml -n TENANT-NAMESPACE

The command above assumes use of the MinIO Operator Chart repository. If you installed the Chart manually or by using a different repository name, specify that chart or repository name in the command.
Replace TENANT-NAME and TENANT-NAMESPACE with the name and namespace of the Tenant, respectively. You can use helm list -n TENANT-NAMESPACE to validate the Tenant name.
You can use kubectl get events -n TENANT-NAMESPACE --watch to monitor the progress of the expansion.
The MinIO Operator updates services to route connections appropriately across the new nodes.
If you use customized services, routes, ingress, or similar Kubernetes network components, you may need to update those components for the new pod hostname ranges.
Decommission a Tenant Server Pool
Decommissioning a server pool involves three steps:
Run the mc admin decommission start command against the Tenant.
Wait until decommissioning completes.
Modify the Tenant YAML to remove the decommissioned pool.

When removing the Tenant pool, ensure the spec.pools.[n].name fields have values for all remaining pools.
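The first two steps can be sketched with the following commands, run against a live Tenant; the alias name myminio is an assumption, and the pool argument must match a pool exactly as listed in the Tenant topology:

```shell
# Start decommissioning the target pool
# (alias "myminio" and the pool specification are placeholders)
mc admin decommission start myminio/ https://minio-{5...8}.example.net/mnt/drive-{1...4}

# Poll decommissioning progress; wait for the pool to report Complete
# before removing it from the Tenant YAML
mc admin decommission status myminio/
```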
Maintain pool order when decommissioning and then adding
If you decommission one pool in a multiple pool deployment, you cannot use the same node sequence for a new pool. For example, consider a deployment with the following pools:
https://minio-{1...4}.example.net/mnt/drive-{1...4}
https://minio-{5...8}.example.net/mnt/drive-{1...4}
https://minio-{9...12}.example.net/mnt/drive-{1...4}
If you decommission the minio-{5...8} pool, you cannot add a new pool with the same node numbering. You must add the new pool after minio-{9...12}:
https://minio-{1...4}.example.net/mnt/drive-{1...4}
https://minio-{9...12}.example.net/mnt/drive-{1...4}
https://minio-{13...16}.example.net/mnt/drive-{1...4}
Important
You cannot reuse the same pool name or hostname sequence for a decommissioned pool.