Documentation

Hardware Checklist

Use the following checklist when planning the hardware configuration for a production, distributed MinIO deployment.

Considerations

When selecting hardware for your MinIO implementation, take into account the following factors:

  • Expected amount of data in tebibytes to store at launch

  • Expected growth in size of data for at least the next two years

  • Number of objects by average object size

  • Average retention time of data in years

  • Number of sites to be deployed

  • Number of expected buckets

Production Hardware Recommendations

The following checklist follows MinIO’s Recommended Configuration for production deployments. The provided guidance is intended as a baseline and cannot replace MinIO SUBNET Performance Diagnostics, Architecture Reviews, and direct-to-engineering support.

MinIO does not provide hosted services or hardware sales.

See our Reference Hardware page for a curated selection of servers and storage components from our hardware partners.

Sufficient CPU cores to achieve performance goals for hashing (for example, for healing) and encryption

MinIO recommends Single Socket Intel® Xeon® Scalable Gold CPUs (minimum 16 cores per socket).

Sufficient RAM to achieve performance goals based on the number of drives and anticipated concurrent requests (see the formula and reference table).

MinIO recommends a minimum of 128GB of memory per node for best performance.

MinIO recommends a minimum of 4 host servers per distributed deployment.

MinIO strongly recommends hardware dedicated to servicing the MinIO Tenant. Colocating multiple high-performance services on the same servers can result in resource contention and reduced overall performance.

MinIO recommends a minimum of 4 locally attached drives per MinIO Server. For better performance and storage efficiency, use 8 or more drives per server.

Use the same type of drive (NVMe, SSD, or HDD) with the same capacity across all nodes in the deployment.

25GbE Network as a baseline
100GbE Network for high performance

Important

The following areas have the greatest impact on MinIO performance, listed in order of importance:

Network Infrastructure

Insufficient or limited throughput constrains performance

Storage Controller

Old firmware, limited throughput, or failing hardware constrains performance and affects reliability

Storage (Drive)

Old firmware, or slow/aging/failing hardware constrains performance and affects reliability

Prioritize securing the necessary components for each of these areas before focusing on other hardware resources, such as compute-related constraints.

The minimum recommendations above reflect MinIO’s experience with assisting enterprise customers in deploying on a variety of IT infrastructures while maintaining the desired SLA/SLO. While MinIO may run on less than the minimum recommended topology, any potential cost savings come at the risk of decreased reliability, performance, or overall functionality.

Networking

MinIO recommends high speed networking to support the maximum possible throughput of the attached storage (aggregated drives, storage controllers, and PCIe buses). The following table provides a general guideline for the maximum storage throughput supported by a given physical or virtual network interface. This table assumes all network infrastructure components, such as routers, switches, and physical cabling, also support the NIC bandwidth.

NIC Bandwidth   Estimated Aggregated Storage Throughput
10GbE           1.25GBps
25GbE           3.125GBps
50GbE           6.25GBps
100GbE          12.5GBps

Networking has the greatest impact on MinIO performance, where low per-host bandwidth artificially constrains the potential performance of the storage. The following examples of network throughput constraints assume spinning disks with ~100 MB/s sustained I/O:

  • A 1GbE network link can support up to 125MB/s, or one spinning disk

  • A 10GbE network can support approximately 1.25GB/s, potentially supporting 10-12 spinning disks

  • A 25GbE network can support approximately 3.125GB/s, potentially supporting ~30 spinning disks
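The mapping from NIC bandwidth to aggregate storage throughput above is a direct unit conversion (divide Gbps by 8 to get GBps). A minimal sketch of the estimates, assuming the ~100 MB/s sustained throughput per spinning disk used in the examples (function names are illustrative, not part of any MinIO tool):

```python
def max_storage_throughput_gbps(nic_gbps: float) -> float:
    """Convert NIC bandwidth (Gbps) to maximum storage throughput (GB/s)."""
    return nic_gbps / 8

def spinning_disks_supported(nic_gbps: float, disk_mbps: float = 100.0) -> int:
    """Estimate how many spinning disks the NIC can keep saturated,
    assuming ~100 MB/s sustained I/O per disk."""
    return int(max_storage_throughput_gbps(nic_gbps) * 1000 / disk_mbps)

for nic in (1, 10, 25, 50, 100):
    print(f"{nic}GbE -> {max_storage_throughput_gbps(nic):.3f} GB/s, "
          f"~{spinning_disks_supported(nic)} spinning disks")
```

The 10GbE case works out to 1.25 GB/s and 12 disks, matching the 10-12 disk guideline above; real deployments should budget below the theoretical line rate for protocol overhead.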

Memory

Memory primarily constrains the number of concurrent connections per node.

You can calculate the maximum number of concurrent requests per node with this formula:

\(totalRam / ramPerRequest\)

To calculate the amount of RAM used for each request, use this formula:

\(((2MiB + 128KiB) * driveCount) + (2 * 10MiB) + (2 * 1MiB)\)

10MiB is the default erasure block size for v1; 1MiB is the default erasure block size for v2.

The following table lists the maximum concurrent requests on a node based on the number of host drives and the free system RAM:

Number of Drives   32 GiB of RAM   64 GiB of RAM   128 GiB of RAM   256 GiB of RAM   512 GiB of RAM
4 Drives           1,074           2,149           4,297            8,595            17,190
8 Drives           840             1,680           3,361            6,722            13,443
16 Drives          585             1,170           2,341            4,681            9,362
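The two formulas above reproduce the table directly. A minimal sketch in Python (the function names are illustrative, and results are rounded to the nearest whole request):

```python
MiB = 1024 * 1024
KiB = 1024

def ram_per_request(drive_count: int) -> int:
    """RAM per request in bytes:
    ((2MiB + 128KiB) * driveCount) + (2 * 10MiB) + (2 * 1MiB)"""
    return (2 * MiB + 128 * KiB) * drive_count + 2 * 10 * MiB + 2 * 1 * MiB

def max_concurrent_requests(total_ram_gib: int, drive_count: int) -> int:
    """totalRam / ramPerRequest, rounded to the nearest request."""
    total_ram_bytes = total_ram_gib * 1024 * MiB
    return round(total_ram_bytes / ram_per_request(drive_count))

print(max_concurrent_requests(32, 4))    # → 1074
print(max_concurrent_requests(128, 16))  # → 2341
```

For example, a 4-drive node uses (2MiB + 128KiB) * 4 + 20MiB + 2MiB ≈ 30.5MiB per request, so 32GiB of free RAM supports roughly 1,074 concurrent requests, as in the first table row.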

The following table provides general guidelines for allocating memory for use by MinIO based on the total amount of local storage on the node:

Total Host Storage             Recommended Host Memory
Up to 1 Tebibyte (TiB)         8GiB
Up to 10 Tebibytes (TiB)       16GiB
Up to 100 Tebibytes (TiB)      32GiB
Up to 1 Pebibyte (PiB)         64GiB
More than 1 Pebibyte (PiB)     128GiB
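The sizing table above can be expressed as a simple lookup. A minimal sketch, assuming total storage is given in TiB (1 PiB = 1024 TiB); the function name is illustrative:

```python
def recommended_memory_gib(total_storage_tib: float) -> int:
    """Recommended host memory (GiB) for a given total local storage,
    per the sizing table above."""
    if total_storage_tib <= 1:
        return 8
    if total_storage_tib <= 10:
        return 16
    if total_storage_tib <= 100:
        return 32
    if total_storage_tib <= 1024:  # up to 1 PiB
        return 64
    return 128  # more than 1 PiB

print(recommended_memory_gib(50))  # → 32
```

Note that these are baseline guidelines for memory reserved for MinIO; the 128GB-per-node recommendation earlier in this checklist still applies for best performance.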

Storage

MinIO recommends selecting the type of drive based on your performance objectives. The following table highlights the general use case for each drive type based on cost and performance:

Type   Cost       Performance   Tier
NVMe   High       High          Hot
SSD    Balanced   Balanced      Hot/Warm
HDD    Low        Low           Cold/Archival

Use the same type of drive (NVMe, SSD, or HDD) with the same capacity across all nodes in a MinIO deployment. MinIO does not distinguish drive types when using the underlying storage and does not benefit from mixed storage types.

Use the same capacity of drive across all nodes in the MinIO server pool. MinIO limits the maximum usable size per drive to the smallest size in the deployment. For example, if a deployment has fifteen 10TB drives and one 1TB drive, MinIO limits the per-drive capacity to 1TB.
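The capacity-capping behavior described above can be sketched as follows; this is an illustration of the rule, not MinIO's implementation, and the result is raw capacity before erasure-coding overhead:

```python
def usable_raw_capacity_tb(drive_sizes_tb: list[float]) -> float:
    """Every drive is capped at the smallest drive size in the deployment,
    so usable raw capacity is min(drive sizes) * drive count."""
    return min(drive_sizes_tb) * len(drive_sizes_tb)

# Fifteen 10TB drives plus one 1TB drive: each drive is treated as 1TB,
# so the 16 drives yield only 16TB raw instead of 151TB.
print(usable_raw_capacity_tb([10] * 15 + [1]))  # → 16
```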