A hybrid cloud computing architecture optimizes for consistent performance, security
and economics. Any discussion of the hybrid cloud needs to start with a definition: a hybrid cloud is more than just the public cloud plus an on-prem data center.
The public cloud is an increasingly large field, but it starts with AWS, Azure, GCP, IBM, Alibaba, Tencent and the government clouds. Your hybrid cloud storage software needs to run everywhere your application stack runs. Even companies that claim to run on a single cloud rarely do; there are always other clouds.
Kubernetes is the primary software architecture for the modern private cloud. This includes all Kubernetes distributions, such as VMware (Tanzu), Red Hat (OpenShift), Rancher/SUSE, HPE (Ezmeral) and Cisco (IKE). To fully leverage Kubernetes, a hybrid cloud storage solution must be object storage, software-defined and cloud-native. The private cloud also includes more traditional bare-metal instances, but enterprise workloads are increasingly containerized and orchestrated.
The edge is about moving compute to where the data is produced. Once processed, data is then moved to more centralized locations in the hybrid cloud. Edge storage solutions must be lightweight, powerful, cloud-native and resilient to run in this architecture. Doing this well is hard, which is why so few vendors discuss it: they don't have a good answer, not even Amazon.
Hybrid cloud storage follows the model established in the public cloud, and public cloud providers have unanimously adopted cloud-native object storage. The success of the public cloud effectively rendered file and block storage obsolete. Every new application is written for the AWS S3 API, not POSIX. To scale and perform like cloud-native technologies, older applications must be rewritten for the S3 API and refactored into microservices so they can run in containers.
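To make the contrast concrete, here is a minimal sketch of the same read expressed first as a POSIX filesystem call and then as an S3-style HTTP object address. The endpoint, bucket and key names are hypothetical, and a real S3 client would also sign the request:

```python
# Sketch: POSIX addresses a file through the local filesystem;
# S3 addresses an object over HTTP by bucket and key.

def posix_read(path: str) -> bytes:
    # POSIX: the application depends on a locally mounted filesystem.
    with open(path, "rb") as f:
        return f.read()

def s3_get_url(endpoint: str, bucket: str, key: str) -> str:
    # S3: the object is reachable from anywhere via a virtual-hosted-style
    # URL; an SDK would add authentication headers before issuing the GET.
    return f"https://{bucket}.{endpoint}/{key}"

print(s3_get_url("s3.example.com", "invoices", "2023/04/inv-001.pdf"))
# → https://invoices.s3.example.com/2023/04/inv-001.pdf
```

The difference in addressing is what makes object storage portable: the same URL scheme works whether the endpoint lives in a public cloud, a private cloud or at the edge.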
Kubernetes-native design requires an operator service to provision and manage a multi-tenant object-storage-as-a-service infrastructure. Each of these tenants runs in its own isolated namespace while sharing the underlying hardware resources. The operator pattern extends the familiar declarative Kubernetes API model with custom resource definitions (CRDs) to perform common operations such as resource orchestration, non-disruptive upgrades and cluster expansion, and to maintain high availability.
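As an illustration of that pattern, a tenant can be declared as a custom resource that the operator reconciles. The sketch below is modeled on the MinIO Operator's Tenant CRD, but the names and field values are illustrative and should be checked against the operator's own reference:

```yaml
# Illustrative sketch: a declarative tenant specification consumed by an operator.
apiVersion: minio.min.io/v2
kind: Tenant
metadata:
  name: analytics                # hypothetical tenant name
  namespace: tenant-analytics    # each tenant is isolated in its own namespace
spec:
  pools:
    - servers: 4                 # number of server pods in this pool
      volumesPerServer: 4        # drives (PVCs) attached to each pod
      volumeClaimTemplate:
        spec:
          storageClassName: local-nvme   # hypothetical storage class
          resources:
            requests:
              storage: 1Ti
```

Because the resource is declarative, expanding a tenant is a matter of editing the spec (for example, adding another pool) and letting the operator reconcile the cluster toward the new desired state.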
MinIO is purpose-built to take full advantage of the Kubernetes architecture. Since the server binary is fast and lightweight, MinIO's operator is able to densely co-locate several tenants without running out of resources. Retrofitting a bare-metal deployment or storage appliance outside the Kubernetes environment simply forfeits the benefits Kubernetes has to offer.
Hybrid cloud storage must be consistent across API compatibility, performance, security and compliance. It needs to perform consistently, independent of the underlying hardware. Any variation, even a tiny one, can break an application, creating a massive operational burden.
Because MinIO is so lightweight, we can roll out updates across public, private and edge in minutes, maintaining the same consistent experience. MinIO abstracts the underlying differences across these architectures including key management, identity management, access policies and hardware/OS differences.
Since object storage is utilized as both primary and secondary storage, it needs to deliver performance at scale. From mobile and web applications to AI/ML, data-heavy workloads require exceptional performance from the underlying object storage. Even data protection workloads require high-performance access for deduplication and snapshots. No enterprise can afford a slow restore process. Traditionally, these workloads required bare-metal performance. Now it is possible to containerize all of these workloads, as demonstrated by the success of the public cloud providers.
MinIO is the world’s fastest object store, with READ/WRITE speeds of 183 GB/s and 171 GB/s on NVMe and 11 GB/s and 9 GB/s on HDD. At those speeds, every workload is within reach on every hybrid cloud infrastructure.
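Using the NVMe read figure above, a quick back-of-envelope calculation (assuming decimal units: 1 GB = 10^9 bytes, 1 PB = 10^15 bytes, and that the restore is throughput-bound) shows why restore windows shrink at these speeds:

```python
# Back-of-envelope: time to read back 1 PB at the quoted NVMe read
# throughput of 183 GB/s. Decimal units assumed throughout.

def transfer_time_seconds(bytes_total: float, gb_per_second: float) -> float:
    # Simple throughput-bound model: time = size / rate.
    return bytes_total / (gb_per_second * 10**9)

one_pb = 10**15
secs = transfer_time_seconds(one_pb, 183)
print(f"{secs:.0f} s (~{secs / 3600:.1f} h)")
# → 5464 s (~1.5 h): a full petabyte read back in about an hour and a half
```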
Many people think that scale simply refers to how big a system can get. What is lost in this thinking, however, is the importance of operational efficiency as the environment grows. A hybrid cloud object storage solution must scale efficiently and transparently regardless of the underlying environment, and do so simply with minimal human interaction and maximum automation. This can only be accomplished by an API-driven platform built on top of a simple architecture.
MinIO’s relentless focus on simplicity means that large-scale, multi-petabyte data infrastructure can be managed with minimal human resources. This is a function of APIs and automation, which together create an environment capable of significant scale.
To be successful in the hybrid cloud, storage must be software defined. The reasons are simple. A hardware appliance does not run on a public cloud or on Kubernetes. A public cloud storage service is not designed to run on other public clouds, private clouds or Kubernetes platforms. Even if it were, bandwidth would cost more than the storage, because these services were not built to replicate across networks. True software-defined storage can run in the public cloud, the private cloud and at the edge.
MinIO was born as software and is portable across a variety of operating systems and hardware architectures. Evidence can be found in our 7.7M IPs running across AWS, GCP and Azure.