The Need to Disaggregate Compute and Storage for HCI


Sponsored feature Let’s start by remembering why hyperconverged infrastructure, or HCI for short, was so revolutionary for IT customers.

With early HCI platforms, mostly based on Nutanix or VMware software, virtual block storage was converged onto virtual compute servers to create a new abstraction: a unified set of infrastructure that still had the feel of an appliance. This allowed customers to rationalize a fragmented stack of server virtualization, storage virtualization, and network virtualization, merging them into a single management framework and application execution environment.

What the early HCI platforms offered, for the most part, was the same level of virtual compute and virtual storage usability that the cloud builders had created to make it easier to deploy applications on their massive infrastructures.

But there was a scale problem. Although HCI was originally considered a smart architectural design and deployment methodology, it gradually became apparent that early HCI platforms were inherently flawed.

With these early platforms, compute and storage ratios were node-specific. Granted, HCI appliance vendors offered different node configurations – some compute-heavy, some storage-heavy, and some balanced in the middle – but customers still had to plan capacity for the peaks of one resource or the other. If a node couples, say, 24 CPU cores with 20 TB of flash, buying more storage means buying more cores whether they are needed or not, so a lot of capacity often ended up sitting idle in some of the nodes.

This added complexity in solution design has led to new performance challenges, as storage-optimized nodes often introduce latency and bottlenecks into the environment.

As HPE tells us, the result has been unnecessary overprovisioning, as customers rarely have a fleet that is balanced across storage, CPU, and memory. “We often see storage as the root of the imbalance here, causing customers to purchase more HCI nodes (with storage, memory, and CPU) to meet their storage needs. For example, deduplication and compression often resulted in performance hits requiring more CPU and memory, compounding the challenge: customers found themselves having to purchase more powerful processors and more memory to cope with this unforeseen overhead.”

This impedance mismatch between compute and storage in early HCI architectures has been a problem from the start, and moving away from appliances to software-only HCI doesn’t really solve it. Companies will always end up installing a mix of nodes with different compute and storage capacities, which means their HCI clusters may not meet application needs as smoothly as they would like. In fact, this impedance mismatch between compute and storage in legacy HCI platforms is getting worse.

Matt Shore, HCI business development manager for data and storage solutions for EMEA at HPE, notes that heavy workloads deployed on HCI platforms create “mega-VMs”, or monster virtual machines, that cannot get the right compute and storage allocation. And mega-VMs are resource-hungry because they can hog a cluster, forcing it to run only one VM or application. If the workload is storage-heavy rather than CPU- and memory-heavy, the cluster’s CPU and memory resources sit idle just to provide the storage element.

For example, mission-critical applications running on Oracle relational databases or SAP HANA in-memory databases are very I/O-intensive, and almost always so, which often means that the virtual machines supporting them are overprovisioned with compute and storage capacity. Then there are applications, such as end-of-day, end-of-week, or end-of-month batch or reporting jobs, that drive I/O spikes at regular – and thankfully predictable – times. And then there are application development and test environments, which increasingly have high and often unpredictable storage I/O demands. The heavy I/O demands of these three workload types mean that nodes in an HCI 1.0-style cluster must be overprovisioned.

Compute and storage disaggregation

The answer to this problem, which HPE calls HCI 2.0, is to do what the hyperscalers and cloud builders do: disaggregate virtual compute and virtual storage from each other, but present the combined elements as a single converged platform that can scale – without the premium monthly price that cloud providers charge.

HPE’s HCI 2.0 stack pairs its Nimble Storage all-flash arrays, or their successor Alletra 6000 all-flash arrays, for block storage underpinning server virtualization hypervisors on compute nodes – in this case various ProLiant servers running the ESXi hypervisor and VMware’s vCenter management stack (specifically, vCenter Standard Edition, version 6.7 or 7.0).

The stack includes a storage software abstraction layer that came out of the Nimble Storage organization, called dHCI, and it also requires HPE’s InfoSight management console, which is common across HPE servers and storage. InfoSight has been infused with various types of artificial intelligence and also came from the Nimble Storage acquisition.

Each customer has their own reasons for deploying dHCI, and the benefits they derive from the move also vary. In some cases, customers moving to the dHCI stack were retiring aging equipment in the face of an increased need to support remote workers during the coronavirus pandemic, while reducing the cost of infrastructure support and cutting downtime.

This was the case for Highmark Credit Union in the United States, for example. PetSure, which provides pet insurance, deployed remote veterinary application software on a dHCI stack and was able to cut operating costs in half, double the performance of its virtual desktop infrastructure (VDI) middleware, and deliver applications in half the time.

After the coronavirus pandemic hit, National Tree Society had to move its business online and needed to speed up processes to meet growing demand; it can now process orders in 20 minutes instead of overnight and has increased production by 70% while eliminating backorders.

The Institution of Engineering and Technology, a charity that supports engineering education, replaced the aging legacy infrastructure supporting 160 virtual machines on six servers with a dHCI stack, and immediately cut the storage required by those applications by 27% by eliminating over-provisioning.

The case studies above illustrate some of the ways HPE has removed the limitations of traditional HCI. By enabling customers to scale compute, storage, and memory independently, HPE’s dHCI storage hardware and management software delivers significant performance and cost benefits – and an end to overprovisioning. Or as HPE puts it, “HCI 2.0 delivers a better HCI experience without any compromise.”

Sponsored by HPE.

HPE has produced a short video that introduces HPE Data Services Console. You can watch it here:

YouTube video

