It seems as if every few years there's a new infrastructure approach that promises to revolutionise the enterprise data centre, improve management and dramatically reduce costs along the way, yet the initial hype seldom lives up to all of these promises. According to Dennis Naidoo, regional manager for Sub-Saharan Africa at Tintri, things are no different with conventional hyper-converged infrastructure (HCI) - sometimes described as software-defined IT infrastructure - which, he says, may not be best placed to meet the needs of all enterprises or to deliver on its promise of lower costs.
The trouble with HCI, he explains, is that inflexible configuration rules and escalating costs can make it an expensive choice for enterprises. HCI can increase deployment costs in several ways: through its requirement for balanced nodes, through higher software licensing costs and through greater storage consumption.
"Conventional HCI implementations generally have a requirement for similar CPU, memory and storage configuration on all the nodes in a cluster. This is referred to as a balanced-node configuration and is recommended to maintain consistent performance as the cluster scales. The result is over-provisioning of hardware infrastructure.
"Organisations that deploy HCI inevitably find they end up purchasing storage when they need more compute, or purchasing compute when they need more storage. This means the business might not only end up spending more as it grows, but also lowers its virtualisation efficiency by leaving valuable resources sitting idle," he says.
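The over-provisioning Naidoo describes can be illustrated with a little arithmetic. The sketch below is purely hypothetical: the node specification and workload figures are assumptions chosen for illustration, not figures from any real HCI product.

```python
from math import ceil

# Assumed specification of one balanced node (illustrative only).
NODE_CPU_CORES = 32   # every balanced node ships with the same compute...
NODE_STORAGE_TB = 20  # ...and the same storage

def nodes_needed(cpu_cores_required: int, storage_tb_required: float) -> int:
    """A balanced cluster must be sized for the larger of the two demands."""
    for_cpu = ceil(cpu_cores_required / NODE_CPU_CORES)
    for_storage = ceil(storage_tb_required / NODE_STORAGE_TB)
    return max(for_cpu, for_storage)

# A storage-heavy workload: 200 TB of data but only 64 cores of compute.
n = nodes_needed(cpu_cores_required=64, storage_tb_required=200)
idle_cores = n * NODE_CPU_CORES - 64
print(n, idle_cores)  # 10 nodes bought to satisfy storage; 256 cores sit idle
```

Two nodes would have covered the compute demand, but the storage demand forces ten balanced nodes into the cluster, leaving 256 purchased CPU cores unused.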
"It can also add to your licensing costs, since in most implementations, storage on each node is controlled by a dedicated virtual machine. Therefore, the more storage you have, the higher your virtualisation licensing costs will be. Add to that the fact that each node dedicates a portion of its CPU and memory to the VM that is managing the storage on the HCI nodes, and you have superfluous software licences that must be paid on CPU capacity that is not available for use."
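The licensing point can also be made concrete with a rough sketch. The per-node controller reservation and licence fee below are assumptions for illustration; actual figures vary by vendor and licensing model.

```python
# Illustrative arithmetic: if each node's storage-controller VM reserves CPU,
# per-core licences are paid on cores that workloads cannot use.
NODES = 10
CORES_PER_NODE = 32
CONTROLLER_CORES = 4       # assumed CPU reservation for the storage VM per node
LICENCE_PER_CORE = 1_000   # assumed per-core licence fee (arbitrary currency)

total_cores = NODES * CORES_PER_NODE
usable_cores = total_cores - NODES * CONTROLLER_CORES
licence_on_unusable = NODES * CONTROLLER_CORES * LICENCE_PER_CORE
print(usable_cores, licence_on_unusable)  # 280 usable cores; 40000 paid on reserved cores
```

Under these assumptions, one core in eight is consumed by storage controllers, yet every core in the cluster still attracts a licence fee.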
Naidoo points out that many conventional HCI implementations also store multiple copies of each block of data to protect against failures. Naturally, this increases the total amount of storage required, which again adds to the overall cost. Finally, he adds, adopting HCI locks users into a particular technology stack, which significantly reduces their options when it comes to future technology choices.
The tightly coupled architecture of HCI also makes it more difficult to troubleshoot performance issues. Because everything is layered together on each node, it becomes almost impossible to isolate the source of a performance bottleneck.
The question, then, is: if HCI has so many drawbacks, why is it so popular? According to Naidoo, customers frustrated with conventional storage solutions in highly virtualised environments like the ease with which HCI virtualises both compute and storage in a single hardware package, along with the fact that it allows the business to focus on deploying applications rather than on the hardware running in the background.
"The fact that you can scale by simply adding nodes makes deployment much simpler, although this really only works when you have a very uniform application profile in the environment. Since the applications will have a linear performance demand as they scale, HCI can meet this by adding balanced nodes to the cluster as you grow.
"Where it does not work as effectively is in a typical enterprise with demanding performance requirements, since there is no uniform application profile. Ultimately, in any environment where the customer's storage performance and capacity grow at a different rate to their compute, or vice versa, HCI creates as many issues as it solves."
On the other hand, he continues, infrastructure that is architected with separate, best-of-breed servers and virtualisation-centric storage avoids these challenges. With storage independent of compute, it is much easier to get the right mix of resources, and there is more flexibility to pick the best compute and storage for the company's particular workloads.
"Compute and storage get better every year in terms of performance and density, so purchasing them separately gives the business greater flexibility to mix new and old hardware as needed. Most crucially, utilising all-flash storage arrays and cloud management software built on a Web services architecture can provide an enterprise with the building blocks to deliver virtualisation-centric operations with guaranteed performance, in-depth analytics, and a federated scale-out architecture.
"Or to put it simply, adopting this approach of best-of-breed servers and virtualisation-centric storage will deliver the benefits promised by an HCI deployment without any of the challenges mentioned, thereby simplifying the data centre and helping to make autonomous operations a reality," concludes Naidoo.