This blog post is originally from Jeff Slapp, Director of Systems Engineering and Solution Architecture at DataCore Software. See his full post here: https://www.linkedin.com/pulse/hyper-converged-noun-verb-jeffrey-slapp
INTRODUCTION
Is hyper-converged a noun or a verb? It is an interesting question. I have spoken to many people over the years who, perhaps unknowingly, default to the "noun approach" when it comes to hyper-convergence. That is to say, they treat hyper-convergence as something to be held rather than something to be done, as if you could walk into a store and buy some hyper-convergence. Perhaps this is because of the endless marketing we are bombarded with, which pushes a very hardware-centric, vendor-defined approach to hyper-convergence rather than allowing the client to decide what it means to them or how it is to be accomplished given their unique set of applications and requirements.
Please do not misunderstand: I am not implying you don't need hardware. You do, and today's hardware capabilities are simply amazing (as we will see). What I am saying, and what I will show, is that the true benefits are realized when the hardware is loosely coupled with the software which drives it. When you decide to hyper-converge (verb), it will be important to focus on the software used to accomplish it.
TO HYPER-CONVERGE OR NOT TO HYPER-CONVERGE... THAT IS THE QUESTION.
Before we get into the various ways of accomplishing hyper-convergence, let's first discuss the why. Why would an organization be compelled to deploy a hyper-converged architecture? Does this architecture really simplify things in the long run, or does it simply shift the problem somewhere else? In order to answer these questions, we must have a basic definition of what hyper-convergence is:
"Hyper convergence (HC) is a type of software-defined architecture which tightly integrates compute, storage, and networking technologies into a commodity hardware chassis."
This is a general definition, but notice the key action word in there: integrates. What is being implied here is the direct coupling of compute and storage. One could logically conclude there are advantages in the form of cost savings, space savings, and a reduction in overall physical complexity. However, I will add one more advantage which is not commonly attributed to HC: higher application performance.
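To make that "integration" concrete, here is a minimal sketch in Python of what the HC model implies structurally: every commodity node contributes both compute and storage to one pool, so the two scale together. The class and attribute names are my own, purely illustrative, and do not come from any vendor's software.

```python
from dataclasses import dataclass

@dataclass
class HCNode:
    """One commodity chassis contributing compute AND storage to the cluster."""
    cpu_cores: int
    ram_gb: int
    storage_tb: float

class HCCluster:
    """Compute and storage are pooled together and scale in lockstep."""
    def __init__(self):
        self.nodes = []

    def add_node(self, node: HCNode):
        self.nodes.append(node)

    def totals(self):
        # Capacity is aggregated across every node rather than living
        # in a separate, standalone storage array.
        return (
            sum(n.cpu_cores for n in self.nodes),
            sum(n.storage_tb for n in self.nodes),
        )

# Adding a node grows compute and storage at the same time -- the direct
# coupling the definition above describes.
cluster = HCCluster()
for _ in range(4):
    cluster.add_node(HCNode(cpu_cores=32, ram_gb=512, storage_tb=20.0))

print(cluster.totals())  # (128, 80.0)
```

Note that this lockstep scaling is both the appeal and, as discussed below, the constraint of the model.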
"The reason high performance is not normally attributed to HC is due to the bottleneck which exists at the storage layer of the equation. Storage is the limiting factor in how far you can take HC or even if you deploy HC at all, because it is the most handicapped component in the stack."
[For more information on the storage bottleneck and why this is critically important to application density and performance, see: Parallel Application Meets Parallel Storage.]
Many times with technology, a gain in one area will not overcome the loss in another unless there is an extreme increase in efficiency somewhere in the system; the net-net of the equation remains the same. However, if there is an extreme increase in efficiency, then things get really interesting. We will explore this shortly.
OK, I WANT TO HYPER-CONVERGE, BUT HOW?
There are plenty of companies out there who will tell you how to achieve a software-defined, hyper-converged datacenter using their hardware box, but it will be on their terms, within their boundaries. This will generally sound something like:
- you must run this specific hypervisor
- you must have these specific storage devices
- you must run on this specific model of hardware
- you must have this specific type of network
- you must have a certain number of our nodes
- oh, and by the way, when the hardware you purchased in the so-called "software-defined solution" reaches end of life, you must purchase the hardware and software all over again from us.
But wait, I thought hyper-convergence was in the category of "software-defined"? Doesn't that essentially mean the software and hardware layers are independent? If not, then nothing has really changed: all the industry has done is take the same old hardware-based model (now with a specialized software layer) and repackage it as a software-defined solution, only to lock us into the same hardware restrictions and limitations we have been dealing with for decades. Confused yet? Yeah, me too.
"In order to be truly software-defined, or what I like to call 'software-driven', the software must be loosely coupled with the hardware. This means the two are free to advanced independently of one another and when necessary, allow the relocation of the already-purchased-software from an older hardware platform to a newer one without needing to repurchase it all over again."
An alternative to the hardware-centric software-defined storage model we have today is to adopt a piece of software which co-exists with all your other software while providing unmatched application density across any hardware platform (server and storage alike).
IS HYPER-CONVERGENCE AN ALL-OR-NOTHING PROPOSITION?
If you have the opportunity to consolidate your entire enterprise into an HC architecture, great. Many times this is not the case. Each HC solution has a specific set of operational and functional boundaries, and these vendor-imposed boundaries end up forming exactly what software-defined principles were established to avoid: islands. For example, what happens when you purchase an HC solution but cannot deploy all your applications to it? You have created an island, and now you must maintain a different solution for the rest of the architecture which couldn't be hyper-converged. So the net-net result is zero in terms of all the costs involved (capital, operational, and management); in fact, in some cases the net-net may be negative.
However, if there were a solution which could unify local HC applications as well as non-HC applications, that would be something interesting. As it turns out, there is such a solution, and it is a variant of the hyper-converged model which I call hybrid-converged.
Storage is the principal focus here because, in a mixed HC and non-HC environment, the compute layers are already separate, with the network bridging the two. Storage, however, is the one component which can still be maintained as consolidated, or converged, without the need for two different storage solutions to serve both models.
"The logical implication of hybrid-converged is simply the ability to serve storage to applications in a HC model while at the same time providing storage for those external applications which cannot be hyper-converged. With this model, you no longer need two different storage solutions. Hence, the extreme increase in efficiency which I spoke of earlier has just entered the room."
OK, WHAT DOES THIS HYBRID MODEL LOOK LIKE?
It really doesn't look much different from what you are already familiar with, but what is important to note is this: in order to pull this off, you must have an intelligence which does a very good job at the one thing that has handicapped HC architectures up to this point, and that is providing ultra-high-performance storage services.
You cannot buy hyper-convergence from DataCore or anyone else, for that matter. Remember, it's not something you acquire; it's something you do. However, DataCore does allow you to hyper-converge however and with whatever you would like. Additionally, because the storage bottleneck has been removed, DataCore also allows you to deploy in a hybrid-converged model without slowing down and without requiring a separate storage platform for each model.
Please refer to Jeff Slapp's original blog post to see a few examples of what this can look like: https://www.linkedin.com/pulse/hyper-converged-noun-verb-jeffrey-slapp.
WHAT MADE THIS POSSIBLE?
This new model became apparent during DataCore's world-record SPC-1 run. During this run, while DataCore maintained an average of 5.12 million SPC-1 IOPS, the CPUs on the DataCore engines were only at 50% utilization. This meant there was plenty of CPU power left to run other applications locally while serving storage out to the rest of the network/fabric.
While it is unlikely that a single application, or even many applications running simultaneously, will drive millions of IOPS, this demonstrates the amount of raw, untapped power available within today's multicore architectures. No change to the underlying hardware framework is needed; all that is needed to harness the power is the right software. [See: Disk Is Dead? Says Who?]
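As a back-of-the-envelope check on that headroom (my arithmetic, not a figure from the original post, and it assumes, simplistically, that IOPS scale linearly with CPU utilization):

```python
# Figures quoted above: ~5.12 million SPC-1 IOPS at ~50% CPU utilization.
measured_iops = 5_120_000
cpu_utilization = 0.50

# Rough linear-scaling assumption: IOPS delivered per point of CPU.
iops_per_cpu_point = measured_iops / (cpu_utilization * 100)
spare_cpu = 1.0 - cpu_utilization

print(f"~{iops_per_cpu_point:,.0f} IOPS per 1% of CPU")
print(f"{spare_cpu:.0%} of CPU left over for co-located applications")
```

In other words, under that linear assumption, roughly half the machine's CPU remains available for applications even while the storage layer is serving a record-setting load.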
CONCLUSION: THE PERPETUAL PENDULUM SWING
The decision to hyper-converge is a decision you must make. How you hyper-converge is also a decision you must make, not your vendor's. If you decide to hyper-converge, look long and hard at the options available to you. Also consider how constantly and rapidly the landscape is changing, right before our eyes. We now live in an intensely software-driven world.
Today there are three primary deployment models: traditional, converged, and hyper-converged. DataCore has opened the door to a fourth: hybrid-converged. Ensure the solution you deploy not only allows you to achieve your goals today, but also allows you to adopt future models easily and cost-effectively.