What's old is new
again. Marty McFly would get it. https://virtualizationreview.com/articles/2015/10/21/back-to-the-future-in-virtualization-and-storage.aspx
If you're on social media this week, you've probably had
your fill of references to Back to the Future, the 1980s sci-fi comedy
much beloved by those of us who are now in our 50s, and the many generations of
video watchers who have rented, downloaded or streamed the film since. The
nerds point out that the future depicted in the movie, as signified by the date
on the time machine clock in the dashboard of a DeLorean, is Oct. 21, 2015.
That's today, as I write this piece…
Legacy Storage Is Not the Problem
If you stick with x86 and virtualization, you may be concerned about the challenges of achieving decent throughput and application performance, which your hypervisor vendor has lately been blaming on legacy storage. That is usually a groundless accusation. The problem is typically located above the storage infrastructure in the I/O path, somewhere at the hypervisor and application software layer.
To put it simply, hypervisor-based computing is the latest
expression of the sequentially executing workload model optimized for the unicore processors
introduced by Intel and others in the late 70s and early 80s. Unicore
processors, with transistor counts doubling every 24 months (Moore's Law)
and clock speeds doubling every 18 months (House's Hypothesis), created
the PC revolution and defined the architecture of the servers we use today. All
applications were written to execute sequentially, with some clever time
slicing to give the appearance of concurrency and multi-threading.
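That time-slicing trick can be illustrated with a toy round-robin scheduler (my own sketch, not any vendor's code): each "task" is a Python generator, and a single sequential loop gives each one a short turn, so the work appears concurrent even though exactly one thing ever runs at a time.

```python
# Toy round-robin scheduler: one sequential loop hands out time slices,
# creating the *appearance* of concurrency on a single processor.

def task(name, steps):
    """A 'program' that does its work one small step at a time."""
    for i in range(steps):
        yield f"{name}:{i}"   # each yield is the end of one time slice

def run(tasks):
    """Run all tasks by giving each one step per turn, round-robin."""
    trace = []
    while tasks:
        t = tasks.pop(0)
        try:
            trace.append(next(t))   # give this task one time slice
            tasks.append(t)         # slice over; back of the queue
        except StopIteration:
            pass                    # task finished; drop it
    return trace

print(run([task("A", 2), task("B", 2)]))
# -> ['A:0', 'B:0', 'A:1', 'B:1']
```

The interleaved trace is the point: neither task ran in parallel, yet both made progress "at once," which is exactly the illusion a unicore time-slicing OS sells.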
This model is now reaching end of life. We ran out of
clock speed improvements in the early 2000s, and unicore chips became multicore
chips with no real clock speed gains. Basically, we're back to the
situation that confronted us way back in the 70s and 80s, when everyone was
working on parallel computing architectures to gang together many low-performance
CPUs for faster execution.
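The gang-many-CPUs idea can be sketched in a few lines (a generic illustration under my own assumptions, not DataCore's parallel I/O implementation): split one job into chunks, hand each chunk to a separate worker process, and combine the results, so throughput comes from core count rather than clock speed.

```python
# Minimal sketch of ganging CPUs together: divide one job across
# worker processes instead of relying on a single faster core.
from multiprocessing import Pool

def partial_sum(bounds):
    """One worker's share: sum the integers in [start, stop)."""
    start, stop = bounds
    return sum(range(start, stop))

def parallel_sum(n, workers=4):
    """Split 0..n-1 into `workers` chunks, sum them in parallel, combine."""
    step = n // workers
    chunks = [(i * step, (i + 1) * step if i < workers - 1 else n)
              for i in range(workers)]
    with Pool(workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    # Same answer as the sequential version; the work was just spread out.
    assert parallel_sum(1_000_000) == sum(range(1_000_000))
```

The design choice is the one the old parallel-computing crowd made: the job itself doesn't change, only how many slow processors you can throw at it simultaneously.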
A Parallel Comeback
Those efforts ground to a halt with unicore's success, but now, with innovations from oldsters who remember parallel, they're making a comeback. As soon as the Storage Performance Council audits some results, I'll have a story to tell you about parallel I/O and the dramatic improvements in performance and cost that it brings to storage in virtual server environments. It's a real breakthrough, enabled by folks at DataCore who remember what we were working on in tech a couple of decades back.