Article from Virtualization Review By Augie Gonzalez: The Need for SSD Speed, Tempered by Virtualization Budgets
What IT department doesn't lust for the hottest new SSDs? You can just savor the look of amazement on users' faces when you amp up their systems with these solid state memory devices. Once-sluggish virtualized apps now peg the speed dial.
Then you wake up. The blissful dream is interrupted by the quarterly budget meeting. Your well-substantiated request to buy several terabytes of server-side flash is “postponed” -- that's finance's code word for REJECTED. Reading between the lines, they're saying, “No way we're spending that kind of money on more hardware any time soon.”
Ironically, the same financial reviewers recommend fast-tracking additional server virtualization initiatives to further reduce the number of physical machines in the data center. It seems they didn't hear the first part of your SSD argument: server consolidation slows down mission-critical apps like SQL Server, Oracle, Exchange and SAP to the point where response times become unacceptable. Flash memory can buy back that lost speed.
This not-so-fictional scenario plays out more frequently than you might guess. According to a recent survey of 477 IT professionals conducted by DataCore Software, it boils down to one key concern: storage-related cost. Here are some other findings:
- Cost considerations are preventing organizations from adopting flash memory and SSDs in their virtualization roll-outs. More than half of respondents (50.2 percent) said they are not planning to use flash/SSD for their virtualization projects due to cost.
- Storage-related costs and performance issues are the two most significant barriers preventing respondents from virtualizing more of their workloads. Forty-three percent said that increasing storage-related costs were a “serious obstacle” or “somewhat of an obstacle,” and 42 percent said the same about performance degradation or the inability to meet performance expectations.
- When asked about what classes of storage they are using across their environments, nearly six in ten respondents (59 percent) said they aren't using flash/SSD at all, and another two in ten (21 percent) said they rely on flash/SSD for just 5 percent of their total storage capacity.
Rather than indiscriminately stuff servers full of flash, I'd suggest using software to share fewer flash cards across multiple servers in blended pools of storage. By blended I mean a small percentage of flash/SSD alongside your current mix of high-performance disks and bulk storage. An effective example of this uses hardware- and manufacturer-agnostic storage virtualization techniques packaged in portable software to dynamically direct workloads to the proper class (or tier) of storage. The auto-tiering intelligence constantly optimizes the price/performance yield from the balanced storage pool. It also thin provisions capacity so valuable flash space doesn't get gobbled up by hoarder apps.
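The auto-tiering idea above can be illustrated with a short sketch. This is a hypothetical heat-based placement loop written for illustration only, not any vendor's actual implementation: blocks that were accessed most often in the last measurement period are promoted to the small flash tier, and everything else stays on spinning disk.

```python
from collections import Counter

# Toy auto-tiering sketch (hypothetical): rank blocks by access count
# per period and keep only the hottest ones on the scarce flash tier.
# Real tiering engines also weigh recency, I/O size and migration cost.
class AutoTierer:
    def __init__(self, flash_slots=4):
        self.flash_slots = flash_slots   # flash holds only a few blocks here
        self.heat = Counter()            # block id -> accesses this period

    def record_access(self, block_id):
        self.heat[block_id] += 1

    def rebalance(self):
        """Return (flash_tier, hdd_tier) sets after one tiering pass."""
        ranked = [b for b, _ in self.heat.most_common()]
        flash = set(ranked[:self.flash_slots])
        hdd = set(ranked[self.flash_slots:])
        self.heat.clear()                # start a fresh measurement period
        return flash, hdd

tierer = AutoTierer()
for block in [1, 1, 1, 2, 2, 3, 4, 5, 5, 5, 5, 6]:
    tierer.record_access(block)
flash, hdd = tierer.rebalance()         # hottest blocks (5, 1, ...) go to flash
```

The point of the sketch is the economics the article describes: only a small percentage of blocks need to live on flash at any moment, so a little SSD capacity, continuously rebalanced, serves the whole pool.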
The dynamic data management scheme gets one additional turbo boost. The speed-up comes from advanced caching algorithms in the software, for both storing and retrieving disk/flash blocks. In addition to cutting input/output latencies in half (or better), you'll get back tons of space previously wasted on short-stroking hard disk drives (HDDs). In essence, you no longer need to overprovision disk spindles trying to accelerate database performance, nor do you need to overspend on SSDs.
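To make the caching claim concrete, here is a minimal block-level LRU write-back cache, assuming a dict-like backing store standing in for the HDD tier. It is a sketch of the general principle (serve repeat I/O from memory, absorb writes and flush them lazily), not the advanced algorithms any particular product ships.

```python
from collections import OrderedDict

# Hypothetical block-level LRU write-back cache over a slow backing store.
class BlockCache:
    def __init__(self, backing, capacity=2):
        self.backing = backing           # slow tier: dict of block -> data
        self.capacity = capacity
        self.cache = OrderedDict()       # block -> (data, dirty flag)

    def read(self, block):
        if block in self.cache:
            self.cache.move_to_end(block)        # cache hit: refresh LRU order
            return self.cache[block][0]
        data = self.backing[block]               # cache miss: go to disk
        self._insert(block, data, dirty=False)
        return data

    def write(self, block, data):
        self._insert(block, data, dirty=True)    # absorb the write in memory

    def _insert(self, block, data, dirty):
        self.cache[block] = (data, dirty)
        self.cache.move_to_end(block)
        if len(self.cache) > self.capacity:      # evict least-recently used
            old, (old_data, old_dirty) = self.cache.popitem(last=False)
            if old_dirty:
                self.backing[old] = old_data     # flush dirty block to disk

disk = {"a": b"A", "b": b"B", "c": b"C"}
cache = BlockCache(disk)
cache.write("a", b"A2")      # buffered in cache, not yet on disk
cache.read("b")
cache.read("c")              # over capacity: "a" is evicted and flushed
```

Because repeat reads and buffered writes never touch the spindles, you no longer need short-stroked, overprovisioned HDDs to hit database latency targets, which is exactly the space and cost recovery the paragraph above describes.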
Of course, there are other ways for servers to share solid state memory. Hybrid arrays, for example, combine some flash with HDDs to achieve a smaller price tag. There are several important differences between buying these specialized arrays and virtualizing your existing storage infrastructure to take advantage of flash technologies, not the least of which is how much of your current assets can be leveraged. You could propose to rip out and replace your current disk farm with a bank of hybrid arrays, but how likely is that to get the OK?
Instead, device-independent storage virtualization software uniformly brings auto-tiering, thin provisioning, pooling, advanced read/write caching and replication/snapshot services to any flash/SSD/HDD device already on your floor. For that matter, it covers the latest gear you'll be craving next year, including any hybrid systems you may be considering. The software makes the best use of available hardware at the least cost, regardless of manufacturer or model.
Virtualizing your storage also aligns with the broader corporate mandate to further virtualize your data center. It's one of those unusual times when being extra frugal pays extra-big dividends.
No doubt you've been bombarded by the term software-defined data center, and more recently, software-defined storage. The latter has become a synonym for storage virtualization with a broader, infrastructure-wide scope -- one spanning the many disk- and flash-based technologies, as well as the many nuances that differentiate various models and brands. Without getting lost in semantics, it boils down to using smart software to get your arms around, and derive greater value from, all the storage hardware assets at your disposal.
Sounds like a very reasonable approach given how quickly disk technologies and packaging turn over in the storage industry.
One word of caution: Any software-defined storage inextricably tied to a piece of hardware may not be what the label advertises.
Yes, the need for speed is very real, but so is the reality of tight funding. Others have surmounted the same predicament by following the guidance above. It will ease the injection of flash/SSD technologies into your current environment, even under the toughest scrutiny. At the same time, you'll strike the difficult but necessary balance between your business requirement for fast virtualized apps and your very real budget constraints -- without question, a most desirable outcome.
About the Author
Augie Gonzalez is director of product marketing for storage virtualization software developer DataCore Software.