Tuesday, 8 March 2011

A Virtual Reality Check by Jon Toigo


http://www.drunkendata.com/?p=3299

Hate to say it, but I told you so. According to several surveys of businesses that have crossed my transom over the past few weeks, it would appear that a disproportionate number of server virtualization projects have stalled when they were less than 20% complete. The problem is simple and consistent: filled with the hype of the hypervisor vendors, planners leapt into the virtual abyss before they looked at the real costs associated with the strategy.

We were all told that all this virtualization thing would cost was a basic hypervisor software license – a cost readily offset by reductions in the number of physical servers and their associated labor and energy expense. Then, it became evident that we needed the other licenses – those providing all of the cool features that were described in the brochure but not included on the Dell, HP or other server we bought. Then, we needed to get our staff trained as “vSphere Engineers” or something – an additional expense. Then, we discovered that consolidating all of those guests onto fewer servers required significant changes to our network cable infrastructure…and, ultimately, to our storage infrastructure.

Storage is the big part of the cost iceberg, submerged beneath the waterline of our virtualization vision, out of view and ready to sink our most unsinkable project plans. When the costs to reinvent storage became evident, especially the requirement to “forklift upgrade” our Fibre Channel fabrics, the captain called a “full stop” to our virtualization journey. In the aftermath, there was much hand-wringing and gnashing of teeth as the reality of virtualization collided with the business case.

The story probably won’t stop here, of course. Most firms are bringing in consultants and poring over their budgets to determine what can be salvaged to realize the gains of server – and later desktop – virtualization without becoming the next James Cameron epic in the process.

One solution worth considering carefully comes in the form of storage virtualization technology recently announced by DataCore Software: a product by the unassuming name of SANsymphony-V. DataCore already enjoys a broad installed base in Europe, where firms seemed to cotton to the idea of abstracting storage software from array controllers as a cost-containment measure years before the idea caught on in the US market. SANsymphony-V, however, is more than a new version of a DataCore product – it is a comprehensive reworking of both the underlying platform and the presentation layer that has succeeded in elevating the product from “nice-to-have” to “must-have” status, especially for companies confronting the big stall in their server and desktop virtualization projects.

The original case for storage virtualization remains just as valid and compelling as ever. SANsymphony-V enables you to pool your storage rigs and then serve up volumes to any application that requires them. The interface for allocating volumes is much improved: wizard-driven and resembling nothing so much as the latest Microsoft Office GUI, it lets you discover your storage and servers, then simply drag volume icons onto servers (or guest machines) to establish the connection between them. Since all I/O is serviced from a cache memory layer, storage response is 3 to 5 times faster. Plus, I/O paths are remembered by SANsymphony-V and traffic is load-balanced across available connections automatically.
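
For a concrete feel of what that paragraph describes, here is a minimal Python sketch of the two mechanisms: a memory cache servicing I/O in front of slower backing storage, and load balancing across remembered I/O paths. The class name, the LRU eviction policy and the round-robin path choice are illustrative assumptions on my part, not DataCore's implementation.

from collections import OrderedDict
from itertools import cycle

class CachedMultipathVolume:
    """Toy volume: a RAM cache in front of a backing store, with
    requests spread round-robin across several connections."""

    def __init__(self, backing_store, paths, cache_blocks=1024):
        self.backing = backing_store       # block -> data (the "spindles")
        self.paths = cycle(paths)          # e.g. ["fc0", "fc1"]
        self.cache = OrderedDict()         # LRU cache of hot blocks
        self.cache_blocks = cache_blocks

    def _next_path(self):
        # Round-robin: each request uses the next available connection.
        return next(self.paths)

    def read(self, block):
        if block in self.cache:            # cache hit: no disk I/O at all
            self.cache.move_to_end(block)
            return self.cache[block]
        data = self.backing.get(block)     # miss: fetch via a path
        print(f"fetch block {block} via {self._next_path()}")
        self._cache_put(block, data)
        return data

    def write(self, block, data):
        self._cache_put(block, data)       # absorb the write in cache
        self.backing[block] = data         # then persist via a path
        print(f"flush block {block} via {self._next_path()}")

    def _cache_put(self, block, data):
        self.cache[block] = data
        self.cache.move_to_end(block)
        if len(self.cache) > self.cache_blocks:
            self.cache.popitem(last=False) # evict least-recently-used

vol = CachedMultipathVolume({0: b"boot"}, ["fc0", "fc1"])
vol.read(0)   # fetched via fc0, now cached
vol.read(0)   # served from memory, no path used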

Plus, thin provisioning (a concept invented by DataCore) is provided to ease the burden of capacity management. Logical volumes reflect maximum available capacity (or any capacity you choose), but actual resources aren’t drawn from the storage pool until they are needed. That’s across all disks in the pool, not just a stand of spindles attached to a proprietary array controller featuring thin provisioning software. That’s a big difference, in terms of both efficacy and cost.
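
To make the thin provisioning behavior concrete, here is a toy Python model: the volume advertises whatever logical size you choose, but extents are drawn from the shared pool only on first write. The names and the one-extent granularity are assumptions for illustration, not SANsymphony-V internals.

class StoragePool:
    """Shared pool of physical extents spanning all disks."""
    def __init__(self, physical_extents):
        self.free = physical_extents

    def take(self):
        if self.free == 0:
            raise RuntimeError("pool exhausted: add capacity")
        self.free -= 1

class ThinVolume:
    """Advertises its full logical size; consumes pool space only on write."""
    def __init__(self, pool, logical_extents):
        self.pool = pool
        self.size = logical_extents        # what the host sees
        self.mapped = {}                   # extents actually backed by disk

    def write(self, extent, data):
        if extent not in self.mapped:
            self.pool.take()               # allocate on demand, pool-wide
        self.mapped[extent] = data

pool = StoragePool(physical_extents=100)
vol = ThinVolume(pool, logical_extents=1000)  # 10x oversubscribed
vol.write(7, b"data")
print(vol.size, len(vol.mapped), pool.free)   # 1000 1 99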

Data protection services are also universal. If you want continuous data protection in the form of write logging, to guard against a data corruption event, simply tick a box next to the virtual volume you have allocated. This gives you a granular log of writes that you can rewind to a point before the corruption occurred, backing out any bad data.
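
In principle, write-logging CDP works like the following Python sketch: each write records the block's prior contents with a timestamp, so the volume can be rewound to any instant before the bad write landed. This is a conceptual model, not DataCore's actual log format.

import time

class CDPVolume:
    """Volume with a continuous write log; can rewind to any timestamp."""
    def __init__(self):
        self.blocks = {}
        self.log = []                      # (timestamp, block, prior_data)

    def write(self, block, data):
        self.log.append((time.time(), block, self.blocks.get(block)))
        self.blocks[block] = data

    def rewind_to(self, t):
        # Undo writes newer than t, most recent first.
        while self.log and self.log[-1][0] > t:
            _, block, prior = self.log.pop()
            if prior is None:
                self.blocks.pop(block, None)   # block didn't exist before
            else:
                self.blocks[block] = prior

vol = CDPVolume()
vol.write("mbr", b"good")
checkpoint = time.time()
vol.write("mbr", b"corrupted")             # the bad write
vol.rewind_to(checkpoint)
print(vol.blocks["mbr"])                   # b'good'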

And, of course, DataCore continues to provide synchronous mirroring and snapshots to protect against localized equipment or facility disasters. The key difference is that their mirrors and snaps can be made between any hardware, regardless of brand! For regional disasters, SANsymphony-V provides a robust asynchronous replication capability over distance – with full failover and failback capabilities.
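
The difference between the two protection modes is easy to show in outline. In the sketch below (my own simplification, not vendor code), a synchronous mirror acknowledges a write only after every copy has it, while asynchronous replication acknowledges locally and ships the write over distance in the background; the hardware-agnostic point is that each copy can sit on any brand of array.

import queue, threading

def sync_mirror_write(copies, block, data):
    """Synchronous: the write lands on every copy before we acknowledge.
    Zero data loss, but latency includes the slowest copy."""
    for copy in copies:                    # copies can be any brand of array
        copy[block] = data
    return "ack"                           # the host sees success only now

class AsyncReplicator:
    """Asynchronous: acknowledge locally, ship writes over distance later.
    Tolerates WAN latency; exposure is whatever is still queued."""
    def __init__(self, local, remote):
        self.local, self.remote = local, remote
        self.pending = queue.Queue()
        threading.Thread(target=self._drain, daemon=True).start()

    def write(self, block, data):
        self.local[block] = data
        self.pending.put((block, data))    # replicate in the background
        return "ack"                       # the host does not wait for the WAN

    def _drain(self):
        while True:
            block, data = self.pending.get()
            self.remote[block] = data      # apply at the recovery site

site_a, site_b = {}, {}
sync_mirror_write([site_a, site_b], 0, b"payroll")  # both copies identical
repl = AsyncReplicator(local={}, remote={})
repl.write(1, b"orders")                   # returns immediately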

DataCore also rescues stalled virtualization projects by delivering the means to synchronously replicate guest machine data behind any host server, which solves the problem of hypervisor clustering that shares nothing…except storage. And you don’t need to wait for your storage vendor to get around to supporting the new APIs from the hypervisor vendor, including the vStorage APIs for Array Integration (VAAI). The primitives for offloading functions like replication are built into the volumes you provision via SANsymphony-V.

Bottom line: the big problem stalling server virtualization projects is storage. The fix is SANsymphony-V. Check it out.

1 comment:

Unknown said...

Where did the "3-5 times faster" number come from? If cache memory is ~100 times faster than spindles... Just curious :)

Ichiro Arai

Proud member of iSCSI SAN community.