
Wednesday 29 June 2011

How Many Tiers Do You Want?

Two recent articles got me thinking about the current limitations of tiered storage, and where it’s going. Ellen Messmer at Network World reported on some comments by Gartner analyst Stanley Zaffos about the chaos in the public cloud storage market. Some vendors, such as Amazon and Nirvanix, are gung-ho on public cloud storage, while others, like Iron Mountain and EMC, have pulled out of the business. While the cost comparison looks more than favorable (Gartner’s estimate is about 75 cents to a dollar per gigabyte per month for in-house storage vs. as low as 3 cents for cloud storage), Zaffos rightly points out the many variables IT needs to consider, among them latency, limited bandwidth, and security. In his remarks, he called out a couple of alternatives to public cloud storage, including a hybrid approach mixing public and private storage, in effect making public cloud storage just another tier.
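To put those figures in perspective, here’s a quick back-of-the-envelope sketch of the gap at scale. The 100 TB capacity is purely an illustrative assumption on my part, not a number from either article:

```python
# Back-of-the-envelope monthly cost comparison using the per-gigabyte
# figures quoted above; the 100 TB capacity is an illustrative assumption.
CAPACITY_GB = 100 * 1024  # 100 TB expressed in GB

IN_HOUSE_LOW, IN_HOUSE_HIGH = 0.75, 1.00  # $/GB/month (Gartner estimate)
CLOUD_LOW = 0.03                          # $/GB/month (low-end cloud price)

print(f"In-house: ${CAPACITY_GB * IN_HOUSE_LOW:,.0f} to "
      f"${CAPACITY_GB * IN_HOUSE_HIGH:,.0f} per month")
print(f"Cloud:    ${CAPACITY_GB * CLOUD_LOW:,.0f} per month")
```

At that scale the spread is roughly $77,000–$102,000 a month in-house against about $3,000 in the cloud, which is exactly why the latency, bandwidth, and security caveats matter so much: the price difference is too big to ignore.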

At CTOEdge, Mike Vizard has been talking for some time about the rising importance of data management and storage systems to deal with a world in which “multiple tiers of storage are soon going to be the rule rather than the exception.” In a recent blog post, he discusses building a new tier of storage for what he calls “warm data,” which requires access speeds somewhere between those of production applications and everything else (e.g., backup, archive, DR). He notes IBM’s new Netezza High Capacity Appliance, which isn’t so much a storage tier as a packaged storage application that gives quicker access to petabyte-sized data warehouses to satisfy compliance requirements.

These are two extremes of the storage spectrum, and a good illustration of some of the drivers behind the evolution of tiered storage, which is all about matching networked storage capabilities to application and business needs (e.g., speed, capacity, availability, security). In an ideal world, every application would have exactly the right kind of storage (i.e., infinite tiering granularity). In the real world, differences among technologies, vendors, protocols, and the like make this impractical, and the granularity actually achievable with tiered storage is quite coarse, which means that each tier is a compromise.
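To make the matching idea concrete, here is a minimal sketch of tier selection as a policy decision. The tier catalog, the numbers in it, and the requirement fields are all hypothetical, invented for illustration rather than drawn from any vendor’s product:

```python
from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    latency_ms: float   # typical access latency
    cost_per_gb: float  # $/GB/month
    encrypted: bool

# Hypothetical tier catalog: in the ideal world this list would be
# arbitrarily long (infinite granularity); in practice it is short,
# so most applications land on a compromise.
TIERS = [
    Tier("ssd-production", latency_ms=1,    cost_per_gb=1.00, encrypted=True),
    Tier("sata-warm",      latency_ms=20,   cost_per_gb=0.30, encrypted=True),
    Tier("cloud-archive",  latency_ms=5000, cost_per_gb=0.03, encrypted=False),
]

def cheapest_tier(max_latency_ms: float, needs_encryption: bool) -> Tier:
    """Pick the lowest-cost tier that still meets the application's needs."""
    candidates = [t for t in TIERS
                  if t.latency_ms <= max_latency_ms
                  and (t.encrypted or not needs_encryption)]
    if not candidates:
        raise ValueError("no tier satisfies these requirements")
    return min(candidates, key=lambda t: t.cost_per_gb)

# A "warm data" application tolerates moderate latency but wants encryption:
print(cheapest_tier(max_latency_ms=100, needs_encryption=True).name)  # sata-warm
```

The coarseness problem shows up in the short list: with only three tiers, a workload that needs 50 ms latency gets the same tier as one that needs 5 ms. More tiers means less compromise.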

Like Mike Vizard, I’m convinced that the evolution of storage is taking us in the direction of more and more tiers to deliver a better match to application requirements. We’re going to see low-cost cloud storage treated as just another tier, for services like archiving and long-term backup. We’re going to see increasingly capable private cloud solutions with a growing variety of tiers within them. And we’re going to see the “fragmentation” of what today we call “managed services” into an increasingly granular menu of capabilities like high availability, disaster recovery, compliance support, and similar “tiers” of storage-based services.

Storage virtualization software, with its ability to abolish storage silos, is going to be a major part of this evolution. The key will be its adoption by more and more service providers such as Host.net and External IT, which will use it in their own data centers and their customers’ data centers to provide seamless access to storage that matches application and business needs at a very fine granularity, whether in-house or out in the cloud. The result will be a storage environment in which IT can basically dial in as many tiers as it needs, without worrying about the cake collapsing on them.


Photo by Pink Cake Box.
