Thursday 29 November 2012

High-Performance Storage Virtualization: Streamlining the Virtualization of Tier 1 Apps

By Steve Houck

The magic behind making tier 1 apps perform lies in the adaptive technology known as high-performance storage virtualization. http://virtualizationreview.com/articles/2012/11/14/vv-streamline-tier-1-apps.aspx?sc_lang=en

Back in 2010 when I was at VMware, I would have bet you money that within months, the virtualization movement was going to sweep up enterprise apps with the roar of an unabated forest fire. But it didn't.

What seemed like a fait accompli at the time turned out to be far more elusive than any of us could have predicted. The naïve, invigorated by the thrill of consolidating dozens of test/development systems in a weekend, bumped hard against a tall, massive wall. On the vendor side, we fruitlessly threw more disk drives, sheet metal, and plumbing at it. The price climbed, but the wall did not yield.

Fast forward to late 2012. Many still nurse their wounds from those attempts, unwilling to make another run at the ramparts which keep Tier 1 apps on wasteful, isolated servers until someone they trust gets it done first. To this day, they put up with a good deal of ribbing from the wise systems gurus, who enjoy reminding us why business critical apps absolutely must have their own, dedicated machines.

The seasoned OLTP consultants offer a convincing argument. Stick more than one instance of a heavily loaded Oracle, SQL Server or SAP image on a shared machine and all hell breaks loose. You might as well toss out the secret book on tuning, because it just doesn't help.

To some degree, that's true, even though the operating systems and server hypervisors do a great job of emulating the bare metal. It's an I/O problem, Mr. Watson.

It's an I/O problem indeed, into and out of disks. Terms like I/O blending don't begin to hint at the complexity and chaos that such consolidation introduces. Insane collisions at breakneck speeds may be more descriptive. Twisted patterns of bursty reads queued up behind lengthy writes, overtaken by random skip-sequential tangents. This is simply not something one can manually tune for, no matter how carefully you separate recovery logs from database files.

That's before factoring in the added pandemonium when the shared array used by a DB cluster gets whacked by a careless construction worker, or a leaky pipe drips a little too much water on the redundant power supplies.

Enter the adaptive technology of high-performance storage virtualization. Whether by luck or design, the bedlam introduced when users collapse multiple enterprise apps onto clustered, virtualized servers mirrors the macro behavior of large-scale, discrete servers banging on scattered storage pools. The juice required to pull this off spans several crafts. A chunk of it involves large-scale, distributed caching. Another slice comes from auto-sensing, auto-tuning and auto-tiering techniques capable of making priority decisions autonomically at the micro level. Mixed into the skillset is the mysterious art of fault-tolerant I/O redirection across widely dispersed resources. You won't find many practitioners on LinkedIn proficient in cooking up this jambalaya. More importantly, you won't have to.

In the course of the past decade, this enigmatic mojo and the best practices that surround it have been progressively packaged into a convenient, shrink-wrapped software stack. To play off the similarities with its predecessors, the industry calls it a storage hypervisor.

But I digress. What owners of business critical apps need to know is that they can now confidently virtualize those enterprise apps without fear of slow, erratic service levels, provided, of course, that they employ a high-performance, fully redundant storage hypervisor to yield fast, predictable response from their newly consolidated environment. Instead of throwing expensive hardware at the problem, or giving up altogether, leave it to intelligent software to manage the confluence of storage traffic that characterizes virtualized Tier 1 programs. The storage hypervisor cost-effectively resolves the contention for shared disks and the I/O collisions that had previously disappointed users. It takes great advantage of new storage technology like SSDs and flash memory, balancing those investments with more conventional, lower-cost HDDs to strike the desired price/performance/capacity objectives.

The stuff shines in classic transactional ERP and OLAP settings, and beyond SQL databases it does wonders for virtualized Exchange and SharePoint as well.

Sure, the advanced new software won't stop the veterans from showing off their scars while telling picturesque stories about how hard this was in the old days. It will, though, give the current pros in charge of enterprise apps something far more remarkable that they too can brag about -- without getting their bodies or egos injured along the way.

Wednesday 28 November 2012

Brennercom Adds DataCore Storage Hypervisor for Business Continuity and High-Performance Storage for their New VMware View Desktops and Cloud Services

Brennercom, an Italy-based telecommunications technology company, has extended its redundant, high-availability storage infrastructure to serve its private cloud and virtual desktop requirements.
http://www.it-director.com/technology/storage/news_release.php?rel=35287

DataCore Software today announced that information and communication technology (ICT) company Brennercom has attained a new level of business continuity and performance for its virtual desktop infrastructure (VDI) and cloud services using the DataCore SANsymphony™-V Storage Hypervisor.

By making use of the SANsymphony-V storage hypervisor, Brennercom centrally administers corporate data on a variety of different hardware storage solutions, all of it protected by DataCore’s synchronous mirroring capability. The virtualization software from DataCore required only a quarter of the investment that would have been needed for a hardware-based SAN to provide a stable, high-performance VMware View VDI for 160 desktop platforms, in addition to supporting the storage needs of all its virtual machines running on VMware vSphere.

"The virtualization and common central management provided by the DataCore storage hypervisor considerably reduced our IT division's work load. New systems can now be fully set up for users within 10 minutes, unlike the laborious, many hours and days set-up required to install physical server storage systems in the past. Fast provisioning and capacity expansions can now be easily implemented via the central console with a few mouse clicks in the event of an acute need for storage," explained Roberto Sartin, head of Technical Division at Brennercom.

Establishment of Virtual Desktop and Cloud Infrastructure
At Brennercom, internal IT services are provided by the IT Management Division. The Division decided to extend its existing virtual VMware server infrastructure and to establish a VDI and cloud services based on VMware. To handle the expansion, it was decided that a new approach to managing data storage was needed.

The primary drivers were two-fold: First, Brennercom needed to expand its external computer center to accommodate new cloud computing services. Central and efficient system administration was a core objective of this extension. Second, Brennercom needed higher availability due to the increased business continuity requirements that would result from its move to centralization. The project plan and investments also encompassed a later partial move of a number of the systems to a second location in Trento (about 30 miles away), ensuring that the immediate, high-availability system could be backed up by a two-site disaster recovery model for the purposes of required ISO audits.

The company also had to consider the planned consolidation of its current, heterogeneous IT landscape at the Bozen site. While the central computer center services were based on a Fibre Channel infrastructure, some divisions were making use of iSCSI storage. Apart from vSphere virtual machines, Citrix XenServer was also used in some areas.

Cost-effective VDI with 160 desktops
The VDI with VMware View is based on an integrated system supporting virtualized storage and virtualized servers. The decision to use this platform was taken after the positive experience gained with the VMware hypervisor. The 160 desktops are being successfully migrated to the notebooks and thin clients of the field workers and the helpdesk, using the centralized infrastructure. The long-term benefits of VDI lie in lower costs and the less cumbersome, centralized administration needed for the setup, updating and maintenance of these virtual desktops.

"On the storage side, the VDI and the virtual servers are supported by DataCore’s round-the-clock, failsafe storage infrastructure and performance has been enhanced by intelligent caching, fulfilling all our expectations," comments Sartin. "By using the DataCore storage hypervisor, we were able to integrate a technically complex solution with a universal range of services to meet the short-term performance and high-availability requirements of our VDI needs. In addition, the integrated migration and replication features have created the basis for efficiently implementing the planned model we need for disaster recovery."

Flexible Infrastructure for Cloud Services
The next large-scale project, to be completed by year-end 2012, is dividing and synchronizing the existing systems between the computer centers in Bolzano and Trento so that operations can continue at one location in the event of a catastrophe.

"As is the case in other industries, business continuity is an absolute necessity for us. By making use of the DataCore solution within the virtual infrastructure created by VMware vSphere and VDI, we cannot only ensure that we meet these corporate requirements, but also guarantee optimal cost efficiency as a result of the hardware independence of the solution. This affects both the direct investment and the indirect and long-term cost of refreshes, expansions and added hardware acquisitions. We have thus created the technical basis for our external IT services, and within this framework we are creating the most flexible and varied range of cloud services possible," concludes Brennercom CEO, Dr. Karl Manfredi.

To read more regarding this deployment, please access a complete case study concerning DataCore’s implementation at Brennercom: Brennercom SpA Case Study.

Tuesday 20 November 2012

The Red Cross Embraces DataCore Software's SANsymphony-V to Optimize Online Analytical Processing Performance

Storage Hypervisor boosts charity's data mining speed by 300 Percent

http://finance.yahoo.com/news/red-cross-embraces-datacore-softwares-120000196.html

DataCore Software today announced that the British Red Cross Society has deployed the SANsymphony™-V Storage Hypervisor to provide a significant performance acceleration on its new Online Analytical Processing (OLAP) system, dramatically shortening response times and increasing the reliability of data extraction. The performance improvements were achieved by installing SANsymphony-V on an HP ProLiant DL370 server. The DataCore™ software has reduced the time window needed to perform the Extract, Transform and Load (ETL) operations from an average of 12 hours down to four, with the load spread across half the original number of internal hard disk drives, thanks to the efficiency of SANsymphony-V's thin provisioning.

The British Red Cross Society is the United Kingdom's registered charity arm of the worldwide humanitarian organization, the International Red Cross. Formed in 1870, the Red Cross has over 31,000 volunteers and 3,300 staff providing assistance and aid to all people in crisis, both in the UK and overseas, without discrimination and regardless of their ethnic origin, nationality or religion.

"In order to sustain the Charity's considerable ongoing work worldwide, the Red Cross needs to continually generate additional income from new and existing donors," said Kevin Bush, technical architect for the Charity's MIS Enterprise Architecture Team in London. "It is our function in MIS to ensure the relevant departmental units have the appropriate infrastructure available to allow them to complete automated processes in time to fulfil marketing campaigns to drive further donations."

To help facilitate ongoing fundraising, a new suite of hardware and business intelligence tools was deployed six months ago for the British Red Cross utilizing OLAP - an approach that swiftly answers multi-dimensional analytical queries through accurate Business Intelligence (BI) tools deployed on the British Red Cross's SQL Server database. BI data marts are created to track behavioral changes, creating campaign relevancy trends for business units. This level of data profiling, specifying individual campaigns with matched targets, entails significant I/O (Input/Output) processing demands and depends on a stable, optimized infrastructure.

Working in conjunction with the MIS Enterprise Architecture Team, the British Red Cross's partner, Adapto, recommended deploying DataCore's SANsymphony-V software to significantly decrease I/O strain and increase performance in a cost-effective, non-invasive way. The SANsymphony-V storage hypervisor could improve performance levels by increasing the speed of read/write requests across the entire British Red Cross storage infrastructure, using the storage server memory as the caching engine. This caching could dramatically accelerate application response times, manifesting in much faster database queries and data extraction for the business units.

Critical to the effectiveness of the Extract, Transform and Load (ETL) process from the database is achieving ongoing consistency within a predefined extraction window. The speed of I/O to process workloads determines these two factors; slow I/O equates to a long and erratic extraction window. In practice, prior to the performance caching gains, each ETL was taking between nine and 15 hours, set to run overnight with the resultant data marts ready in time for the next working day.

Following Adapto's suggestion, Kevin downloaded the easy-to-install SANsymphony-V test drive and right away ran a test ETL that displayed immediate benefits from DataCore's mega-caching ability, with the software recognizing I/O patterns to anticipate which blocks to read next into RAM from the back-end disks. Requests were fulfilled quickly from memory at electronic speeds, eliminating the delay associated with physical disk I/O. The findings were impressive. The production-ready, easy-to-use GUI allowed the ETL to perform at a blistering pace, similar to that achieved by SSD but without the associated cost overheads. This manifested in a shorter, four-hour extraction timeframe.
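For readers curious about the mechanics, here is a minimal sketch of the kind of sequential read-ahead caching described above. It is illustrative only and makes no claim to reflect DataCore's actual algorithm; the class and parameter names (ReadAheadCache, prefetch_depth) are hypothetical.

from collections import OrderedDict

class ReadAheadCache:
    """Toy block cache with sequential read-ahead (hypothetical names)."""

    def __init__(self, backend_read, capacity_blocks=1024, prefetch_depth=8):
        self.backend_read = backend_read      # function: block_id -> data
        self.capacity = capacity_blocks
        self.prefetch_depth = prefetch_depth
        self.cache = OrderedDict()            # block_id -> data, in LRU order
        self.last_block = None

    def _put(self, block_id, data):
        self.cache[block_id] = data
        self.cache.move_to_end(block_id)
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)    # evict the least recently used block

    def read(self, block_id):
        if block_id in self.cache:            # hit: served from RAM, no disk I/O
            self.cache.move_to_end(block_id)
            data = self.cache[block_id]
        else:                                 # miss: fetch from the back-end disk
            data = self.backend_read(block_id)
            self._put(block_id, data)
        # crude pattern detection: consecutive block reads trigger a prefetch
        if self.last_block is not None and block_id == self.last_block + 1:
            for nxt in range(block_id + 1, block_id + 1 + self.prefetch_depth):
                if nxt not in self.cache:
                    self._put(nxt, self.backend_read(nxt))
        self.last_block = block_id
        return data

# usage: cache = ReadAheadCache(backend_read=lambda b: f"block-{b}"); cache.read(0); cache.read(1)

A real product would detect far more complex patterns than simple sequential runs, but the principle is the same: anticipated blocks are already in memory when the ETL asks for them.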

"From the point of evaluation onwards, we haven't looked back with SANsymphony-V," said Bush. "It's caching and performance acceleration has certainly addressed the consistency of extraction, whilst reducing the window to an acceptable level, so that as a Charity, we can concentrate on effective fundraising to help those most in need. We are so impressed that we are now looking at installing another node of SANsymphony-V for high availability and mirroring."

Monday 19 November 2012

Storage hypervisor: Storage's future? It's the software that matters: Software-defined Storage; Storage Virtualization

By now you may have heard the term "storage hypervisor." You probably don't know exactly what it means, but that isn't your fault. Vendors that use the term to describe their products disagree on the exact meaning, although they mostly agree on why such a technology is useful.


A vendor panel at the Storage Networking World (SNW) show in Santa Clara, Calif., last month set out to define storage hypervisor. The represented vendors sell different types of products, though. The panel included array-based virtualization vendor Hitachi Data Systems Corp., network-based storage virtualization vendor IBM, software SAN virtualization vendor DataCore Software Corp. and virtual machine storage management vendor Virsto Software Corp.

Can all of these vendors' products be storage hypervisors? It's more accurate to say that, taken together, the storage hypervisor products make up an overview of storage virtualization under a new name. And that new name is already giving way to a newer term. "Software-defined storage" was used interchangeably with "storage hypervisor" during the SNW panel.

Software-defined storage is no better defined than storage hypervisor, but it includes the "software-defined" phrase taking over the data center and networking these days.

DataCore Software Corp. CEO George Teixeira said his company was ahead of the current trend when it started back in the 20th century with the premise that software gives storage its value.

"Today we have fancy terms for it like software-defined storage, but we started DataCore in 1998 with a very basic [PowerPoint] slide that said, 'It's the software that matters, stupid,'" Teixiera said. "And we've seen storage from the standpoint of really being a software design."

Teixeira said any talk of a storage hypervisor must focus on software.

"Can you download it and run it? And beyond that, it should allow users to solve a huge economic problem because the hardware is interchangeable underneath," he said. "Storage is no longer mechanical drives. Storage is also located in flash. Your architecture can incorporate all the latest changes, whether it's flash memory or new kinds of storage devices. When you have software defining it, you really don't care.

 "Just like with VMware today," said Teixiera, you really don't care whether it's Intel, HP, Dell or IBM servers underneath. Why should you care about the underlying storage?"

Read more at: http://searchvirtualstorage.techtarget.com/Storage-hypervisor-Hypothetical-or-storages-future

Friday 16 November 2012

See the latest In-depth Product Reviews on SANsymphony-V

Most recent: SANsymphony-V R9.0 Product Review by NT4ADMINS Magazine

Greater scalability, improved administration functions and close integration with vSphere environments and system management suites are the main characteristics of Release 9 of the SANsymphony™-V storage hypervisor. Above all, the 'group operations' make life easier for the administrator.

Wednesday 14 November 2012

Set your data free with Dell Fluid Data™ and DataCore SANsymphony-V

When businesses change, whether in response to a new opportunity or a competitive challenge, the applications and the data they depend on have to change too. That can be really hard with legacy storage solutions, whose rigid boundaries tend to hold data captive. This is especially true if, as is often the case, the storage infrastructure has been built up over time out of various “point solutions.” This creates inefficient data silos that make it hard to optimize the match between storage capabilities and application needs or take advantage of new hardware capabilities. Availability and disaster recovery capabilities can suffer as well.

The Fluid Data™ architecture from Dell is designed to overcome these storage challenges by making data as dynamic as the businesses that depend upon it. DataCore is a long-time Dell ISV partner, and we’ve been working with our reseller partners around the world to help our customers realize the benefits of Fluid Data. “We have thousands of DataCore storage hypervisor customers using Dell storage platforms,” says Carlos Carreras, DataCore’s Vice President of Alliances & Business Development. “We see many DataCore partners like The Mirazon Group and Sanity Solutions penetrating non-Dell accounts and leveraging SANsymphony-V to make it easier for customers to meet their storage needs with Dell solutions.”

The DataCore SANsymphony-V storage hypervisor lets Dell resellers seamlessly harness the Dell Fluid Data architecture and its wide range of products to address the storage appetite of their customers, including platforms such as Compellent, EqualLogic, and the PowerVault MD Series. Customers can add these cost-effective Dell solutions to their storage portfolio without a forklift upgrade, preserving their storage investments and prolonging the useful life of existing storage (e.g., moving it down-tier) while leveraging the power of Fluid Data for increased storage efficiency and performance. The DataCore storage hypervisor and its enterprise-wide auto-tiering make it easy to penetrate and refresh existing storage installations and add new Dell storage to modernize the infrastructure and lower overall costs. With the SANsymphony Cloud Gateway, customers can even add popular public cloud hosting services as a low-cost tier in their storage strategy. With DataCore and Dell, customers get infrastructure-wide storage management and the compelling benefits of Fluid Data across all their storage investments.

For DataCore partners, the new DataCore SANsymphony-V Migration Suite makes it easy to introduce new customers to Dell storage with completely non-disruptive data migration. The suite enables a DataCore partner to set up a temporary dual-node SANsymphony-V installation that can turn a hardware refresh into a zero-impact process. A pass-through architecture assures that the customer’s environment remains “hot” the entire time. Users never even know a migration has taken place.

“In customer meetings, I am often met with skepticism that there is no way to do an easy migration without a lot of disruption. After they see the power of DataCore storage virtualization software in action, their jaws literally drop because they cannot believe that it can be that simple to migrate their storage and VMs,” said Barry Martin, partner and chief technology officer at The Mirazon Group.

Barry also notes that the heat map recently introduced in SANsymphony-V 9.0 is an especially powerful analytical tool. "While the migration suite is in place, you could show the customer all their storage I/O 'hot spots,' and where, for instance, an SSD tier could boost the performance of critical applications. Being able to give that kind of strategic advice is key to our business success, and the visual impact makes it all the more powerful."

These and other features make SANsymphony-V a natural complement to the Dell Fluid Data architecture. You can start learning more about SANsymphony-V here, or check out case studies in a variety of industries and applications to see how the DataCore storage hypervisor can go to work for you.

SEE DATACORE SOFTWARE AT THIS UPCOMING DELL EVENT

Dell Storage Forum Paris 2012

DataCore will be a Petabyte Sponsor at the upcoming Dell Storage Forum.

The event takes place 14-16 November 2012 in Paris. Address follows:

Marriott Rive Gauche Hotel & Conference Center
17 Boulevard Saint Jacques
Paris, 75014
France

Description
This is a channel partner and an end-user focused event. DataCore will present the newest version of its storage hypervisor – SANsymphony-V 9.0.

For more information on the show, visit Dell Storage Forum Paris 2012.

Monday 12 November 2012

SC12 Supercomputing Conference: Storage technology leaders Fusion-io and DataCore Software team up to showcase new joint solution for data-intensive, HPC applications

DataCore Software Featured in Fusion-io Booth #2201 at SC12 Conference

DataCore Software, the storage hypervisor leader and premier provider of storage virtualization software, invites attendees of SC12, the international conference for high performance computing (HPC), networking, storage and analysis, to explore innovative new ways to take advantage of Fusion-io flash memory technologies in data-intensive environments. DataCore will be exhibiting in Fusion-io booth #2201 at the Salt Palace Convention Center in Salt Lake City, Utah, November 12-15, 2012.

DataCore will showcase its SANsymphony™-V storage hypervisor integrated with the Fusion ioMemory platform to meet the large scale, low-latency needs of HPC applications common to many SC12 visitors. Attendees will learn how DataCore applies state-of-the-art auto-tiering technology to dynamically distribute I/O workloads between blazing fast Fusion-io flash memory and conventional high-density disk farms for an optimal price/performance balance.

Experts will also be on hand to give advice on how to eliminate crippling single points of failure by using the SANsymphony-V software to mirror data between redundant, multi-tier storage pools.

Fusion-io products are well known for accelerating databases, cloud computing, big data and HPC applications in a variety of industries, including e-commerce, social media, finance, government and telecommunications. Combined with DataCore’s™ storage hypervisor, customers not only enjoy higher performance and availability, but also superior flexibility and exceptional value from their storage investments.

“High performance computing requires applications to process data at speeds that transform data into discovery,” said Tyler Smith, Fusion-io vice president of alliances. “Like other data-driven webscale and enterprise organizations, HPC innovators are also cost-conscious and mindful of data protection. Our collaboration with DataCore Software provides a powerful integrated solution that ensures data and applications are available and ready to efficiently deliver peak performance.”

“SC12 provides a fantastic backdrop to convey the joint value resulting from DataCore’s long-standing relationship with Fusion-io. We are seeing great results with many customers leveraging our combined hardware and software capabilities as the centerpiece for their most demanding workloads,” adds Carlos Carreras, vice president of alliances and business development at DataCore Software.

SC12 is the premier international conference for high-performance computing, networking, storage and analysis. The conference is expecting 10,000 attendees representing more than 50 countries and 366 exhibitors. Exhibits and technical presentations at SC12 will offer a look at state-of-the-art solutions in high performance computing and a glimpse of the future.
  • What: DataCore and Fusion-io demonstrations at SC12 Conference
  • Where: Booth 2201, Salt Palace Convention Center, Salt Lake City, Utah
  • When: November 12-15, 2012 

Network Computing Review: DataCore's Storage Hypervisor - An Overview & Customer Use Cases – New Release Features

Network Computing: DataCore's Storage Hypervisor - An Overview – Part 1: New Release Features

By David Hill. David Hill is an IT author, and Chief Analyst and Principal of Mesabi Group LLC. DataCore Software is not a client of David Hill or the Mesabi Group.

A storage hypervisor is an emerging term used by some vendors to describe their approach to storage virtualization. Several companies offer storage hypervisors, including IBM, Virsto and DataCore. I've already written about IBM and Virsto in previous blogs.

Now it's DataCore's turn. DataCore is an independent software vendor (ISV), so it has no financial interest in selling the underlying storage hardware. It supports both virtualized servers and traditional physical hosts, as well as legacy storage, with the same feature stack and automation. DataCore's storage hypervisor is a software product called SANsymphony-V. This blog will examine some enhanced and new features of the version 9 release.

Auto-tiering
Auto-tiering is a "hot" topic (pun intended!) involving not only tier-0 solid state devices, but also performance (SAS or FC) hard disk drives, capacity (SATA) drives, and archived storage that can even be rented from public cloud providers at a distance. This feature also includes automatic tuning that creates heat maps to reveal heavy disk activity, so that the hottest data gets the most attention (in order to meet performance service level requirements). It also automates load balancing across the available disk resources.
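As an illustration of the general idea, the sketch below shows a heat-map-driven placement loop: access counts form the heat map, and each block is periodically moved to the tier its recent activity justifies. This is not DataCore's implementation; the tier names, thresholds and class names are invented for the example.

from collections import Counter

class AutoTierer:
    """Toy heat-map-driven tiering: promote hot blocks, demote cold ones."""

    def __init__(self, hot_threshold=100, warm_threshold=10):
        self.heat = Counter()        # block_id -> access count (the "heat map")
        self.placement = {}          # block_id -> current tier ("ssd", "sas", "sata")
        self.hot_threshold = hot_threshold
        self.warm_threshold = warm_threshold

    def record_access(self, block_id):
        self.heat[block_id] += 1     # every read/write warms the block

    def rebalance(self):
        """Periodically move each block to the tier its recent heat justifies."""
        moves = []
        for block_id, count in self.heat.items():
            if count >= self.hot_threshold:
                target = "ssd"       # hottest data earns the fastest tier
            elif count >= self.warm_threshold:
                target = "sas"
            else:
                target = "sata"      # cold data sinks to cheap capacity disks
            if self.placement.get(block_id) != target:
                moves.append((block_id, self.placement.get(block_id), target))
                self.placement[block_id] = target
        self.heat.clear()            # start a fresh sampling window
        return moves

In a real system the rebalance step would also respect capacity limits per tier and move data in the background, but the promote/demote decision shown here is the essence of heat-map tiering.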


Network Computing: DataCore's Storage Hypervisor - An Overview –Part 2: Two Customer Use Cases

Host.net is a service provider that offers VM and enterprise storage platforms in multiple virtual private data centers (i.e., Host.net hosts customer compute and storage resources at its data centers), all connected to a Cisco-based 10 Gbps multinational backbone. Among the many services the company offers are virtual enterprise servers, storage, backup/restore, disaster recovery and colocation.

DataCore is at the heart of Host.net's enterprise SAN storage platform. Host.net believes DataCore offers the necessary performance and data integrity (every byte of data is written twice within a synchronous mirror) at a competitive price. Among the things Host.net likes about DataCore are hardware independence (for example, in a SAN hardware refresh it can add and migrate data on the fly with no downtime), operating system independence and robust I/O performance, as DataCore's use of hundreds of gigabytes of high-speed cache essentially turns a traditional SAN into a high-speed hybrid solid-state SAN at a fraction of the cost.
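To make the "every byte of data is written twice" point concrete, here is a minimal sketch of a synchronous mirror write path: the host only sees success after both copies have been committed, so either side can fail without losing acknowledged data. It is a simplified illustration, not Host.net's or DataCore's code, and the helper names are hypothetical.

class InMemoryNode:
    """Stand-in for one storage node; real nodes would persist to disk."""
    def __init__(self):
        self.blocks = {}

    def write(self, block_id, data):
        self.blocks[block_id] = data

def mirrored_write(block_id, data, primary, secondary):
    """Commit the same block on both nodes before acknowledging the host."""
    primary.write(block_id, data)       # copy 1
    secondary.write(block_id, data)     # copy 2: only now is the write "safe"
    return "ack"                        # host sees success after both copies land

# usage
node_a, node_b = InMemoryNode(), InMemoryNode()
mirrored_write(42, b"payload", node_a, node_b)
assert node_a.blocks[42] == node_b.blocks[42]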

X-IO (formerly Xiotech) builds hardware with its Hyper ISE (Intelligent Storage Elements) storage system. With a great deal of engineering experience and innovation, the goal is to deliver high performance to accelerate enterprise applications at a good price/performance level. However, X-IO has decided to shed the storage and data management software (such as snapshot and replication software) that typically characterizes enterprise-class storage.

But customers still need storage and data management software. DataCore provides those capabilities for X-IO products. As a result, X-IO can maintain a hardware-intensive focus and improve price/performance while DataCore picks up the slack.

Storage virtualization solutions help make the most of virtualization

A good recent article worth sharing: a Gartner analyst recently spoke on why storage virtualization solutions make the most of virtualization, SSDs and auto-tiering: http://searchvirtualstorage.techtarget.com/news/2240169260/Storage-virtualization-solutions-help-make-the-most-of-virtualization

...Server virtualization allows much higher rates of system usage, but the resulting increases in network traffic pose significant challenges for enterprise storage. The simple "single server, single network port" paradigm has largely been displaced by servers running multiple workloads and using numerous network ports for communication, resiliency and storage traffic.
Virtual workloads are also stressing storage for tasks, including desktop instances, backups, disaster recovery (DR), and test and development.
At Gartner Symposium/ITxpo recently, Stanley Zaffos, a Gartner research vice president, outlined the implications of server virtualization on storage and explained how storage virtualization solutions, the right approach, and the proper tool set can help organizations mitigate the impact on enterprise storage.
Consider using storage virtualization. Gartner's Zaffos urges organizations to deploy storage virtualization as a means of better storage practice, and he underscores core benefits of the technology:
  • Storage virtualization supports storage consolidation/pooling, allowing all storage to be "seen" and treated as a single resource. This avoids orphaned storage, improves storage utilization and mitigates storage costs by reducing the need for new storage purchases. The benefits of storage consolidation increase with the amount of storage being managed.
  • Storage virtualization supports agile and thin provisioning, allowing organizations to create larger logical storage areas than the actual disk space allocated. This also reduces storage costs because a business does not need to purchase all of the physical storage up front -- it simply adds more storage as the allocated space fills up. Later tools may allow dynamic provisioning, where the logical volume size can be scaled up or down on demand. Management and capacity planning are important here (a minimal sketch of the thin-provisioning idea follows after this list).
  • Storage virtualization supports quality of service (QoS) features that enhance storage functions. For example, auto-tiering can automatically move data from faster and more expensive storage to slower and less expensive storage (and back) based on access patterns. Another feature is prioritization, where some data is given I/O priority over other data.
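Returning to the thin-provisioning bullet above, the following minimal sketch shows the basic mechanism: a volume advertises a large logical size to the host, but physical blocks are consumed only when they are actually written. The names and sizes are hypothetical and purely for illustration.

class ThinVolume:
    """Toy thin-provisioned volume: big logical size, physical use on demand."""

    def __init__(self, logical_size_blocks):
        self.logical_size = logical_size_blocks   # size advertised to the host
        self.allocated = {}                       # logical block -> data actually written

    def write(self, block_id, data):
        if not 0 <= block_id < self.logical_size:
            raise IndexError("write beyond the advertised logical size")
        self.allocated[block_id] = data           # physical capacity consumed only here

    def physical_usage_blocks(self):
        return len(self.allocated)                # blocks of real capacity in use

vol = ThinVolume(logical_size_blocks=1_000_000)   # host sees roughly 1M blocks
vol.write(0, b"data")
print(vol.physical_usage_blocks())                # 1 -> only one real block consumed

This is why capacity planning matters with thin provisioning: many volumes can oversubscribe the same physical pool, so administrators must add real disks before the allocated space fills up.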
Consider using solid-state drives (SSDs). One of the gating issues for storage is the lag time caused by mechanical delays that are unavoidable in conventional hard-disk technologies. This limits storage performance, and the effects are exacerbated for virtual infrastructures where I/O streams are randomly mixed together and funneled across the network to the storage array, creating lots of disk activity. Storage architects often opt to create large disk groups. By including many spindles in the same group, the mechanical delays are effectively spread out and minimized because one disk is writing/reading a portion of the data while other disks are seeking. Zaffos points to SSDs as a means of reducing spindle count and supplying much higher IOPS for storage tasks.
Plan the move to virtualization carefully. Data center architects must develop a vision of their infrastructure and operation as they embrace virtualization. Zaffos suggested IT professionals start by identifying and quantifying the impact server virtualization, data growth and the need for 24/7 operation will have on the storage infrastructure and services.
Next, determine what you actually need to accomplish and align storage services with the operational abilities and physical infrastructure. For example, if you need to emphasize backup/restoration capabilities, support data analytics, or handle desktop virtualization, it's important to be sure that the infrastructure can support those needs. If not, you may need to upgrade or make architectural changes to support those capabilities.
When making decisions for virtualization, Zaffos notes the difference between strategic and tactical issues. Strategic decisions create lock-in, and tactical decisions yield short-term benefits. For example, the move to thin provisioning is a tactical decision, but the choice to use replication like SRDF would be a strategic decision.
...Ultimately, Zaffos notes that storage virtualization solutions can be a key enabling technology for server and desktop virtualization -- both of which place extreme demands on the storage infrastructure. But, he said, the move to storage virtualization takes a thorough understanding of the benefits, careful planning to ensure proper alignment with business and technical needs, and judicious use of storage technologies like tiers and SSD.