Tuesday 7 February 2012

Did Virtualization Beget a Monster?

As with all disruptive technology changes, virtualization brought tremendous gains in productivity and cost savings, but its widespread adoption has also created a new challenge: managing what some have called the “Virtual Machine Sprawl Monster.” The ease of provisioning has dramatically increased the number of virtual machines deployed. As a consequence, even greater performance, availability and administrative management are required to support these sprawling virtual environments. These requirements will only grow as systems (both virtual and physical), platforms and applications continue to proliferate, as they most surely will.

Fundamentally, the main challenge facing IT managers is: “How can I manage and control all this complexity with a tighter budget and fewer skilled resources?” The answer is straightforward: with very smart software, a new mindset focused on architecture rather than devices, and the alignment of people and processes to this new reality.

Is the real bottleneck for an IT manager the time required to get things done or the growing infrastructural chaos?

Both the lack of time and the complexity of managing dynamic infrastructures have made the IT manager’s job more difficult. Therefore, the IT manager’s role must evolve from technician to architect. Too many of the tasks that IT managers must currently perform or oversee are platform- or vendor-specific, or tied to purpose-built hardware devices, and therefore require specialized training or tools to use properly. Moreover, legacy systems and new models often don’t work well together.

Instead of simply addressing IT as a set of discrete technologies and platforms, the IT manager must create an environment in which hardware components become pooled and interchangeable and can be managed as a whole. This higher-level viewpoint is needed to cost-effectively meet the demanding and dynamic requirements for more performance, higher availability and greater productivity. For this to succeed, smart management software that works infrastructure-wide and new levels of automation are necessities. Automation is one of the things software does very well.

The good news is that the required smart software, such as storage hypervisors, is now available. These products are easy to use, enable hardware interchangeability, automate difficult repetitive tasks and manage resources as pools.
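To make the pooling idea concrete, here is a minimal sketch in Python. The device names, sizes and the most-free-space placement policy are illustrative assumptions, not a description of any vendor's actual implementation; the point is that once devices sit behind one allocation interface, callers never need to know which box backs their volume.

```python
class Device:
    """One physical storage device of any make or model."""
    def __init__(self, name, capacity_gb):
        self.name = name
        self.capacity_gb = capacity_gb
        self.used_gb = 0

    def free_gb(self):
        return self.capacity_gb - self.used_gb


class StoragePool:
    """Treats heterogeneous devices as one interchangeable pool of capacity."""
    def __init__(self, devices):
        self.devices = devices

    def total_free_gb(self):
        return sum(d.free_gb() for d in self.devices)

    def allocate(self, size_gb):
        # Simple illustrative policy: place the request on the device
        # with the most free space. The caller only sees pool capacity.
        device = max(self.devices, key=lambda d: d.free_gb())
        if device.free_gb() < size_gb:
            raise RuntimeError("pool exhausted")
        device.used_gb += size_gb
        return device.name


# Hypothetical devices from two different vendors, managed as one pool.
pool = StoragePool([Device("array_a", 500), Device("array_b", 300)])
backing = pool.allocate(100)  # lands on whichever device has the most headroom
```

Because allocation happens against the pool rather than a specific array, a device can be swapped for a cheaper one from the open market without changing how capacity is consumed.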

Also, to be cost-effective today, hardware must become an interchangeable component, so that you can go to the open market and get the best price for the hardware you need. Products like VMware and Hyper-V have already had a major impact on server hardware selection, and solutions like DataCore’s storage hypervisor will do the same for storage devices.

Software is where the intelligence lies, and it is the key to better management. DataCore’s storage hypervisor, for instance, offers a comprehensive architecture to control the four main challenges of storage management: meeting performance needs, ensuring data protection and disaster recovery, cost-effectively pooling and tiering storage resources, and optimizing the utilization of infrastructure-wide storage capacity.

How does DataCore address these needs?

The simple answer is that we do this by providing an architecture to manage storage – a storage hypervisor. DataCore is smart, easy-to-use software that delivers powerful automation and hardware interchangeability. It embraces and controls both the virtual and physical worlds and extends the capabilities of both existing and new storage assets.

DataCore’s storage hypervisor pools, auto-tiers and virtualizes existing and new storage devices, including the latest high-performance, premium-priced memory-based storage technologies (Flash Memory/SSDs) and cost-effective gateways to remotely located Cloud Storage. It provides an architecture that manages the many storage price/performance trade-offs while providing a best fit of storage resources to meet the dynamic workloads and applications of the real world. From a performance standpoint, caching software and self-learning algorithms are in place to boost performance, often improving response times two- to threefold. Auto-failover and failback software provides the highest levels of availability and continuous access to storage. Auto-tiering manages the I/O traffic to ensure that data is in the right place (SSD, fast storage arrays, capacity storage, Cloud storage) to get the best performance at the lowest possible cost. Automated thin provisioning makes it simple and quick to service applications’ disk needs and fully optimize the overall utilization of storage capacity.
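The auto-tiering idea described above can be sketched in a few lines of Python. This is an illustrative ranking policy under assumed tier names and block counts, not DataCore's actual algorithm: blocks with the most recent I/O migrate toward the fastest tier, and cold blocks fall through to capacity or cloud storage.

```python
# Tiers ordered fastest (and most expensive) to slowest (and cheapest).
TIERS = ["ssd", "fast_array", "capacity", "cloud"]


def retier(access_counts, slots_per_fast_tier):
    """Assign each block to a tier by recent I/O frequency.

    access_counts: {block_id: io_count} gathered over some window.
    slots_per_fast_tier: how many blocks fit on each of the faster tiers;
    whatever doesn't fit falls through to the last (cloud) tier.
    """
    # Rank blocks hottest-first by observed I/O.
    ranked = sorted(access_counts, key=access_counts.get, reverse=True)
    placement = {}
    start = 0
    for tier, slots in zip(TIERS[:-1], slots_per_fast_tier):
        for block in ranked[start:start + slots]:
            placement[block] = tier
        start += slots
    for block in ranked[start:]:          # everything else drains to cloud
        placement[block] = TIERS[-1]
    return placement


# Hypothetical I/O counts: b1 is hottest, b4 is nearly idle.
counts = {"b1": 900, "b2": 40, "b3": 500, "b4": 3}
plan = retier(counts, slots_per_fast_tier=[1, 1, 1])
# hottest block lands on SSD; the coldest falls through to cloud storage
```

A real implementation would of course migrate data asynchronously and weigh migration cost against benefit, but the core trade-off is the same: scarce fast capacity goes to the hottest data.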

From a people and process standpoint, DataCore’s storage hypervisor provides a simple and common way to manage, pool, migrate and tier all storage resources. The software accelerates performance and automates many time-consuming tasks including data protection and disk space provisioning.
