Wednesday 30 April 2014

ZDNet: Snapshot Analysis on Software-defined Storage, DataCore style

By Dan Kusnetzky for Virtually Speaking -- Read the full ZDNet article at:

Summary: DataCore has released SANsymphony 10 to further expand what its virtual SAN software can do. Call it what you want: storage virtualization, software-defined storage -- or simply fast, reliable and easy.

George Teixeira, CEO of DataCore, and his colleague Augie Gonzalez, director of product marketing, stopped by to discuss how the company has done since our last chat and to introduce version 10 of SANsymphony.

They believe that this is the best software for flash storage, DRAM storage and all manner of rotating media…

Snapshot analysis

I've spoken with a number of customers who use SANsymphony on a daily basis. If I told you exactly what they've said, you would believe this post was really a marketing message from DataCore in disguise. I often hear stories of huge performance improvements, significant improvements in storage utilization and much better storage reliability.

Although I'm sure that DataCore's competitors would disagree, Teixeira asserts that SANsymphony is "the ultimate storage stack that simply accelerates everything."

He also asserts that all DataCore needs to do is get people to try it and they're hooked. Although I'm usually very skeptical of such claims, after talking with DataCore's customers, I'm beginning to believe it is true.

There are a large number of suppliers in the storage and storage virtualization market. Players such as IBM, EMC, NetApp, HP, HDS and a growing list of other competitors offer products that appear similar. DataCore has to get ahead of all of these large and small suppliers to get the ear of IT decision makers. If the company is able to break through all of the noise, the strengths of its product are likely to be convincing.


DataCore Announces SANsymphony-V10, Enterprise-Class Virtual SANs and Flash-Optimising Storage Software Stack

 Next Generation SANsymphony-V10 Software-Defined Storage Platform

Scales Virtual SANs to More Than 50 Million IOPS and 32 Petabytes of Pooled Capacity, Surpassing Leading Competitors; End-to-End Storage Services Keep Virtual SANs, Converged Appliances, Flash Devices, Physical SANs, Networked and Cloud Storage From Becoming ‘Isolated Storage Islands’

Amidst the ever-growing demand for enterprise-grade virtual SANs and the need for cost-effective utilisation of Flash technology, DataCore, a leader in software-defined storage, today revealed new virtual SAN functionality and significant enhancements to its SANsymphony™-V10 software – the 10th-generation release of its comprehensive storage services platform. The new release significantly advances virtual SAN capabilities designed to achieve the fastest performance, highest availability and optimal use of Flash and disk storage directly attached to application hosts and clustered servers in virtual (server-side) SAN use cases.

DataCore’s new Virtual SAN is a software-only solution that automates and simplifies storage management and provisioning while delivering enterprise-class functionality, automated recovery and significantly faster performance. It is easy to set up and runs on new or existing x86 servers where it creates a shared storage pool out of the internal Flash and disk storage resources available to that server. This means the DataCore™ Virtual SAN can be cost-effectively deployed as an overlay, without the need to make major investments in new hardware or complex SAN gear.

DataCore contrasts its enterprise-class virtual SAN offering with competing products, which it says cannot sustain serious workloads, provide no growth path to physical SAN assets, and are inextricably tied to a specific server hypervisor, rendering them unusable in all but the smallest branch office environments or non-critical test and development scenarios.

The Ultimate Virtual SAN: Inexhaustible Performance, Continuous Availability, Large Scale
There is no compromise on performance, availability and scaling with DataCore. The new SANsymphony-V10 virtual SAN software scales performance to more than 50 Million IOPS and to 32 Petabytes of capacity across a cluster of 32 servers, making it one of the most powerful and scalable systems in the marketplace.

Enterprise-class availability comes standard with a DataCore virtual SAN; the software includes automated failover and failback recovery, and is able to span an N+1 grid (up to 32 nodes) stretching over metro-wide distances. With a DataCore virtual SAN, business continuity, remote site replication and data protection are simple to implement, and best of all, once set, they run automatically thereafter.

DataCore SANsymphony-V10 also accommodates mixed combinations of virtual and physical SANs, and accounts for the likelihood that a virtual SAN may extend out into an external SAN – as the need for centralised storage services and hardware consolidation efficiencies arises, either initially or in later stages of the project. DataCore stands apart from the competition in that it can run on the server side as a virtual SAN; it can run and manage physical SANs; and it can operate and federate across both. SANsymphony-V10 essentially provides a comprehensive growth path that amplifies the scope of the virtual SAN to non-disruptively incorporate external storage as part of an overall architecture.

A Compelling Solution for Expanding Enterprises
While larger environments will be drawn by SANsymphony-V10’s impressive specs, many customers have relatively modest requirements for their first virtual SAN. Typically they are looking to cost-effectively deploy fast ‘in memory’ technologies to speed up critical business applications, add resiliency and grow to integrate multiple systems over multiple sites, but have to live within limited commodity equipment budgets.

“We enable clients to get started with a high performance, stretchable and scalable virtual SAN at an appealing price, that takes full advantage of inexpensive servers and their internal drives,” said Paul Murphy, vice president of worldwide marketing at DataCore. “Competing alternatives mandate many clustered servers and require add-on flash cards to achieve a fraction of what DataCore delivers.”
DataCore virtual SANs are ideal solutions for clustered servers, VDI desktop deployments, remote disaster recovery and multi-site virtual server projects, as well as those demanding database and business application workloads running on server platforms. The software enables companies to create large scale and modular ‘Google-like’ infrastructures that leverage heterogeneous and commodity storage, servers and low-cost networking to transform them into enterprise-grade production architectures.

Virtual SANs and Flash: Comprehensive Software Stack is a ‘Must Have’ for Any Flash Deployment
SANsymphony-V10 delivers the industry’s most comprehensive set of features and services to manage, integrate and optimise Flash-based technology as part of your virtual SAN deployment or within an overall storage infrastructure. For example, SANsymphony-V10 self-tunes Flash and minimises flash wear, and enables flash to be mirrored for high-availability even to non-Flash based devices for cost reduction. The software employs adaptive 'in-memory' caching technologies to speed up application workloads and optimise write traffic performance to complement Flash read performance. DataCore’s powerful auto-tiering feature works across different vendor platforms optimising the use of new and existing investments of Flash and storage devices (up to 15 tiers). Other features such as metro-wide mirroring, snapshots and auto-recovery apply to the mix of Flash and disk devices equally well, enabling greater productivity, flexibility and cost-efficiency.
DataCore’s Universal End-to-End Services Platform Unifies ‘Isolated Storage Islands’
SANsymphony-V10 also continues to advance larger scale storage infrastructure management capabilities, cross-device automation and the capability to unify and federate ‘isolated storage islands.’

 “It’s easy to see how IT organisations responding to specific projects could find themselves with several disjointed software stacks – one for virtual SANs for each server hypervisor and another set of stacks from each of their flash suppliers, which further complicates the handful of embedded stacks in each of their SAN arrays,” said IDC’s consulting director for storage, Nick Sundby. “DataCore treats each of these scenarios as use cases under its one, unifying software-defined storage platform, aiming to drive management and functional convergence across the enterprise.”

Additional Highlighted Features
The spotlight on SANsymphony-V10 is clearly on the new virtual SAN capabilities, and the new licensing and pricing choices. However, a number of other major performance and scalability enhancements appear in this version as well:

  • Scalability has doubled from 16 to 32 nodes; Enables Metro-wide N+1 grid data protection
  • Supports high-speed 40/56 GigE iSCSI; 16Gbps Fibre Channel; iSCSI Target NIC teaming
  • Performance visualisation/Heat Map tools add insight into the behaviour of Flash and disks
  • New auto-tiering settings optimise expensive resources (e.g., flash cards) in a pool
  • Intelligent disk rebalancing, dynamically redistributes load across available devices within a tier
  • Automated CPU load leveling and Flash optimisations to increase performance
  • Disk pool optimisation and self-healing storage; Disk contents are automatically restored across the remaining storage in the pool; Enhancements to easily select and prioritise order of recovery
  • New self-tuning caching algorithms and optimisations for flash cards and SSDs
  • ‘Click-simple’ configuration wizards to rapidly set up different use cases (Virtual SAN; High-Availability SANs; NAS File Shares; etc.)
Pricing and Availability
Typical multi-node SANsymphony-V10 software licenses start in the 8,000 to 20,000 euro range. The new Virtual SAN pricing starts at 3,300 euros per server. The virtual SAN price includes auto-tiering, adaptive read/write caching from DRAM, storage pooling, metro-wide synchronous mirroring, thin provisioning and snapshots. The software supports all the popular operating systems hosted on VMware ESX and Microsoft Hyper-V environments. Simple ‘plug-ins’ for both VMware vSphere and Microsoft System Center are included to enable simplified hypervisor-based administration. SANsymphony-V10 and its virtual SAN variations may be deployed in a virtual machine or run natively on Windows Server 2012, using standard physical x86-64 servers.
General availability for SANsymphony-V10 is scheduled for May 30, 2014.

Wednesday 23 April 2014

Virtualisation Vendors VMware and DataCore Making Inroads on Storage Simplicity

...How Well Does Virtual SAN Work Outside of VMware Environments?

But there are still a few questions that need to be answered. For example, how does Virtual SAN work with a SAN you already have?
"It doesn't," is the simple answer, according to George Teixeira, CEO at software-defined storage (SDS) vendor DataCore Software. Unlike DataCore's SANsymphony-V software (for example), which can create virtual storage that supports existing SANs, Virtual SAN is only designed to work with VMware environments, Teixeira points out.
And how does Virtual SAN support presenting its storage to any other system other than ESX? Again, Teixeira says that the answer is that it doesn't — unlike SANsymphony-V, of course, which can work with any combination of hypervisor environments, plus the real physical world.
There's little doubt that VMware will bring its marketing muscle (and software expertise) to bear on Virtual SAN, and that plenty of VMware customers will start using it. Probably much to the chagrin of traditional SAN vendors and administrators.
But Teixeira believes that there will be benefits for software-defined storage vendors like DataCore as well. He's counting on the idea that once VMware has educated the market about the benefits of software-defined storage, enterprises will want to investigate other solutions that support heterogeneous hypervisor environments. Not to mention good old-fashioned, what-you-see-is-what-you-get physical servers as well.

Monday 21 April 2014

eWeek on DataCore's Software Defined Storage Survey

Software-defined networking has earned a large share of press coverage when it comes to trendy IT topics, and with all the new demands now being foisted on networks, the attention certainly is deserved. But software-defined storage is also a major part of the big new distributed IT picture, and similar issues involving security, high data volumes, and management continue to disrupt systems old and new.

Software-defined storage is data center infrastructure that is managed and automated by intelligent software as opposed to the storage hardware itself.
In an effort to understand these problems, Fort Lauderdale, Fla.-based storage software provider DataCore conducts an annual survey of global IT professionals to get more specific about the current storage challenges that enterprises are facing and to find out what market forces are driving demand for SDS.

This year's "State of Software-Defined Storage 2014" report, released April 17, indicates that these IT managers expect SDS to simplify management (26 percent) of their "isolated islands" of storage devices, enable them to reduce disruptions (30 percent), better protect investments (32 percent) and future-proof their infrastructure (21 percent) to absorb new technologies such as solid-state flash media.

A Lot to Ask for Software to Accomplish
That's a lot to ask from a particular set of IT tools and products, but that's what the market is saying.
DataCore, of course, has more than a passing interest in this topic, since its SANsymphony-V storage virtualization software is, in fact, software-defined, deploying standard Windows servers and using off-the-shelf hard or flash disks to provide a high level of shared file services at a lower price point than most bigger-name vendors.

Here are some of the key data points in this year's DataCore research, a targeted survey that included insights from 388 highly placed storage managers and administrators.
--Organizations look for software-defined storage (SDS) to both simplify management (26 percent) of their incongruous storage devices and enable them to future proof their infrastructure (21 percent).
--More than half the respondents (63 percent) said that they currently have less than 10 percent of capacity assigned to flash storage.
--The two main factors that impede organizations from considering different models and manufacturers of storage devices were the plethora of tools required to manage them (41 percent) and the difficulty of migrating between different models and generations (37 percent).
--Thirty-nine percent of respondents said that they don't run into these concerns because independent storage virtualization software allows their organizations to pool different devices and models from competing manufacturers and manage them centrally.
--Nearly 40 percent of respondents said that they were not planning to use flash or solid-state disks for server virtualization projects due to cost concerns.
--When asked how serious an obstacle performance degradation or the inability to meet performance expectations was when virtualizing server workloads, 23 percent of respondents ranked it as the most serious obstacle and 32 percent viewed it as somewhat of an obstacle to virtualization.
--Similar to last year, both the ability to enable storage capacity expansion without disruption (30 percent) and the improvement of disaster recovery and business continuity practices (32 percent) ranked highest for reasons that organizations deployed storage virtualization software.

Why SDS Can Be an Effective Approach
The bottom line, according to the findings, is that software-defined storage, if installed and automated correctly, can be an effective tool to stop the proliferation of separate storage islands within IT infrastructures. The added benefit is that most legacy hardware can be kept online, with only the software needing updating.
"One of the biggest and most frustrating IT problems organizations face is the difficult task of managing diversity [of data] and migrating [data] between different vendors, models and generations of storage devices—which prevent them from entertaining more attractive alternatives from competing suppliers," said DataCore President and CEO George Teixeira.

"Software-defined storage is not only designed to help organizations pool all of their available storage assets, but it allows organizations the ability to manage end-to-end and to add any type of storage asset to their existing storage architecture."

Monday 14 April 2014

Data Storage Placement Host-side or SAN-side: Which side are you on?

Storage Magazine Article: Which side are you on?

DataCore's Augie Gonzalez considers both sides of the storage placement argument and concludes that maybe we don't have to take sides at all

There is a debate raging as to where data storage should be placed: inside the server or out on the storage area network (SAN). The split between the opposing camps grows wider each day. The controversy has raised concerns among the big storage manufacturers, and will certainly have huge ripple effects on how you provision capacity going forward.

Twenty years ago, SANs were a novelty. Disks primarily came bundled in application servers - what we call Direct Attached Storage (DAS) - reserved to each host. Organisations purchased the whole kit from their favourite server vendor. DAS configurations prospered but for two shortcomings: one with financial implications and the other affecting operations.

First, you'd find server farms with a large number of machines depleted of internal disk space, while the ones next to them had excess. We lacked a fair way to distribute available capacity where it was urgently required. Organisations ended up buying more disks for the exhausted systems, despite the surplus tied up in the adjacent racks.

The second problem with DAS surfaced with clustered machines, especially after server virtualisation made virtual machines (VMs) mobile. In clusters of VMs, multiple physical servers must access the same logical drives in order to rapidly take over for each other should one server fail or get bogged down.
SANs offer a very appealing alternative - one collection of disks, packaged in a convenient peripheral cabinet where multiple servers in a cluster can share common access. The SAN crusade stimulated huge growth across all the major independent storage hardware manufacturers including EMC, NetApp and HDS and it also spawned numerous others. Might shareholders be wondering how their fortunes will be impacted if the pendulum swings back to DAS, and SANs fall out of favour?

Such speculation is fanned by dissatisfaction with the performance of virtualised, mission-critical apps running off disks in the SAN, which has led directly to the rising popularity of flash cards (solid-state memory) installed directly on the hosts.

The host-side flash position seems pretty compelling; much like DAS did years ago before SANs took off. The concept is simple; keep the disks close to the applications and on the same server. Don't go out over the wire to access storage for fear that network latency will slow down I/O response.
The fans of SAN argue that private host storage wastes resources and it's better to centralise assets and make them readily shareable. Those defending host-resident storage contend that they can pool those resources just fine. Introduce host software to manage the global name space so they can get to all the storage regardless of which server it's attached to. Ever wondered how? You guessed it; over the network. Oh, but what about that wire latency? They'll counter that it only impacts the unusual case when the application and its data did not happen to be co-located.

Well, how about the copies being made to ensure that data isn't lost when a server goes down? You guessed right again: the replicas are made over the network.
What conclusion can we reach? The network is not the enemy; it is our friend. We just have to use it judiciously.

Now then, with data growth skyrocketing, should organisations buy larger servers capable of housing even more disks? Why not? Servers are inexpensive, and so are the drives. Should they then move their Terabytes of SAN data back into the servers?

For many organisations, it makes perfect sense to have some storage inboard on the servers up close to the applications, augmented by some externally shared storage on premise, and the really bulky backups in the public cloud - especially those requiring long-retention. The problem for those taking sides is they refuse to accept the other alternatives. And so it's all about picking one location versus the other.

What if, instead of choosing sides, one designs software to leverage storage assets in all three places: the Server, the SAN and the Cloud, eliminating the prejudice over location? Organisations are then free to route the requests where most appropriate, and put the network to good use when it's beneficial.
Techniques like infrastructure-wide automated storage tiering put these principles into action. Really active blocks stay close to the programs servicing the requests from local flash storage, whereas infrequently used data gets directed further away over the wire.
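The tiering idea described above can be sketched in a few lines: blocks are ranked by how often they have been accessed recently, and the busiest ones are placed on the fastest, closest tier. This is an illustrative model only; the tier names, thresholds and `AutoTierer` class are invented for the example and are not DataCore's implementation.

```python
# Hypothetical sketch of frequency-based tier placement: hot blocks go to
# local flash, cold blocks to slower tiers reached over the wire.
from collections import Counter

class AutoTierer:
    def __init__(self, hot_threshold=100, warm_threshold=10):
        self.access_counts = Counter()   # accesses per block in the current window
        self.hot = hot_threshold
        self.warm = warm_threshold

    def record_access(self, block_id):
        self.access_counts[block_id] += 1

    def placement(self, block_id):
        """Choose a tier from access frequency in the current window."""
        count = self.access_counts[block_id]
        if count >= self.hot:
            return "local_flash"     # keep busy blocks next to the application
        if count >= self.warm:
            return "san_disk"        # shared storage over the wire
        return "cloud_archive"       # rarely touched data goes furthest away

tierer = AutoTierer()
for _ in range(150):
    tierer.record_access("db_index")   # a heavily used database index
tierer.record_access("old_backup")     # touched once
print(tierer.placement("db_index"))    # busy block lands on local flash
print(tierer.placement("old_backup"))  # cold block is pushed furthest away
```

A production tiering engine would of course demote blocks over sliding time windows and move data asynchronously, but the placement decision follows the same frequency-driven logic.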

In-memory caching plays a critical role keeping everything going smoothly. It leverages the super high speed DRAM close to the app, to mask potential downstream delays from slower hardware.
Software capable of exerting dynamic control over caches, storage placement, replicas and thin provisioning - that's where all the intelligence comes into play, and it is the key to distributing data appropriately across all three locations (server, SAN and cloud).
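A minimal sketch of the 'in-memory' caching idea: DRAM holds recently read blocks so the slow path over the wire is taken only on a miss. The `DramReadCache` class and its names are hypothetical, chosen purely to illustrate the mechanism.

```python
# Toy LRU read cache: repeat reads are served from memory, masking the
# latency of the slower downstream storage (disk, SAN or cloud).
from collections import OrderedDict

class DramReadCache:
    def __init__(self, backend_read, capacity=4):
        self.backend_read = backend_read   # slow path: fetch from disk/SAN
        self.capacity = capacity
        self.cache = OrderedDict()         # LRU order: oldest entry first
        self.hits = self.misses = 0

    def read(self, block_id):
        if block_id in self.cache:
            self.cache.move_to_end(block_id)   # mark as recently used
            self.hits += 1
            return self.cache[block_id]
        self.misses += 1
        data = self.backend_read(block_id)     # go over the wire only on a miss
        self.cache[block_id] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)     # evict least recently used
        return data

slow_reads = []  # records each trip to the slow backend
cache = DramReadCache(lambda b: slow_reads.append(b) or f"data-{b}")
cache.read("a"); cache.read("a"); cache.read("a")
print(cache.hits, cache.misses, len(slow_reads))  # 2 hits, 1 miss, 1 backend read
```

Write caching adds further wrinkles (acknowledging writes from DRAM and flushing them downstream later), but the read side shown here is where the latency masking described above comes from.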

Don't push me to take sides. Different situations call for different approaches; that's why the industry has progressed this far. One thing is for certain: it's a debate that is set to run and run.

More info: 

Monday 7 April 2014

Italy’s Ministry of Economy and Finance modernises its storage infrastructure with DataCore Software Defined Storage

The “Ministero dell'Economia e delle Finanze” (the Italian Ministry of Economy and Finance, MEF) located in Rome has deployed DataCore’s SANsymphony-V solution to improve productivity and modernise its mission-critical IT infrastructure. The DataCore software-defined storage platform has been implemented to centralise storage management and improve the utilisation across a wide range of storage hardware systems and devices from different vendors including EMC and HP. DataCore consolidates and simplifies the provisioning of storage resources, significantly accelerates performance and adds high availability and a new level of flexibility to the existing mix as well as to future storage additions.

The Ministry of Economy and Finance, also known by the acronym MEF, is one of the most important and influential ministries within the Italian Government. It is the executive body responsible for economic, financial and budget policy. The organisation manages the planning of public investments, coordinates public expenditures and verifies its trends, revenue policies and the overall tax system. The MEF operates the State’s public land and heritage, land register and customs; it plans, coordinates and verifies operations to foster economic, local and sectorial development, and is responsible for setting out cohesive policies, processes and the requirements pertaining to the public budget.

As part of the project to optimise and consolidate IT data centres, MEF selected DataCore’s software-defined storage solution with the primary objective of preserving their existing and very diverse set of storage investments made over many years - comprising a range of systems from EMC VMax, EMC Centera to HP EVAs. In addition, it was critical for MEF to streamline and centralise management in order to gain productivity and to provision highly available storage capacity when needed, where needed quickly within minutes.

After evaluating a number of industry solutions, MEF decided to implement DataCore storage virtualisation as the best fit for its varied and demanding requirements. MEF’s decision for DataCore’s software-defined storage approach was based on a number of factors, one being that SANsymphony-V transforms storage into an enterprise-wide resource that can be pooled and used more efficiently than in hardware-driven SAN approaches, where each storage system creates a separate, inefficient island of storage. Importantly, the DataCore software also optimises overall utilisation and makes storage provisioning dynamic and automatic – a rapid process versus the time-consuming and complicated task that in the past often took days, if not weeks, to accomplish.

SpeedyCrew technology partners, an authorised and trained DataCore software solution provider with a highly skilled team and a long history of IT field experience, designed and implemented the project. SpeedyCrew deployed DataCore SANsymphony-V on four standard x86 server platforms, providing redundancy and data protection together with centralised management of over 200TB of storage across multiple EMC VMax, EMC Centera and HP EVA systems. MEF will also benefit from the addition of high-end advanced storage features including thin provisioning, metro-wide mirroring, high-speed adaptive caching, replication and auto-tiering, all of which can be applied to their existing and future storage investments.

For high availability and business continuity, all relevant data is synchronously mirrored across buildings and departments between the DataCore nodes. When one server is down, the remaining nodes take over the workloads (auto-failover) until the system is back up and automatically resynchronised (auto-failback). To accelerate the performance of the underlying hardware, DataCore leverages the Random Access Memory (RAM) in each node. This allows for fast ‘in-memory’ high-speed caching to accelerate the workloads of MEF’s business-critical applications. DataCore has also made it easy for MEF to scale out and grow dynamically in the future, allowing it to add storage hardware of its choice of vendor, model or technology – including flash SSD resources – as and when required.
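The synchronous mirroring and auto-failover/failback behaviour described above can be illustrated with a toy model (the class and node names are invented; this is not DataCore code): a write is acknowledged only once every healthy node holds the data, reads fail over to any surviving copy, and a recovered node is resynchronised from its peer.

```python
# Illustrative sketch of synchronous mirroring with automatic failover.
class MirrorNode:
    def __init__(self, name):
        self.name = name
        self.store = {}   # this node's copy of the data
        self.up = True

class MirroredVolume:
    def __init__(self, nodes):
        self.nodes = nodes

    def write(self, key, value):
        healthy = [n for n in self.nodes if n.up]
        if not healthy:
            raise IOError("no healthy mirror nodes")
        for node in healthy:                 # synchronous: every copy before ack
            node.store[key] = value

    def read(self, key):
        for node in self.nodes:              # auto-failover: first healthy copy
            if node.up and key in node.store:
                return node.store[key]
        raise KeyError(key)

    def resync(self, node):
        """Auto-failback: copy missed writes back to a recovered node."""
        node.up = True
        for peer in self.nodes:
            if peer is not node and peer.up:
                node.store.update(peer.store)

a, b = MirrorNode("site-A"), MirrorNode("site-B")
vol = MirroredVolume([a, b])
vol.write("order-1", "paid")
a.up = False                      # site A goes down
vol.write("order-2", "shipped")   # write still succeeds on site B
print(vol.read("order-2"))        # served from the surviving node
vol.resync(a)                     # site A returns and catches up
print(a.store == b.store)         # both sites hold identical data again
```

Real implementations must also handle in-flight writes and split-brain arbitration, but the invariant is the same: no write is acknowledged until all healthy mirrors hold it, which is what makes the failover transparent.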

"We chose DataCore because we wanted a solution that would allow MEF to modernise and virtualise our storage and IT infrastructure without being locked in to specific hardware vendors or technologies. This gives us flexibility, scalability and freedom in our choice. SANsymphony-V allows us to use the most appropriate and innovative offerings on the market and makes it easy, if needed, to grow or adapt our environment to meet future requirements. DataCore not only reduces our storage-related costs by consolidating management, it enables us to buy less expensive hardware and also to protect our existing investments. Moreover, the software-defined layer in our infrastructure gives us the flexibility to optimise whatever we use and puts us back in control to shop for storage at the best value, allowing us to cost-effectively deal with growth," comments the information systems team at MEF.

Wednesday 2 April 2014

Menora Foods Achieves 100% Uptime, Lower Costs and Faster Performance with DataCore Software-Defined Storage

DataCore’s Software Defined Storage Platform Delivers Increased Application Performance and Reduced Costs As a Result of SANsymphony-V

Menora Foods, Australia’s leading food marketing and distribution business, has successfully implemented DataCore’s SANsymphony-V storage virtualization platform. Backed by DataCore, Menora Foods’ software-defined storage architecture can now ensure that its critical business applications – including the Menora-developed ERP system, the company’s warehousing systems, distribution systems and more – remain available and don’t interrupt business productivity.
“DataCore’s SANsymphony-V storage virtualization platform protects and mirrors all of our vital storage and VMs across our campus-wide sites and has made our lives far easier,” explained Vikash Reddy, IT Manager at Menora Foods. “Now we can sleep peacefully without the worry of our systems failing as SANsymphony-V provides the high availability and business continuity results we were seeking.”
Menora Foods owns and distributes some of Australia’s favorite and most trusted brands. The leading food marketing and distribution company imports food products from many different countries and distributes them to major retail supermarket chains. Without SANsymphony-V, Menora Foods risked physical disk failure. Prior to DataCore, if a server went down, a day or two would be spent trying to restore the primary branch – and secondary branches would consequently fail. The results were damaging to the company’s production overhead and costs. All of Menora Foods’ data and VMs are now supported by SANsymphony-V storage.
“With DataCore’s virtualized storage platform, we noticed an increase in performance on the report coming out of our ERP system. When the ERP system was on physical storage, some of the reports were noticeably slow to generate,” explained Vikash. “However, with DataCore in place there has been around a 30% increase in speed and time. The performance has definitely increased and DataCore was the difference.”
DataCore’s SANsymphony-V has increased Menora Foods’ uptime to 99.999%. Two servers are now implemented on DataCore’s virtualization platform and synchronously mirrored resulting in all systems being accessible and operational, eliminating downtime if one server room powered off. According to Menora Foods, the cost savings between DataCore versus traditional hardware SAN systems was somewhere between $60,000 and $100,000. Menora Foods will continue to cut costs in the future due to the company’s ability to utilize DataCore on existing hardware without upgrading to new models every three to five years, as compared to a traditional SAN vendor.
“As the largest food distributor in Australia, Menora Foods needs to ensure that their critical business applications are always on and running at peak performance,” said Steve Houck, COO at DataCore. “By leveraging DataCore’s storage virtualization platform, Menora Foods has noticeably increased application uptime and performance, all while reducing their storage related costs.”