
Thursday 29 May 2014

DataCore Announces General Availability of SANsymphony-V10, the Next Generation Software-Defined Storage Services Platform

Congratulations to the entire DataCore family!

Another proud day in the life of the company, as we announce the general availability of our next generation SANsymphony-V10 Software-Defined Storage Services Platform and Enterprise-Class Virtual SANs.

Customer downloads and trials are now available, so we are pleased to spread the word.

Also, check out the latest DataCore SANsymphony-V10 Storage Virtualization Overview Video and the updated SANsymphony-V10 web page: One Storage Services Platform Across Your Entire Infrastructure

This tenth generation of SANsymphony-V storage virtualization software represents sixteen years of research and development, and countless inputs from DataCore customers and partners, yielding the most comprehensive storage services platform on the planet.

We are proud of our new release and we thank you all for making 2014 a special year for us!

Regards,
George Teixeira
CEO & President 
DataCore Software

Tuesday 27 May 2014

Software Defined Storage Comes of Age - Interview with George Teixeira, CEO DataCore Software

"Long term vision is a thing we call data anywhere."

For Teixeira's thoughts on Software-defined Storage, the latest trends and DataCore, please see: DataCore Surpasses 10,000 Customer Sites Globally as Companies Embrace Software-Defined Storage
“The remarkable increase in infrastructure-wide deployments that DataCore experienced highlights an irreversible market shift from tactical, device-centric acquisitions to strategic software-defined storage decisions. Its significance is clear when even EMC concedes the rapid commoditization of hardware is underway. The EMC ViPR announcement acknowledges the ‘sea change’ in customer attitudes and the fact that the traditional storage model is broken,” said George Teixeira, president and CEO at DataCore.
“We are clearly in the age of software defined data centers, where virtualization, automation and across-the-board efficiencies must be driven through software. Businesses can no longer afford yearly ‘rip-and-replace’ cycles, and require a cost-effective approach to managing storage growth that allows them to innovate while getting the most out of existing investments.” 
According to George Teixeira, the momentum for Software-defined Storage continues, and the recent VMware VSAN announcements have opened up new market opportunities and use cases that showcase solutions DataCore has pioneered and developed over the last 16 years. He states, "Our new SANsymphony-V10, the 10th generation of our software, sets the standard for Software-Defined Storage": DataCore Announces Enterprise-Class Virtual SANs and Flash-Optimizing Stack in its Next Generation SANsymphony-V10 Software-Defined Storage Platform
For more insights, please see the excerpts below from the recent StorageNewsletter interview with George Teixeira, CEO of DataCore Software, or read the full article at:
George, following the buzz, do you now totally embrace the three words software-defined storage (SDS) in your marketing approach?
Our mission statement when DataCore was founded in 1998 was to create a 'software-driven storage architecture'. But if we had called it 'software-defined', the industry probably would have called it 'software-driven'; we went with 'software-driven', so the industry went with 'software-defined'. We have always been true to our mission of developing a pure software approach; we have been doing it for over 16 years and have thousands of happy customers to show for it...
Why not use the term storage virtualization software or storage hypervisor?
For a long time we had to evangelize the idea of software. The industry still wanted to buy and think in terms of hardware; the mindset has been hardware-biased. We kept trying to find ways to associate with software. Storage virtualization was the correct term in the beginning: when we started, storage virtualization as a term meant what today they're calling 'software-defined', meaning it could not be sold as part of the hardware; it was totally hardware-independent. Instead, all the storage hardware vendors hijacked the term. So all of a sudden IBM SVC, Hitachi, everybody began to use the term, even if it was not downloadable software and required a customer to buy specific vendor hardware to make it run.

Then, a few years back, because of the success of VMware, I said "let's start using storage hypervisor" so that we could at least talk to the people who were using VMware and Microsoft and understood the software layer was separate from the hardware. But the term had a short life. The big change (the turning point in mindset) was early last year, or right before, when EMC announced ViPR and started talking software-defined storage. The largest player in the storage market said two big things: commodity and software are the future of storage. And then VMware came along and announced Virtual SAN, again promoting a pure software approach.

Two huge companies, including EMC, the number one storage company in the world, say the future is about commodity and software. All of a sudden, the interest and opportunity in DataCore has gone up by 50%, because people are looking up ViPR and VSAN, searching to find out "what else is out there?", and they find us. The funny part is that all these terms are crazy, since we've been doing the same thing for 16 years.

What is the roadmap for SANsymphony storage virtualization software?
The long-term vision is a thing we call 'data anywhere'.
The idea is that we've been able to do a lot of things with SANsymphony-V10, especially where the data can move from the server side, to the SAN, into the cloud. And we believe there are so many different types of storage - it's a spectrum from Flash to disk to cloud storage - and that companies will have several of them. The funny part is that every vendor talks convergence, but if you look at what they do, they're selling divergence.

Today everybody is creating 'isolated storage islands' that only work within their own stuff; our features and software work across all of them. Our view is that we UNIFY all these isolated storage islands.

DataCore was formed in 1998 and you've been its president and CEO since the very first day. For how much longer?
I still find it exciting. What's amazing now is that the vision we had has become real; I don't have to explain what it is anymore. However, it took three times longer than I thought to get to this point...

Thursday 22 May 2014

IT Business Edge: DataCore Empowers Flash and Creates Virtual SANs Out of Server Storage

http://www.itbusinessedge.com/blogs/it-unmasked/datacore-creates-virtual-san-out-of-server-storage.html

One of the benefits of using a storage management application is that the location of the actual storage device doesn’t matter all that much.

Taking advantage of that benefit, DataCore’s release of SANsymphony-V10 has added a Virtual-SAN capability that turns all the Flash memory and magnetic storage attached to a server into a shared resource for the applications that access that server.
There has been a lot of interest lately in Flash storage on servers, and DataCore CEO George Teixeira notes that most of the Flash storage deployed on a server today is allocated to a single application. Teixeira says that this is a comparatively expensive way to solve an application performance issue because, most of the time, the Flash storage device is just sitting idle.
SANsymphony-V10 turns Flash and magnetic storage on the server into a resource that can be shared by a cluster of 32 servers, accessing up to 32PB of storage at speeds of up to 50 million IOPS, says Teixeira.
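To make the idea concrete, here is a minimal Python sketch of what a virtual SAN does conceptually: it aggregates each node's direct-attached flash and disk into one shared pool and mirrors volumes across nodes for availability. This is an illustration of the general technique under simplified assumptions, not DataCore's actual implementation; the class names, node names, and placement policy are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    capacity_tb: float   # sum of this node's local flash + magnetic capacity
    used_tb: float = 0.0

    def free_tb(self) -> float:
        return self.capacity_tb - self.used_tb

class VirtualSanPool:
    """Aggregates direct-attached storage across cluster nodes into one pool."""

    def __init__(self, nodes):
        self.nodes = list(nodes)

    def total_free_tb(self) -> float:
        return sum(n.free_tb() for n in self.nodes)

    def allocate_mirrored_volume(self, size_tb: float):
        # Keep two synchronized copies on the two nodes with the most free
        # space, so the volume survives a single node failure.
        best = sorted(self.nodes, key=lambda n: n.free_tb(), reverse=True)[:2]
        if len(best) < 2 or any(n.free_tb() < size_tb for n in best):
            raise RuntimeError("insufficient capacity for a mirrored volume")
        for n in best:
            n.used_tb += size_tb
        return [n.name for n in best]

pool = VirtualSanPool([Node("esx1", 10.0), Node("esx2", 10.0), Node("esx3", 8.0)])
print(pool.allocate_mirrored_volume(2.0))        # e.g. ['esx1', 'esx2']
print(f"{pool.total_free_tb():.1f} TB still free across the cluster")  # 24.0 TB
```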

DataCore is not the only vendor to have turned Flash memory into a shared resource. But Teixeira says the rise of Flash validates a software-defined approach to storage that allows IT organizations to mix and match third-party storage devices as they see fit.

Of course, DataCore would be the first to admit that most IT organizations keep only a fraction of their data in Flash memory. But the data that does run in Flash tends to be really hot in terms of how often it is accessed, across what is usually a broad application portfolio.
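That skew is the whole argument for a small flash tier: a hot minority of blocks absorbs most of the I/O. The toy Python sketch below (a hypothetical illustration, not DataCore's tiering algorithm) ranks blocks by access frequency and keeps only the hottest in flash.

```python
# Illustrative sketch of access skew: keep only the most frequently hit
# blocks in a small flash tier and measure how much I/O it absorbs.
# The workload and tier size are made-up numbers.
from collections import Counter

access_log = ["blk7", "blk7", "blk3", "blk7", "blk9", "blk3", "blk7", "blk1"]
flash_slots = 2   # flash holds only a fraction of all blocks

heat = Counter(access_log)
flash_tier = {blk for blk, _ in heat.most_common(flash_slots)}

hits = sum(count for blk, count in heat.items() if blk in flash_tier)
print(f"flash tier {sorted(flash_tier)} serves {hits}/{len(access_log)} accesses")
# -> flash tier ['blk3', 'blk7'] serves 6/8 accesses
```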
For more details, please see the full article at the link above.

Wednesday 21 May 2014

Check out the new DataCore Labs and Catching the Software-Defined Storage Wave

New http://www.datacorelabs.com/ website!

Catching the Software-Defined Storage Wave

    Dealing with data storage pain points? Join our technical expert, Jeff Slapp, and learn why a true Software-Defined Storage solution is ideal for improving application performance, managing diversi... 

Best Practices for Creating Highly Available Architectures

    Got High Availability for data storage? Join DataCore LABS and Tim Warden, Senior Solutions Architect at DataCore Software, as he discusses and presents his findings from years in the field on how ... 

Pain Meds for Data Migration Headaches

    Ben Treiber, Director of Strategic Systems Engineering at DataCore Software, brings over 15 years of experience helping customers migrate data. Ben is an expert at solving some of the toughest data... 

Flash - The Inside Story

    Got flash? Join DataCore LABS and Dr. Jon Toigo, noted data scientist, author, and researcher, as he reviews the strategic long-term outlook for deployment of Flash and SSD technologies. During this...

Why All the Buzz Around Software Defined Storage?

    Paul Murphy explains why the idea of software-defined storage has picked up steam in the storage industry and how DataCore delivers.

Friday 16 May 2014

Storage is in the midst of disruption. Which side are you on?

Where should your data storage be placed: inside the server or out on the storage area network (SAN)?

DataCore recently announced SANsymphony-V10, the company’s 10th generation software platform, deployed at over 10,000 customer sites around the world. The key problem we looked to address in this release was not fully covered in the press, which tends to focus on specific features. The main aim of this release was to deal with the problem of separate and isolated storage islands, caused by the incompatibilities of different storage vendor offerings and by the use of Flash, which has renewed the need for server-side storage. This, together with the diversity of new technologies, has disrupted the storage market. Today, the storage world consists of a spectrum of storage approaches and devices, and what is needed is unification and common management that works across different vendor platforms and technologies. This is one of the key challenges that true software-defined storage architectures must resolve.

What is needed is a solution that transcends all the different platforms: one that provides end-to-end storage services and keeps Virtual SANs, converged appliances, Flash devices, physical SANs, networked and cloud storage from becoming ‘isolated storage islands’.

IDC's Nick Sundby states the problem as follows:

“It’s easy to see how IT organizations responding to specific projects could find themselves with several disjointed software stacks – one for virtual SANs for each server hypervisor and another set of stacks from each of their flash suppliers, which further complicates the handful of embedded stacks in each of their SAN arrays,” said IDC’s consulting director for storage, Nick Sundby. “DataCore treats each of these scenarios as use cases under its one, unifying software-defined storage platform, aiming to drive management and functional convergence across the enterprise.” 

Flash and new storage technologies are driving a ‘rethink’ of how we deal with storage and its growth and diversity; storage is no longer just mechanical disk drives, it now encompasses a range of devices from Flash memory, to Virtual SANs, to SANs, to cloud storage.



The following article gives an overview of how DataCore is tackling the issue of 'isolated storage islands':

But let’s examine further the issue of data storage placement, which, with the advent of Flash technologies, has become a major question.

DataCore's Augie Gonzalez recently wrote an interesting piece, ‘Which Side Are You On?’, covering the trade-offs of server-side versus SAN-side storage; the article appears below:

Augie asks us to consider both sides of the storage placement argument and concludes that maybe we don't have to take sides at all.
There is a debate raging as to where data storage should be placed: inside the server or out on the storage area network (SAN). The split between the opposing views grows wider each day. The controversy has raised concerns among the big storage manufacturers, and it will certainly have huge ripple effects on how you provision capacity going forward.

DAS BACK IN THE LIMELIGHT 
Twenty-five years ago, SANs were a novelty. Disks primarily came bundled in application servers - what we call Direct Attached Storage (DAS) - reserved to each host. Organizations purchased the whole kit from their favorite server vendor. DAS configurations prospered but for two shortcomings: one with financial implications and the other affecting operations.

First, you'd find server farms with a large number of machines depleted of internal disk space, while the ones next to them had excess. We lacked a fair way to distribute available capacity where it was urgently required. Organizations ended up buying more disks for the exhausted systems, despite the surplus tied up in the adjacent racks.
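A tiny Python sketch makes the imbalance concrete (the server names and capacities below are made up for illustration):

```python
# Stranded DAS capacity: each server owns its disks, so some starve while
# neighbors sit on a surplus, even though the farm as a whole has room.
das_free_tb = {"app1": 0.2, "app2": 6.5, "app3": 0.1, "app4": 5.8}

need_tb = 1.0
starved = [srv for srv, free in das_free_tb.items() if free < need_tb]
print(f"servers needing more disk: {starved}")                # ['app1', 'app3']
print(f"free capacity farm-wide: {sum(das_free_tb.values()):.1f} TB")  # 12.6 TB
# With DAS, app1 and app3 force new disk purchases despite ~12 TB sitting
# idle next door; pooling the same disks removes the imbalance.
```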

The second problem with DAS surfaced with clustered machines, especially after server virtualization made virtual machines (VMs) mobile. In clusters of VMs, multiple physical servers must access the same logical drives in order to rapidly take over for each other should one server fail or get bogged down.

SANs offer a very appealing alternative - one collection of disks, packaged in a convenient peripheral cabinet, where multiple servers in a cluster can share common access. The SAN crusade stimulated huge growth across all the major independent storage hardware manufacturers, including EMC, NetApp and HDS, and it also spawned numerous others. Might shareholders be wondering how their fortunes will be impacted if the pendulum swings back to DAS and SANs fall out of favor?

Such speculation is fanned by dissatisfaction with the performance of virtualized, mission-critical apps running off disks in the SAN, which has led directly to the rising popularity of flash cards (solid-state memory) installed directly on the hosts.

HOST-SIDE VIEWPOINT
The host-side flash position seems pretty compelling, much like DAS did years ago before SANs took off. The concept is simple: keep the disks close to the applications, on the same server. Don't go out over the wire to access storage, for fear that network latency will slow down I/O response.
The fans of SAN argue that private host storage wastes resources and that it's better to centralize assets and make them readily shareable. Those defending host-resident storage contend that they can pool those resources just fine: introduce host software to manage the global namespace so they can get to all the storage regardless of which server it's attached to. Ever wondered how? You guessed it: over the network. Oh, but what about that wire latency? They'll counter that it only impacts the unusual case where the application and its data do not happen to be co-located.

Well, how about the copies being made to ensure that data isn't lost when a server goes down? You guessed right again: the replicas are made over the network.

What conclusion can we reach? The network is not the enemy; it is our friend. We just have to use it judiciously.
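A back-of-the-envelope calculation shows why. In the sketch below (all latency figures are illustrative assumptions, not measurements), if most reads are served locally, the occasional trip over the wire barely moves the average:

```python
# Average read latency vs. fraction of reads served from local storage.
# 100 us local device latency and a 50 us network hop are assumed numbers.
local_us = 100.0
remote_us = local_us + 50.0   # device latency plus one network round trip

for colocated in (1.0, 0.9, 0.5):   # fraction of reads that stay local
    avg = colocated * local_us + (1 - colocated) * remote_us
    print(f"{colocated:>4.0%} local -> {avg:.0f} us average read latency")
# 100% local -> 100 us; 90% -> 105 us; 50% -> 125 us
```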

Now then, with data growth skyrocketing, should organizations buy larger servers capable of housing even more disks? Why not? Servers are inexpensive, and so are the drives. Should they then move their terabytes of SAN data back into the servers?

Please see how DataCore is addressing the above issues with its latest release: DataCore SANsymphony-V10