Thursday 20 December 2012

2013 – The Year ‘Software-defined,’ Flash technologies and Virtualization of Tier 1 Applications Transform the Enterprise Storage World

Flash technologies and virtualization of Tier 1 applications are shaking up the enterprise world.

By George Teixeira, President and CEO, DataCore Software
During the past year, we have seen the first steps in a move to software-defined architectures. This has spurred a number of critical trends that are reshaping and having a major impact in the enterprise storage world, setting the stage for 2013 to become the year software-defined storage transforms the data center.

The move from hardware- to a software-defined virtualization-based model supporting mission-critical business applications has changed the foundation of architectures at the computing, network, and storage levels from being “static” to “dynamic.” Software defines the basis for agility, user interactions, and for building a long-term virtual infrastructure that adapts to change. The ultimate goal is to increase user productivity and improve the application experience.


Trend #1: Tier 1 apps will go virtual and performance is critical

Efforts to virtualize more of the data center continue, and we will see an even greater surge to move Tier 1 applications (ERP, databases, mail systems, OLTP, etc.) onto virtualization platforms. The main driver is better economics and greater productivity.

However, the major roadblocks to virtualizing Tier 1 apps are largely storage related.

Moving storage-intensive workloads onto virtual machines (VMs) can greatly impact performance and availability. Therefore, storage has to be over-provisioned and oversized. Moreover, as businesses consolidate onto virtual platforms, they have to spend more to achieve the highest levels of redundancy to ensure no down time and business continuity in addition to worrying about performance bottlenecks.

The high costs and complexities of oversizing negate the bulk of the benefits. With this in mind, enterprises and IT departments are looking for a smarter, more cost-effective approach (i.e., smart software), realizing that the traditional "throw more hardware at the problem" tactic is no longer practical.

Trend #2: SSD flash technologies will be used everywhere; storage is not just disk drives

Another major trend related to virtualizing Tier 1 applications is the proliferation of SSD flash-based technologies. The reason is simple: disk drives are mechanical rotating devices and cannot match the speed of devices built on high-speed electronic memory technologies.

Flash memory has actually been around for years; it was previously far too expensive for broad adoption. Though still more costly than rotating hard drives, its use in tablets and cell phones is driving prices downward. Even so, flash wears out, and taxing applications that generate many writes can shorten its lifespan.

Yet, flash devices are an inevitable part of our future and need to be incorporated within our architectural thinking. The economics are already driving us to a world that requires different tiers of fast memory-based storage and less expensive, slower disk drives. This, in turn, increases the demand for enterprise-wide auto-tiering software able to optimize the performance and cost trade-offs by placing and moving data to the most cost-effective tier that can deliver acceptable performance.
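The placement decisions such auto-tiering software makes can be illustrated with a short sketch. Everything here -- the tier names, heat thresholds, and sampling window -- is a hypothetical simplification for illustration, not any vendor's actual algorithm:

```python
# Illustrative auto-tiering placement logic (names and thresholds are
# invented assumptions, not a real product's implementation).

from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    cost_per_gb: float   # relative cost of capacity on this tier
    latency_ms: float    # typical access latency

# Tiers ordered fastest/most expensive first.
TIERS = [
    Tier("flash", cost_per_gb=10.0, latency_ms=0.1),
    Tier("fast_hdd", cost_per_gb=2.0, latency_ms=5.0),
    Tier("capacity_hdd", cost_per_gb=0.5, latency_ms=12.0),
]

@dataclass
class Block:
    block_id: int
    accesses: int = 0          # access count in the current sampling window
    tier: str = "capacity_hdd"

def choose_tier(block: Block, hot: int = 100, warm: int = 10) -> str:
    """Pick the cheapest tier that still matches the block's heat."""
    if block.accesses >= hot:
        return "flash"
    if block.accesses >= warm:
        return "fast_hdd"
    return "capacity_hdd"

def rebalance(blocks):
    """Migrate blocks whose observed heat no longer matches their tier."""
    moves = []
    for b in blocks:
        target = choose_tier(b)
        if target != b.tier:
            moves.append((b.block_id, b.tier, target))
            b.tier = target
        b.accesses = 0  # start a new sampling window
    return moves
```

In a real engine the heat statistics are gathered per sub-volume block and migrations run continuously in the background rather than in a single pass, but the cost/performance trade-off being optimized is the same.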

Trend #3: More storage will require more automation

There is a constant and insatiable demand for more data storage; it continues to grow more than 50 percent per year. However, the need is not just for more hardware disks to meet raw capacity. Instead, users want automatic, self-managed storage, scalability, rapid provisioning, fast performance, and the highest levels of business continuity.

Again, it takes “smart software” to simplify and automate the management of storage.

Trend #4: Software-defined storage architecture will matter more than hardware

These trends -- and empowering IT users to make storage hardware interchangeable within virtual infrastructures -- will have a profound impact on how we think about, buy, and use storage. In 2013 and beyond, IT will need to embrace software-defined storage as an essential element of the data center.

As users deal with the new dynamics and faster pace of today’s business, they can’t be trapped within yesterday’s rigid and hard-wired architectures. Infrastructure is constructed on three pillars -- computing, networking, and storage -- and in each, hardware decisions will take a back seat to a world dictated by software and driven by applications.

Clearly, the great success of VMware and Microsoft Hyper-V demonstrates that there's compelling value delivered by server virtualization. Likewise, the storage hypervisor, and virtualization at the storage level, are critical to unlocking the hardware chains that have made storage an anchor holding back next-generation data centers.

Trend #5: Software-defined storage is creating the need for a storage hypervisor

The same thinking that changed our views about the server is needed to address storage, and smart software is the catalyst. In effect, a storage hypervisor’s main role is to virtualize storage resources to achieve the same benefits -- agility, efficiency, and flexibility -- that server hypervisor technology brought to processors and memory.

This year, software will take its rightful seat at the table and begin to transform the way we think about storage.

The Ultimate Goal: Better App Experience through Software-defined Storage

Virtualization has changed computing and the applications we depend upon to run our businesses. Still, enterprise and cloud storage are dominated by physical and hardware-defined mindsets. We need to change our thinking, consider how storage impacts the application experience, and view storage as software-defined, with storage services and features available enterprise-wide rather than embedded in a proprietary hardware device.

Why would you buy specific hardware just to get a software feature? Why would you restrict a feature to a single platform versus using it across the enterprise? This is old thinking, and prior to virtualization, that’s how the server industry worked. Today, with VMware or Hyper-V, we think about how to deploy VMs versus “are they running on a Dell, HP, Intel, or IBM system?”

Storage is going through a similar transformation, and in the year ahead, it will be smart software that leads the industry into a better, software-defined world.

Tuesday 18 December 2012

Jon Toigo Video on Disaster Recovery and Business Continuity: Use a storage hypervisor for data replication


Video Link: http://searchdisasterrecovery.techtarget.com/video/Toigo-Use-a-storage-hypervisor-for-data-replication

In this video, Jon Toigo, founder and CEO of Toigo Partners International, says that relying on a storage hypervisor like DataCore's SANsymphony-V provides greater ease of management and greater resistance to disruption.

Toigo said that storage environments consist of isolated islands of capability: hardware from different vendors, plus software products that add functionality for services like provisioning and mirroring. But these capabilities don't scale as you need more capacity and have to purchase another array. The hardware behind the different vendors' product names is "generic," said Toigo, and a storage hypervisor can bring those disparate elements together by presenting storage as logical volumes rather than physical volumes -- allowing users to pool disparate hardware into a common storage repository.
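The pooling Toigo describes can be sketched in a few lines. The class and method names below are invented for illustration; they are not DataCore's design:

```python
# Minimal sketch of presenting logical volumes drawn from a pool of
# dissimilar arrays (purely illustrative; names are hypothetical).

class PhysicalArray:
    """One vendor's box, reduced to a name and free capacity."""
    def __init__(self, name, capacity_gb):
        self.name = name
        self.free_gb = capacity_gb

class LogicalVolume:
    """What the host sees: a size, backed by hidden segments."""
    def __init__(self, size_gb, segments):
        self.size_gb = size_gb
        self.segments = segments  # (array_name, gb) pairs, invisible to hosts

class StoragePool:
    """Aggregates capacity from heterogeneous arrays."""
    def __init__(self, arrays):
        self.arrays = list(arrays)

    def allocate(self, size_gb):
        """Carve a logical volume from whichever arrays have free space."""
        segments, remaining = [], size_gb
        for a in self.arrays:
            if remaining == 0:
                break
            take = min(a.free_gb, remaining)
            if take:
                a.free_gb -= take
                segments.append((a.name, take))
                remaining -= take
        if remaining:
            raise RuntimeError("pool exhausted")
        return LogicalVolume(size_gb, segments)
```

A host sees only the logical volume's size; which vendor's array holds which segment stays hidden behind the virtualization layer, which is what makes the hardware interchangeable.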

And for disaster recovery, storage virtualization can be a real benefit because you do not need to replicate between identical devices. "I could make a copy of data over to any other platform. I don't have to use the most expensive gear to make a copy of the most expensive gear … it doesn't matter, because it's all virtual volumes that we're dealing with," said Toigo...

Friday 7 December 2012

Novarion Launches its New Line of PlatinStor® Storage Systems Fully Integrated with DataCore Software Storage Hypervisor Technology

Collaboration produces cost-effective high-end storage solutions

Novarion, a system builder and manufacturer of high-end server and storage solutions, and DataCore Software today announced that they have worked together to develop a new line of storage solutions based on DataCore's SANsymphony-V storage hypervisor technology. The new Novarion PlatinStor® systems are configured to address the needs of businesses and the public sector with the highest standards of reliability, performance and availability. The new systems are available Europe-wide and are being marketed with an emphasis on how they simplify storage management within large IT environments and the ease with which they integrate with existing storage systems.

The PlatinStor enterprise storage hardware from Novarion, together with the storage virtualization and hypervisor software technology from DataCore, delivers a top-performing solution to the storage market. In particular, large organizations will benefit from the system's modular design and additional features such as performance acceleration through self-adaptive caching, automated storage tiering, unified storage which enables SAN and NAS integration, continuous data protection, snapshots, analysis tools, synchronous mirroring, replication, capacity optimization (thin provisioning), resource virtualization, centralized SAN management, and the ability to integrate easily into existing SAN environments.

"We have developed these new storage systems to meet the highest levels of performance, quality and reliability. Together, the combination of the DataCore™ SANsymphony-V storage hypervisor and Novarion’s resilient and modular hardware design deliver to market a compelling high performance solution. We observed, during the certification process and from our own testing, a two- to seven-fold performance improvement over competitive products,” says Georg Gesek, managing director at Novarion IT Service GmbH.

Novarion PlatinStor® is offered in four models, each integrated with DataCore's SANsymphony-V and available with a choice of Fibre Channel, iSCSI or InfiniBand connectivity.

  • PlatinStor ® EX 2 - "One-Box Enterprise HA SAN"
    This cost-effective, highly available solution comes with two independent SAN nodes and two built-in disk stacks to provide a maximum level of data protection, performance and reliability.
  • PlatinStor ® EX 3 - "Storage-Header for Enterprise SAN"
    The redundant storage header provides significant performance acceleration for existing enterprise HA storage solutions by providing intelligent caching and advanced tiering technology.
  • PlatinStor® EX 4 - "Enterprise SAN for vast storage requirements"
    The comprehensive Enterprise-HA-SAN environment provides virtually unlimited scalability of capacity and I/O performance thanks to its innovative grid design.
  • PlatinStor ® EX 5 - "All-In-One & SAN HA Application Cluster"
    The highly available one-box HA solution provides a SAN system to support application clusters; it is an excellent solution for setting up a low-cost "private cloud".

Pricing for PlatinStor® systems starts at 20,000 Euros. For more information please visit the website at http://www.novarion.com/

"Novarion is one of the few premier European manufacturers of high-end systems and an excellent technology partner. The DataCore storage hypervisor has been designed and optimized for larger data centers and cloud environments. Novarion has the reputation for delivering high quality and reliable hardware solutions. The combination makes sense. Moreover, partners and customers benefit from having complete integrated solutions that can meet the highest quality requirements of today's modern data centers", says Siegfried Betke, director of business development, Central Europe at DataCore Software.

Monday 3 December 2012

Video Interview: DataCore Software and Dell in EMEA; Storage Virtualization and Auto-Tiering optimises and works over all Dell Storage Product Lines


Link: http://truebittv.truthinit.com/media-gallery-media/mediaitem/232

W. Curtis Preston speaks with Pierre Aguerreberry, EMEA Alliances Manager for DataCore, at the Dell Storage Forum in Paris, France. They discuss the SANsymphony-V storage hypervisor and its virtualization layer, which can work over any storage the customer has installed or chooses to buy (Dell server, Compellent, EqualLogic, PowerVault, EMC, Fusion-io, etc.). They also discuss how DataCore has over 8,000 customers using the product...

Thursday 29 November 2012

High Performance Storage Virtualization and Streamlining Virtualizing Tier 1 Apps

By Steve Houck

The magic of making Tier 1 apps perform happens in the adaptive technology known as high-performance storage virtualization. http://virtualizationreview.com/articles/2012/11/14/vv-streamline-tier-1-apps.aspx?sc_lang=en

Back in 2010 when I was at VMware, I would have bet you money that within months, the virtualization movement was going to sweep up enterprise apps with the roar of an unabated forest fire. But it didn't.

What seemed like a fait accompli at the time turned out to be far more elusive than any of us could have predicted. The naïve, invigorated by the thrill of consolidating dozens of test/development systems in a weekend, bumped hard against a tall, massive wall. On the vendor side, we fruitlessly threw more disk drives, sheet metal, and plumbing at it. The price climbed, but the wall ceded not.

Fast forward to late 2012. Many still nurse their wounds from those attempts, unwilling to make another run at the ramparts which keep Tier 1 apps on wasteful, isolated servers until someone they trust gets it done first. To this day, they put up with a good deal of ribbing from the wise systems gurus, who enjoy reminding us why business critical apps absolutely must have their own, dedicated machines.

The seasoned OLTP consultants offer a convincing argument. Stick more than one instance of a heavily loaded Oracle, SQL Server or SAP image on a shared machine and all hell breaks loose. You might as well toss out the secret book on tuning, because it just doesn't help.

To some degree, that's true, even though the operating systems and server hypervisors do a great job of emulating the bare metal. It's an I/O problem, Mr. Watson.

It's an I/O problem indeed, into and out of disks. Terms like I/O blending don't begin to hint at the complexity and chaos that such consolidation introduces. Insane collisions at breakneck speeds may be more descriptive. Twisted patterns of bursty reads queued up behind lengthy writes, overtaken by random skip-sequential tangents. This is simply not something one can manually tune for, no matter how carefully you separate recovery logs from database files.

That's before factoring in the added pandemonium when the shared array used by a DB cluster gets whacked by a careless construction worker, or a leaky pipe drips a little too much water on the redundant power supplies.

Enter the adaptive technology of high-performance storage virtualization. Whether by luck or design, the bedlam introduced when users collapse multiple enterprise apps onto clustered, virtualized servers mirrors the macro behavior of large-scale, discrete servers banging on scattered storage pools. The required juice to pull this off spans several crafts. A chunk of it involves large-scale, distributed caching. Another slice comes from auto-sensing, auto-tuning and auto-tiering techniques capable of making priority decisions autonomically at the micro level. Mixed in the skillset is the mysterious art of fault-tolerant I/O re-direction across widely dispersed resources. You won't find many practitioners on LinkedIn proficient in cooking up this jambalaya. More importantly, you won't have to.

In the course of the past decade, this enigmatic mojo and the best practices that surround it have been progressively packaged into a convenient, shrink-wrapped software stack. To play off the similarities with its predecessors, the industry calls it a storage hypervisor.

But I stray. What owners of business critical apps need to know is that they can now confidently virtualize those enterprise apps without fear of slow erratic service levels, given, of course, that they employ a high-performance, fully redundant storage hypervisor to yield fast, predictable response from their newly consolidated environment. Instead of throwing expensive hardware at the problem, or giving up altogether, leave it to the intelligent software to manage the confluence of storage traffic that characterizes virtualized Tier 1 programs. The storage hypervisor cost-effectively resolves the contention for shared disks and the I/O collisions that had previously disappointed users. It takes great advantage of new storage technology like SSDs and Flash memories, balancing those investments with more conventional and lower-cost HDDs to strike the desired price/performance/capacity objectives.

The stuff shines in classic transactional ERP and OLAP settings, and beyond SQL databases does wonders for virtualized Exchange and SharePoint as well.

Sure, the advanced new software won't stop the veterans from showing off their scars while telling picturesque stories about how hard this was in the old days. Though, it will give the current pros in charge of enterprise apps something far more remarkable that they too can brag about -- without getting their bodies or egos injured on the way.

Wednesday 28 November 2012

Brennercom Adds DataCore Storage Hypervisor for Business Continuity and High-Performance Storage for their New VMware View Desktops and Cloud Services

Brennercom, an Italy-based Telecommunication Technology Company, has Extended its Redundant, High-availability Storage Infrastructure to Service its Private Cloud and Virtual Desktop Requirements.
http://www.it-director.com/technology/storage/news_release.php?rel=35287

DataCore Software today announced that information and communication technology (ICT) company Brennercom has attained a new level of business continuity and performance for its virtual desktop infrastructure (VDI) and cloud services using the DataCore SANsymphony™-V Storage Hypervisor.

With the SANsymphony-V storage hypervisor, corporate data is centrally administered across a variety of different hardware storage solutions, all of it protected by DataCore's synchronous mirroring capability. The virtualization software from DataCore required only a quarter of the investment that a hardware-based SAN would have needed to provide a stable, high-performance VMware View VDI for 160 desktop platforms, in addition to supporting the storage needs of all its virtual machines running on VMware vSphere.
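The synchronous mirroring behavior described above can be sketched roughly as follows. The names are hypothetical, and the real product does this at the block-I/O layer with far more machinery (write ordering, resynchronization, path management):

```python
# Hedged sketch of a synchronous mirror's write and read paths
# (illustrative only; class and method names are invented).

class MirroredVolume:
    """A write completes only after both copies are durable, so either
    side can serve reads if the other fails."""

    def __init__(self, primary, secondary):
        self.primary = primary      # dicts standing in for block stores
        self.secondary = secondary

    def write(self, block_id, data):
        # Synchronous: commit to both sides before acknowledging the host.
        self.primary[block_id] = data
        self.secondary[block_id] = data
        return "ack"

    def read(self, block_id, primary_up=True):
        # Transparent failover: fall back to the mirror copy on failure.
        store = self.primary if primary_up else self.secondary
        return store[block_id]
```

Because the acknowledgment is withheld until both sides are written, losing either side never loses acknowledged data -- the property that underpins the business continuity claims in this story.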

"The virtualization and common central management provided by the DataCore storage hypervisor considerably reduced our IT division's workload. New systems can now be fully set up for users within 10 minutes, unlike the laborious set-up, taking many hours or days, that installing physical server storage systems required in the past. Fast provisioning and capacity expansions can now be easily implemented via the central console with a few mouse clicks in the event of an acute need for storage," explained Roberto Sartin, head of Technical Division at Brennercom.

Establishment of Virtual Desktop and Cloud Infrastructure
At Brennercom, internal IT services are provided by the IT Management Division. The Division decided to extend its existing virtual VMware server infrastructure and to establish a VDI and cloud services based on VMware. To handle the expansion, it was decided that a new approach to managing data storage was needed.

The primary drivers were two-fold: First, Brennercom needed to expand its external computer center to accommodate new cloud computing services. Central and efficient system administration was a core objective of this extension. Second, Brennercom needed greater high-availability due to the increased business continuity requirements that would result from its move to centralization. The project plan and investments also encompassed the need for a later partial move of a number of the systems to a second location in Trento (about 30 miles away) to ensure that the immediate, high-availability system could be backed up by a two-site disaster recovery model for the purposes of required ISO audits.

The company also had to consider the planned consolidation of its current, heterogeneous IT landscape at the Bozen site. While the central computer center services were based on a fiber channel infrastructure, some divisions were making use of iSCSI storage. Apart from vSphere virtual machines, Citrix XenServer was also used in some areas.

Cost-effective VDI with 160 desktops
The VDI with VMware View is based on an integrated system supporting virtualized storage and virtualized servers. The decision to use this platform was taken after the positive experience gained with the VMware hypervisor. The 160 desktops are successfully being migrated to the notebooks or thin clients of the field workers and the helpdesk, using the centralized infrastructure. The long-term benefits of VDI lie in the lower costs and the less cumbersome, centralized administration needed when it comes to the setup, updating and maintenance of these virtual desktops.

"On the storage side, the VDI and the virtual servers are supported by DataCore’s round-the-clock, failsafe storage infrastructure and performance has been enhanced by intelligent caching, fulfilling all our expectations," comments Sartin. "By using the DataCore storage hypervisor, we were able to integrate a technically complex solution with a universal range of services to meet the short-term performance and high-availability requirements of our VDI needs. In addition, the integrated migration and replication features have created the basis for efficiently implementing the planned model we need for disaster recovery."

Flexible Infrastructure for Cloud Services
The next large-scale project to be concluded by year-end 2012 is dividing and synchronizing the existing systems between the computer centers in Bolzano and Trento so that operations can continue at one location in the event of a catastrophe.

"As is the case in other industries, business continuity is an absolute necessity for us. By making use of the DataCore solution within the virtual infrastructure created by VMware vSphere and VDI, we can not only ensure that we meet these corporate requirements, but also guarantee optimal cost efficiency as a result of the hardware independence of the solution. This affects both the direct investment and the indirect and long-term cost of refreshes, expansions and added hardware acquisitions. We have thus created the technical basis for our external IT services, and within this framework we are creating the most flexible and varied range of cloud services possible," concludes Brennercom CEO, Dr. Karl Manfredi.

To read more regarding this deployment, please access a complete case study concerning DataCore’s implementation at Brennercom: Brennercom SpA Case Study.

Tuesday 20 November 2012

The Red Cross Embraces DataCore Software's SANsymphony-V to Optimize Online Analytical Processing Performance

Storage Hypervisor boosts charity's data mining speed by 300 Percent

http://finance.yahoo.com/news/red-cross-embraces-datacore-softwares-120000196.html

DataCore Software today announced that the British Red Cross Society has deployed the SANsymphony™-V Storage Hypervisor to provide a significant performance acceleration on its new Online Analytical Processing (OLAP) system, dramatically shortening response times and increasing the reliability of data extraction. The performance improvements were achieved by installing SANsymphony-V on an HP ProLiant DL370 server. The DataCore™ software has reduced the time window needed to perform the Extract, Transform and Load (ETL) operations from an average of 12 hours down to four, while spreading the load across half the original number of internal hard disk drives thanks to the efficiency of SANsymphony-V's thin provisioning.
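Thin provisioning's contribution here is easy to picture: a thinly provisioned volume advertises a large logical size but consumes physical capacity only for extents that have actually been written. A minimal sketch, with an assumed 1 MiB extent granularity and invented names:

```python
# Illustrative thin-provisioning sketch (hypothetical names and an
# assumed extent size; not SANsymphony-V's actual internals).

EXTENT_SIZE = 1024 * 1024  # 1 MiB extents, an assumed granularity

class ThinVolume:
    def __init__(self, logical_size):
        self.logical_size = logical_size
        self.extents = {}  # extent index -> bytearray, allocated lazily

    def write(self, offset, data):
        # Allocate the backing extent only on first write to that region.
        idx = offset // EXTENT_SIZE
        extent = self.extents.setdefault(idx, bytearray(EXTENT_SIZE))
        start = offset % EXTENT_SIZE
        extent[start:start + len(data)] = data

    def physical_bytes(self):
        # Physical consumption reflects written extents, not logical size.
        return len(self.extents) * EXTENT_SIZE
```

A volume advertised at 1 TiB that has only ever been written in two regions consumes two extents of physical capacity, which is how the same workload can fit on far fewer spindles.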

The British Red Cross Society is the United Kingdom's registered charity arm of the worldwide humanitarian organization, the International Red Cross. Formed in 1870, the Red Cross has over 31,000 volunteers and 3,300 staff providing assistance and aid to all people in crisis, both in the UK and overseas, without discrimination and regardless of their ethnic origin, nationality or religion.

"In order to sustain the Charity's considerable ongoing work worldwide, the Red Cross needs to continually generate additional income from new and existing donors," said Kevin Bush, technical architect for the Charity's MIS Enterprise Architecture Team in London. "It is our function in MIS to ensure the relevant departmental units have the appropriate infrastructure available to allow them to complete automated processes in time to fulfil marketing campaigns to drive further donations."

To help facilitate ongoing fundraising, a new suite of hardware and business intelligence tools was deployed six months ago for the British Red Cross utilizing OLAP - an approach that swiftly answers multi-dimensional analytical queries through accurate Business Intelligence (BI) tools deployed on the British Red Cross's SQL Server database. BI data marts are created to track behavioral changes, creating campaign relevancy trends for business units. This level of data profiling, specifying individual campaigns with matched targets, entails significant I/O (Input/Output) processing demands and depends on a stable, optimized infrastructure.

Working in conjunction with the MIS Enterprise Architecture Team, the British Red Cross's partner, Adapto, recommended deploying DataCore's SANsymphony-V software to significantly decrease I/O strain and increase performance in a cost-effective, non-invasive way. The SANsymphony-V storage hypervisor could improve performance levels by speeding up read/write requests across the entire British Red Cross storage infrastructure, using the storage server's memory as the caching engine. This caching could dramatically accelerate application response times, translating into much faster database queries and data extraction for the business units.

Critical to the effectiveness of the Extract/Transform and Load (ETL) from the database is achieving ongoing consistency within a predefined extraction window. The speed of I/O to process workloads determines these two factors; slow I/O equates to a long and erratic extraction window. In practice, prior to the performance caching gains, each ETL was taking between nine and 15 hours, and was set to run overnight with the resultant data marts ready in time for the next working day.

Following Adapto's suggestion, Kevin downloaded the easy-to-install SANsymphony-V test drive and right away ran a test ETL that showed immediate benefits from DataCore's mega caching, with the software recognizing I/O patterns to anticipate which blocks to read next into RAM from the back-end disks. Requests were fulfilled quickly from memory at electronic speeds, eliminating the delay associated with physical disk I/O. The findings were impressive: the production-ready, easy-to-use GUI allowed the ETL to perform at a blistering pace, similar to that achieved by SSDs but without the associated cost overheads, cutting the query extraction timeframe to four hours.
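The read-ahead behavior described -- recognizing sequential I/O patterns and pulling the next blocks into RAM before they are requested -- can be sketched as below. The LRU policy, prefetch window, and names are illustrative assumptions, not the product's internals:

```python
# Sketch of sequential-pattern read-ahead caching (hypothetical design).

from collections import OrderedDict

class ReadAheadCache:
    def __init__(self, backend_read, capacity=1024, window=8):
        self.backend_read = backend_read  # function: block_id -> bytes
        self.cache = OrderedDict()        # simple LRU: block_id -> data
        self.capacity = capacity
        self.window = window              # blocks to prefetch on a streak
        self.last_block = None
        self.hits = self.misses = 0

    def _store(self, block_id, data):
        self.cache[block_id] = data
        self.cache.move_to_end(block_id)
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used

    def read(self, block_id):
        if block_id in self.cache:
            self.hits += 1
            data = self.cache[block_id]
        else:
            self.misses += 1
            data = self.backend_read(block_id)
            self._store(block_id, data)
        # Detect a sequential streak and pull the next blocks into RAM
        # so subsequent reads are served at memory speed.
        if self.last_block is not None and block_id == self.last_block + 1:
            for nxt in range(block_id + 1, block_id + 1 + self.window):
                if nxt not in self.cache:
                    self._store(nxt, self.backend_read(nxt))
        self.last_block = block_id
        return data
```

For a long sequential scan like an ETL pass, almost every read after the first few is a memory hit, which is the effect behind the 12-hours-to-four improvement described above.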

"From the point of evaluation onwards, we haven't looked back with SANsymphony-V," said Bush. "Its caching and performance acceleration have certainly addressed the consistency of extraction, whilst reducing the window to an acceptable level, so that as a Charity, we can concentrate on effective fundraising to help those most in need. We are so impressed that we are now looking at installing another node of SANsymphony-V for high availability and mirroring."

Monday 19 November 2012

Storage hypervisor: Storage's future? It's the software that matters: Software-defined Storage; Storage Virtualization

By now you may have heard the term "storage hypervisor." You probably don't know exactly what it means, but that isn't your fault. Vendors that use the term to describe their products disagree on the exact meaning, although they mostly agree on why such a technology is useful.


A vendor panel at the Storage Networking World (SNW) show in Santa Clara, Calif., last month set out to define storage hypervisor. The represented vendors sell different types of products, though. The panel included array-based virtualization vendor Hitachi Data Systems Corp., network-based storage virtualization vendor IBM, software SAN virtualization vendor DataCore Software Corp. and virtual machine storage management vendor Virsto Software Corp.

Can all of these vendors' products be storage hypervisors? It's more accurate to say that, taken together, the storage hypervisor products make up an overview of storage virtualization under a new name. And that new name is already giving way to a newer term. "Software-defined storage" was used interchangeably with "storage hypervisor" during the SNW panel.

Software-defined storage is no better defined than storage hypervisor, but it borrows the "software-defined" phrase that is taking over the data center and networking these days.

DataCore Software Corp. CEO George Teixeira said his company was ahead of the current trend when it started back in the 20th century with the premise that software gives storage its value.

"Today we have fancy terms for it like software-defined storage, but we started DataCore in 1998 with a very basic [PowerPoint] slide that said, 'It's the software that matters, stupid,'" Teixeira said. "And we've seen storage from the standpoint of really being a software design."

Teixeira said any talk of a storage hypervisor must focus on software.

"Can you download it and run it? And beyond that, it should allow users to solve a huge economic problem because the hardware is interchangeable underneath," he said. "Storage is no longer mechanical drives. Storage is also located in flash. Your architecture can incorporate all the latest changes, whether it's flash memory or new kinds of storage devices. When you have software defining it, you really don't care.

"Just like with VMware today," said Teixeira, "you really don't care whether it's Intel, HP, Dell or IBM servers underneath. Why should you care about the underlying storage?"

Read more at: http://searchvirtualstorage.techtarget.com/Storage-hypervisor-Hypothetical-or-storages-future

Friday 16 November 2012

See the latest In-depth Product Reviews on SANsymphony-V

Most recent: SANsymphony-V R9.0 Product Review by NT4ADMINS Magazine

Greater scalability, improved administration functions and close integration with vSphere environments and system management suites are the core characteristics of Release 9 of the SANsymphony™-V storage hypervisor. Above all, the 'group operations' make life easier for the administrator.

Wednesday 14 November 2012

Set your data free with Dell Fluid Data™ and DataCore SANsymphony-V

When businesses change, whether in response to a new opportunity or a competitive challenge, the applications and the data they depend on have to change too. That can be really hard with legacy storage solutions, whose rigid boundaries tend to hold data captive. This is especially true if, as is often the case, the storage infrastructure has been built up over time out of various “point solutions.” This creates inefficient data silos that make it hard to optimize the match between storage capabilities and application needs or take advantage of new hardware capabilities. Availability and disaster recovery capabilities can suffer as well.

The Fluid Data™ architecture from Dell is designed to overcome these storage challenges by making data as dynamic as the businesses that depend upon it. DataCore is a long-time Dell ISV partner, and we’ve been working with our reseller partners around the world to help our customers realize the benefits of Fluid Data. “We have thousands of DataCore storage hypervisor customers using Dell storage platforms,” says Carlos Carreras, DataCore’s Vice President of Alliances & Business Development. “We see many DataCore partners like The Mirazon Group and Sanity Solutions penetrating non-Dell accounts and leveraging SANsymphony-V to make it easier for customers to meet their storage needs with Dell solutions.”

The DataCore SANsymphony-V storage hypervisor lets Dell resellers seamlessly harness the Dell Fluid Data architecture and its wide range of products to address the storage appetite of their customers, including platforms such as Compellent, EqualLogic, and the PowerVault MD Series. Customers can add these cost-effective Dell solutions to their storage portfolio without a forklift upgrade, preserving their storage investments and prolonging the useful life of existing storage (e.g., moving it down-tier) while leveraging the power of Fluid Data for increased storage efficiency and performance. The DataCore storage hypervisor and enterprise-wide auto-tiering make it easy to penetrate and refresh existing storage installations and add new Dell storage to modernize the infrastructure and lower overall costs. With the SANsymphony Cloud Gateway, customers can even add popular public cloud hosting services as a low-cost tier in their storage strategy. With DataCore and Dell, customers get infrastructure-wide storage management and the compelling benefits of Fluid Data across all their storage investments.

For DataCore partners, the new DataCore SANsymphony-V Migration Suite makes it easy to introduce new customers to Dell storage with completely non-disruptive data migration. The suite enables a DataCore partner to set up a temporary dual-node SANsymphony-V installation that can turn a hardware refresh into a zero-impact process. A pass-through architecture assures that the customer’s environment remains “hot” the entire time. Users never even know a migration has taken place.

“In customer meetings, I am often met with skepticism that there is no way to do an easy migration without a lot of disruption. After they see the power of DataCore storage virtualization software in action, their jaws literally drop because they cannot believe that it can be that simple to migrate their storage and VMs,” said Barry Martin, partner and chief technology officer at The Mirazon Group.

Barry also notes that the heat map recently introduced in SANsymphony-V 9.0 is an especially powerful analytical tool. “While the migration suite is in place, you could show the customer all their storage I/O ‘hot spots,’ and where, for instance, a SSD tier could boost the performance of critical applications. Being able to give that kind of strategic advice is key to our business success, and the visual impact makes it all the more powerful.”

These and other features make SANsymphony-V a natural complement to the Dell Fluid Data architecture. You can start learning more about SANsymphony-V here, or check out case studies in a variety of industries and applications to see how the DataCore storage hypervisor can go to work for you.

SEE DATACORE SOFTWARE AT THIS UPCOMING DELL EVENT

Dell Storage Forum Paris 2012

DataCore will be a Petabyte Sponsor at the upcoming Dell Storage Forum.

The event takes place 14-16 November 2012 in Paris. Address follows:

Marriott Rive Gauche Hotel & Conference Center
17 Boulevard Saint Jacques
Paris, 75014
France

Description
This is a channel partner and an end-user focused event. DataCore will present the newest version of its storage hypervisor – SANsymphony-V 9.0.

For more information on the show, visit Dell Storage Forum Paris 2012.

Monday 12 November 2012

SC12 Supercomputing Conference: Storage technology leaders Fusion-io and DataCore Software team up to showcase new joint solution for data-intensive, HPC applications

DataCore Software Featured in Fusion-io Booth #2201 at SC12 Conference

DataCore Software, the storage hypervisor leader and premier provider of storage virtualization software, invites attendees of SC12, the international conference for high performance computing (HPC), networking, storage and analysis, to explore innovative new ways to take advantage of Fusion-io flash memory technologies in data-intensive environments. DataCore will be exhibiting in Fusion-io booth #2201 at the Salt Palace Convention Center in Salt Lake City, Utah, November 12-15, 2012.

DataCore will showcase its SANsymphony™-V storage hypervisor integrated with the Fusion ioMemory platform to meet the large scale, low-latency needs of HPC applications common to many SC12 visitors. Attendees will learn how DataCore applies state-of-the-art auto-tiering technology to dynamically distribute I/O workloads between blazing fast Fusion-io flash memory and conventional high-density disk farms for an optimal price/performance balance.

Experts will also be on hand to give advice on how to eliminate crippling single points of failure by using the SANsymphony-V software to mirror data between redundant, multi-tier storage pools.

Fusion-io products are well known for accelerating databases, cloud computing, big data and HPC applications in a variety of industries, including e-commerce, social media, finance, government and telecommunications. Combined with the DataCore™ storage hypervisor, customers not only enjoy higher performance and availability, but also superior flexibility and exceptional value from their storage investments.

“High performance computing requires applications to process data at speeds that transform data into discovery,” said Tyler Smith, Fusion-io vice president of alliances. “Like other data-driven webscale and enterprise organizations, HPC innovators are also cost-conscious and mindful of data protection. Our collaboration with DataCore Software provides a powerful integrated solution that ensures data and applications are available and ready to efficiently deliver peak performance.”

“SC12 provides a fantastic backdrop to convey the joint value resulting from DataCore’s long-standing relationship with Fusion-io. We are seeing great results with many customers leveraging our combined hardware and software capabilities as the centerpiece for their most demanding workloads,” adds Carlos Carreras, vice president of alliances and business development at DataCore Software.

SC12 is the premier international conference for high-performance computing, networking, storage and analysis. The conference is expecting 10,000 attendees representing more than 50 countries and 366 exhibitors. Exhibits and technical presentations at SC12 will offer a look at state-of-the-art solutions in high performance computing and a glimpse of the future.
  • What: DataCore and Fusion-io demonstrations at SC12 Conference
  • Where: Booth 2201, Salt Palace Convention Center, Salt Lake City, Utah
  • When: November 12-15, 2012 

Network Computing Review: DataCore's Storage Hypervisor - An Overview & Customer Use Cases – New Release Features

Network Computing: DataCore's Storage Hypervisor - An Overview – Part 1: New Release Features

By David Hill. David Hill is an IT author, and Chief Analyst and Principal of Mesabi Group LLC. DataCore Software is not a client of David Hill or the Mesabi Group.

"Storage hypervisor" is an emerging term used by some vendors to describe their approach to storage virtualization. Several companies offer storage hypervisors, including IBM, Virsto and DataCore. I've already written about IBM and Virsto in previous blogs.

Now it's DataCore's turn. DataCore is an independent software vendor (ISV), so it has no financial interest in selling the underlying storage hardware. It supports both virtualized servers and traditional physical hosts and legacy storage with the same feature stack and automation. DataCore's storage hypervisor is a software product called SANsymphony-V. This blog will examine some enhanced and new features of the version 9 release.

Auto-tiering

Auto-tiering is a "hot" topic (pun intended!), spanning not only tier-0 solid state devices, but also performance (SAS or FC) hard disk drives, capacity (SATA) drives, and archival storage that can even be rented from public cloud providers at a distance. This feature also includes automatic tuning that creates heat maps to reveal heavy disk activity, so that the hottest data gets the most attention (in order to meet performance service-level requirements). It also automates load balancing across the available disk resources.
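The heat-map idea described above can be sketched in a few lines. This is an illustrative toy only, not DataCore's implementation: the tier names, thresholds and the per-block counter are assumptions made for the example.

```python
# Illustrative sketch of heat-map-driven auto-tiering -- NOT DataCore's code.
# Tier names and thresholds are invented for the example.
from collections import Counter

class AutoTierer:
    """Track per-block access counts (a 'heat map') and suggest placement."""
    def __init__(self, hot_threshold=100):
        self.heat = Counter()            # block id -> access count in current window
        self.hot_threshold = hot_threshold

    def record_io(self, block_id):
        self.heat[block_id] += 1

    def placement(self, block_id):
        """Hottest data goes to the fastest tier; cold data to capacity disks."""
        count = self.heat[block_id]
        if count >= self.hot_threshold:
            return "tier0-flash"
        if count >= self.hot_threshold // 10:
            return "tier1-sas"
        return "tier2-sata"

tierer = AutoTierer()
for _ in range(150):
    tierer.record_io("blk-7")            # heavily accessed block
tierer.record_io("blk-9")                # rarely accessed block
print(tierer.placement("blk-7"))         # tier0-flash
print(tierer.placement("blk-9"))         # tier2-sata
```

A real implementation would decay the counters over time and migrate data asynchronously, but the decision logic follows the same shape.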


Network Computing: DataCore's Storage Hypervisor - An Overview – Part 2: Two Customer Use Cases

Host.net is a service provider that offers VM and enterprise storage platforms in multiple virtual private data centers (i.e., Host.net hosts customer compute and storage resources at its data centers) that are all connected to a Cisco-based 10Gbps multinational backbone. Among the many services the company offers are virtual enterprise servers, storage, backup/restore, disaster recovery and colocation.

DataCore is at the heart of Host.net's enterprise SAN storage platform. Host.net believes DataCore offers the necessary performance and data integrity (every byte of data is written twice within a synchronous mirror) at a competitive price. Among the things Host.net likes about DataCore are hardware independence (for example, in a SAN hardware refresh it can add and migrate data on the fly with no downtime), operating system independence and robust I/O performance, as DataCore's use of hundreds of gigabytes of high-speed cache essentially turns a traditional SAN into a high-speed hybrid solid-state SAN at a fraction of the cost.
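The synchronous-mirror guarantee mentioned above (every byte written twice) can be sketched as follows. The class and method names are hypothetical, invented for illustration; they are not DataCore's API.

```python
# Hedged sketch of synchronous mirroring: a write is acknowledged to the
# host only after BOTH copies hold the data. Names are illustrative.
class SyncMirror:
    def __init__(self):
        self.primary = {}      # lba -> data on the primary side
        self.secondary = {}    # lba -> data on the mirror side

    def write(self, lba, data):
        # Every byte is written twice; success is reported only once
        # both sides of the mirror have the data.
        self.primary[lba] = data
        self.secondary[lba] = data
        return "ack"

    def read(self, lba):
        # Either side can serve reads; fall back to the mirror copy.
        return self.primary.get(lba, self.secondary.get(lba))

m = SyncMirror()
m.write(42, b"payload")
print(m.read(42))              # both copies hold the data
```

The design point is that the acknowledgement ordering, not the copy itself, is what makes the mirror "synchronous": the host never sees a success for data that exists in only one place.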

X-IO (formerly Xiotech) builds hardware with its Hyper ISE (Intelligent Storage Elements) storage system. Backed by a great deal of engineering experience and innovation, the goal is to deliver high performance that accelerates enterprise applications at a good price/performance level. However, X-IO has chosen to divest itself of the storage and data management software (such as snapshot and replication software) that typically characterizes enterprise-class storage.

But customers still need storage and data management software. DataCore provides those capabilities for X-IO products. As a result, X-IO can maintain a hardware-intensive focus and improve price/performance while DataCore supplies the software layer.

Storage virtualization solutions help make the most of virtualization

A good recent article worth sharing: a Gartner analyst recently spoke on why storage virtualization solutions help make the most of virtualization, SSDs and auto-tiering: http://searchvirtualstorage.techtarget.com/news/2240169260/Storage-virtualization-solutions-help-make-the-most-of-virtualization

...Server virtualization allows much higher rates of system usage, but the resulting increases in network traffic pose significant challenges for enterprise storage. The simple "single server, single network port" paradigm has largely been displaced by servers running multiple workloads and using numerous network ports for communication, resiliency and storage traffic.
Virtual workloads are also stressing storage for tasks, including desktop instances, backups, disaster recovery (DR), and test and development.
At Gartner Symposium/ITxpo recently, Stanley Zaffos, a Gartner research vice president, outlined the implications of server virtualization on storage and explained how storage virtualization solutions, the right approach, and the proper tool set can help organizations mitigate the impact on enterprise storage.
Consider using storage virtualization. Gartner's Zaffos urges organizations to deploy storage virtualization as a means of better storage practice, and he underscores core benefits of the technology:
  • Storage virtualization supports storage consolidation/pooling, allowing all storage to be "seen" and treated as a single resource. This avoids orphaned storage, improves storage utilization and mitigates storage costs by reducing the need for new storage purchases. The benefits of storage consolidation increase with the amount of storage being managed.
  • Storage virtualization supports agile and thin provisioning, allowing organizations to create larger logical storage areas than the actual disk space allocated. This also reduces storage costs because a business does not need to purchase all of the physical storage up front -- simply add more storage as the allocated space fills up. Later tools may allow dynamic provisioning, where the logical volume size can be scaled up or down on demand. Management and capacity planning are important here.
  • Storage virtualization supports quality of service (QoS) features that enhance storage functions. For example, auto-tiering can automatically move data from faster and more expensive storage to slower and less expensive storage (and back) based on access patterns. Another feature is prioritization, where some data is given I/O priority over other data.
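The thin-provisioning point in the list above can be made concrete with a small sketch: a volume advertises a large logical size but consumes physical chunks only as blocks are actually written. The chunk size and class names are assumptions for the example, not any vendor's format.

```python
# Illustrative thin-provisioning sketch (assumed names and sizes).
class ThinVolume:
    CHUNK = 1024 * 1024    # 1 MiB allocation unit (an assumption)

    def __init__(self, logical_size):
        self.logical_size = logical_size
        self.chunks = {}   # chunk index -> bytearray, allocated on demand

    def write(self, offset, data):
        idx = offset // self.CHUNK
        # Physical space is consumed only when a chunk is first touched.
        chunk = self.chunks.setdefault(idx, bytearray(self.CHUNK))
        start = offset % self.CHUNK
        chunk[start:start + len(data)] = data

    @property
    def physical_used(self):
        return len(self.chunks) * self.CHUNK

vol = ThinVolume(logical_size=100 * 1024**3)   # advertise 100 GiB
vol.write(0, b"hello")                          # first write allocates one chunk
print(vol.physical_used)                        # 1048576 -- only 1 MiB consumed
```

This is why capacity planning matters: the logical promise (100 GiB here) can exceed what the pool can physically back, so utilization must be monitored as allocated space fills up.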
Consider using solid-state drives (SSDs). One of the gating issues for storage is the lag time caused by mechanical delays that are unavoidable in conventional hard-disk technologies. This limits storage performance, and the effects are exacerbated for virtual infrastructures where I/O streams are randomly mixed together and funneled across the network to the storage array, creating lots of disk activity. Storage architects often opt to create large disk groups. By including many spindles in the same group, the mechanical delays are effectively spread out and minimized because one disk is writing/reading a portion of the data while other disks are seeking. Zaffos points to SSDs as a means of reducing spindle count and supplying much higher IOPS for storage tasks.
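The spindle-count argument above is easy to put into numbers. The IOPS figures below are common rules of thumb, not measured values, and the workload target is invented for the example.

```python
# Back-of-envelope arithmetic for the spindle-count point.
# IOPS figures are rough rules of thumb, not benchmarks.
HDD_IOPS = 180         # ~15k RPM drive under random I/O (assumed)
SSD_IOPS = 50_000      # single enterprise SSD (assumed)

target_iops = 20_000   # example workload requirement

spindles_needed = -(-target_iops // HDD_IOPS)   # ceiling division
ssds_needed = -(-target_iops // SSD_IOPS)

print(spindles_needed)  # 112 mechanical drives just to hit the IOPS target
print(ssds_needed)      # 1 SSD
```

Even with generous per-drive assumptions, a random-I/O target that takes a hundred-plus spindles to absorb mechanically can be met by a handful of SSDs, which is exactly the reduction Zaffos points to.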
Plan the move to virtualization carefully. Data center architects must develop a vision of their infrastructure and operation as they embrace virtualization. Zaffos suggested IT professionals start by identifying and quantifying the impact server virtualization, data growth and the need for 24/7 operation will have on the storage infrastructure and services.
Next, determine what you actually need to accomplish and align storage services with the operational abilities and physical infrastructure. For example, if you need to emphasize backup/restoration capabilities, support data analytics, or handle desktop virtualization, it's important to be sure that the infrastructure can support those needs. If not, you may need to upgrade or make architectural changes to support those capabilities.
When making decisions for virtualization, Zaffos notes the difference between strategic and tactical issues. Strategic decisions create lock-in, and tactical decisions yield short-term benefits. For example, the move to thin provisioning is a tactical decision, but the choice to use replication like SRDF would be a strategic decision.
...Ultimately, Zaffos notes that storage virtualization solutions can be a key enabling technology for server and desktop virtualization -- both of which place extreme demands on the storage infrastructure. But, he said, the move to storage virtualization takes a thorough understanding of the benefits, careful planning to ensure proper alignment with business and technical needs, and judicious use of storage technologies like tiers and SSD.

Tuesday 30 October 2012

DataCore is a Platinum Sponsor and Keynote presenter at SNW Europe, Datacenter & Virtualization World 2012; Learn What’s New and Why Thousands of Customers Choose to Run their Business with DataCore Software

“Thousands of customers throughout Europe have already realized the compelling performance and productivity advantage of DataCore™ SANsymphony-V,“ states Christian Hagen, Vice President and Managing Director EMEA. "The DataCore storage hypervisor greatly improves the economics and harnesses the full power of server caches, solid state disks (SSDs) and existing storage assets so that application owners no longer need to 'rip and replace' storage infrastructures and pay much higher costs to meet their performance and uptime objectives."

Stop by Stand 2 and learn How to Run Faster Virtualized: By Eliminating the I/O Bottlenecks in Clouds and Virtualized Data Centers
30. Oct, 11:20 – 11:50:
Keynote: How new storage solutions enable organizations to reinvent themselves
Christian Hagen, Vice President EMEA & Managing Director, DataCore Software
30. Oct, 11:55 and 15:20 + 31. Oct. 11:20:
Hands-On-Labs: How to configure your SAN with the one and only true Storage Hypervisor SANsymphony-V
Presented by Christian Marczinke, Director Strategic Systems Engineering & Chief Solutions Architect EMEA


30. Oct, 14:40 - 15:00:
Focus Session: Speeding the Transition to a Responsive, Virtualized Storage Infrastructure
Alexander Best, Director Technical Business Development EMEA, DataCore Software

31. Oct, 10:15 – 10:50:
Vendor Updates: SANsymphony™- V 9.0 – What's New in the Storage Hypervisor for the Enterprise
Alexander Best, Director Technical Business Development EMEA, DataCore Software

SNW Europe 2012: DataCore Software Presents 'What's New in the Storage Hypervisor for the Enterprise' and Powers Cloud and Business Applications to Run Faster Virtualized

DataCore is a Platinum Sponsor and Keynote presenter at SNW Europe, Datacenter Technologies & Virtualization World 2012; Stop by Booth #2 and Learn What’s New and Why Thousands of Customers Choose to Run their Business with DataCore Software
 


Today at the SNW Europe, Datacenter Technologies & Virtualization World 2012 event, DataCore Software will showcase many of the powerful storage management capabilities of SANsymphony™-V 9.0, "The Storage Hypervisor for the Cloud." These include 'heat maps' and tools to pinpoint storage bottlenecks and optimize storage pool management, high-availability stretch-site mirroring, automatic Continuous Data Protection for fast application recovery, and enterprise-wide flash SSD auto-tiering and adaptive caching capabilities that significantly boost the speed, throughput and availability of virtualized, I/O-intensive business applications like SAP, Oracle, Microsoft SQL Server, Microsoft SharePoint and Microsoft Exchange. Stop by booth #2, under the motto "DataCore Software: Elevator to the Cloud," and find out why thousands of customers report significantly faster performance and better than 99.999% uptime after virtualizing their existing storage with SANsymphony-V.

“Thousands of customers throughout Europe have already realized the compelling performance and productivity advantage of DataCore™ SANsymphony-V,“ states Christian Hagen, Vice President and Managing Director EMEA. "The DataCore storage hypervisor greatly improves the economics and harnesses the full power of server caches, solid state disks (SSDs) and existing storage assets so that application owners no longer need to 'rip and replace' storage infrastructures and pay much higher costs to meet their performance and uptime objectives."

Run Faster Virtualized: By Eliminating the I/O Bottlenecks in Clouds and Virtualized Data Centers
"DataCore's impact on performance was dramatic in every metric we measured. Even more impressive is how SANsymphony-V simplifies management and how easily it can make data center storage more resilient. With a single mouse click disk capacity is served and all the normal error-prone steps to configure, tune and set best paths for high availability get done auto-magically," said Tony Palmer, senior engineer and analyst with Enterprise Strategy Group Lab.

In the ESG Lab Validation report, the benchmark tests confirmed that Microsoft SQL Server and Exchange workloads were able to improve their performance by nearly 5x as compared to running the same workloads on non-virtualized physical servers.

To further increase Tier 1 business-critical application responsiveness, companies often spend excessively on flash memory-based SSDs. SANsymphony-V's auto-tiering and adaptive caching features optimize performance and the use of these premium-priced flash resources alongside more modestly priced, higher-capacity disk drives. SANsymphony-V constantly monitors I/O behavior and intelligently auto-selects between server memory caches, flash storage and traditional disk resources in real time to ensure that the most suitable class or tier of storage device is assigned to each workload based on priorities and urgency.

Keynote Presentation, Hands-on-lab demos and ‘What’s new and important to know’ sessions

At SNW Europe, DataCore will demonstrate the functional versatility and performance scalability of its storage hypervisor under the motto "Elevator to the Cloud" at booth 2, with its distribution partner ADN and in the Hands-On-Lab. The conference program of the Platinum sponsor is complemented by a keynote from Vice President and Managing Director EMEA Christian Hagen and further presentations and demos on the architecture of dynamic virtual storage infrastructures.

Program Agenda:

October 30th: 11:20 – 11:50: Keynote:

How new storage solutions enable organizations to reinvent themselves
Presented by Christian Hagen, Vice President EMEA & Managing Director, DataCore Software

October 30th: 14:40 - 15:00: Focus Session:
Speeding the Transition to a Responsive, Virtualized Storage Infrastructure
Presented by Alexander Best, Director Technical Business Development EMEA, DataCore Software

October 31st: 10:15 – 10:50: Vendor Updates:
SANsymphony™- V 9.0 – What's New in the Storage Hypervisor for the Enterprise
Presented by Alexander Best, Director Technical Business Development EMEA, DataCore Software

Hands-On-Labs: October 30 – 11:55 - 12:55 & 15:20 - 16:20 and October 31 – 11:20 - 12:20:
How to configure your SAN with the one and only true Storage Hypervisor SANsymphony-V
Presented by Christian Marczinke, Director Strategic Systems Engineering & Chief Solutions Architect EMEA

Additional DataCore Programs and Features being showcased:

DataCore at the event will also spotlight:
  • New system builder partners and DataCore’s commitment to working with partners to build an ecosystem of appliance-focused, value-add versions of its storage hypervisor that meet their individual customer needs.
  • New Cloud Service Providers and hosters that are using SANsymphony-V under the DataCore Cloud Service Provider Program.
  • A SANsymphony-V 9.0.1 update release this month to enable support for Windows Server 2012 application hosts, and a follow-up update release, expected early next year, to support running SANsymphony-V on Windows Server 2012.
  • How to empower VMware SRM and VAAI benefits across heterogeneous storage arrays and the announcement of an update release of the vSphere plug-in that supports storage reclamation and enables VMware administrators to control and schedule SANsymphony-V services, provisioning, taking snapshots and tasks directly from their VMware vCenter Server Management Platform.
  • New, simpler-to-use and time-saving recovery with Continuous Data Protection (CDP) to rapidly roll back in time and recover critical Tier 1 business workloads and VMs.
  • A sneak preview of partner-integrated ‘datacenter in a box’ unified SAN/NAS storage systems and a pre-packaged Virtual Desktop Server reference architecture that features the cost and performance advantages of running Microsoft Hyper-V and DataCore SANsymphony-V co-resident on the same platform.
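The CDP rollback idea in the list above works like a timestamped journal of writes that can be replayed to any point in time. The sketch below is a hypothetical structure for illustration, not DataCore's on-disk format.

```python
# Sketch of continuous-data-protection rollback: log every write with a
# timestamp, then rebuild the volume's state as of any instant.
# Hypothetical structure, not DataCore's implementation.
class CDPJournal:
    def __init__(self):
        self.log = []                      # (timestamp, lba, data), append-only

    def write(self, ts, lba, data):
        self.log.append((ts, lba, data))

    def restore_as_of(self, ts):
        """Replay the journal up to ts to recover the volume's state then."""
        state = {}
        for when, lba, data in self.log:
            if when <= ts:
                state[lba] = data
        return state

j = CDPJournal()
j.write(100, 0, "good-data")
j.write(200, 0, "corruption")              # e.g. a bad update at t=200
print(j.restore_as_of(150))                # {0: 'good-data'}
```

Because the journal is append-only, recovery does not depend on having scheduled a snapshot before the failure: any instant inside the protection window is a valid restore point.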

Test Drive the DataCore Storage Hypervisor; Free License Key of SANsymphony-V

Download a free 30-day trial of SANsymphony-V at: www.datacore.com/Software/Closer-Look/Demos.aspx

Monday 29 October 2012

DataCore Software Awarded “Innovation Leader of the Year” at CIO Summit 2012


DataCore Software has been awarded “Innovation Leader of the Year” at the CIO Europe Summit 2012. IDC analysts and C-level executives from over 60 European IT departments who attended the summit selected DataCore based on the latest release of its storage software, SANsymphony-V 9.0 – the storage hypervisor for the cloud, optimized for private clouds, cloud service providers and large-scale data centres.

“CIO Europe Summit Frankfurt 2012 gathered over 60 C-level executives from the CIO industry across Europe. It is the arena for senior level executives to engage in focused dialogue with their peers, examine management objectives and meet with the solution providers who can best meet their needs,” said Marc Baker, EMEA CIO Summit Director at GDS International. “One of the most impressive participants at the event in 2012 was DataCore. A large number of the CIOs and delegates were really impressed with the workshop and one-to-one meetings. Huge congratulations to DataCore for being awarded ‘Innovation Leader of the Year’ following the release of SANsymphony-V 9.0.”
Introduced in June 2012, DataCore’s SANsymphony™-V 9.0 offers customers superior flexibility, powerful automation and exceptional value. It is transforming the economics of virtualization for organizations of all sizes worldwide, by delivering flexibility, performance, value and scale, regardless of the storage hardware they use. "The Storage Hypervisor for the Cloud" optimizes storage pool management, high availability stretch-site mirroring and automatic continuous data protection features for fast application recovery. SANsymphony-V 9.0 also features enterprise-wide flash SSD storage auto-tiering and adaptive caching capabilities that significantly boost the speed, throughput and availability of virtualized business applications like SAP, Oracle, Microsoft SQL Server, Microsoft SharePoint and Microsoft Exchange.

“We are very pleased with the CIO Summit 2012 Award, as it attests to the acceptance and market footprint of our storage hypervisor technology in larger-scale enterprises. Thousands of customers throughout Europe have already realized the compelling performance and productivity advantage of DataCore™ SANsymphony-V,” said Stefan von Dreusche, Sales Director EMEA, Central Europe Region. “The DataCore storage hypervisor greatly improves the economics of new and existing storage assets so that application owners no longer need to 'rip and replace' storage infrastructures and pay much higher costs to meet their performance and uptime objectives."

At the CIO Summit in October 2012, 65 of Europe's most influential CIOs attended the eighth Chief Information Officer Europe Summit (CIO EU 8) to strategize on IT issues and share best practices and key subject presentations. The CIOs and the event’s analyst partner IDC presented and focused on managing data challenges, the ubiquitous question of cloud computing and the array of regulatory and security requirements surrounding it. https://twitter.com/cioeurope / http://pic.twitter.com/xHdhqkAd

Friday 26 October 2012

DataCore on why it's the last man standing in storage virtualization

By Chris Mellor
Read the full article: http://www.theregister.co.uk/2012/10/24/datacore_picture/

"There is only one viable software SAN virtualization product available today and that is SANsymphony-V."

"What about convergence, virtual storage appliances, flash-accelerated servers, and the cloud? DataCore will tweak, tune, optimise and develop SANsymphony to take account of them all and deliver what it has been focused on all along: a virtualized pool of SAN storage delivering data to servers in a fast and protected manner."

Why is DataCore the only dedicated software SAN virtualisation vendor left standing? The answer lies in a skewed revenue geographic model, staff majority ownership of the company and a fair amount of luck.

It's also down to a solid product, though other companies with great products have failed where DataCore did not.

DataCore has absorbed $100m worth of funding and developed nine generations of storage virtualisation software – which it calls storage hypervisors – while other software storage virtualisation developers have withered and died in the face of storage array and server vendors selling their own storage virtualisation software paired with hardware; IBM's SAN Volume Controller (SVC), for example. How has DataCore managed to prosper (and it has prospered) and survive in the face of such competition? The answer lies in the unique nature of the company and its founders.

DataCore was founded in 1998 with the realisation that a SAN disk drive array was, or could be, an X86 server running drive array controller code, using its attached storage and presenting as virtual disks of networked, block-access storage. Customers could buy standard servers, provision them with commodity disk drives, and so have freedom of choice rather than being restricted to storage array manufacturers' disk drive prices, often quite high, software licensing and functionality. They can also buy external storage arrays and have DataCore's SANsymphony software control them too. Thus DataCore helped pioneer the SAN virtualisation appliance idea.

SANsymphony, a storage hypervisor as DataCore views it, and its associated products support a variety of server operating systems and hypervisors, and provide modern SAN array features such as virtualisation, high availability and thin provisioning. It can run on a dedicated server or as a virtual machine, and there are more than 6,000 DataCore customers who have bought more than 20,000 software licenses.

Why is DataCore the last man standing?

Firstly, the product is good; it does what it says it does on the box...

Read the full 3 page article: http://www.theregister.co.uk/2012/10/24/datacore_picture/

Video: George Teixeira, CEO of DataCore Software talks about Storage Virtualization and DataCore

In this Video, during a recent visit to DataCore’s HQ, George Teixeira (CEO, president and co-founder) talks about his company and the relationship between DataCore and the business in Europe.
http://juku.it/en/articles/video-george-teixeira-talks-datacore.html


George Teixeira talks DataCore from Juku on Vimeo.

Thursday 25 October 2012

CIO Summit Presents 'Innovation Leader of the Year' Award to DataCore for SANsymphony-V 9.0 Storage Hypervisor

CIO Summit: DataCore was presented the 'Innovation Leader of the Year' Award for SANsymphony-V 9.0 Storage Hypervisor.

At the CIO Summit this week, 65 of Europe's most influential CIOs attended the eighth Chief Information Officer Europe Summit (CIO EU 8) to strategize on IT issues and share best practices and key subject presentations. The CIOs and the analyst firm IDC presented and focused on managing data challenges, the ubiquitous question of cloud computing and the array of regulatory and security requirements surrounding it.
Huge Congratulations to @DataCore awarded Innovation Leader of the Year following the release of SANsymphony(tm)-V 9.0 #cioeu

Tuesday 23 October 2012

Law firms gain peace of mind and more with DataCore storage hypervisor


The practice of law makes stringent demands on storage technology, most notably in terms of availability, disaster recovery and efficiency. A modern law firm’s income, based on hourly billing, depends on constant access to critical applications for case management, e-discovery and litigation support, billing, document management, and more. Both the application servers and the storage underlying these applications must be able to run non-stop. Preserving the firm’s data, and perhaps more importantly that of its clients, against any threat from user error to a hurricane is of course a top priority. And unless data storage assets are used efficiently, the growing volume of data involved in even a small-to-medium-sized legal practice, encompassing anything from PDF contracts to multi-gigabyte video depositions, can quickly overwhelm any reasonable storage budget.

Read More: Rennert Vogel Mandler & Rodriguez, P.A. and Stikeman-Elliott share their DataCore experience...

Saturday 20 October 2012

Storage Strategies Now Analyst Report: Virtualizing business-critical applications without hesitation using the DataCore SANsymphony™-V storage hypervisor

"DataCore practically invented the concept of storage virtualization and has the years of experience in the field across thousands of customers and multiple generations of its product to claim a leadership position in the space now known as software-defined storage infrastructure. With SANsymphony-V R9, this experience is embodied in the comprehensive functionality and scalability of the product. The benefits it yields with regards to performance and availability are even more pronounced in scenarios where business-critical (Tier 1) applications must be virtualized and consolidated."

Read the full snapshot report by James E. Bagley, Senior Analyst & Deni Connor, Founding Analyst
Storage Strategies NOW Snapshot Report:
The virtualization and consolidation of business-critical applications is a high priority for IT operations in organizations of all sizes. But the owners of these applications often balk at virtualization because of a set of unknowns that surround the loss of dedicated server hardware.

The truth is that applications perform differently in a virtualized environment as opposed to dedicated server hardware. Virtualization causes contention for shared storage resources and the performance of formerly well-behaved applications can become unpredictable. When performance becomes erratic and response times suffer, users grumble and application owners want their physical machines back. Storage equipment outages for routine maintenance, upgrades and expansion compound the problem because many virtual machines rely on those same resources.