
Monday 11 January 2010

DataCore adds support for logical volumes up to 1 PB and the Asymmetric Logical Unit Access (ALUA) standard

http://itknowledgeexchange.techtarget.com/storage-soup/datacore-adds-support-for-logical-volumes-up-to-1-pb/

DataCore kicked off 2010 with updates to its SANsymphony and SANmelody storage virtualization software, adding support for logical volumes up to 1 PB and the Asymmetric Logical Unit Access (ALUA) standard...

DataCore director of product marketing Augie Gonzalez said that as with last year’s 1 TB “mega-cache” support, the logical limit is beyond where most customers will be looking to stretch today. But the previous 2 TB limit had grown impractical for making RAID sets out of the latest 1 TB and 2 TB SATA disks.

“The logical volume expansion and thin provisioning allow users to say, ‘I don’t care how big the volume will be in the future’,” Gonzalez said. “Rather than defining LUNs up front and then having to make changes later, you can immediately set up a large volume and expand the storage with no application or infrastructure changes.”
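To make the thin-provisioning idea concrete, here is a minimal Python sketch of the general technique: the volume advertises a very large logical size up front and only allocates backing extents when blocks are actually written. The sizes, class name, and extent scheme are illustrative assumptions, not DataCore's implementation.

PB = 10**15                # 1 petabyte (decimal), the advertised logical size
EXTENT = 4 * 2**20         # hypothetical 4 MiB allocation unit

class ThinVolume:
    def __init__(self, logical_size=PB):
        self.logical_size = logical_size   # what hosts see
        self.extents = {}                  # extent index -> backing buffer, allocated lazily

    def write(self, offset, data):
        if offset + len(data) > self.logical_size:
            raise ValueError("write beyond advertised volume size")
        # Back only the extents this write touches; everything else stays unallocated.
        first, last = offset // EXTENT, (offset + len(data) - 1) // EXTENT
        for idx in range(first, last + 1):
            self.extents.setdefault(idx, bytearray(EXTENT))
        # (Copying the payload into the extents is omitted for brevity.)

    def physical_usage(self):
        return len(self.extents) * EXTENT

vol = ThinVolume()                          # a "1 PB" volume, nothing consumed yet
vol.write(0, b"x" * 4096)
print(vol.logical_size, vol.physical_usage())   # huge logical size, one 4 MiB extent in use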

A DataCore service provider customer says adding ALUA support will improve management in his storage environment. Joseph Stedler, director of data center engineering for cloud computing and managed IT service provider OS33, said he uses DataCore's SANsymphony software to host back-end storage for his SMB customers. Right now SANsymphony is running on IBM System x servers in front of IBM DS3400 arrays and Xiotech Emprise 5000 storage devices, mirroring between redundant sets of the tiered hardware. The logical volume expansion will be especially helpful in cutting down on backup administration overhead, Stedler said.

“With 2 TB volumes, we had to present things in 2 TB chunks to our Veeam [backup] server,” he said. “With a larger primary volume we could have fewer backup targets to manage.”

Stedler said the addition of ALUA support will be even more important for creating multipath I/O in OS33’s VMware environment. “ALUA solves a major pain for everybody running DataCore with VMware,” he said. “The way VMware understood it before was active-passive only. DataCore was able to do active-active failover but with VMware you’d have to run multipathing with the most recently used path. The new release is fully compliant with ALUA, so VMware can view it as active-active.”
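A toy model can show why ALUA matters for path selection. The Python sketch below contrasts a most-recently-used policy, which keeps sending I/O down a single path, with a round-robin policy spread across every path the target reports as active/optimized. The path names and data structures are hypothetical; this is not VMware's NMP or DataCore's code.

from itertools import cycle

# Paths as a host might see them once the target reports ALUA states.
paths = [
    {"name": "vmhba1:C0:T0:L0", "alua_state": "active/optimized"},
    {"name": "vmhba2:C0:T1:L0", "alua_state": "active/optimized"},
]

def mru_policy(paths):
    # Most Recently Used: stick with one path until it fails.
    current = paths[0]
    while True:
        yield current["name"]

def round_robin_policy(paths):
    # Spread I/O across every active/optimized path, which ALUA makes visible.
    optimized = [p for p in paths if p["alua_state"] == "active/optimized"]
    yield from cycle(p["name"] for p in optimized)

mru, rr = mru_policy(paths), round_robin_policy(paths)
print([next(mru) for _ in range(4)])   # the same path four times
print([next(rr) for _ in range(4)])    # alternates across both storage nodes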

Stedler said he’s looking forward to the addition of more granular scripting capabilities for the software in future releases.

Thursday 7 January 2010

DataCore virtual storage area network meets backup, disaster recovery needs for legal firm

http://searchservervirtualization.techtarget.com/tip/0,289483,sid94_gci1373949_mem1,00.html?ShortReg=1&mboxConv=searchServerVirtualization_RegActivate_Submit&

With about 3 TB of virtualized server data and another 2 TB of email and database data that needed to be backed up daily, Canada-based law firm Stikeman Elliott LLP faced a growing problem. Virtual storage area network backups were taking 24 to 48 hours, and not all of the data was getting backed up properly.

What was needed, said Marco Magini, a network system specialist for the firm, was an almost instantaneous backup system. The firm chose DataCore Software Corp.'s SANmelody software, a virtual storage area network (SAN) that's installed on one or two x86 servers. The servers become virtual storage controllers for large arrays of physical and virtual storage disks, which are then presented over existing networks to application servers, according to DataCore.

SANmelody solved several other IT challenges, including disaster recovery shortcomings and high-availability needs for virtualized servers, Magini said. "I was not looking for storage virtualization. We had plenty of storage to fill our needs." But once the application was installed, the firm uncovered a host of unexpected new capabilities, he said.

The 1,200-employee law firm is using the product in its Montreal headquarters and will roll it out to six other offices around the globe.

Magini said he's still finding new ways to get performance gains using SANmelody. "We started small," he said. "It's not that you don't have faith in the products, but when you are moving business-critical data, you want to be sure it can handle it."

SANmelody also allowed the IT department to make use of all types of unused legacy disks. When such disks are added to the system, SANmelody's management software pools them and presents them as one massive disk for storage. That allowed Magini to reuse about 50 old 72 GB drives that were sitting on a shelf.
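The arithmetic behind that reuse is easy to sketch. The short Python snippet below, purely as an illustration of pooling and not of SANmelody's internals, adds up the raw capacity those shelved drives contribute once they are presented as a single virtual disk (RAID, mirroring, and other overheads ignored):

GB = 10**9
legacy_drives = [72 * GB] * 50          # roughly 50 old 72 GB drives from the shelf

pool_capacity = sum(legacy_drives)      # presented to hosts as one large virtual disk
print(f"Pooled raw capacity: {pool_capacity / 10**12:.1f} TB")   # about 3.6 TB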

Mark Bowker, an analyst with Enterprise Strategy Group in Milford, Mass., said it's not unusual for a planned server or storage virtualization project to affect other IT needs. "People typically begin projects for something as simple as server consolidation or resource utilization," Bowker said. "But they find other infrastructure is needed. Once they start rolling out virtualization, they find there are other benefits to be had," including disaster recovery and improved backup capabilities.

Wednesday 6 January 2010

New Advanced Site Recovery (ASR) Option for DataCore SANmelody targets Remote Offices and Branch Offices (ROBOs)

See ASR description:
http://www.datacore.com/downloads/ASRbackgrounder.pdf

Article:
http://www.channelprosmb.com/article/15736/DataCore-Software-Adds-Advanced-Site-Recovery-ASR-Option-for-SANmelody/

ASR was first developed around SANsymphony to enable larger data centers and organizations to simplify remote site recovery while leveraging readily available IT assets between different sites. In releasing ASR for SANmelody, DataCore is providing a more cost-effective solution for smaller businesses and remote site deployments.

ASR makes distributed disaster recovery (D-DR) practical for organizations of all sizes, opening the field to smaller businesses, and it can be tailored to whatever a company's cost structure allows.

"With ASR, DataCore has developed a distributed and cost-effective way to have IT assets at remote offices and branch offices (ROBOs) take over for the main data center when the central machines are unable to meet processing obligations," explains Augie Gonzalez, director of product marketing, DataCore Software. "Whether that is during planned facility outage or an unexpected disaster makes no difference. There are major cost savings from repurposing ROBO IT assets and personnel for business continuity."

Gonzalez also says DataCore's approach has an added benefit: regular disaster recovery tests and refresher training can be performed at the branch location during periods of slow activity, while processing at the main data center continues undisturbed.

ASR for SANmelody builds on DataCore's universal storage virtualization software to move IT operations from a central site to one or more distributed contingency locations, and back again. While SANmelody ASR can be used between two sites, the far more likely scenario is a "hub and spoke" model in which a SANsymphony ASR license is used at the central site, with connections to multiple remote offices/branch offices running SANmelody ASR.

This solution allows organizations to spread DR responsibilities across several smaller sites. It makes no distinction between physical and virtual servers, unifying their DR operations in a common, automated process, and it does not depend on duplicating equipment offsite, such as disk arrays and specialized networking gear.
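A rough sketch can illustrate the hub-and-spoke idea. The Python fragment below models a replication map from a central site to several branches and decides which site serves each volume group depending on whether the hub is up. The site names, volume groups, and logic are hypothetical illustrations of the concept, not how ASR actually works.

# Which branch holds replicas of which volume groups (hypothetical names).
replication_map = {
    "branch-nyc": ["crm-volumes", "file-shares"],
    "branch-lon": ["erp-volumes"],
    "branch-syd": ["mail-volumes", "archive-volumes"],
}

def failover_plan(hub_up):
    # Return which site should serve each volume group.
    if hub_up:
        return {vg: "central-dc" for vgs in replication_map.values() for vg in vgs}
    # Hub unavailable (planned outage or disaster): each spoke activates its replicas.
    return {vg: site for site, vgs in replication_map.items() for vg in vgs}

print(failover_plan(hub_up=False))      # every volume group mapped to a branch site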

Tuesday 5 January 2010

Virtualisation Predictions: Desktop and Storage Virtualisation are the next "Big Wave"

David Marshall from InfoWorld/VMblog has posted a series of interviews on 2010 predictions.
DataCore and two of its solution providers, Helixstorm and Mirazon Group, share their outlook on 2010 trends. Read the full article at:

http://vmblog.com/archive/2009/12/31/desktop-and-storage-virtualization-are-the-next-big-wave.aspx

Desktop and Storage Virtualization are the next "big wave"

DataCore Software's CEO George Teixeira shares his viewpoint on 2010. Plus, representatives from two DataCore solution advisor partners weigh in on what they see coming in the virtualization space in 2010.

George Teixeira - President & CEO of DataCore Software

Prediction: Consolidation was the driver of the first wave of virtualization. In 2010, both storage and desktop virtualization will go mainstream.

Prediction: The growing cost disparity between hypervisors and the traditional SANs and shared storage arrays needed to support virtual infrastructures will slow the pace of virtualization adoption.

Prediction: Microsoft Hyper-V has created another wave of adoption, with the whole Microsoft world now embracing virtualization, and this trend will continue to accelerate in 2010.

Prediction: This will be the year of virtual desktop proofs of concept (POCs) and pilots; the greatest challenge to success will be overcoming the cost of storage and scaling storage effectively.

Prediction: Virtual servers are starting to become a commodity, which puts even more importance on getting the storage piece right.

Monday 4 January 2010

2 TB max VVol size ... busted! Top News of the Day - DataCore Rocks in 2010 with 1 Petabyte Support

2 TB max VVol size ... busted!
http://www.datacore.com/forum/thread/785/re-2-tb-max-vvol-size-busted-.aspx#post787

THE TOP NEWS OF THE DAY
DataCore Super-Sizes Virtual Disks (Up to 1 PB) With Its Latest Storage Virtualization Software
http://www.storagenewsletter.com/news/software/datacore-super-sizes-virtual-disks

DataCore Software responds to market demands, this time by stretching the size of its virtual disks from 2 terabytes (TB) to 1 petabyte (PB).

“Rather than inch up to 4 or 16 TBs as others are considering, DataCore made the strategic design choice to blow the roof off the capacity ceiling with 1 Petabyte LUNs,” commented Augie Gonzalez, Director of Product Marketing, DataCore Software. “But we’re still frugal on the back-end, using thin-provisioning to minimize how much real capacity has to be in place day one.”

Performance-wise, these immense virtual disks benefit from DataCore’s 1 TB per node, 64-bit 'mega-caches.' “You can be big, and very fast too,” added Gonzalez.