Monday, 25 February 2013

DataCore Ramps Up Database Performance in a Virtualization World

Virtualize Business Applications

Original Article from: Database Trends and Applications

DataCore Software, a leading provider of storage virtualization software, has made enhancements to its SANsymphony-V Storage Hypervisor. The new capabilities are intended to support customers who are facing high data growth, as well as the need to enable faster response times and provide continuous availability for Tier 1 business-critical applications.

“This is all about customers who are virtualizing their large database systems – whether it is Oracle RAC, SAP, or SQL Server – that class of applications,” Augie Gonzalez, director of product marketing at DataCore, tells 5 Minute Briefing. According to Gonzalez, these organizations often run into unexpected performance degradation when they virtualize their applications. “Much of what we are doing in this release of SANsymphony-V goes right at those issues.”

Improving scalability, DataCore gives customers the ability to choose and define cost-effective nodes, and with the enhanced SANsymphony-V, the maximum number of storage virtualization nodes in a centrally managed group doubles from four to eight. This enables both very large-scale data centers and cloud service providers to expand into the additional nodes, extending capacity, throughput and connectivity. “The improved scalability is sometimes used not strictly because they need the capacity but also in order to assure resiliency and to avoid the performance hit when a component is lost in the architecture,” observes Gonzalez.

In addition, SANsymphony-V maximizes existing resources while incorporating the latest technologies, like flash and solid state disks (SSDs), cost-effectively. “We have found that the way flash and SSDs behave as storage devices is a little bit different and a little more nuanced than the way that hard disk drives operate, and so we have done some special optimizations to account for the way that they absorb data in a ‘bursty’ manner.” With this release, SANsymphony-V accounts for that characteristic so the end user at the application level does not see the downstream behavior and instead sees continuous, smooth operation.
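
To picture the kind of optimization being described, here is a minimal Python sketch of write smoothing: acknowledge bursts from cache immediately, then destage to the SSD at a steady pace so the application never sees the device's bursty absorption behavior. The rates, class name and mechanism are illustrative assumptions of mine, not DataCore's implementation.

from collections import deque

class SmoothingBuffer:
    """Toy model: absorb application write bursts in cache, trickle to SSD."""
    def __init__(self, destage_per_tick=4):
        self.queue = deque()
        self.destage_per_tick = destage_per_tick

    def write(self, block):
        self.queue.append(block)        # acknowledged immediately from cache

    def tick(self):
        """Destage a fixed number of blocks per interval to the SSD."""
        batch = [self.queue.popleft()
                 for _ in range(min(self.destage_per_tick, len(self.queue)))]
        return batch                    # in a real system: issue these writes

buf = SmoothingBuffer()
for i in range(10):                     # a 10-block burst from the application
    buf.write(i)
print(buf.tick())                       # [0, 1, 2, 3] -> steady trickle to SSD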



SANsymphony-V also enables trend analysis over time with a recording feature that compiles and displays a running chart of metrics gathered from the environment. “For a long time we have had real-time visibility into the systems and that is good if you are keeping an eye on them. What we are doing in this case is enhancing that with trend analysis so that you can look over days, several weeks or a year and see where you are experiencing spikes in terms of users coming on or additional loads that are being submitted into the system. This enables you to look at the downstream consequences of those spikes and see if they require any additional resource to be put in place or better balancing of the resources that you have.” The historical collection may be exported as a CSV file for further analysis, planning and reporting with Microsoft Excel and other tools.
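
Since the historical collection exports to CSV, a few lines of script can turn it into capacity-planning input. Here is a minimal sketch using pandas; the column names ("Time", "Read IOPS") are assumptions for illustration, so match them to the headers in your actual export.

import pandas as pd

def find_spikes(csv_path, column="Read IOPS", window="1D", factor=2.0):
    """Flag periods where a metric exceeds `factor` times its overall mean."""
    df = pd.read_csv(csv_path, parse_dates=["Time"], index_col="Time")
    daily = df[column].resample(window).mean()      # smooth to one point per day
    spikes = daily[daily > factor * daily.mean()]   # crude spike threshold
    return spikes

if __name__ == "__main__":
    # Hypothetical filename; point this at your own exported recording.
    print(find_spikes("sansymphony_recording.csv"))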


Additional capabilities include customized control for better resource utilization and storage tiering; and a greater degree of continuous data protection (CDP) as well as near-instant restoration for business-critical applications. SANsymphony-V also now runs on the Windows Server 2012 operating system in addition to Windows Server 2008 R2.

Complete information is available about SANsymphony-V Storage Hypervisor.


Wednesday, 20 February 2013

Overview Blog post on Performance Monitoring, Storage Profiles, CDP and Faster Performance in DataCore SANsymphony-V – PSP2 update

DataCore released the second product service pack (PSP) for its SANsymphony-V 9.0. The new PSP contains several fixes to further enhance stability and reliability, as well as a number of new features. In this post I’d like to introduce you to the new and enhanced features.

To read the full post, visit: http://vtricks.com/?p=491

Performance Recording
Until now there was no way to record performance statistics; you could only follow the performance data in a live session. Now the ability to record all statistics has been added. To save the data, a SQL Server Express instance, which was introduced with SSV 9.0, is used... http://vtricks.com/?p=491
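
Because the recordings land in a SQL Server Express instance, they can in principle be queried directly. A hedged sketch with pyodbc follows; the database name, table name and columns below are hypothetical placeholders, since DataCore's actual schema isn't documented in the post.

import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost\\SQLEXPRESS;DATABASE=SSVRecordings;"  # hypothetical DB name
    "Trusted_Connection=yes;"
)
cursor = conn.cursor()
# "PerformanceSamples" and its columns are assumed names for illustration only.
cursor.execute(
    "SELECT TOP 10 SampleTime, ReadIops, WriteIops "
    "FROM PerformanceSamples ORDER BY SampleTime DESC"
)
for row in cursor.fetchall():
    print(row.SampleTime, row.ReadIops, row.WriteIops)
conn.close()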

Tuesday, 19 February 2013

DataCore Ramps Up Database Virtualization

http://www.dbta.com/Articles/ReadArticle.aspx?ArticleID=87779&utm_source=dlvr.it&utm_medium=twitter&utm_campaign=itiroundup


Wednesday, 13 February 2013

DataCore Software Announces Hot New Capabilities and Turbo Charged Performance for Tier-1 Applications with SANsymphony-V R9 Storage Hypervisor

Expanding Enterprise IT Environments, Virtualized Applications and Hybrid Clouds Benefit with Extended Scalability, Added Performance Optimization and Greater Cost Savings from New Configurability Choices

DataCore Software, the premier provider of storage virtualization software, today introduced major enhancements to its SANsymphony™-V Storage Hypervisor. The new capabilities come at a critical time: CIOs face an extraordinary barrage of data, and the need to make business-critical applications respond faster and stay continuously available is determining competitiveness and which companies come out on top.

“What matters most to a business is the ability to compete, so we deliver an industry-leading user experience with fast, constantly available applications for the greatest possible productivity,” explains George Teixeira, president and CEO of DataCore Software. “Customers clearly want to get more from their tier-1 apps and infrastructure. Whether they’re running SAP, Oracle, Microsoft SQL, SharePoint, Exchange or VDI, they quickly realize SANsymphony-V maximizes performance like no other technology, providing them with a clear business advantage.”

Several of the more visible technology innovations of the new SANsymphony-V are as follows.

Enterprise and Cloud Scalability X 8
As data grows, so does the need to scale performance and capacity easily; sizing is a critical requirement that is often unknown at the start of a new project. DataCore provides the freedom to choose and define cost-effective nodes, and with the enhanced SANsymphony-V, the maximum number of storage virtualization nodes in a centrally managed group doubles from four to eight. This enables large-scale data centers and cloud service providers to expand non-disruptively into the additional nodes, extending capacity, throughput and connectivity. Most clients managing capacities in the petabyte range will configure DataCore™ software in an N+1 redundant grid to achieve continuous availability while reducing the cost of redundancy.
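
As a rough illustration of sizing an N+1 grid under the new eight-node ceiling, here is a back-of-the-envelope Python sketch; the capacities and function name are invented for the example.

import math

def nodes_needed(required_tb, per_node_tb, max_nodes=8):
    """Nodes for the workload plus one spare; fail if the group cap is hit."""
    active = math.ceil(required_tb / per_node_tb)
    total = active + 1                   # N+1: one node can fail outright
    if total > max_nodes:
        raise ValueError("workload needs more than one managed group")
    return total

# e.g. 120 TB of virtual disks on 24 TB nodes -> 5 active + 1 spare = 6 nodes
print(nodes_needed(120, 24))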

Faster & More Predictable Performance: 50 Percent Quicker, Optimized for Flash
A lack of performance, or overprovisioning to cope with unpredictable demands, is a major cost driver, and throwing costly hardware at the problem is not an optimal or sustainable solution. SANsymphony-V maximizes existing resources while incorporating the latest technologies, such as flash/solid-state disks (SSDs), cost-effectively. The software adds several new features to its adaptive caching algorithms aimed at virtualized, mission-critical applications, yielding close to 20 percent faster IOPS and throughput (MB/sec) than earlier versions. These refinements, along with new multi-threaded code paths, better leverage processor parallelism to make I/O response approximately 50 percent quicker for transactional workloads.
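
To see where a figure like "approximately 50 percent quicker" can come from, consider effective latency as a hit-rate-weighted average of cache and disk latency. The following is a toy calculation with illustrative numbers, not DataCore's measurements.

def effective_latency_ms(hit_rate, cache_ms=0.1, disk_ms=8.0):
    """Blend cache and disk latency by the cache hit rate."""
    return hit_rate * cache_ms + (1.0 - hit_rate) * disk_ms

before = effective_latency_ms(0.50)   # 4.05 ms at a 50% hit rate
after = effective_latency_ms(0.75)    # 2.08 ms once caching adapts to the workload
print(f"{before:.2f} ms -> {after:.2f} ms "
      f"({100 * (1 - after / before):.0f}% quicker)")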

Equally important, performance spikes are “smoothed” for more predictable application response times, with users benefitting from a more linear performance growth curve as memory or the latest flash/SSD innovations are incorporated.

Application Performance & Storage - New Tuning and Troubleshooting Options
DataCore offers an extensive set of management tools, including “heat maps,” to optimize performance and cost-effective tiering of storage assets. The new SANsymphony-V adds the ability to perform trend analysis over time with a recording feature that compiles and displays a running chart of metrics gathered from the environment. Workload spikes or potential bottlenecks can then be easily addressed. The historical collection may also be exported as a CSV file for further analysis, planning and reporting with Microsoft Excel and other tools.

Customized Control for Better Resource Utilization and Storage Tiering
Every environment is different, so greater control allows even more optimizations to improve costs and performance. Storage profiles for virtual disks may be customized to control how the dynamic policies for auto-tiering, remote replication and synchronous mirror recovery are prioritized. These custom profiles supplement the default policies built into the software. The importance of a virtual disk can be set to critical, high, normal, low or archive to control which volumes take precedence when competing for shared resources. This ensures important applications benefit from the most valuable resources, such as flash memory and SSDs, while less demanding tasks use lower-cost, higher-density storage.
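
The five importance levels suggest a simple precedence rule: the most important volumes get flash first, and the rest spill to disk. Here is a sketch of that idea; the tier mapping, class and function names are my assumptions, not DataCore's implementation.

from enum import IntEnum

class Importance(IntEnum):
    ARCHIVE = 0
    LOW = 1
    NORMAL = 2
    HIGH = 3
    CRITICAL = 4

def place_volumes(volumes, flash_capacity_gb):
    """Give flash to the most important volumes; spill the rest to disk."""
    placement, free = {}, flash_capacity_gb
    for name, importance, size_gb in sorted(
            volumes, key=lambda v: v[1], reverse=True):
        if importance >= Importance.HIGH and size_gb <= free:
            placement[name], free = "flash/SSD", free - size_gb
        else:
            placement[name] = "HDD tier"
    return placement

vols = [("erp-db", Importance.CRITICAL, 400),
        ("file-share", Importance.LOW, 900),
        ("exchange", Importance.HIGH, 300)]
print(place_volumes(vols, flash_capacity_gb=600))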

Fast, Simple “Undo” Continuous Data Protection, Rapid Restore for Critical Applications
Business-critical applications need to be up and running, and SANsymphony-V now offers a greater degree of Continuous Data Protection (CDP) and near-instant restoration. In addition, conventional nightly backup windows either no longer exist or are difficult to schedule; the ability to take non-disruptive backups whenever time permits is therefore a major business benefit. The recovery window and running log for continuous data protection have been extended from hours to two weeks. This helps IT rapidly restore virtual volumes to a point in time before malware, logic errors or user mistakes occurred, even if the problem is detected several days later. System administrators may also rewind a virtual disk image to any point within the 14-day rolling window. The rollback image can then be mounted to recover specific files that were accidentally deleted, or the current disk can be replaced with the rollback image to completely undo changes that transpired after a malware infection or logic error began.
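
Conceptually, CDP rewind works like a time-stamped write journal that can be replayed to any instant inside the retention window. Below is a toy Python model of that mechanism under stated assumptions; SANsymphony-V's actual journal format is not public.

from datetime import datetime, timedelta

RETENTION = timedelta(days=14)          # the article's 14-day rolling window

class CdpJournal:
    def __init__(self, base_image):
        self.base = dict(base_image)    # block -> bytes at the window start
        self.log = []                   # (timestamp, block, new_data)

    def write(self, when, block, data):
        # fold writes older than the retention window into the base image
        cutoff = when - RETENTION
        while self.log and self.log[0][0] < cutoff:
            _, old_block, old_data = self.log.pop(0)
            self.base[old_block] = old_data
        self.log.append((when, block, data))

    def rewind(self, point_in_time):
        """Replay journaled writes up to (and including) point_in_time."""
        image = dict(self.base)
        for when, block, data in self.log:
            if when <= point_in_time:
                image[block] = data
        return image

j = CdpJournal({0: b"clean"})
j.write(datetime(2013, 2, 10, 9, 0), 0, b"payroll v2")
j.write(datetime(2013, 2, 12, 3, 0), 0, b"encrypted by malware")
# Mount the image as it stood before the infection on 12 February:
print(j.rewind(datetime(2013, 2, 11, 23, 59)))   # {0: b'payroll v2'}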

New Windows Server 2012 Platform
SANsymphony-V now runs on the Windows Server 2012 operating system in addition to Windows Server 2008 R2. The storage hypervisor can run on dedicated physical Windows servers to virtualize storage over a SAN for numerous hosts. It can also co-reside with virtualized applications hosted by Microsoft Hyper-V 3.0 and VMware vSphere 5.x. In either case, DataCore maximizes the performance, availability and utilization of internal and directly attached storage (DAS), as well as external disk arrays.

Lower Cost to Unify Storage and Clustering: Highly Available NAS/SAN
DataCore takes advantage of new capabilities in the Windows Server 2012 platform to deliver more powerful and cost-effective unified network attached storage/storage area network (NAS/SAN) capabilities. Fully redundant, highly available configurations scale out to more nodes and quickly switch over network file system (NFS) and common Internet file system (CIFS/SMB) clients despite hardware and facility outages. Customers will find the storage solution even more attractive since Microsoft has made failover clustering available in its lower-cost Standard Edition and allows files to be de-duplicated to save disk space.

Pricing and Availability
The latest version of SANsymphony-V R9 will be generally available starting February 2013. Pricing starts under $10,000 for two licenses used in highly available configurations and includes 24x7 annual technical support and new version rights. Existing DataCore SANsymphony-V customers under annual support contracts may upgrade at no charge.


Monday, 11 February 2013

Dot Hill Storage Certified 'DataCore Ready' for SANsymphony-V

Dot Hill Systems and DataCore Software Partner to Deliver Integrated Storage Solutions
http://investors.dothill.com/releasedetail.cfm?ReleaseID=735978

Dot Hill Systems Corp., a provider of world-class storage solutions and software, today announced that its AssuredSAN(TM) 3000 series storage arrays are 'DataCore Ready' and provide full interoperability with the SANsymphony-V storage hypervisor from DataCore Software.

The combination of Dot Hill AssuredSAN storage and DataCore's SANsymphony-V storage hypervisor can assist end users in simplifying storage management, boosting performance and greatly improving data availability. Certification under the DataCore Ready program provides end users with the confidence that Dot Hill AssuredSAN 3000 storage solutions have undergone verification testing to ensure joint solution compatibility.

"We have a great working relationship with Dot Hill which has established itself as a leading manufacturer of storage solutions," stated Carlos M. Carreras, vice president of alliances and business development at DataCore Software. "This allows both our channel partners and end users to leverage a powerful, adaptable and scalable storage solution that will not only increase performance and functionality, but have a positive impact on their bottom line."

SANsymphony-V provides centralized storage management under a single pane of control to simplify and optimize existing storage assets while delivering enterprise-class features. The DataCore storage hypervisor enhances Dot Hill's AssuredSAN 3000 series with added performance acceleration through adaptive caching, space efficiency through thin provisioning, and a multitude of high-availability (HA) features, including synchronous mirroring and continuous data protection (CDP) for any-point-in-time backup and recovery.

"SANsymphony-V software from DataCore complements the Dot Hill AssuredSAN 3000 series storage hardware to deliver an affordable solution boasting a wide range of features normally found only on high-end enterprise solutions," said Jim Jonez, senior director of marketing at Dot Hill. "Working in close collaboration with leading technology partners, such as DataCore, we can deliver more powerful storage solutions to the end user at a very compelling price."

Dot Hill Systems and DataCore Software
With a combined 44 years of storage industry experience, Dot Hill Systems and DataCore have served clients representing the broad spectrum of IT organizations around the globe, spanning all the major vertical industries. Their combined expertise has served both end users and many of today's leading OEM manufacturers.

The DataCore Ready Program
The DataCore Ready program identifies solutions that are trusted to enhance DataCore SANsymphony-V Storage Hypervisor-based infrastructures. While DataCore solutions interoperate with common open and industry-standard products, those listed as DataCore Ready have completed additional verification testing to ensure a superior level of joint solution compatibility. Only third-party products that successfully meet the verification criteria set by DataCore are qualified as DataCore Ready.

Friday, 8 February 2013

DataCore Software announces new Managing Sales Director for Northern Europe


Storage industry veteran Bjarne Poulsen takes the reins across the UK, Ireland, the Nordics and the Middle East.

DataCore Software Corporation announced the appointment of Bjarne Poulsen to the position of sales director for the UK, Ireland, the Nordics and the Middle East, as the territory becomes a designated key growth region for the company.

With more than 26 years’ experience in the storage industry, he served for 15 years at Hitachi Data Systems (HDS) in various positions, including MD/VP for the UK/Nordics region. He also held the position of VP of marketing for HDS in EMEA. After HDS, he was senior sales director for EMEA at Brocade Communications and GM EMEA for three storage start-ups: Nishan Systems, Scale Computing and StorSimple. He began his career at IBM.

"It's an exciting challenge that I am looking forward to embracing," Bjarne commented. "We already have more than 500 customers in the region and with the growing wave of virtualisation adoption, more and more customers are now looking to DataCore's Storage Hypervisor, SANsymphony V 9.0 to accelerate the performance of their tier 1 applications and provide continuous business availability and optimal storage resource utilisation at a cost effective price."

Bjarne's appointment is effective immediately.

Thursday, 7 February 2013

Beware “Old Acquaintance” Hardware Vendors’ Claims for Virtualization and Software-Defined Storage; Hardware-defined = Over Provisioning and Oversizing

http://www.virtual-strategy.com/2013/01/24/beware-%E2%80%9Cold-acquaintance%E2%80%9D-hardware-vendors%E2%80%99-claims-virtualization-and-software-defined-st

2013 is the year of the phrase “software-defined” infrastructures. Virtualization has taught us that the efficiency and economy of complex, heterogeneous IT infrastructures are created by enterprise software that takes separate infrastructure components and turns them into a coherent, manageable whole—allowing the many to work as ‘one.’

Infrastructures are complex and diverse, and as such, no one device defines them. That’s why phrases like “we’re an IBM shop” or “we’re an EMC shop,” once common, are heard less often today. Instead, the infrastructure is defined where the many pieces come together to give us flexibility, power and control over all this diversity and complexity—at the software virtualization layer.

Beware of “Old Acquaintance” Hardware Vendors’ Claims that They are “Software-Defined.”

It’s become “it’s about the software, dummy” obvious. But watch: in 2013, you’ll see storage hardware heavyweights leap for that bandwagon, claiming that they are “software-defined storage,” hoping to slow the wheels of progress under their heft. But, like “Auld Lang Syne,” it’s the same old song they sing every year: ignore the realities driving today’s diverse infrastructures and buy more hardware; forget that the term ‘software-defined’ is being applied exclusively to what runs on their storage hardware platforms and not to all the other components and players. Beware: the song may sound like ‘software-defined,’ but the end objective is clear: ‘buy more hardware.’

Software is what endures beyond hardware devices that 'come and go.'

Think about it. Why would you want to lock yourself into this year’s hardware solution, or have to buy a specific device just to get a software feature you need? This is old thinking: before virtualization, this was how the server industry worked, and the hardware decision drove the architecture. Today, with software-defined computing exemplified by VMware or Hyper-V, you think about how to deploy virtual machines rather than whether they are running on a Dell, HP, Intel or IBM system. Storage is going through this same transformation, and it will be smart software that makes the difference in a ‘software-defined’ world.

So What Do Users Want from “software-defined storage,” and Can You Really Expect It to Come from a Storage Hardware Vendor?

The move from hardware-defined to a software-defined virtualization-based model supporting mission-critical business applications is inevitable and has already redefined the foundation of architectures at the computing, networking and storage levels from being ‘static’ to ‘dynamic.’ Software defines the basis for managing diversity, agility, user interactions and for building a long-term virtual infrastructure that adapts to the constantly changing components that ‘come and go’ over time.

Ask yourself, is it really in the best interest of the traditional storage hardware vendors to go ‘software-defined’ and avoid their platform lock-ins?

Hardware-defined = Over Provisioning and Oversizing

Fulfilling application needs and providing a better user experience are the ultimate drivers for next-generation storage and software-defined storage infrastructures. Users want flexibility, greater automation, better response times and ‘always on’ availability. Therefore, IT shops are clamoring to move all their applications onto agile virtualization platforms for better economics and greater productivity. The business-critical Tier 1 applications (ERP, databases, mail systems, OLTP, etc.) have proven to be the most challenging. Storage has been the major roadblock to virtualizing these demanding Tier 1 applications. Moving storage-intensive workloads onto virtual machines (VMs) can greatly impact performance and availability, and as the workloads grow, these impacts increase, as does cost and complexity.

The result is that storage hardware vendors have to overprovision, oversize for performance and build in extra levels of redundancy within each unique platform to ensure users can meet their performance and business continuity needs.

The costs needed to accomplish the above negate the bulk of the benefits. In addition, hardware solutions are sized for a moment in time rather than providing long-term flexibility. Enterprises and IT departments are therefore looking for a smarter, more cost-effective approach, and are realizing that the traditional practice of throwing more hardware at the problem is no longer feasible.

Tier 1 Apps are Going Virtual; Performance and Availability are Mission Critical

To address these storage impacts, users need the flexibility to incorporate whatever storage they need to do the job at the right price, whether it is available today or comes along in the future. For example, to help with the performance impacts, such as those encountered in virtualizing Tier 1 applications, users will want to incorporate and share SSD and flash-based technologies. Flash helps here for a simple reason: electronic memory technologies are much faster than mechanical disk drives. Flash has been around for years, but only recently has it come down far enough in price to allow broader adoption.

Diversity and Investment Protection; One Size Solutions Do Not Fit All

But flash storage is better suited to read-intensive applications than to write-heavy, transaction-based traffic, and it is still significantly more expensive than spinning disk. It also wears out: taxing applications that generate many writes can shorten the lifespan of this still-costly technology. So it makes sense to keep other storage choices alongside flash: reserve flash for where it is needed most, use the other alternatives for their most efficient use cases, and optimize the performance and cost trade-offs by placing and moving data to the most cost-effective tier that can still deliver acceptable performance. Users will need solutions that share and tier their diverse storage arsenal and manage it together as one, and that requires smart, adaptable software.
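
The trade-off reduces to a placement rule: reserve flash for hot, read-heavy data and let cheaper tiers carry the rest. Here is a minimal Python sketch of that rule; the thresholds and tier names are invented assumptions, not vendor guidance.

def choose_tier(read_fraction, iops_needed):
    """Reserve flash for hot, read-heavy data; everything else goes to disk."""
    if read_fraction >= 0.7 and iops_needed > 5000:
        return "flash/SSD"      # fast, costly, wears under heavy writes
    if iops_needed > 500:
        return "15K SAS HDD"    # middle tier for mixed workloads
    return "SATA HDD"           # cheap, dense tier for cold data

print(choose_tier(0.9, 20000))  # OLAP-style reads -> flash/SSD
print(choose_tier(0.4, 2000))   # write-heavy OLTP -> 15K SAS HDD
print(choose_tier(0.5, 100))    # archive shares   -> SATA HDD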

And what about existing storage hardware investments? Does it make sense to throw them away and replace them with this year’s new models when smart software can extend their useful life? Why ‘rip and replace’ each year? Instead, these existing storage investments and the newest flash devices, disk drives and storage models can easily be made to work together in harmony within a software-defined storage world.

Better Economics and Flexibility Make the Move to ‘Software-defined Storage’ Inevitable
Going forward, users will have to embrace ‘software-defined storage’ as an essential element of their software-defined data centers. Virtual storage infrastructures make sense as the foundation for scalable, elastic and efficient cloud computing. As users deal with the new dynamics and faster pace of today’s business, they can no longer be trapped within yesterday’s more rigid, hard-wired architecture models.

‘Software-defined’ architecture and not the hardware is what matters.
Clearly, the success of software-defined computing solutions from VMware and Microsoft Hyper-V has proven the compelling value proposition that server virtualization delivers. Likewise, the storage hypervisor and the use of virtualization at the storage level are the key to unlocking the hardware chains that have made storage an anchor holding back next-generation data centers.

‘Software-defined Storage’ Creates the Need for a Storage Hypervisor

We need the same thinking that revolutionized servers to transform storage. We need smart software that can be used enterprise-wide to be the driving force for change; in effect, we need a storage hypervisor whose main role is to virtualize storage resources and to achieve the same benefits – agility, efficiency and flexibility – that server hypervisor technology brought to processors and memory.

Virtualization has transformed computing, and therefore the key applications we depend on to run our businesses need to go virtual as well. Enterprise and cloud storage are still living in a world dominated by physical, hardware-defined thinking. It is time to think of storage in a ‘software-defined’ world. That is, storage system features need to be available enterprise-wide, not just embedded in a particular proprietary hardware device.

For 2013, be cautious and beware of “old acquaintance” hardware vendors’ claims that they are “software-defined.”