
Friday 18 December 2015

Making Data Highly Available on Flash and DRAM

George Teixeira, CEO & President and Nick Connolly, Chief Scientist at DataCore Software discuss how DataCore's Software-Defined Storage solution takes advantage of flash and DRAM technologies to provide high availability and the right performance for your applications.


How Software-Defined Storage Enhances Hyper-converged Storage 
One of the fundamental requirements for virtualizing applications is shared storage. Applications can move to different servers as long as those servers have access to the storage holding the application and its data. Typically, shared storage is provided over a storage area network (SAN). However, SANs often run into issues in virtual environments, so organizations are looking for new options. Hyper-converged infrastructure is a solution that seems well suited to address these issues.
The following white paper describes how to conquer the challenges of using SANs in a virtual environment and why organizations are looking into hyper-converged systems that take advantage of software-defined storage to provide reliable application performance and a highly available infrastructure.

Saturday 12 December 2015

DataCore achieves SAP HANA certification, the first Software-defined Storage certified to operate across multiple vendors

We are pleased to announce the certification of SANsymphony™-V with the SAP HANA® platform. DataCore™ SANsymphony-V is storage infrastructure software that operates across multiple vendors’ storage systems to deliver the performance and availability required by demanding enterprise-class applications such as SAP HANA.
What is SAP HANA?
The SAP HANA in-memory database lets organizations process vast amounts of transactional, analytical and application data in real-time using a computer’s main memory. Its platform provides libraries for predictive, planning, text processing, spatial and business analytics.
Key Challenges for SAP HANA implementation:
SAP HANA demands a storage infrastructure that can process data at unprecedented speed and has zero tolerance for downtime. Most organizations store entire multi-terabyte SAP HANA production systems on high-performance Tier 1 storage to meet the performance required during peak processing cycles, such as “period end” or seasonal demand spikes. This practice presents the following challenges to IT departments:
  • Tier 1 storage is expensive to deploy and significantly impacts the IT budget.
  • Tier 1 storage is bound by its physical constraints when it comes to data availability, staging, reporting, and test and development.
  • Managing multiple storage systems (existing and new) can add considerable cost and complexity; routine tasks like test/dev and reporting are difficult to manage.
Benefits of DataCore
DataCore SANsymphony-V is the first Software-defined Storage solution that is certified to operate across multiple vendors’ SAP HANA-certified storage systems to deliver the performance and availability required.  DataCore SANsymphony-V software provides the essential enterprise-class storage functionality needed to support the real-time applications offered by the SAP HANA® platform.
With DataCore, SAP HANA customers gain:
  • Choice:  Companies have the choice of using existing and/or new SAP HANA-certified storage systems, with the ability to seamlessly manage and scale their data storage architectures, as well as greater purchasing power (no vendor lock-in).
  • Performance:  Accelerate I/O with the DataCore™ Adaptive Parallel I/O architecture and caching to take full advantage of SAP HANA in-memory capabilities to transform transactions, analytics, text analysis, predictive and spatial processing.
  • Cost-efficiency:  DataCore reduces the amount of Tier 1 storage space needed and makes the best use of lower-cost persistent HANA-certified storage.


 DataCore SANsymphony-V infrastructure software is the only SAP HANA-certified SDS solution that can be used together with an SAP-certified storage solution from Fujitsu, Huawei, IBM, Dell, NEC, Nimble Storage, Pure Storage, Fusion-io, Violin Memory, EMC, NetApp, HP and Hitachi.

Tuesday 8 December 2015

Software Defined Storage Meets Parallel I/O: The Impact on Hyperconvergence

George Crump, Storage Switzerland

http://storageswiss.com/2015/12/01/software-defined-storage-meets-parallel-io/

In terms of storage performance, the actual drive is no longer the bottleneck. Thanks to flash storage, attention has turned to the hardware and software that surround it, especially the capabilities of the CPU that drives the storage software. The importance of CPU power is evidenced by the increase in overall storage system performance when an all-flash array vendor releases a new storage system. The flash media in that system doesn’t change, but overall performance does increase. Yet the increase in performance is not as great as it should be, because the storage software does not take advantage of the parallel nature of the modern CPU.

Moore’s Law Becomes Moore’s Suggestion

Moore’s Law is an observation by Intel co-founder Gordon Moore. The simplified version of this law states that the number of transistors on a chip will double roughly every two years. IT professionals assumed that meant the CPU they buy would get significantly faster every two years or so. Traditionally, this meant that the clock speed of the processor would increase, but recently Intel has hit a wall because increasing clock speeds also led to increased power consumption and heat problems. Instead of increasing clock speed, Intel has focused on adding more cores per processor. The modern data center server has essentially become a parallel computer.

Multiple cores per processor are certainly an acceptable method of increasing performance and continuing to advance Moore’s Law. Software, however, does need to be re-written to take advantage of this new parallel computing environment, and that applies to operating systems, application software and of course storage software. Re-coding software to make it parallel is challenging. The key is to manage I/O timing and locking, which makes multi-threading a storage application more difficult than, for example, a video rendering project. As a result, it has taken time to get to the point where the majority of operating systems and application software have some flavor of parallelism.

Lagging far behind in the effort to take full advantage of the modern processor is storage software. Most storage software, whether built into the array or among the new crop of software-defined storage (SDS) solutions, is unable to exploit the wide availability of processing cores; it is primarily single-core. At worst, it uses only one core per processor; at best, one core per function. If cores are thought of as workers, it is best to have all the workers available for all the tasks, rather than each worker focused on a single task.

Why Cores Matter

Using cores efficiently has only recently become important. Most legacy storage systems were hard drive based, lacking advanced caching or flash media to drive performance. As a result, the need to support the multi-core environment efficiently was not as obvious as it is now that systems have a higher percentage of flash storage; the lack of multi-core performance was overshadowed by the latency of the hard disk drive. Flash and storage response time is just one side of the I/O equation. On the other side, the data center is now populated with highly dense virtual environments or, even more contentious, hyper-converged architectures. Both of these environments generate a massive amount of random I/O that, thanks to flash, the storage system should be able to handle very quickly. The storage software is the interconnect between the I/O requester and the I/O deliverer, and if it can’t efficiently use all the cores at its disposal, it becomes the bottleneck.

All storage systems that leverage Intel CPUs face the same challenge: how to leverage CPUs that are increasing in cores, but not in raw speed. In other words, they don’t perform a single process faster, but they do perform multiple processes simultaneously at the same speed, netting a faster overall completion time if the cores are used efficiently. Storage software needs to adapt and become multi-threaded so it can distribute I/O across these cores and take full advantage of them.
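As a rough illustration of what “multi-threaded storage software” means in practice, the sketch below (in Go, with invented names; it is not DataCore’s code) starts one worker per available core and lets every worker pull from a single shared queue of I/O requests, so whichever core is free handles the next request.

```go
// Minimal sketch (not DataCore's implementation): a storage service that
// spreads incoming I/O requests across every available core by running one
// worker per core, all pulling from a single shared queue.
package main

import (
	"fmt"
	"runtime"
	"sync"
	"time"
)

// ioRequest is a stand-in for a read or write arriving at the storage layer.
type ioRequest struct {
	id   int
	cost time.Duration // simulated processing time (checksums, RAID math, cache lookup, ...)
}

func main() {
	cores := runtime.NumCPU()
	queue := make(chan ioRequest, 1024) // one shared queue, visible to all workers

	var wg sync.WaitGroup
	for w := 0; w < cores; w++ {
		wg.Add(1)
		go func(worker int) { // one worker per core
			defer wg.Done()
			for req := range queue {
				time.Sleep(req.cost) // placeholder for real I/O processing
				fmt.Printf("worker %d finished request %d\n", worker, req.id)
			}
		}(w)
	}

	// Any mix of workloads can be fed in; whichever core is free takes the next request.
	for i := 0; i < 100; i++ {
		queue <- ioRequest{id: i, cost: time.Millisecond}
	}
	close(queue)
	wg.Wait()
}
```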

For most vendors this may mean a complete re-write of their software, which takes time and effort and risks incompatibility with their legacy storage systems.

How Vendors Fake Parallel I/O

Vendors have tried several techniques to leverage multiple cores without specifically “parallelizing” their code. Some storage system vendors tie specific cores to specific storage processing tasks. For example, one core may handle raw inbound I/O while another handles RAID calculations. Other vendors distribute storage processing tasks in a round-robin fashion. If cores are thought of as workers, this technique treats cores as individuals instead of a team. As each task comes in, it is assigned to a core, but only that core can work on that task. If it is a big task, it can’t get help from the other cores. While this technique does distribute the load, it doesn’t allow multiple workers to work on the same task at the same time. Each core has to do its own heavy lifting.

Scale-out storage systems are similar in that they leverage multiple processors within each node of the storage cluster, but they are not granular enough to assign multiple cores to the same task. They, like the systems described above, typically have a primary node that acts as a task delegator and assigns the I/O to a specific node, and that specific node handles storing the data and managing data protection.

These designs count on I/O coming from multiple sources so that each discrete I/O stream can be processed by one of the available cores. Such systems will claim very high IOPS numbers, but they require multiple applications to get there. They work best in an environment that requires a million IOPS because it has ten workloads each generating 100,000 IOPS, rather than an environment that has one workload generating 1 million IOPS and no other workload over 5,000. To some extent, vendors also “game” benchmarks by varying I/O size and patterns (random vs. sequential) to achieve a desired result. The problem is that this I/O is not the same as what customers will see in their data centers.
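The “round robin” pattern described above can be pictured as one private queue per core, with every incoming task pinned to exactly one queue. The toy Go sketch below (simulated work, invented names, not any vendor’s actual code) shows how a single heavy stream stays stuck on the one core it was assigned to, no matter how idle the other cores are.

```go
// Illustrative sketch of the "round robin" pattern the article criticizes:
// each core gets its own private queue, and every incoming task is pinned to
// exactly one queue. A single heavy stream can only ever be served by the one
// core it landed on, even while the other cores sit idle.
package main

import (
	"fmt"
	"runtime"
	"sync"
	"time"
)

type task struct {
	stream string
	cost   time.Duration
}

func main() {
	cores := runtime.NumCPU()

	// One private queue per core: cores act as individuals, not as a team.
	queues := make([]chan task, cores)
	var wg sync.WaitGroup
	for c := range queues {
		queues[c] = make(chan task, 256)
		wg.Add(1)
		go func(core int, q chan task) {
			defer wg.Done()
			for t := range q {
				time.Sleep(t.cost) // placeholder for real work
				fmt.Printf("core %d served %s\n", core, t.stream)
			}
		}(c, queues[c])
	}

	// Round-robin dispatch: the heavy stream is stuck on whichever core drew it.
	next := 0
	dispatch := func(t task) {
		queues[next] <- t
		next = (next + 1) % cores
	}
	dispatch(task{stream: "heavy-OLTP-stream", cost: 500 * time.Millisecond})
	for i := 0; i < 20; i++ {
		dispatch(task{stream: "light-stream", cost: 5 * time.Millisecond})
	}

	for _, q := range queues {
		close(q)
	}
	wg.Wait()
}
```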

The Impact of True Parallel I/O

True parallel I/O utilizes all the available cores across all the available processors. Instead of siloing a task to a specific core, it assigns all the available cores to all the tasks. In other words, it treats the cores as members of a team. Parallel I/O storage software works well on either type of workload environment: ten workloads generating 100,000 IOPS each or one workload generating 1 million IOPS.

Parallel I/O is a key element in powering the next-generation data center because it dramatically reduces the storage processing footprint, matching the reduced footprint of solid-state storage and server virtualization. Parallel I/O provides many benefits to the data center:


  • Full Flash Performance: As stated earlier, most flash systems show improved performance when more processing power is applied to the system. Correctly leveraging cores with multi-threading delivers the same benefit without having to upgrade processing power. If the storage software is truly parallel, it can deliver better performance with less processing power, which drives costs down while increasing scalability.
  • Predictable Hyper-Converged Architectures: Hyper-converged architectures are increasing in popularity thanks to available processing power at the compute tier. Hypervisors do a good job of utilizing multi-core processors. The problem is that a single-threaded storage software component becomes the bottleneck. Often the key element of hyper-convergence, the storage software, is isolated to one core per hyper-converged node. That core can be overwhelmed by a performance spike, leading to inconsistent performance that could impact the user experience. Also, to service many VMs and critical business applications, these systems typically need to throw more and more nodes at the problem, undercutting the productivity and cost-saving benefits of consolidating more workload on fewer servers. Storage software that is parallel can leverage and share multiple cores in each node. The result is more virtual machines per host, fewer nodes to manage and more consistent storage I/O performance even under load.
  • Scale Up Databases: While they don’t get the hype of modern NoSQL databases, traditional scale-up databases (e.g., Oracle, Microsoft SQL Server) are still at the heart of most organizations. Because the I/O stream comes from a single application, it does not generate enough independent parallel I/O to be distributed to specific cores. The parallel I/O software’s ability to make multiple cores act as one is critical for this type of environment, and it allows scale-up environments to scale further than ever (see the sketch following this list).
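The sketch below (Go, simulated work, not DataCore’s algorithm) illustrates the “cores as a team” idea for that single-application case: one large request is split into chunks and every core works on part of the same request at once, so even a lone workload benefits from all the cores.

```go
// Sketch of the "cores as a team" idea for a single large request: one big
// buffer is split into chunks and all cores process chunks of the same
// request in parallel.
package main

import (
	"fmt"
	"runtime"
	"sync"
)

// processChunk stands in for per-chunk work (checksums, decompression, copy to cache).
func processChunk(buf []byte) {
	for i := range buf {
		buf[i] ^= 0xFF
	}
}

func main() {
	data := make([]byte, 64<<20) // one 64 MiB request from a single application
	cores := runtime.NumCPU()
	chunk := (len(data) + cores - 1) / cores

	var wg sync.WaitGroup
	for c := 0; c < cores; c++ {
		start := c * chunk
		if start >= len(data) {
			break
		}
		end := start + chunk
		if end > len(data) {
			end = len(data)
		}
		wg.Add(1)
		go func(part []byte) { // every core helps with the same request
			defer wg.Done()
			processChunk(part)
		}(data[start:end])
	}
	wg.Wait()
	fmt.Printf("processed %d MiB across %d cores\n", len(data)>>20, cores)
}
```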

Conclusion

The data center is becoming increasingly dense: more virtual machines are stacked on virtual hosts, legacy applications are expected to support more users per server, and more IOPS are expected from the storage infrastructure. While the storage infrastructure now has the right storage media (flash) in place to support the consolidation of the data center, the storage software needs to exploit the available compute power. The problem is that compute power is now delivered via multiple cores per processor instead of a single faster processor. Storage software that has parallel I/O will be able to take full advantage of this processor reality and support these dense architectures with a storage infrastructure that is equally dense.




        Monday 7 December 2015

        DataCore Certifies Universal VVols - Brings VMware VVols Benefits to Existing Storage and to Non-VVol Certified Storage

        Universal Virtual Volumes: Extends VMware’s VVOL Benefits to Storage Systems that do not Support it
        Many VMware administrators crave the power and fine-grain control promised by vSphere Virtual Volumes (VVOLs). However, most current storage arrays and systems do not support it. Manufacturers simply cannot afford to retrofit equipment with the new VM-aware interface.
        DataCore offers these customers the chance to benefit from VVOLs on EMC, IBM, HDS, NetApp and other popular storage systems and all flash arrays simply by layering SANsymphony™-V storage virtualization software in front of them. The same is true for direct-attached storage (DAS) pooled by the DataCore™ Hyper-converged Virtual SAN.
        Now, vSphere administrators can self-provision virtual volumes from virtual storage pools -- they specify the capacity and class of service without having to know anything about the hardware.
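As a purely hypothetical illustration of that provisioning model (none of these type or function names come from DataCore or VMware), the request an administrator makes boils down to a capacity and a class of service, and the storage layer decides where the volume lands:

```go
// Hypothetical illustration only, not DataCore's or VMware's API: the
// administrator states capacity and a class of service, and the
// software-defined storage layer picks the underlying hardware.
package main

import "fmt"

// VolumeRequest captures what the vSphere administrator specifies.
type VolumeRequest struct {
	Name           string
	CapacityGiB    int
	ClassOfService string // e.g. "gold", "silver", "bronze"
}

// provision is a placeholder for the storage layer mapping the request onto
// whatever pooled hardware satisfies the requested class of service.
func provision(req VolumeRequest) string {
	pools := map[string]string{
		"gold":   "mirrored flash pool",
		"silver": "hybrid flash/disk pool",
		"bronze": "archival disk pool",
	}
	return fmt.Sprintf("volume %q (%d GiB) placed on %s", req.Name, req.CapacityGiB, pools[req.ClassOfService])
}

func main() {
	fmt.Println(provision(VolumeRequest{Name: "sql-data-01", CapacityGiB: 512, ClassOfService: "gold"}))
}
```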

        Friday 4 December 2015

        DataCore and Fujitsu Announce New Hyper-Converged Appliances, SAP HANA-certification, Adaptive Parallel I/O Software and Universal Virtual Volumes (VVOLs)

DataCore Software Corporation has unveiled its second-generation Storage Virtualization Appliance (SVA) and a new Hyper-Converged Appliance (HCA), both jointly developed and supported with Fujitsu Ltd. and available for shipment by the end of the year.


        New opportunities are available for Fujitsu customers and partners as a result of DataCore achieving certification of SANsymphony-V storage virtualization for SAP HANA on Fujitsu ETERNUS storage.
        "Our partnership with DataCore continues to provide many benefits to our growing base of customers. Our joint solutions combine industry leading Fujitsu hardware platforms and DataCore software into SVA and HCA that help customers easily deploy and manage their enterprise solutions to run their core business applications. With today's certification, DataCore officially supports SAP HANA on Fujitsu ETERNUS storage and adds its software-defined storage features, performance acceleration and HA capabilities  to complement our SAP portfolio in an ideal way," says Jörg Brünig, senior director, Fujitsu.

With the SVA, Fujitsu and DataCore provide a series of tested turnkey appliances for SAN virtualization, including 'call home' service and support from a single source. In addition to the already established models, DataCore will showcase its second product generation, SVA vNext, at the forum. It combines the new Fujitsu PRIMERGY RX2560 M1 server generation with SANsymphony-V10 PSP4, making it more powerful in use.

The partners are working together to meet the increasing demand for hyper-converged systems by providing an easy-to-deploy and easy-to-use HCA. The Fujitsu DataCore HCA is a preconfigured 'Ready to Run' appliance with integrated storage and the latest DataCore Virtual SAN software for data management, high availability and optimised performance. The HCA is designed for SMBs and is a cost-effective solution for Hyper-V and VDI workloads, file and database services, as well as iSCSI storage for external applications. Both new appliance solutions are expected to be available by the end of the year.

        SAP HANA-certification for SANsymphony-V with Fujitsu ETERNUS storage
SAP has certified SANsymphony-V as the first software-based storage virtualization solution for SAP HANA. DataCore provides the essential enterprise storage functionality needed to support the real-time applications made possible by the SAP HANA platform. At the SAP Integration and Certification Center (SAP ICC), the DataCore software has been successfully tested with Fujitsu ETERNUS DX storage. With DataCore's hardware-independent software-defined storage approach, SAP HANA users can now expand existing storage infrastructures with the Fujitsu/DataCore combination to achieve an optimal price/performance ratio with the latest hardware and software technology in their SAP HANA environment.
        "The cooperation between DataCore and Fujitsu brings together leading technologies delivering tested, preconfigured and easy-to-integrate solutions with reliable support from a single source to our customers. The SAP HANA certification for SANsymphony-V now offers Fujitsu partners and resellers the opportunity to provide SAP HANA-users with effective storage solutions, regardless of which storage is being used today," says Stefan von Dreusche, director Central Europe, DataCore.

        DataCore Adaptive Parallel I/O Software and Universal Virtual Volumes
At its booth, DataCore expands on the new DataCore Adaptive Parallel I/O software. This technology enables adaptive, parallel I/O processing on multi-core processors. Users will benefit from performance multipliers, especially in workload-intensive data processing for OLTP, real-time analysis, business intelligence and data warehouse systems, as well as SQL Server, SAP and Oracle databases. With Parallel I/O technology, VMs can be packed more densely on hyper-converged systems, and thus savings can be realised at a previously unmatched level. Another enhancement to DataCore's software-defined storage platform is the introduction of universal Virtual Volume (VVOL) support. VMware administrators can now deploy virtual drives via the vSphere interface from any storage hardware (disk subsystems, SSD arrays, DAS, etc.) without bothering their storage administrator, even if that hardware does not support VMware's VVOLs.

        http://www.storagenewsletter.com/rubriques/systems-raid-nas-san/datacore-and-fujitsu-jointly-developed-storage-virtualization-and-hyper-converged-appliance/

        Sunday 1 November 2015

        DataCore SANsymphony-V wins Reader’s Choice Award for Best Software Defined Storage Solution



The readers of Storage-Insider and IT-BUSINESS have decided: DataCore wins the Reader's Choice Award for Best Software Defined Storage. In the readers' vote run by Vogel IT Media, DataCore prevailed with its SANsymphony-V software in the final against solutions from Dell, IBM and FalconStor. The Platinum Award was presented at a gala event on 29 October.





        "We are particularly pleased and proud to receive this award because this choice was made by the readers of some of the most important publications in our industry. IT professionals have selected SANsymphony-V as the best Software Defined Storage solution", Stefan of Dreusche, Director Central Europe at DataCore Software says.


        http://datacore-speicher-virtualisierung.blogspot.com/2015/10/datacore-gewinnt-readers-choice-award.html

        Thursday 22 October 2015

        Virtualization Review: Back to the Future in Virtualization and Storage – A Real Comeback, Parallel I/O by DataCore

        “It's a real breakthrough, enabled by folks at DataCore who remember what we were working on in tech a couple of decades back.”
If you're on social media this week, you've probably had your fill of references to Back to the Future, the 1980s sci-fi comedy much beloved by those of us who are now in our 50s, and by the many generations of video watchers who have rented, downloaded or streamed the film since. The nerds point out that the future depicted in the movie, as signified by the date on the time machine clock in the dashboard of a DeLorean, is Oct. 21, 2015. That's today, as I write this piece…

        Legacy Storage Is Not the Problem
        If you stick with x86 and virtualization, you may be concerned about the challenges of achieving decent throughput and application performance, which your hypervisor vendor has lately been blaming on legacy storage. That is usually a groundless accusation. The problem is typically located above the storage infrastructure in the I/O path; somewhere at the hypervisor and application software operations layer.
To put it simply, hypervisor-based computing is the last expression of sequentially executing workloads optimized for the unicore processors introduced by Intel and others in the late '70s and early '80s. Unicore processors, with their transistor counts doubling every 24 months (Moore's Law) and their clock speeds doubling every 18 months (House's Hypothesis), created the PC revolution and defined the architecture of the servers we use today. All applications were written to execute sequentially, with some interesting time slicing added to give the appearance of concurrency and multi-threading.
        This model is now reaching end of life. We ran out of clock speed improvements in the early 2000s and unicore chips became multicore chips with no real clock speed improvements. Basically, we're back to a situation that confronted us way back in the 70s and 80s, when everyone was working on parallel computing architectures to gang together many low performance CPUs for faster execution.


        A Parallel Comeback
Those efforts ground to a halt with unicore's success, but now, with innovations from oldsters who remember parallel, they're making a comeback. As soon as the Storage Performance Council audits some results, I'll have a story to tell you about parallel I/O and the dramatic improvements in performance and cost that it brings to storage in virtual server environments.
        It's a real breakthrough, enabled by folks at DataCore who remember what we were working on in tech a couple of decades back.

        Thursday 15 October 2015

        VMworld Europe 2015: DataCore Showcases Universal VMware VVOL Support and Revolutionary Parallel I/O Software

        New Hyper-Converged Reference Architectures, VM-aware VVOL Provisioning, Rapid vSphere Deployment Wizards and Virtual Server Performance Breakthroughs Also on Display 


In Barcelona, Spain, DataCore Software, a leader in Software-Defined Storage and Hyper-converged Virtual SANs, is showcasing its adaptive parallel I/O software to VMware customers and partners at VMworld Europe 2015. The revolutionary software technology uniquely harnesses today’s multi-core processing systems to maximize server consolidation, cost savings and application productivity by eliminating the major bottleneck holding back the IT industry – I/O performance. DataCore will also use the backdrop of VMworld Europe 2015 to debut “Proven Design” reference architectures, a powerful vSphere deployment wizard for hyper-converged virtual SANs and the next update of SANsymphony™-V Software-Defined Storage, which extends vSphere Virtual Volumes (VVOLs) support to new and already installed flash and disk-based storage systems lacking this powerful capability.
        "The combination of ever-denser multi-core processors with efficient CPU/memory designs and DataCore’s adaptive parallel I/O software creates a new class of storage servers and hyper-converged systems that change the math of storage performance...and not by just a fraction,” said DataCore Chairman Ziya Aral. “As we begin to publish ongoing real-world performance benchmarks in the very near future, the impact of this breakthrough will become very clear."

        At booth #S118, DataCore’s technical staff will discuss the state-of-the-art techniques used to accelerate performance and achieve much greater VM densities needed to respond to the demanding I/O needs of enterprise-class, tier-1 applications. DataCore will highlight performance optimizations for intense data processing and I/O workloads found in online transaction processing (OLTP) systems, real-time analytics, business intelligence and data warehouses. These breakthroughs have proven most valuable in the mission-critical lines of business applications based on Microsoft SQL Server, SAP and Oracle databases that are at the heart of every major enterprise.
        Universal Virtual Volumes: Extends VMware’s VVOL Benefits to Storage Systems that do not Support it
        Many VMware administrators crave the power and fine-grain control promised by vSphere Virtual Volumes (VVOLs). However, most current storage arrays and systems do not support it. Manufacturers simply cannot afford to retrofit equipment with the new VM-aware interface.
        DataCore offers these customers the chance to benefit from VVOLs on EMC, IBM, HDS, NetApp and other popular storage systems and all flash arrays simply by layering SANsymphony™-V storage virtualization software in front of them. The same is true for direct-attached storage (DAS) pooled by the DataCore™ Hyper-converged Virtual SAN.
        Now, vSphere administrators can self-provision virtual volumes from virtual storage pools -- they specify the capacity and class of service without having to know anything about the hardware.
        Other announcements and innovations important to VMware customers and partners will also be featured by DataCore at VMworld Europe. These include:
        ·        Hyper-converged software solutions for enterprise applications and high-end OLTP workloads utilizing DataCore™ Adaptive Parallel I/O software
        ·        New “Proven Design” reference architectures for Lenovo, Dell, Huawei, Fujitsu and Cisco servers spanning high-end, midrange and smaller configurations
        ·        A worldwide partnership with Curvature to provide users a novel procurement and lifecycle model for storage products, data services and centralized management that is cost-disruptive
        ·        vSphere Deployment Wizard to quickly roll out DataCore™ Hyper-converged Virtual SAN software on ESXi clusters
        ·        Stretch cluster capabilities ideal for splitting hyper-converged systems over metro distances
        ·        Breakout Session: DataCore will discuss the topics of Software-Defined Storage and application virtualization in the Solutions Exchange Theatre.


        Tuesday 6 October 2015

        @IPEXPO: DataCore Positions Adaptive Parallel I/O Software; Eliminates Bottlenecks and Redefines Price Performance.

Today at IP Expo, DataCore Software, a leader in Software-Defined Storage and Hyper-converged Virtual SANs, previewed its Adaptive Parallel I/O software, which is included as part of its SANsymphony-V™ platform. The software harnesses the untapped power of today’s multi-core processing systems and efficient CPU memory to create a new class of storage servers and hyper-converged systems. This combination supports much greater virtual machine densities and consolidation savings; it adaptively allocates available resources as required and thus responds to the dynamic and demanding real-world I/O needs of enterprise-class, tier-1 applications. For IP Expo attendees, Adaptive Parallel I/O sets the stage to eliminate the bottlenecks that affect their virtual environments every processing day in online transaction processing (OLTP) systems, real-time analytics, business intelligence and data warehouses based on Microsoft SQL Server, SAP and Oracle databases.
        "With 75% of all applications running virtually by the end of 2016, it really is a perfect storm and IP Expo is a perfect venue to position the impact of this revolutionary software," said DataCore Chairman, Ziya Aral. "DataCore’s parallel I/Osoftware is set to radically change the economics of storage performance. Stay tuned this month as we showcase our real-world performance and benchmark results.”
DataCore representatives will be on hand to discuss the implications of the software throughout the show. In addition, DataCore will use the show to highlight real-life customer experiences of the impact of using DataCore’s SANsymphony-V software. These real-world findings, compiled by the independent research company TechValidate, show some startling results: customers experienced up to 10 times faster performance, 100% less storage downtime and a 90% reduction in the time previously allocated to routine storage tasks. To help monitor and track such efficiencies at the show, DataCore will be running draws for Apple Sports Watches in DataCore signature colours so that resource and time savings can be tracked and assets can be sweated.
        Also on show will be some of the local UK DataCore case studies that have been released in September, including Elddis Caravans, OGN Ltd, Uplands Community College, Bradford Grammar and the University of Birmingham.
        To be one of the first journalists and analysts to receive the performance figures for Parallel I/O later in the month, in the first instance please contact smunday@kprgobal.com or visit the stand to register your interest.

        Tuesday 22 September 2015

        Virtualisation Review: Storage Virtualisation and the Question of Balance - Parallel I/O to the Rescue

        Dan's Take: It's Time to Consider Parallel I/O

        "DataCore has been working for quite some time on parallel storage processing technology that can utilize  excess processing capability without also creating islands of storage technology. When Lenovo came to DataCore with a new, highly-parallel hardware design and was looking for a way to make it perform well, DataCore's software technology came to mind. DataCore made it possible for Lenovo's systems to dynamically use their excess processing capacity to accelerate virtualized storage environments. The preliminary testing I've seen is very impressive and shows a significant reduction in cost, while also showing improved performance. I can hardly wait to see the benchmark results when they're audited and released."

        Focusing too much on processors leads to problems.

The storage virtualization industry is repeating an error it made long ago in the early days of industry-standard x86 systems: a focus on processing performance to the exclusion of other factors of balanced system design.
        Let's take a stroll down memory lane and then look at the problems storage virtualization is revealing in today's industry standard systems.
        Balanced Architectures
Balanced system design is where system resources such as processing, memory, networking and storage are consumed at about the same rate. That is, there are enough resources in each category so that when the target workload is imposed upon the system, one resource doesn't run out while others still have capacity to do more work.
        The type of workload, of course, has a great deal to do with how system architectures should be balanced. A technical application might use a great deal of processing and memory, but may not use networking and storage at an equal level. A database application, on the other hand, might use less processing but more memory and storage. A service oriented architecture application might use a great deal of processing and networking power, but less storage and memory than the other types of workloads.

        A properly designed system can do more work at less cost than unbalanced systems. In short, systems having an excess of processing capability when compared to other system resources might do quite a bit less work at a higher overall system price than a system that's better balanced.

        Mainframes to x86 Systems
        Mainframe and midrange system designers worked very hard to design systems for each type of workload. Some systems offered large amounts of processing and memory capacity. Others offered more networking or storage capacity.
        Eventually, Intel and its partners and competitors broke through the door of the enterprise data center with systems based on high-performance microprocessors. The processor benchmark data for these systems was impressive. The rest of the system, however, often was built using the lowest cost, off-the-shelf components.
        Enterprise IT decision makers often selected systems based upon a low initial price without considering balanced design or overall cost of operation. We've seen the impact this thinking has had on the market. Systems designed with expensive error correcting memory, parallel networking and storage interconnects often lose out to low cost systems having none of those "mainframe-like" enhancements.
        This means that if we walked down a row of systems in a typical datacenter, we'd see systems having under-utilized processing power trying to drive work through configurations having insufficient memory and/or networking and storage bandwidth.
        To address performance problems, enterprise IT decision makers often just purchase larger systems, even though the original systems have enough processing power; an unbalanced storage architecture is the problem.

        Enter Storage and Networking Virtualization
        As industry standard systems become virtualized environments, the industry is seeing system utilization and balance come to the forefront again. Virtualization technology takes advantage of excess processing, memory, storage and networking capability to create artificial environments; environments that offer important benefits.
        While virtual processing technology is making more use of industry standard systems' excess capacity to create benefits, other forms of virtualization are stressing systems in unexpected ways.
Storage virtualization technology often uses system processing and memory to create benefits such as deduplication, compression, and highly available, replicated storage environments. Rather than put this storage-focused processing load on the main systems, some suppliers push this work onto their own proprietary storage servers.
        While this approach offers benefits, it also means that the data center becomes multiple islands of proprietary storage. It also can mean scaling up or down can be complicated or costly.
        Another point is that many industry standard operating systems do their best to serialize I/O; that is, do one storage task at a time. This means that only a small amount of a system's processing capability is devoted to processing storage and networking requests, even if sufficient capacity exists to do more work.
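To make that serialization point concrete, the toy Go sketch below issues the same batch of simulated storage tasks first one at a time and then concurrently, so that idle cores can participate. The work is simulated with a sleep, so the timing contrast only illustrates the overlap, not real disk behavior.

```go
// Toy illustration of serialized vs. concurrent storage tasks. The "work" is
// simulated with a sleep, so the timing difference reflects overlap rather
// than real device behavior.
package main

import (
	"fmt"
	"sync"
	"time"
)

func storageTask() { time.Sleep(10 * time.Millisecond) } // simulated storage task

func main() {
	const tasks = 64

	start := time.Now()
	for i := 0; i < tasks; i++ { // serialized: one storage task at a time
		storageTask()
	}
	fmt.Println("serial:  ", time.Since(start))

	start = time.Now()
	var wg sync.WaitGroup
	for i := 0; i < tasks; i++ { // concurrent: tasks overlap across cores
		wg.Add(1)
		go func() { defer wg.Done(); storageTask() }()
	}
	wg.Wait()
	fmt.Println("parallel:", time.Since(start))
}
```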

        Parallel I/O to the Rescue
If we look back to successful mainframe workloads, it's easy to see that the system architects made it possible to add storage and networking capability as needed. Multiple storage processors could be installed so that storage I/O could expand as needed to support the work. The same was true of network processors. Many industry-standard system designs have a great deal of processing power, but the software they're hosting doesn't assign excess capacity to storage or network tasks, due to the design of the operating systems.
        DataCore has been working for quite some time on parallel storage processing technology that can utilize  excess processing capability without also creating islands of storage technology. When Lenovo came to DataCore with a new, highly-parallel hardware design and was looking for a way to make it perform well, DataCore's software technology came to mind. DataCore made it possible for Lenovo's systems to dynamically use their excess processing capacity to accelerate virtualized storage environments. The preliminary testing I've seen is very impressive and shows a significant reduction in cost, while also showing improved performance. I can hardly wait to see the benchmark results when they're audited and released.

        Dan's Take: It's Time to Consider Parallel I/O
        In my article "The Limitations of Appliance Servers," I pointed out that we've just about reached the end of deploying a special-purpose appliance for each and every function. The "herd-o'-servers" approach to computing has become too complex and too costly to manage. I would point to the emergence of "hyperconverged" systems in which functions are being brought back into the system as a case in point.
        Virtual systems need virtual storage. Virtual storage needs access to processing, memory and networking capability to be effective. DataCore appears to have the technology to make this all work.

        About the Author
        Daniel Kusnetzky, a reformed software engineer and product manager, founded Kusnetzky Group LLC in 2006. He's literally written the book on virtualization and often comments on cloud computing, mobility and systems software. He has been a business unit manager at a hardware company and head of corporate marketing and strategy at a software company. In his spare time, he's also the managing partner of Lux Sonus LLC, an investment firm.

        Monday 21 September 2015

        ComputerWeekly: Bradford Grammar School graduates from Falconstor and Starwind to DataCore for software-defined storage

        By Anthony Adshead: http://www.computerweekly.com/news/4500253781/Bradford-Grammar-School-graduates-to-DataCore-for-software-defined-storage
        Bradford Grammar School has implemented DataCore storage software in front of DotHill arrays in a move that has seen it adopt an entirely software-defined storage environment to gain advanced functionality while cutting costs on expensive SAN hardware.
The deployment is the conclusion of a path that has seen it move from IBM storage hardware with FalconStor, and then StarWind storage virtualisation products, to DotHill arrays completely managed by DataCore storage software.
...Bradford Grammar School deployed FalconStor seven years ago to gain replication functionality between IBM and DotHill arrays at the Bradford site. But FalconStor eventually proved expensive, as the school had to pay increased licence fees as capacity grew, said network manager Simon Thompson.
...From here the school moved to StarWind Virtual SAN software, which didn't charge according to the storage capacity under its management. But after two years and a forced upgrade, it ran into problems that knocked out replication and made data for virtual machines inaccessible, said Thompson.
        ...So, this year the school deployed DataCore SANsymphony version 10.1 on two Dell servers. These act as a software-defined storage front end, to two DotHill 3430 SANs with synchronous replication between them; and mirroring to a hosted disaster recovery site in the centre of Bradford.
Thompson said: “DataCore is doing clever stuff that DotHill can't do, or stuff that they can do but doing it in a better way. We can replicate data off-site in real time, which we couldn't do previously. We needed to replicate everything every 12 hours.”
        The use of automated storage tiering functionality in DataCore has seen the school deploy flash storage in the DotHill arrays. DataCore moves frequently used data to flash so it can be accessed rapidly.
        Thompson said: “The benefits are that it works. It replicates, it mirrors and the contrast in performance is like night and day compared to before.”
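For readers unfamiliar with automated tiering, the sketch below (Go, invented names, not DataCore’s algorithm) shows the basic idea the article describes: track how often each block is accessed and promote frequently used blocks to the flash tier, leaving cold data on disk.

```go
// Minimal sketch of automated storage tiering (illustrative only): access
// counts are tracked per block, and blocks touched often are promoted to the
// flash tier while cold blocks stay on disk.
package main

import "fmt"

type tier int

const (
	diskTier tier = iota // default placement
	flashTier
)

type block struct {
	id       int
	accesses int
	location tier
}

// rebalance promotes any block whose access count crosses a threshold.
// A real implementation would also demote cold blocks and respect flash capacity.
func rebalance(blocks []block, hotThreshold int) {
	for i := range blocks {
		if blocks[i].accesses >= hotThreshold {
			blocks[i].location = flashTier
		}
	}
}

func main() {
	blocks := []block{
		{id: 1, accesses: 120}, // frequently read database pages
		{id: 2, accesses: 3},   // rarely touched archive data
	}
	rebalance(blocks, 50)
	for _, b := range blocks {
		where := "disk"
		if b.location == flashTier {
			where = "flash"
		}
		fmt.Printf("block %d -> %s\n", b.id, where)
	}
}
```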

        Tuesday 8 September 2015

        VMworld 2015: DataCore Unveils Revolutionary Parallel I/O Software; Proven Designs, "Less is More" Hyperconverged...

        VMworld 2015 News Roundup – Slideshow of Top Stories
        This week virtualization giant VMware (VMW) held its annual VMworld customer conference in San Francisco, and as always there was no shortage of virtualization-centric news from partner companies. Since reading pages and pages of press releases is no fun for anyone, we decided to compile some of the biggest announcements going on at this year’s show.

        DataCore Unveils Parallel I/O Software
Software-defined storage vendor DataCore Software unveiled its new parallel I/O software at VMworld, designed to help users eliminate bottlenecks associated with running multi-core processing systems. The company also announced a new worldwide partnership with Curvature to provide users with a procurement and lifecycle model for their storage products, data services and centralized management.
        Read more here.




        Why Parallel I/O Software and Moore’s Law Enable Virtualization and Software-Defined Data Centers to Achieve Their Potential

        VirtualizationReview - Hyperconvergence: Hype and Promise
        The field is evolving as lower-cost options start to emerge.
        …Plus, the latest innovation from DataCore -- something called Parallel I/O that I'll be writing about in greater detail in my next column -- promises to convert that Curvature gear (or any other hardware platform with which DataCore's SDS is used) into the fastest storage infrastructure  on the planet -- bar none. Whether this new technology from DataCore is used with new gear, used gear, or to build HCI appliances, it rocks. More later.

        SiliconAngle: Back to basics: Why we need hardware-agnostic storage | #VMworld
In a world full of hyper-this and flash-that, George Teixeira, president and CEO of DataCore Software Corp., explained how going back to the basics will improve enterprise-level storage solutions.
        Teixeira and Dustin Fennell, VP and CIO of EPIC Management, LP, sat down with Dave Vellante on theCUBE from the SiliconANGLE Media team at VMworld 2015 to discuss the evolution of architecture and the need to move toward hardware-agnostic storage solutions.

        VMworld the Cube: Video Interview on DataCore and Parallel I/O: https://www.youtube.com/watch?t=16&v=wH6Um_wUxZE

        IT-Director on VMworld 2015: DataCore Unveils Revolutionary Parallel I/O Software
        DataCore shows its hyper-converged 'less is more' architecture

DataCore Launches Proven Design Reference Architecture Blueprints for Server Vendors Lenovo, Dell, Cisco and HP


        Virtualization World: DataCore unveils 'revolutionary' parallel I/O software

        More Tweets from the show:

        Make any storage or Flash #VVOL compatible with our #Software-defined Storage Stack #SSD #virtualization #VMworld http://www.datacore.com 




        Check out our latest pictures from the show and tweets live from VMworld at: https://twitter.com/datacore




        #VMworld DataCore Parallel IO Software is the 'Killer App' for #virtualization & #Hyperconverged systems...stop by booth 835 pic.twitter.com/IbcTaTmfpv

        Great to see the crowds at #VMworld learning more about DataCore’s Parallel IO, #VSAN, Hyperconverged & Software-defined Storage pic.twitter.com/chCZZ7H4x3

        VMworld 2015: DataCore Unveils Revolutionary Parallel I/O Software
        New Hyper-Converged Reference Architectures, Real World VMware User Case Studies and Virtual Server Performance Breakthroughs Also on Display



SAN FRANCISCO, CA, August 31, 2015 – DataCore Software, a leader in Software-Defined Storage, will use the backdrop of VMworld 2015 to show its hyper-converged ‘less is more’ architecture. Most significantly, VMware customers and partners will see first-hand DataCore’s adaptive parallel I/O harnessing today’s multi-core processing systems to eliminate the major bottleneck holding back the IT industry – I/O performance.
        "It really is a perfect storm," said DataCore Chairman Ziya Aral. "The combination of ever-denser
        multi-core processors with efficient CPU/memory designs and DataCore’s parallel I/O software create a new class of storage servers and hyper-converged systems that change the math of storage performance in our industry...and not by just a little bit. As we publish an ever-wider array of benchmarks and real-world performance results, the real impact of this storm will become clear."

        At booth #835, DataCore’s staff of technical consultants will discuss the state-of-the-art techniques used to achieve much greater VM densities needed to respond to the demanding I/O needs of enterprise-class, tier-1 applications. DataCore will highlight performance optimizations for intense data processing and I/O workloads found in mainstream online transaction processing (OLTP) systems, real-time analytics, business intelligence and data warehouses. These breakthroughs have proven most valuable in the mission-critical lines of business applications based on Microsoft SQL Server, SAP and Oracle databases that are at the heart of every major enterprise.

        Other announcements and innovations important to VMware customers and partners will also be featured by DataCore at VMworld. These include:
        ·         Hyper-converged software solutions for enterprise applications and high-end OLTP workloads utilizing DataCore™ Adaptive Parallel I/O software
        ·         New ‘Proven Design’ reference architectures for Lenovo, Dell, Huawei, Fujitsu and Cisco servers spanning high-end, midrange and smaller configurations
        ·         A new worldwide partnership with Curvature to provide users a novel procurement and lifecycle model for storage products, data services and centralized management that is cost-disruptive
        ·         Preview of DataCore’s upcoming VVOL capabilities
        ·         Stretch cluster capabilities ideal for splitting hyper-converged systems over metro distances

        Breakout Sessions
        ·         DataCore and VMware customer case study featuring Mission Community Hospital: “Virtualizing an Application When the Vendor Says ‘No’” in the Software-Defined Data Center track -- Monday, August 31, 2015 at 12:00 p.m.

·         “Lenovo Servers in Hyper-Converged and SAN Storage Roles”: Learn how Lenovo servers are being used in place of traditional storage arrays to handle enterprise-class storage requirements in hyper-converged clusters as well as external SANs. Uncover the agility and cost savings you can realize simply by adding DataCore Software to Lenovo systems. Two theater presentations -- Tuesday, September 1, and Wednesday, September 2 at 3:30 p.m. in the Lenovo Booth #1537
        ·         “Less is More with Hyper-Converged: When is 2>3” and “Efficiently Scaling Hyper-Converged: How to Avoid Buyers’ Remorse” Daily in the DataCore Booth #835

        About VMworld
        VMworld 2015 U.S. takes place at San Francisco’s Moscone Center from August 30 through September 3, 2015.  It is the industry's largest virtualization and cloud computing event, with more than 400 unique breakout sessions and labs, and more than 240 sponsors and exhibitors. To learn more about VMworld, please visit: www.vmworld.com