
Friday 18 December 2015

Making Data Highly Available on Flash and DRAM

George Teixeira, CEO & President, and Nick Connolly, Chief Scientist at DataCore Software, discuss how DataCore's Software-Defined Storage solution takes advantage of flash and DRAM technologies to provide high availability and the right performance for your applications.


How Software-Defined Storage Enhances Hyper-converged Storage 
One of the fundamental requirements for virtualizing applications is shared storage. Applications can move around to different servers as long as those servers have access to the storage holding the application and its data. Typically, shared storage is provided over a storage area network known as a SAN. However, SANs often run into issues in a virtual environment, so organizations are looking for new options. Hyper-converged infrastructure is a solution that seems well-suited to address these issues.
The following white paper describes how to conquer the challenges of using SANs in a virtual environment and why organizations are looking into hyper-converged systems that take advantage of software-defined storage as a solution to provide reliable application performance and a highly available infrastructure.

Saturday 12 December 2015

DataCore achieves SAP HANA certification, the first Software-defined Storage certified to operate across multiple vendors

We are pleased to announce the certification of SANsymphony™-V with the SAP HANA® platform. DataCore™ SANsymphony-V is storage infrastructure software that operates across multiple vendors’ storage systems to deliver the performance and availability required by demanding enterprise-class applications such as SAP HANA.
What is SAP HANA?
The SAP HANA in-memory database lets organizations process vast amounts of transactional, analytical and application data in real-time using a computer’s main memory. Its platform provides libraries for predictive, planning, text processing, spatial and business analytics.
Key Challenges for SAP HANA implementation:
SAP HANA demands a storage infrastructure that can process data at unprecedented speed and has zero tolerance for downtime. Most organizations store entire SAP HANA multi-terabyte production systems on high-performance Tier 1 storage to meet the performance required during peak processing cycles, such as “period end” or seasonal demand spikes. This practice presents the following challenges to IT departments:
  • Tier 1 storage is expensive to deploy and significantly impacts the IT budget.
  • Tier 1 storage is bound by its physical constraints when it comes to data availability, staging, reporting, and test and development.
  • Managing multiple storage systems (existing and new) can add considerable cost and complexity; routine tasks like test/dev and reporting are difficult to manage.
Benefits of DataCore
DataCore SANsymphony-V is the first Software-defined Storage solution that is certified to operate across multiple vendors’ SAP HANA-certified storage systems to deliver the performance and availability required.  DataCore SANsymphony-V software provides the essential enterprise-class storage functionality needed to support the real-time applications offered by the SAP HANA® platform.
With DataCore, SAP HANA customers gain:
  • Choice: Companies have the choice of using existing and/or new SAP HANA-certified storage systems, with the ability to seamlessly manage and scale their data storage architectures while gaining more purchasing power (no vendor lock-in).
  • Performance: Accelerate I/O with the DataCore™ Adaptive Parallel I/O architecture as well as caching to take full advantage of SAP HANA’s in-memory capabilities for transactions, analytics, text analysis, predictive and spatial processing.
  • Cost-efficiency:  DataCore reduces the amount of Tier 1 storage space needed, and makes the best use of lower cost persistent HANA-certified storage.


 DataCore SANsymphony-V infrastructure software is the only SAP HANA-certified SDS solution that can be used together with an SAP-certified storage solution from Fujitsu, Huawei, IBM, Dell, NEC, Nimble Storage, Pure Storage, Fusion-io, Violin Memory, EMC, NetApp, HP and Hitachi.

Tuesday 8 December 2015

Software-Defined Storage Meets Parallel I/O: The Impact on Hyperconvergence




George Crump, Storage Switzerland

http://storageswiss.com/2015/12/01/software-defined-storage-meets-parallel-io/

In terms of storage performance, the drives themselves are no longer the bottleneck. Thanks to flash storage, attention has turned to the hardware and software that surround them, especially the capabilities of the CPU that drives the storage software. The importance of CPU power is evidenced by the increase in overall storage system performance when an all-flash array vendor releases a new storage system: the flash media in that system doesn’t change, but overall performance does increase. Yet that increase in performance is not as large as it should be, because the storage software does not take advantage of the parallel nature of the modern CPU.

Moore’s Law Becomes Moore’s Suggestion

Moore’s Law is an observation by Intel co-founder Gordon Moore. The simplified version of this law states that the number of transistors on a chip will double every two years. IT professionals assumed that meant the CPU they buy would get significantly faster every two years or so. Traditionally, this meant that the clock speed of the processor would increase, but recently Intel has hit a wall: increasing clock speeds also leads to increased power consumption and heat problems. Instead of increasing clock speed, Intel has focused on adding more cores per processor. The modern data center server has essentially become a parallel computer.

Multiple cores per processor are certainly an acceptable method of increasing performance and continuing to advance Moore’s Law. Software, however, does need to be re-written to take advantage of this new parallel computing environment. This parallelization is required of operating systems, application software and, of course, storage software. Re-coding software to make it parallel is challenging. The key is managing I/O timing and locking, which makes multi-threading a storage application more difficult than, for example, a video rendering project. As a result, it has taken time to get to the point where the majority of operating systems and application software has some flavor of parallelism.
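To make the locking challenge concrete, below is a minimal, hypothetical Go sketch (not DataCore's code): many goroutines submit writes in parallel, and a single coarse-grained mutex keeps the shared block map correct. The lock guarantees safety, but it also forces every core to queue behind it, which is exactly the kind of serialization that careful I/O timing and finer-grained locking have to avoid.

package main

import (
	"fmt"
	"sync"
)

// volume models the shared state a storage service must protect.
// The single coarse lock keeps the map consistent, but every writer,
// no matter which core it runs on, serializes behind it.
type volume struct {
	mu     sync.Mutex
	blocks map[int64][]byte
}

func (v *volume) write(lba int64, data []byte) {
	v.mu.Lock()
	defer v.mu.Unlock()
	v.blocks[lba] = append([]byte(nil), data...)
}

func main() {
	v := &volume{blocks: make(map[int64][]byte)}

	var wg sync.WaitGroup
	for w := 0; w < 8; w++ { // eight concurrent writers, e.g. one per core
		wg.Add(1)
		go func(worker int) {
			defer wg.Done()
			for i := 0; i < 1000; i++ {
				v.write(int64(worker*1000+i), []byte("payload"))
			}
		}(w)
	}
	wg.Wait()
	fmt.Println("blocks written:", len(v.blocks))
}

Replacing the single mutex with, say, per-extent locks would let independent writes proceed on different cores, but then overlapping ranges and ordering have to be handled explicitly, which is why parallelizing storage software is harder than an embarrassingly parallel job like video rendering.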

Lagging far behind in the effort to take full advantage of the modern processor is storage software. Most storage software, whether built into the array or part of the new crop of software-defined storage (SDS) solutions, is unable to exploit the wide availability of processing cores. It is primarily single core: at worst, it uses only one core per processor; at best, one core per function. If cores are thought of as workers, it is best to have all the workers available for all the tasks, rather than each worker focused on a single task.

Why Cores Matter

Using cores efficiently has only recently become important. Most legacy storage systems were hard drive based, lacking advanced caching or flash media to drive performance. As a result, the need to efficiently support the multi-core environment was not as obvious as it is now that systems have a higher percentage of flash storage; the lack of multi-core performance was overshadowed by the latency of the hard disk drive. Flash and storage response time is just one side of the I/O equation. On the other side, the data center is now populated with highly dense virtual environments or, even more demanding, hyper-converged architectures. Both of these environments generate a massive amount of random I/O that, thanks to flash, the storage system should be able to handle very quickly. The storage software is the interconnect between the I/O requester and the I/O deliverer, and if it can’t efficiently use all the cores it has at its disposal then it becomes the bottleneck.

All storage systems that leverage Intel CPUs face the same challenge: how to leverage CPUs that are increasing in cores, but not in raw speed. In other words, they don’t perform a single process faster, but they do perform multiple processes simultaneously at the same speed, netting a faster overall completion time if the cores are used efficiently. Storage software needs to adapt and become multi-threaded so it can distribute I/O across all available cores.

For most vendors this may mean a complete re-write of their software, which takes time and effort, and risks incompatibility with their legacy storage systems.

How Vendors Fake Parallel I/O

Vendors have tried several techniques to leverage the reality of multiple cores without truly “parallelizing” their code. Some storage system vendors tie specific cores to specific storage processing tasks. For example, one core may handle raw inbound I/O while another handles RAID calculations. Other vendors distribute storage processing tasks in a round-robin fashion. If cores are thought of as workers, this technique treats cores as individuals instead of a team: as each task comes in, it is assigned to a core, but only that core can work on it. If it is a big task, it can’t get help from the other cores. While this technique does distribute the load, it doesn’t allow multiple workers to work on the same task at the same time. Each core has to do its own heavy lifting.
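A minimal, hypothetical Go sketch of that round-robin pattern (illustrative only, not any vendor's implementation): each request is placed on exactly one worker's private queue, so when a large request lands on one core, the idle cores cannot help with it.

package main

import (
	"fmt"
	"sync"
)

type request struct {
	id   int
	cost int // units of work this I/O requires
}

func main() {
	const workers = 4
	queues := make([]chan request, workers)
	var wg sync.WaitGroup

	// One private queue per core: a request is owned by a single worker.
	for w := 0; w < workers; w++ {
		queues[w] = make(chan request, 64)
		wg.Add(1)
		go func(id int, q <-chan request) {
			defer wg.Done()
			total := 0
			for r := range q {
				total += r.cost // only this core works on the request
			}
			fmt.Printf("worker %d handled %d units\n", id, total)
		}(w, queues[w])
	}

	// Round-robin dispatch: requests land on cores regardless of size or load.
	for i := 0; i < 16; i++ {
		cost := 1
		if i == 0 {
			cost = 100 // one big request; the other cores cannot share it
		}
		queues[i%workers] <- request{id: i, cost: cost}
	}
	for _, q := range queues {
		close(q)
	}
	wg.Wait()
}

Worker 0 ends up with roughly a hundred units of work while the others finish a handful each; aggregate throughput looks fine, but the big request is limited to the speed of a single core.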

Scale-out storage systems are similar in that they leverage multiple processors within each node of the storage cluster, but they are not granular enough to assign multiple cores to the same task. They, like the systems described above, typically have a primary node that acts as a task delegator and assigns the I/O to a specific node, and that specific node handles storing the data and managing data protection.

These designs count on the I/O coming from multiple sources so that each discrete I/O stream can be processed by one of the available cores. These systems will claim very high IOPS numbers, but require multiple applications to get there. They work best in an environment that needs a million IOPS because it has ten workloads each generating 100,000 IOPS, rather than an environment with one workload that generates 1 million IOPS and no other workload over 5,000. To some extent vendors also “game” the benchmark by varying I/O size and patterns (random vs. sequential) to achieve a desired result. The problem is that this I/O is not the same as what customers will see in their data centers.

The Impact of True Parallel I/O

True parallel I/O utilizes all the available cores across all the available processors. Instead of siloing a task to a specific core, it assigns all the available cores to all the tasks. In other words, it treats the cores as members of a team. Parallel I/O storage software works well on either type of workload environment: ten workloads generating 100,000 IOPS each, or one workload generating 1 million IOPS.
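The difference is easiest to see against the previous sketch: with a single shared queue, every available core pulls the next piece of work, so all of them cooperate on whatever arrives, whether it is ten moderate streams or one heavy one. Again, this is a hypothetical Go sketch, not DataCore's Adaptive Parallel I/O implementation.

package main

import (
	"fmt"
	"runtime"
	"sync"
	"sync/atomic"
)

type ioTask struct {
	id int
}

func main() {
	tasks := make(chan ioTask, 1024) // one shared queue for all cores
	var completed int64
	var wg sync.WaitGroup

	// Every core pulls from the same queue ("cores as a team"), so a burst
	// from a single workload is spread across all of them automatically.
	for w := 0; w < runtime.NumCPU(); w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for range tasks {
				atomic.AddInt64(&completed, 1) // stand-in for the real I/O work
			}
		}()
	}

	// One workload generating a large burst of I/O: no single core owns it.
	for i := 0; i < 10000; i++ {
		tasks <- ioTask{id: i}
	}
	close(tasks)
	wg.Wait()
	fmt.Println("tasks completed:", completed)
}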

Parallel I/O is a key element in powering the next-generation data center because it dramatically reduces the storage processing footprint, matching the reduced footprints brought by solid-state storage and server virtualization. Parallel I/O provides many benefits to the data center:


  • Full Flash Performance: As stated earlier, most flash systems show improved performance when more processing power is applied to the system. Correctly leveraging cores with multi-threading delivers the same benefit without having to upgrade processing power. If the storage software is truly parallel, it can deliver better performance with less processing power, which drives costs down while increasing scalability.


  • Predictable Hyper-Converged Architectures: Hyper-converged architectures are increasing in popularity thanks to the processing power available at the compute tier. Hypervisors do a good job of utilizing multi-core processors. The problem is that a single-threaded storage software component becomes the bottleneck. Often the key element of hyper-convergence, the storage software, is isolated to one core per hyper-converged node. That core can be overwhelmed by a performance spike, leading to inconsistent performance that impacts the user experience. Also, to service many VMs and critical business applications, these designs typically need to throw more and more nodes at the problem, eroding the productivity and cost-saving benefits of consolidating more workload onto fewer servers. Storage software that is parallel can leverage and share the multiple cores in each node. The result is more virtual machines per host, fewer nodes to manage and more consistent storage I/O performance even under load.


  • Scale Up Databases: While they don’t get the hype of modern NoSQL databases, traditional scale-up databases (e.g., Oracle, Microsoft SQL Server) are still at the heart of most organizations. Because the I/O stream comes from a single application, these databases don’t generate enough independent parallel I/O to be distributed across specific cores. The parallel I/O software’s ability to make multiple cores act as one is critical for this type of environment, allowing scale-up environments to scale further than ever.

Conclusion

The data center is becoming increasingly dense: more virtual machines are stacked on each virtual host, legacy applications are expected to support more users per server, and more IOPS are expected from the storage infrastructure. While the storage infrastructure now has the right storage media (flash) in place to support this consolidation, the storage software needs to exploit the available compute power. The problem is that compute power is now delivered via multiple cores per processor instead of a faster single processor. Storage software that performs parallel I/O will be able to take full advantage of this processor reality and support these dense architectures with a storage infrastructure that is equally dense.




Monday 7 December 2015

DataCore Certifies Universal VVols - Brings VMware VVols Benefits to Existing Storage and to Non-VVol Certified Storage

Universal Virtual Volumes: Extends VMware’s VVOL Benefits to Storage Systems that do not Support it
Many VMware administrators crave the power and fine-grained control promised by vSphere Virtual Volumes (VVOLs). However, most current storage arrays and systems do not support it. Manufacturers simply cannot afford to retrofit equipment with the new VM-aware interface.
DataCore offers these customers the chance to benefit from VVOLs on EMC, IBM, HDS, NetApp and other popular storage systems and all-flash arrays simply by layering SANsymphony™-V storage virtualization software in front of them. The same is true for direct-attached storage (DAS) pooled by the DataCore™ Hyper-converged Virtual SAN.
Now, vSphere administrators can self-provision virtual volumes from virtual storage pools -- they specify the capacity and class of service without having to know anything about the hardware.

Friday 4 December 2015

DataCore and Fujitsu Announce New Hyper-Converged Appliances, SAP HANA-certification, Adaptive Parallel I/O Software and Universal Virtual Volumes (VVOLs)

DataCore Software Corporation has unveiled its second-generation Storage Virtualization Appliance (SVA) and a new Hyper-Converged Appliance (HCA), both jointly developed and supported with Fujitsu Ltd. and available for shipment by the end of the year.


New opportunities are available for Fujitsu customers and partners as a result of DataCore achieving certification of SANsymphony-V storage virtualization for SAP HANA on Fujitsu ETERNUS storage.
"Our partnership with DataCore continues to provide many benefits to our growing base of customers. Our joint solutions combine industry leading Fujitsu hardware platforms and DataCore software into SVA and HCA that help customers easily deploy and manage their enterprise solutions to run their core business applications. With today's certification, DataCore officially supports SAP HANA on Fujitsu ETERNUS storage and adds its software-defined storage features, performance acceleration and HA capabilities to complement our SAP portfolio in an ideal way," says Jörg Brünig, senior director, Fujitsu.

With SVA, Fujitsu and DataCore provide a series of tested turnkey appliances for SAN virtualization, including 'call home' service and support from one source. In addition to the already established models, DataCore will showcase at the forum its second product generation, SVA vNext. It combines the new Fujitsu PRIMERGY RX2560 M1 server generation with SANsymphony-V10 PSP4, making it more powerful in use.

The partners are working together to meet the increasing demand for hyper-converged systems by providing an easy-to-deploy and easy-to-use HCA. The Fujitsu DataCore HCA is a preconfigured 'Ready to Run' appliance with integrated storage and the latest DataCore Virtual SAN software for data management, HA and optimised performance. The HCA is designed for SMBs and is a cost-effective solution for Hyper-V and VDI workloads, file and database services as well as iSCSI storage for external applications. Both new appliance solutions are expected to be available to the market by the end of the year.

SAP HANA-certification for SANsymphony-V with Fujitsu ETERNUS storage
SAP has certified SANsymphony-V as the first software-based storage virtualization solution for SAP HANA. DataCore provides the essential enterprise storage functionality needed to support the real-time applications made possible by the SAP HANA platform. At the SAP Integration and Certification Center (SAP ICC), the DataCore software has been successfully tested with Fujitsu ETERNUS DX storage. With DataCore’s hardware-independent Software-Defined Storage approach, SAP HANA users can now expand existing storage infrastructures with the Fujitsu / DataCore combination to achieve an optimal price / performance ratio with the latest hardware and software technology in their SAP HANA environment.
"The cooperation between DataCore and Fujitsu brings together leading technologies, delivering tested, preconfigured and easy-to-integrate solutions with reliable support from a single source to our customers. The SAP HANA certification for SANsymphony-V now offers Fujitsu partners and resellers the opportunity to provide SAP HANA users with effective storage solutions, regardless of which storage is being used today," says Stefan von Dreusche, director Central Europe, DataCore.

DataCore Adaptive Parallel I/O Software and Universal Virtual Volumes
At its booth, DataCore expands on the new DataCore Adaptive Parallel I/O software. This technology enables adaptive, parallel I/O processing on multi-core processors. Users will benefit from performance multipliers especially in workload-intensive data processing, such as OLTP, real-time analysis, business intelligence and data warehouse systems, as well as SQL, SAP and Oracle databases. With Parallel I/O technology, VMs can be packed more densely on hyper-converged systems, and savings can be realised at a previously unmatched level. Another enhancement to DataCore's software-defined storage platform is the introduction of universal Virtual Volume (VVOL) support. VMware administrators can now deploy virtual drives via the vSphere interface from any storage hardware (disk subsystems, SSD arrays, DAS etc.) without bothering their storage administrator, even if that hardware does not support VMware's VVOLs.

http://www.storagenewsletter.com/rubriques/systems-raid-nas-san/datacore-and-fujitsu-jointly-developed-storage-virtualization-and-hyper-converged-appliance/