
Friday, 29 January 2016

DataCore’s Parallel I/O Software Runs Enterprise Storage and Application Workloads at $0.08 per SPC-1 IO/s


While driving the fastest response times ever reported
DataCore Software Corporation announced a world record for price-performance using the industry's recognised, peer-reviewed storage benchmark: the Storage Performance Council's SPC-1.


Thanks largely to its parallel I/O software, which harnesses the untapped power of multi-core processors, DataCore posted an audited SPC-1 price-performance of $0.08 (roughly £0.05) per SPC-1 IO/s [1], making it the clear-cut leader in SPC-1 price-performance overall. DataCore certified its results on a powerful but compact 2U Lenovo System x3650 M5 multi-core server featuring Xeon E5-2600 v3 series processors with a mix of flash SSD and disk storage. On this same platform, the company also recorded the fastest response times ever attained, even compared to the many all-flash arrays and multi-million dollar name-brand systems that have published SPC-1 results.

"With these first certified results, DataCore has put a stake in the ground to demonstrate our parallel I/O performance and hyper-converged capability. For us, this is just the beginning. Look for future benchmarking to incorporate multi-node HA configurations and to demonstrate I/O originating from both inside and outside the servers - the future for all storage systems," stated Ziya Aral, chairman, DataCore. "We have only just begun to show the potential of our inherently parallel I/O architecture."

Hyper-converged system handles compute, parallel I/O processing and storage workloads
Notably, the record-breaking price-performance results were achieved on a hyper-converged solution capable of servicing both enterprise storage requirements and demanding database and transaction processing application workloads - all running together on the same platform.

Hyper-converged systems must demonstrate that they can cost-efficiently handle combined enterprise storage and application workloads. Unlike SPC-1 results that characterise only external storage systems, excluding the servers used to generate the load, DataCore's $0.08 per SPC-1 IO/s result includes the cost to generate the workload and therefore encompasses the total cost and end-to-end requirements for running the enterprise application.

"We'd like to see others, like Nutanix and SimpliVity, publish SPC-1 benchmark numbers to reveal how they fare against our record-breaking SPC-1 Price-Performance results. Then customers can clearly assess the cost implications of these alternatives," challenges George Teixeira, CEO, DataCore. "There's been much speculation about how these systems perform under I/O-intensive workloads generated by mission-critical enterprise applications. Using the peer-reviewed SPC-1 full disclosure process provides an objective frame of reference for making comparisons prior to any buying decisions."

The Results: Record-breaking price-performance for both storage and hyper-converged systems
For the benchmark, the company used an off-the-shelf, hyper-converged system targeting enterprise OLTP and latency-sensitive database applications, rated at 459,290.87 SPC-1 IO/s with a total cost for hardware, software and three years of support of $38,400.29 - the top SPC-1 price-performance result at $0.08 per SPC-1 IO/s. That is one-third the cost per SPC-1 IO/s of the previous record, $0.24 [2], attained by the Infortrend EonStor DS 3024B, and less than 25% of the cost of popular top-of-the-line storage arrays including the EMC VNX 8000 [3], NetApp EF560 [4] All Flash Array, Dell Storage SC4020 [5] and HP 3PAR StoreServ 7400 [6].
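
The headline figure follows directly from the two audited numbers above - total price divided by the SPC-1 IO/s rating. This short Python snippet simply reproduces the arithmetic:

```python
# Reproduce the SPC-1 price-performance arithmetic from the audited figures.
total_price_usd = 38_400.29   # hardware, software and three years of support
spc1_iops = 459_290.87        # audited SPC-1 IO/s rating

price_performance = total_price_usd / spc1_iops
print(f"${price_performance:.4f} per SPC-1 IO/s")  # -> $0.0836, reported as $0.08

# The previous record of $0.24 per SPC-1 IO/s is roughly three times higher.
print(f"{0.24 / price_performance:.2f}x better than the prior record")  # -> 2.87x
```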

DataCore and IBM are the only companies to benchmark a hyper-converged system where the SPC-1 applications and the storage workloads they generate are both serviced on the same platform. This means that DataCore's $38,400.29 price includes not only the storage components, but all of the host server resources and the hypervisor software needed to run the enterprise database/OLTP workloads generated by the benchmark. For comparison, the only other hyper-converged system with publicly reported SPC-1 results is an IBM Power 780 server, with an SPC-1 price-performance result of $4.56 per SPC-1 IO/s [7]. That system attained 780,081.02 SPC-1 IO/s at a total price of $3,557,709.00 - roughly 93 times the price of DataCore's solution.

DataCore's adaptive parallel I/O technology exploits the power of multi-core CPUs
The price-performance ratings can be attributed in major part to DataCore's Adaptive Parallel I/O techniques, intrinsic to the design of the SANsymphony-V software-defined storage services platform. The software executes many independent I/O streams simultaneously across multiple CPU cores, reducing the latency to service and process I/Os by taking full advantage of cost-effective but dense multi-core servers such as the Lenovo System x machines. Competing products serialise I/O, limiting their throughput and slowing their response times.
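
DataCore has not published its implementation, but the general pattern the paragraph describes - fanning independent I/O requests out to workers on separate cores rather than funnelling them through one service thread - can be sketched in a few lines of Python. The worker count, request list and `service_io` placeholder below are illustrative assumptions, not DataCore code:

```python
import os
from concurrent.futures import ThreadPoolExecutor

def service_io(request):
    """Placeholder for servicing one I/O request (e.g. a read or write to SSD/disk).
    Real device I/O releases the GIL, so these threads genuinely overlap."""
    return f"completed {request}"

requests = [f"io-{n}" for n in range(10_000)]  # many independent I/O streams

# Serialised model: a single thread drains the whole queue; latency grows with depth.
serial_results = [service_io(r) for r in requests]

# Parallel model: one worker per logical core services the same queue concurrently.
with ThreadPoolExecutor(max_workers=os.cpu_count()) as pool:
    parallel_results = list(pool.map(service_io, requests))
```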


"Lenovo initially approached us to run the demanding SPC-1 enterprise workload benchmark. They wanted proof we could fully harness the power of their multi-core servers given the abundance of unsubstantiated performance claims circulating in hyper-converged circles," continued Teixeira. "They soon realized with parallel I/O, we had a rocket ship in our hands."

"Lenovo is excited to partner with DataCore to disrupt the storage marketplace providing customers the best price and performance in the industry" stated Chris Frey, VP and GM, Lenovo North America. "DataCore's industry-leading SPC-1 results on Lenovo System x demonstrate the performance, innovation and reliability that Lenovo is delivering to meet the growing storage needs to our customers."

SPC-1 Benchmark - Tested/priced configuration
The SPC-1 performance testing is designed to demonstrate a system's performance capabilities for business-critical enterprise workloads typically found in database and transaction processing environments. The audited configuration that was tested and priced comprises SANsymphony-V parallel I/O software on a Lenovo System x3650 M5 multi-core server featuring Xeon E5-2600 v3 series processors running Windows Server, equipped with 16 SSDs and 8 HDDs. SANsymphony-V also supports Hyper-V, VMware ESXi, Linux KVM and other hypervisor-based solutions, and can run directly on Windows servers when server virtualisation is not appropriate.

'Less is more' with hyper-converged virtual SAN and software-defined storage
SANsymphony-V software reduces the I/O limitations and bottlenecks that restrict the number of VMs and workloads that can be consolidated on server and hyper-converged platforms. The software enables industry-standard x86 servers to gain the essential storage functionality needed to meet today's demanding tier-1 business application requirements. It runs on off-the-shelf servers and works infrastructure-wide across all types of storage (flash, disk and cloud) to automate and optimise performance and resource allocation.

DataCore's parallel I/O software takes advantage of today's advanced generation of multi-core server platforms - allowing companies to increase productivity and server consolidation savings by supporting the I/O needed to run more VMs and more application workloads faster and at a much lower cost.

Monday, 11 January 2016

2016: The Parallel IO Revolution is Underway, Servers are the New Storage

By George Teixeira, CEO and President, DataCore Software
Key Points Shaping DataCore’s Views in 2016 

Parallel I/O software and multicore technology will transform IT productivity in 2016.

The melding of the server and storage worlds, along with advances in parallel I/O software, will revolutionize business productivity and transform our industry. Similar to server virtualization, the impact will be dramatic. Here are some of the key points shaping DataCore's views in 2016:

1. Servers are the new storage
A major transformation is underway as traditional storage systems are replaced by commodity servers and software-defined solutions that can harness their power to solve the growing storage problem. Simply put, storage and data services will inevitably become yet another 'application workload' running on cost-efficient server platforms. This new wave of server-based storage systems is already having an impact. They are being marketed as server-SANs, virtual SANs, web-scale, scale-out and hyper-converged systems. However, when you look underneath the fancy marketing, they are pretty much a collection of standard off-the-shelf servers, flash cards and disk drives - it is the software that defines their value differentiation.


Why has this change happened? Traditional storage vendors with specialized systems can no longer keep up with Moore's Law and the pace of cost savings and innovations that generic server platforms can deliver. Dell buying EMC is indicative of the change and the need to merge the server and storage worlds to remain competitive. Parallel I/O software and the ability to harness multicore server technology will be a major game-changer in 2016. In combination with software-defined storage, it will lead to a productivity revolution and establish 'servers as the new storage.'

2. Parallel I/O software and multi-core technology will revolutionize the IT world in 2016

The modern microprocessor universe started in the 1970s, and together with Moore's Law it drove two major paths of technology advance. The first produced faster, more efficient uniprocessors, which led directly to the PC revolution and to today's pervasive use of microprocessors in everything from smartphones to intelligent devices. The second path was parallel computing, which set out to harness the power of many microprocessors. While parallel computing started with a flurry, the pace of advances was ultimately stifled by a lack of commodity parallel computing hardware, by the overshadowing and rapid advances in uniprocessor clock speeds that resulted from Moore's Law and, most importantly, by the lack of available software to do parallel work. Parallel computing therefore remained, for the most part, an exotic discipline that required too much specialization for general business use.


While faster clock speeds drove the PC revolution, what went unnoticed was that the silicon vendors began to put many cores on the same die (more transistors became more cores), and the result is that multicores are everywhere. In effect, parallel processing power is now readily available, but there is still a lack of software to fully use it. Bottom line: the promised parallel computing revolution as a generic capability was put on hold, awaiting software to advance. We are now at that critical turning point with software.

The parallel processing revolution is happening right now. DataCore recently set a new world record on price-performance, and did it on a hyper-converged platform (on the Storage Performance Council's peer-reviewed SPC-1 benchmark). DataCore also reported the best performance per footprint and the fastest response times ever. Bottom line: today's multicore servers and software can 'do far more with less' and dramatically change the economics and productivity one can achieve.

Parallel I/O software will overcome the I/O bottleneck holding back our industry. It harnesses the power of multicores to dramatically increase productivity - and as a result it will revolutionize the industry.

3. Dramatic performance and productivity gains will transform hyper-converged and software-defined storage; get ready for a giant leap forward in 2016 

Finally, the hype around hyper-converged systems has continued to grow. From the marketing, one would believe they are the panacea for all problems. However, consumers and enterprises are realizing that these systems create new silos to manage, and that current offerings have multiple limitations, particularly in the scale and performance needed to handle enterprise-class workloads effectively. As 2016 progresses, many customers will find themselves looking for solutions that bring the ease-of-use benefits but can also be integrated easily into company infrastructures, alongside both existing investments and future technologies. Users are looking forward to the next stage of hyper-converged technology deployments, where they don't have to sacrifice performance and interoperability with the rest of their investments.

Only a software-defined storage layer combined with parallel I/O software can effectively manage the power of multicore servers, migrate and manage data across the entire storage infrastructure, incorporate flash and hyper-converged systems without adding extra silos, and effectively utilize data stored anywhere in the enterprise or in the cloud. By tapping the unused power within standard multi-core servers, data infrastructures will realize tremendous consolidation and productivity benefits from parallel I/O technologies.

The impact is dramatic: it translates into much greater cost savings and productivity by allowing a new level of consolidation far beyond server virtualization alone and by enabling systems to truly 'do more with less.' Application performance, enterprise workloads and greater consolidation densities on virtual platforms won't have to be held back by the growing gap between compute and I/O.

This combination of powerful software and servers will drive greater functionality, more automation, and comprehensive services to productively manage and store data across the entire data infrastructure. It will lead to a new era where "servers are the new storage" and the benefits of multi-core parallel processing can be applied universally. These advances, which are already before us, are key to solving the problems caused by slow I/O and inadequate response times - problems that have held back application workload performance and the cost savings of consolidation. Collectively, these advances (multicore processing, parallel I/O and software-defined storage) are fundamental to achieving the next giant leap forward in business productivity.

SearchStorage: Multicore processors mark next era of storage…Tick-Tock

Multicore processor technology not only represents the next era in storage; it also shows that everything old is new again.

From time to time, in presentations by tech vendors, one hears reference to a "tick-tock." Tick-tock is jargon describing a perceived pattern in the events that occur over a designated time frame. In recognizing such a pattern, the tick-tock narrative provides an orderly perspective on the seemingly great disorder of technological advancement, while at the same time providing a framework for predicting the future. Both make us feel like the future is less scary.

As we'll discuss in this article, multicore processors could very well be the next tick-tock for storage…

…Multicore the new tick-tock
Multicore processors have been the basis of the new tick-tock for some time now. Year after year, we are presented with CPUs offering double the number of processor cores on the same die, even though chip speeds have not increased significantly or at all…


…Unleash the power of multicore, multithreading chips
To really unlock the potential power of multicore processors and multithreading chips, we would need to get back to multiprocessing, parallel computing design.

DataCore Software is the first to revisit these concepts, which co-founder and Chairman of the Board Ziya Aral helped to pioneer in the 1980s. The company has found a way to take a user-designated portion of the logical cores available on a server and to allocate them specifically for storage I/O handling.

The technique they are using is becoming increasingly granular and will eventually enable very specific allocation of processor resources to the I/O processing of discrete workloads. Best of all, once set, it is adaptive and self-tunes the number of cores being used to handle I/O workloads.
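
The article does not spell out the mechanism, but one generic way to express that kind of core reservation on Linux is CPU affinity. The sketch below illustrates the concept only - the core set is an arbitrary choice, and this is not DataCore's actual code:

```python
import os

# A user-designated slice of the server's logical cores, reserved for storage I/O.
# Arbitrary example: cores 0-3 of a larger multi-core machine.
IO_CORES = {0, 1, 2, 3}

if __name__ == "__main__":
    os.sched_setaffinity(0, IO_CORES)  # pid 0 = the calling process (Linux only)
    print("I/O workers confined to cores:", sorted(os.sched_getaffinity(0)))
    # ... I/O worker threads started here inherit the restriction, leaving the
    # remaining cores free for application workloads ...
```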

DataCore's Storage Performance Council SPC-1 benchmark numbers are telling: they have blown the socks off the hardware guys in terms of storage performance while reducing the cost per I/O well below that of the current low-cost leader -- using any and all off-the-shelf interconnects and storage devices.

We are about to enter a whole new era with a completely new tick-tock for storage -- and perhaps for the full server-network-storage stack -- based on multiprocessor architecture and engineering applied to multicore processor-driven systems.

Everything old is new again.


Sunday, 10 January 2016

Multi-Core Chips, I/O Processing and More: Virtualization Review's Crystal Ball Predictions for 2016


Virtualization Review’s The Infrastruggle Blog: 4 Possibly Correct Predictions for 2016

Excerpt:
…Better Use of Multi-Core Chips 
With the release of DataCore Software Parallel I/O technology, I expect to see a flood of parallel I/O woo enter the market. Parallel I/O involves the use of spare logical CPU cores ganged together into a very fast I/O processing engine to deliver phenomenal throughput improvements without much cost (you already own the multi-core processor). DataCore has paved the way to an extremely low-cost, high-performance storage tier by combining its P-I/O algorithm with its storage virtualization capabilities that include adaptive caching and interconnect load balancing. I suspect that many vendors will seek to pursue a comparable strategy, though most lack the experience in multiprocessor architecture that DataCore still has on staff.

Read Toigo’s full post and predictions at 4 Possibly Correct Predictions for 2016

1. The Zettabyte Apocalypse Will Not Come in 2016
2. Better Use of Multi-Core Chips
3. Tape Will Continue Its Comeback
4. Mainframes Are Cool Again



Monday, 4 January 2016

Star Wars, the Force and the Power of Parallel Multicore Processing: Getting More Out of Virtualized Workloads

By George Teixeira, President & CEO, DataCore Software

During the ’80s, the original Star Wars movies featured amazing future technology and were all about “the power of the Force.” The latest movie has now broken all box office records and got me thinking about how much IT and computing technology has progressed over the years - and yet how much is still left untapped.

Yes, several of the envisioned gains have come true – many of these driven by Moore’s Law and the growing force of the microprocessor revolution. For example, server virtualization software such as VMware radically redefined consolidation savings and productivity, CPU clock speeds got faster and microprocessors became commodities used everywhere – powering PCs, laptops, smart phones and intelligent devices of all types. But the full force and promise of using many microprocessors in parallel, what is now called ‘multicores,’ still remains largely untapped and I/O continues to be the major bottleneck holding back the IT industry from achieving the next revolution in consolidation, performance and productivity.

Virtual computing is still bottlenecked by I/O. Just as city drivers can only dream about flying vehicles as gridlock haunts their morning commute, IT is left wondering if they will ever see the day when application workloads will reach light speed.

How can it be that with multi-core processing, virtualized apps, abundant RAM and large amounts of flash, you still have to deal with I/O-starved virtual machines (VMs) while many processor cores remain idle? Yes, you can run several independent workloads at once on the same server using separate CPU and memory resources, but that’s where everything begins to break down. The many workloads in operation generate concurrent I/O requests, yet only one core is charged with I/O processing. This architectural limitation strangles the life out of application performance. Instead of one server doing vast quantities of work, IT is forced to add more servers and racks to deal with I/O bottlenecks - and this sprawl goes against the ‘consolidation and productivity savings’ that are the basic premise and driver of virtualization.

All it takes, then, is a few VMs running simultaneously on multi-core processors, churning out almost inconceivable volumes of work, and you quickly overwhelm the one core tasked with serial I/O. Instead of a flood of accomplished computing, a trickle of I/O emerges. IT is left feeling like the kids who grew up watching Star Wars who ask: where are our flying starships, and when can we travel at light speed?!
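
A deliberately simplified back-of-envelope model makes the bottleneck arithmetic concrete. Every rate below is invented purely for illustration:

```python
# Toy model of the serial I/O bottleneck; all numbers are invented for illustration.
per_core_capacity = 100_000       # I/O requests one core can service per second
vms, iops_per_vm = 10, 15_000     # concurrent VMs and the load each one generates

offered_load = vms * iops_per_vm  # 150,000 requests/s arriving

# One core charged with all I/O is oversubscribed: queues grow without bound.
print("1 I/O core:", offered_load / per_core_capacity)         # utilization 1.5

# The same load spread across 4 cores leaves comfortable headroom.
print("4 I/O cores:", offered_load / (4 * per_core_capacity))  # utilization 0.375
```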

The good news is that all is not lost. DataCore has a number of bright minds hard at work bringing a revolutionary breakthrough for I/O to prime time: DataCore Parallel I/O technology lets virtualized traffic flow through without slowdown. Its software-defined parallel I/O architecture is designed to capitalize on today’s powerful multi-core/parallel processing infrastructure. By enlisting software to drive I/O processing across many different cores simultaneously, it eradicates I/O bottlenecks and drives a higher level of consolidation savings and productivity. The better news is that this technology is already on the market today.

Just as Star Wars has shattered box office records, check out how DataCore recently set a new world record on price-performance, on a hyperconverged system, in the Storage Performance Council’s peer-reviewed SPC-1 benchmark. DataCore also reported the best performance per footprint and the fastest response times ever; so while the numbers do not actually reach light speed, DataCore has lapped the field not once but multiple times. See for yourself the latest benchmark results in this article that appeared in Forbes: The Rebirth of Parallel I/O.

How? DataCore’s software actively senses the I/O load being generated by concurrent VMs. It adapts and responds dynamically by assigning the appropriate number of cores to process the input and output traffic. As a result, VMs no longer sit idle waiting for a serial I/O thread to become available. Should the I/O load lighten, CPU cores are freed to do more computational work.
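
As a rough illustration of that adapt-and-release loop - grow the pool of I/O workers when the queue deepens, retire workers (freeing their cores for compute) when the load lightens - here is a toy sketch. The thresholds, poll interval and queue depths are invented for illustration and bear no relation to DataCore’s actual tuning:

```python
import queue
import threading
import time

io_queue = queue.Queue()   # pending I/O requests from all VMs
workers = []               # currently active I/O worker threads
MAX_WORKERS = 8            # e.g. cap at the number of spare logical cores

def io_worker():
    while True:
        req = io_queue.get()
        if req is None:    # sentinel: this worker retires, freeing its core
            return
        # ... service the I/O request here ...
        io_queue.task_done()

def autoscaler(poll_interval=0.1):
    """Toy feedback loop: add workers under deep queues, retire them when idle."""
    while True:
        depth = io_queue.qsize()
        if depth > 100 and len(workers) < MAX_WORKERS:
            t = threading.Thread(target=io_worker, daemon=True)
            t.start()
            workers.append(t)
        elif depth < 10 and len(workers) > 1:
            io_queue.put(None)   # ask one worker to retire
            workers.pop()
        time.sleep(poll_interval)
```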

This not only solves the immediate performance problem facing multi-core virtualized environments; it also significantly increases the VM density possible per physical server. It allows IT to do ‘far more with less.’ This means fewer servers or racks and less space, power and cooling are needed to get the work done. In effect, it achieves remarkable cost reductions through maximum utilization of CPU cores, memory and storage while fulfilling the productivity promise of virtualization.

You can read more about this in DataCore’s white paper, “Waiting on I/O: The Straw that Broke Virtualization’s Back.”