Tuesday 22 September 2015

Virtualisation Review: Storage Virtualisation and the Question of Balance - Parallel I/O to the Rescue

Dan's Take: It's Time to Consider Parallel I/O

"DataCore has been working for quite some time on parallel storage processing technology that can utilize  excess processing capability without also creating islands of storage technology. When Lenovo came to DataCore with a new, highly-parallel hardware design and was looking for a way to make it perform well, DataCore's software technology came to mind. DataCore made it possible for Lenovo's systems to dynamically use their excess processing capacity to accelerate virtualized storage environments. The preliminary testing I've seen is very impressive and shows a significant reduction in cost, while also showing improved performance. I can hardly wait to see the benchmark results when they're audited and released."

Focusing too much on processors leads to problems.

The storage virtualization industry is repeating an error it made long ago in the early days of industry-standard x86 systems: a focus on processing performance to the exclusion of the other factors of balanced system design.
Let's take a stroll down memory lane and then look at the problems storage virtualization is revealing in today's industry standard systems.
Balanced Architectures
In a balanced system design, resources such as processing, memory, networking and storage are consumed at about the same rate. That is, there are enough resources in each category so that when the target workload is imposed on the system, one resource doesn't run out while others still have capacity to do more work.
The type of workload, of course, has a great deal to do with how system architectures should be balanced. A technical application might use a great deal of processing and memory, but may not use networking and storage at an equal level. A database application, on the other hand, might use less processing but more memory and storage. A service-oriented architecture application might use a great deal of processing and networking power, but less storage and memory than the other types of workloads.

A properly designed system can do more work at less cost than an unbalanced one. In short, a system with an excess of processing capability relative to its other resources may do quite a bit less work at a higher overall price than a system that's better balanced.
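To make the arithmetic concrete, here's a minimal back-of-the-envelope sketch in Python. All demand and capacity figures are invented for illustration; the point is simply that the resource with the highest utilization ratio caps the whole system, however much processor headroom remains:

```python
# Hypothetical sketch: finding the bottleneck resource in a system design.
# All demand and capacity figures below are invented for illustration.
workload = {"cpu": 40, "memory": 120, "network": 8, "storage_io": 9000}    # demand
system   = {"cpu": 200, "memory": 256, "network": 10, "storage_io": 10000} # capacity

def utilization(demand, capacity):
    return {r: demand[r] / capacity[r] for r in demand}

util = utilization(workload, system)
bottleneck = max(util, key=util.get)

for resource, u in sorted(util.items(), key=lambda kv: -kv[1]):
    print(f"{resource:>10}: {u:.0%} utilized")
print(f"bottleneck: {bottleneck}")
```

In this made-up configuration, storage I/O saturates at 90% while the processors idle at 20%; buying a faster processor would not make the system do any more work.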

Mainframes to x86 Systems
Mainframe and midrange system designers worked very hard to design systems for each type of workload. Some systems offered large amounts of processing and memory capacity. Others offered more networking or storage capacity.
Eventually, Intel and its partners and competitors broke through the door of the enterprise data center with systems based on high-performance microprocessors. The processor benchmark data for these systems was impressive. The rest of the system, however, often was built using the lowest cost, off-the-shelf components.
Enterprise IT decision makers often selected systems based upon a low initial price without considering balanced design or overall cost of operation. We've seen the impact this thinking has had on the market. Systems designed with expensive error-correcting memory, parallel networking and storage interconnects often lose out to low-cost systems having none of those "mainframe-like" enhancements.
This means that if we walked down a row of systems in a typical datacenter, we'd see systems having under-utilized processing power trying to drive work through configurations having insufficient memory and/or networking and storage bandwidth.
To address performance problems, enterprise IT decision makers often just purchase larger systems, even though the original systems have enough processing power; an unbalanced storage architecture is the problem.

Enter Storage and Networking Virtualization
As industry-standard systems become virtualized environments, the industry is seeing system utilization and balance come to the forefront again. Virtualization technology takes advantage of excess processing, memory, storage and networking capability to create artificial environments that offer important benefits.
While virtual processing technology is making more use of industry-standard systems' excess capacity to create benefits, other forms of virtualization are stressing systems in unexpected ways.
Storage virtualization technology often uses system processing and memory to create benefits such as deduplication, compression, and highly available, replicated storage environments. Rather than put this storage-focused processing load on the main systems, some suppliers push the work onto their own proprietary storage servers.
While this approach offers benefits, it also means that the data center becomes a set of proprietary storage islands, and that scaling up or down can be complicated or costly.
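To illustrate one of those storage-side services, here is a toy content-addressed deduplication sketch in Python. It is purely illustrative, not any vendor's implementation; real SDS stacks deduplicate at the block level with far more machinery:

```python
# Toy content-addressed deduplication: identical blocks are stored once.
import hashlib

store = {}   # digest -> block payload (physical storage)
index = []   # logical order of block digests (what the host "sees")

def write_block(data: bytes):
    digest = hashlib.sha256(data).hexdigest()
    store.setdefault(digest, data)  # a repeated block costs only an index entry
    index.append(digest)

for block in [b"alpha", b"beta", b"alpha", b"alpha"]:
    write_block(block)

print(len(index), "logical blocks ->", len(store), "physical blocks stored")  # 4 -> 2
```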
Another point is that many industry-standard operating systems largely serialize I/O; that is, they do one storage task at a time. This means that only a small fraction of a system's processing capability is devoted to servicing storage and networking requests, even if sufficient capacity exists to do more work.

Parallel I/O to the Rescue
If we look back at successful mainframe workloads, it's easy to see that the system architects made it possible to add storage and networking capability as needed. Multiple storage processors could be installed so that storage I/O could expand to support the work, and the same was true of network processors. Many industry-standard system designs have a great deal of processing power, but the software they host doesn't assign the excess capacity to storage or network tasks, due to the design of the operating systems.
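The cost of that serialization is easy to demonstrate. The toy Python sketch below (illustrative only; it models I/O waits with sleeps and is in no way DataCore's implementation) issues the same 32 independent requests one at a time, then across a small worker pool:

```python
# Illustrative contrast between serialized and parallel dispatch of
# independent I/O requests. Not DataCore's implementation.
import time
from concurrent.futures import ThreadPoolExecutor

def io_request(i):
    time.sleep(0.05)   # stand-in for a disk or network round trip
    return i

requests = range(32)

start = time.perf_counter()
serial = [io_request(i) for i in requests]        # one storage task at a time
serial_secs = time.perf_counter() - start

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=8) as pool:   # spread requests across workers
    parallel = list(pool.map(io_request, requests))
parallel_secs = time.perf_counter() - start

print(f"serial:   {serial_secs:.2f}s")
print(f"parallel: {parallel_secs:.2f}s (same work, roughly 8x faster)")
```

The work is identical in both passes; only the dispatch changes. That, in miniature, is the opportunity parallel I/O software sets out to exploit.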
DataCore has been working for quite some time on parallel storage processing technology that can utilize excess processing capability without also creating islands of storage technology. When Lenovo came to DataCore with a new, highly-parallel hardware design and was looking for a way to make it perform well, DataCore's software technology came to mind. DataCore made it possible for Lenovo's systems to dynamically use their excess processing capacity to accelerate virtualized storage environments. The preliminary testing I've seen is very impressive and shows a significant reduction in cost, while also showing improved performance. I can hardly wait to see the benchmark results when they're audited and released.

Dan's Take: It's Time to Consider Parallel I/O
In my article "The Limitations of Appliance Servers," I pointed out that we've just about reached the end of deploying a special-purpose appliance for each and every function. The "herd-o'-servers" approach to computing has become too complex and too costly to manage. I would point to the emergence of "hyperconverged" systems in which functions are being brought back into the system as a case in point.
Virtual systems need virtual storage. Virtual storage needs access to processing, memory and networking capability to be effective. DataCore appears to have the technology to make this all work.

About the Author
Daniel Kusnetzky, a reformed software engineer and product manager, founded Kusnetzky Group LLC in 2006. He's literally written the book on virtualization and often comments on cloud computing, mobility and systems software. He has been a business unit manager at a hardware company and head of corporate marketing and strategy at a software company. In his spare time, he's also the managing partner of Lux Sonus LLC, an investment firm.

Monday 21 September 2015

ComputerWeekly: Bradford Grammar School graduates from Falconstor and Starwind to DataCore for software-defined storage

By Anthony Adshead: http://www.computerweekly.com/news/4500253781/Bradford-Grammar-School-graduates-to-DataCore-for-software-defined-storage
Bradford Grammar School has implemented DataCore storage software in front of DotHill arrays in a move that has seen it adopt an entirely software-defined storage environment to gain advanced functionality while cutting costs on expensive SAN hardware.
The deployment is the conclusion of a path that has seen it move from IBM storage hardware with Falconstor, then to Starwind storage virtualisation products, and finally to DotHill arrays completely managed by DataCore storage software.
...Bradford Grammar School deployed Falconstor seven years ago to gain replication functionality between IBM and DotHill arrays at the Bradford site. But Falconstor eventually proved expensive, as the school had to pay increased licence fees as capacity grew, said network manager Simon Thompson.
...From here the school moved to Starwind Virtual SAN software, which didn't charge according to the storage capacity under its management. But after two years and a forced upgrade, it ran into problems that knocked out replication and made data for virtual machines inaccessible, said Thompson.
...So, this year the school deployed DataCore SANsymphony version 10.1 on two Dell servers. These act as a software-defined storage front end to two DotHill 3430 SANs, with synchronous replication between them and mirroring to a hosted disaster recovery site in the centre of Bradford.
Thompson said: “DataCore is doing clever stuff that DotHill can't do, or stuff that they can do but doing it in a better way. We can replicate data off-site in real time, which we couldn't do previously. We needed to replicate everything every 12 hours.”
The use of automated storage tiering functionality in DataCore has seen the school deploy flash storage in the DotHill arrays. DataCore moves frequently used data to flash so it can be accessed rapidly.
Thompson said: “The benefits are that it works. It replicates, it mirrors and the contrast in performance is like night and day compared to before.”
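The tiering behaviour Thompson describes comes down to promoting frequently accessed blocks to the flash tier. Here is a deliberately simplified Python sketch; the block IDs, threshold and single measurement window are invented, and a product like SANsymphony uses far richer heat-mapping than this:

```python
# Simplified auto-tiering: promote hot blocks from disk to flash.
# Invented threshold and structures; illustration only.
from collections import Counter

access_counts = Counter()
flash_tier, disk_tier = set(), set(range(100))  # block IDs, all starting on disk
HOT_THRESHOLD = 5

def record_access(block):
    access_counts[block] += 1

def rebalance():
    for block, count in access_counts.items():
        if count >= HOT_THRESHOLD and block in disk_tier:
            disk_tier.discard(block)
            flash_tier.add(block)   # hot block promoted to flash
    access_counts.clear()           # start a fresh measurement window

for _ in range(6):
    record_access(7)                # block 7 is hot
record_access(42)                   # block 42 stays cold
rebalance()
print(sorted(flash_tier))           # [7]
```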

Tuesday 8 September 2015

VMworld 2015: DataCore Unveils Revolutionary Parallel I/O Software; Proven Designs, "Less is More" Hyperconverged...

VMworld 2015 News Roundup – Slideshow of Top Stories
This week virtualization giant VMware (VMW) held its annual VMworld customer conference in San Francisco, and as always there was no shortage of virtualization-centric news from partner companies. Since reading pages and pages of press releases is no fun for anyone, we decided to compile some of the biggest announcements going on at this year’s show.

DataCore Unveils Parallel I/O Software
Software-defined storage vendor DataCore Software unveiled its new parallel I/O software at VMworld, designed to help users eliminate bottlenecks associated with running multi-core processing systems. The company also announced a new worldwide partnership with Curvature to provide users with a procurement and lifecycle model for their storage products, data services and centralized management.
Read more here.




Why Parallel I/O Software and Moore’s Law Enable Virtualization and Software-Defined Data Centers to Achieve Their Potential

VirtualizationReview - Hyperconvergence: Hype and Promise
The field is evolving as lower-cost options start to emerge.
…Plus, the latest innovation from DataCore -- something called Parallel I/O that I'll be writing about in greater detail in my next column -- promises to convert that Curvature gear (or any other hardware platform with which DataCore's SDS is used) into the fastest storage infrastructure on the planet -- bar none. Whether this new technology from DataCore is used with new gear, used gear, or to build HCI appliances, it rocks. More later.

SiliconAngle: Back to basics: Why we need hardware-agnostic storage | #VMworld
In a world full of hyper this and flash that, George Teixeira, president and CEO of DataCore Software Corp., explained how going back to the basics will improve enterprise-level storage solutions.
Teixeira and Dustin Fennell, VP and CIO of EPIC Management, LP, sat down with Dave Vellante on theCUBE from the SiliconANGLE Media team at VMworld 2015 to discuss the evolution of architecture and the need to move toward hardware-agnostic storage solutions.

VMworld the Cube: Video Interview on DataCore and Parallel I/O: https://www.youtube.com/watch?t=16&v=wH6Um_wUxZE

IT-Director on VMworld 2015: DataCore Unveils Revolutionary Parallel I/O Software
DataCore shows its hyper-converged 'less is more' architecture

DataCore Launches Proven Design Reference Architecture Blueprints for Server Vendors
Lenovo, Dell, Cisco and HP:


Virtualization World: DataCore unveils 'revolutionary' parallel I/O software

More Tweets from the show:

Make any storage or Flash #VVOL compatible with our #Software-defined Storage Stack #SSD #virtualization #VMworld http://www.datacore.com 




Check out our latest pictures from the show and tweets live from VMworld at: https://twitter.com/datacore




#VMworld DataCore Parallel IO Software is the 'Killer App' for #virtualization & #Hyperconverged systems...stop by booth 835 pic.twitter.com/IbcTaTmfpv

Great to see the crowds at #VMworld learning more about DataCore’s Parallel IO, #VSAN, Hyperconverged & Software-defined Storage pic.twitter.com/chCZZ7H4x3

VMworld 2015: DataCore Unveils Revolutionary Parallel I/O Software
New Hyper-Converged Reference Architectures, Real World VMware User Case Studies and Virtual Server Performance Breakthroughs Also on Display



SAN FRANCISCO, CA, August 31, 2015 -- DataCore Software, a leader in Software-Defined Storage, will use the backdrop of VMworld 2015 to show its hyper-converged ‘less is more’ architecture. Most significantly, VMware customers and partners will see first-hand DataCore’s adaptive parallel I/O harnessing today’s multi-core processing systems to eliminate the major bottleneck holding back the IT industry – I/O performance.
"It really is a perfect storm," said DataCore Chairman Ziya Aral. "The combination of ever-denser
multi-core processors with efficient CPU/memory designs and DataCore’s parallel I/O software create a new class of storage servers and hyper-converged systems that change the math of storage performance in our industry...and not by just a little bit. As we publish an ever-wider array of benchmarks and real-world performance results, the real impact of this storm will become clear."

At booth #835, DataCore’s staff of technical consultants will discuss the state-of-the-art techniques used to achieve the much greater VM densities needed to respond to the demanding I/O needs of enterprise-class, tier-1 applications. DataCore will highlight performance optimizations for the intense data processing and I/O workloads found in mainstream online transaction processing (OLTP) systems, real-time analytics, business intelligence and data warehouses. These breakthroughs have proven most valuable in the mission-critical line-of-business applications based on Microsoft SQL Server, SAP and Oracle databases that are at the heart of every major enterprise.

Other announcements and innovations important to VMware customers and partners will also be featured by DataCore at VMworld. These include:
·         Hyper-converged software solutions for enterprise applications and high-end OLTP workloads utilizing DataCore™ Adaptive Parallel I/O software
·         New ‘Proven Design’ reference architectures for Lenovo, Dell, Huawei, Fujitsu and Cisco servers spanning high-end, midrange and smaller configurations
·         A new worldwide partnership with Curvature to provide users a novel procurement and lifecycle model for storage products, data services and centralized management that is cost-disruptive
·         Preview of DataCore’s upcoming VVOL capabilities
·         Stretch cluster capabilities ideal for splitting hyper-converged systems over metro distances

Breakout Sessions
·         DataCore and VMware customer case study featuring Mission Community Hospital: “Virtualizing an Application When the Vendor Says ‘No’” in the Software-Defined Data Center track -- Monday, August 31, 2015 at 12:00 p.m.

·         “Lenovo Servers in Hyper-Converged and SAN Storage Roles” Learn how Lenovo servers are being used in place of traditional storage arrays to handle enterprise-class storage requirements in hyper-converged clusters as well as external SANs. Uncover the agility and cost savings you can realize simply by adding DataCore Software to Lenovo systems. Two theater presentations -- Tuesday, September 1, and Wednesday, September 2 at 3:30 p.m. in the Lenovo Booth #1537
·         “Less is More with Hyper-Converged: When is 2>3” and “Efficiently Scaling Hyper-Converged: How to Avoid Buyers’ Remorse” Daily in the DataCore Booth #835

About VMworld
VMworld 2015 U.S. takes place at San Francisco’s Moscone Center from August 30 through September 3, 2015.  It is the industry's largest virtualization and cloud computing event, with more than 400 unique breakout sessions and labs, and more than 240 sponsors and exhibitors. To learn more about VMworld, please visit: www.vmworld.com 



DataCore Announces Proven Designs and Reference Architectures - HP, Cisco, Lenovo, Dell, Fujitsu, Huawei...

The new DataCore Proven Designs are tested, field-proven reference architectures that combine DataCore’s Software-Defined Storage (SDS) with preferred partner solutions, making it simple for customers and partners to deploy DataCore software on servers from these vendors. Examples below:

HP and DataCore Proven Design

Hyper-converged Solutions powered by DataCore's Software-Defined Storage platform and HP ProLiant Servers

Cisco and DataCore Proven Design

Hyper-converged Solutions powered by DataCore's Software-Defined Storage platform and Cisco UCS Servers

Lenovo and DataCore Proven Design

Hyper-converged Solutions powered by DataCore's SDS platform and Lenovo System x-Series Servers

Dell and DataCore Proven Design

Hyper-converged Solutions powered by DataCore's Software-Defined Storage platform and Dell PowerEdge Servers
For example, the DataCore and Dell validated hyper-converged solutions are easy to set up, manage and scale for a wide variety of business workloads. The DataCore SDS platform, at the heart of the solution, integrates all storage, including hyper-converged, SAN and cloud storage, to eliminate storage silos and future-proof your investment.



These Dell and DataCore reference architectures are optimized for performance with High-Speed Caching, Random Write Accelerator and Auto-Tiering, all used to increase I/O throughput and decrease latency.
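DataCore doesn't publish the internals of these features, but the general idea behind a random write accelerator is log-structured: absorb scattered writes cheaply in a buffer, then commit them to disk in one ordered pass. A hypothetical Python sketch (invented structures, not the product's implementation):

```python
# Rough illustration of the log-structured idea behind a random write
# accelerator. Invented structures; not DataCore's implementation.
log_buffer = []                        # (logical_block, data), in arrival order

def write(block, data):
    log_buffer.append((block, data))   # cheap append instead of a random seek

def flush(device):
    for block, data in sorted(log_buffer):
        device[block] = data           # one ordered pass, not scattered seeks
    log_buffer.clear()

device = {}
for block in (90, 3, 57, 12):          # a burst of random writes
    write(block, f"payload-{block}")
flush(device)
print(sorted(device))                  # [3, 12, 57, 90]
```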






For the specific document, see DataCore Proven Design for Dell PowerEdge Servers or visit our Dell Partner Page: http://datacore.com/Partners/Current-Partners/TechnologyPartners/dell