Tuesday 26 April 2016

Storage Magazine: The Bee's Knees - UK Agri-Food Supply Chain Research and Analysis Organisation, Fera and DataCore Software

http://www.btc.co.uk/Articles/index.php?mag=Storage&page=compDetails&link=6546&cp=2

Agri-food specialist Fera Science Ltd was struggling to control petabytes of data in its data centre, but has now achieved unified management, High Availability, reduced hardware TCO and vendor independence

Fera Science Limited is a national and international centre of excellence for interdisciplinary investigation and problem solving across plant and bee health, crop protection, sustainable agriculture, food and feed quality, and chemical safety in the environment. Internally, Fera employees are predominantly scientists accessing information and readings in order to make recommendations for optimal yields. Externally, Fera provides services to 7,500 commercial agri-food customers alongside UK governmental organisations including DEFRA.

Ben Jones, Data Centre Manager at Fera, is responsible for securely holding and delivering the vast wealth of research data gleaned from ongoing field instruments and trials. From their Yorkshire-based data centre, Ben provides constant availability and high performance for petabytes of data and assures ongoing business application performance. Faced with large, fluctuating data sets five years ago, Fera's IT team sought a solution that would both unify their divergent, legacy hardware-based estate and optimise the struggling applications held on their Virtual Machines (VMs) to provide fast data mining.

Ben reflects, "Today Fera provides a large, single-campus modern metro cluster split across two sites for assured High Availability, and we enjoy fast performance even at peak transaction times. Roll back five years and the situation was far less clear, with a large, fragmented estate containing a mix of legacy devices, brands and technology. We needed an overlay layer that would unify and manage our assets, maximising the investment we had already made. We ultimately found this in software-defined storage, provided through DataCore's SANsymphony-V platform."

Five years ago, Fera's then beleaguered IT team took the decision to go back to the drawing board to address the multiple problem areas whilst future-proofing against a data set known to be quadrupling every four years. Containing Fera's diverse and sprawling server and storage estate was the first pressing issue for the team, with multiple Dell, HP, NetApp and IBM standalone servers and hundreds of legacy Direct Attached and Network Attached Storage devices across the mirrored data centre.

Each year, IT had the onerous task of accurately anticipating departmental storage requirements up front - or risk running out of space if capacity had been inadvertently designated to another path. Performing maintenance, upgrades and critical updates across so many brands was also a huge overhead: each time maintenance was performed, the mirrored device had to be disabled, taken offline and then resumed. IT also wished to curtail the spiralling cost of network connections, multiples of which had been added in an effort to assist business continuity.

For Fera users internally, the failing IT infrastructure manifested itself in ongoing issues with speed of access to applications, together with an inability to mine reports and to record and access information as the maintenance window grew. At different times across the seasons, productivity dwindled further still as vast swathes of data arrived in unpredictable, colossal batches.

With these problems identified, the IT team recognised that their environment was fast becoming unsustainable and sought alternative solutions that would not add to the hardware overhead. The clustered NetApp setup was becoming restrictive, and performance of the VMware-hosted applications was an ongoing bottleneck as they competed for I/O. The team selected DataCore's software storage virtualisation platform - then known as SANmelody - to run on a pair of Dell 2U PowerEdge 2950s, thereby centralising the critical VMware hosts to improve performance and allow live migration of VMs without downtime.

A couple of years later, Fera seamlessly upgraded and expanded their environment to DataCore's enterprise solution, SANsymphony 6.0, to provide parity across the mirror and to connect via Fibre Channel for speed. An upgrade to SANsymphony-V followed two years later, with DataCore's software platform now running on a pair of Dell PowerEdge R910s, with dual Brocade Fibre Channel switches on each side of the mirror for ultimate resiliency and true High Availability.

Eighteen months on, all data is accessed through SANsymphony-V and it is the established backbone of Fera's IT infrastructure. Today Fera provides its users with a large, modern metro cluster split across two same-campus sites for High Availability. Through DataCore, Fera runs twelve VMware vSphere 5.1 ESX hosts supporting 300 VMs that deliver essential business applications. Additionally, SANsymphony-V manages and protects 250 physical servers, with all legacy NetApp storage served and front-ended by DataCore's SANsymphony-V.

All of Fera's storage pools and virtual disks are mirrored, giving continuous availability. Downtime is firmly an issue of the past: through SANsymphony-V, one side of the mirror stays functioning and fully available while IT performs patches, Windows updates, backend fixes and essential critical updates on the other side.
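
To make the mechanism concrete, here is a minimal sketch in Go of a synchronous mirror that keeps accepting I/O as long as one side remains online. All names (Side, Mirror, Write) are hypothetical illustrations of the idea, not DataCore's actual interfaces:

    // Illustrative sketch: a toy synchronous mirror that keeps serving I/O
    // from the surviving side while its partner is offline for patching.
    // Names and structure are hypothetical, not DataCore's actual API.
    package main

    import (
        "errors"
        "fmt"
    )

    type Side struct {
        name   string
        online bool
        blocks map[int][]byte
    }

    type Mirror struct{ a, b *Side }

    // Write lands on every online side; one healthy side is enough to succeed.
    func (m *Mirror) Write(lba int, data []byte) error {
        ok := 0
        for _, s := range []*Side{m.a, m.b} {
            if s.online {
                s.blocks[lba] = data
                ok++
            }
        }
        if ok == 0 {
            return errors.New("both mirror sides offline")
        }
        return nil
    }

    func main() {
        m := &Mirror{
            a: &Side{name: "site-A", online: true, blocks: map[int][]byte{}},
            b: &Side{name: "site-B", online: true, blocks: map[int][]byte{}},
        }
        m.b.online = false                    // take site B down for Windows updates
        fmt.Println(m.Write(42, []byte("x"))) // <nil>: site A keeps serving I/O
        m.b.online = true                     // a real system would now resynchronise
    }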

Ben Jones explains, "Essentially DataCore gives us the freedom of procurement across the estate. That's a powerful statement. We can now research and select which disk chassis and vendors are most suited to Fera and select those which we know will have the best controllers and offer the highest density. Using DataCore as the software layer, we have found ourselves able to confidently migrate away from incumbent brands as management is now unified and assured. With it, we have significantly reduced the ongoing maintenance overhead."

DataCore's Auto-Tiering feature helps Fera correctly allocate over 1 PB of data. Auto-Tiering is used extensively within the environment, with a four-tier policy automatically placing data in the most appropriate class of storage. For Fera this means critical user data on Tier 1 (SSDs); VMware-specific and scientific data on Tier 2 (Fibre Channel SAS); general data on Tier 3 (nearline SATA disks); and lastly, archive data (including the vast quantities of raw science data sets) on Tier 4, held on re-purposed SATA drives. This final re-purposed tier allows Fera, with no additional overhead, to retrieve data sets from over five years ago if required for compliance.
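
A four-tier policy of this kind boils down to a rule that maps a class of data to a class of storage. The Go sketch below shows the shape of such a policy; the tier names and the classify rule are assumptions for illustration, not DataCore's implementation:

    // Hypothetical sketch of a four-tier placement policy like the one
    // described above; tier names and the classify rule are illustrative
    // assumptions, not DataCore's implementation.
    package main

    import "fmt"

    type Tier int

    const (
        Tier1SSD             Tier = iota + 1 // critical user data
        Tier2FibreChannelSAS                 // VMware and scientific data
        Tier3NearlineSATA                    // general data
        Tier4RepurposedSATA                  // archive / compliance retrieval
    )

    func classify(kind string) Tier {
        switch kind {
        case "critical-user":
            return Tier1SSD
        case "vmware", "scientific":
            return Tier2FibreChannelSAS
        case "archive", "raw-science":
            return Tier4RepurposedSATA
        default:
            return Tier3NearlineSATA // anything unclassified lands mid-pool
        }
    }

    func main() {
        for _, k := range []string{"critical-user", "scientific", "general", "raw-science"} {
            fmt.Printf("%-13s -> tier %d\n", k, classify(k))
        }
    }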

High Availability is now assured within Fera. This became increasingly important nine months ago when Fera formed a joint commercial partnership with DEFRA (the Department for Environment, Food & Rural Affairs) to provide 24x7 access to focussed research across both organisations.

DataCore's Thin Provisioning is another useful weapon in controlling Fera's future storage overheads. Using SANsymphony-V, IT can now thin provision disk on the fly for Fera scientists with a few clicks, as and when required. Previously, each employee was given a 2TB disk allocation up front, and further provisioning had to be estimated from historical usage.
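
The contrast between up-front and thin allocation fits in a few lines: a thin disk advertises a large logical size but draws physical blocks from the shared pool only on first write. This is a minimal illustrative model (Pool and ThinDisk are invented names), not DataCore's API:

    // A minimal thin-provisioning model: the virtual disk advertises a large
    // logical size but consumes physical pool space only as blocks are first
    // written. Purely illustrative; not DataCore's API.
    package main

    import (
        "errors"
        "fmt"
    )

    type Pool struct{ free int } // physical blocks remaining in the shared pool

    type ThinDisk struct {
        logicalBlocks int          // what the scientist sees, e.g. "2TB"
        allocated     map[int]bool // blocks actually backed by the pool
        pool          *Pool
    }

    func (d *ThinDisk) Write(lba int) error {
        if d.allocated[lba] {
            return nil // already backed; overwrite in place
        }
        if d.pool.free == 0 {
            return errors.New("pool exhausted: add capacity to the pool")
        }
        d.pool.free-- // claim a physical block only on first write
        d.allocated[lba] = true
        return nil
    }

    func main() {
        pool := &Pool{free: 1000}
        // Provision a large disk on the fly without reserving it all up front.
        d := &ThinDisk{logicalBlocks: 1 << 20, allocated: map[int]bool{}, pool: pool}
        d.Write(7)
        fmt.Println("physical blocks consumed:", 1000-pool.free) // 1, not the full logical size
    }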

DataCore's Continuous Data Protection (CDP) continues to assist the VMware platform during a planned, gradual migration away from NetApp. To support this migration, Fera uses DataCore's CDP feature to keep 24 hourly snapshots from which VM functionality can be recreated: in the event of a failure, Fera can roll back in time to the last hourly point. DataCore's inbuilt caching has also helped optimise performance of the virtual environment; thanks to its caching algorithms, VMware performance has been boosted by 15%.
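
The rollback behaviour described here amounts to keeping a rolling window of restore points and recovering from the newest one. Below is a toy model in Go, under the assumption of 24 retained hourly snapshots; CDPLog and Snapshot are invented for illustration, not DataCore CDP itself:

    // Toy model of rolling hourly restore points: retain the last 24
    // snapshots and roll back to the most recent one after a failure.
    // Illustrative only, not DataCore CDP itself.
    package main

    import (
        "fmt"
        "time"
    )

    type Snapshot struct {
        taken time.Time
        state string // stand-in for a recoverable VM image
    }

    type CDPLog struct{ points []Snapshot }

    func (c *CDPLog) Take(state string, now time.Time) {
        c.points = append(c.points, Snapshot{taken: now, state: state})
        if len(c.points) > 24 { // keep a rolling 24-hour window
            c.points = c.points[1:]
        }
    }

    // RollBack returns the newest snapshot: the "last hourly point".
    func (c *CDPLog) RollBack() (Snapshot, bool) {
        if len(c.points) == 0 {
            return Snapshot{}, false
        }
        return c.points[len(c.points)-1], true
    }

    func main() {
        log := &CDPLog{}
        start := time.Date(2016, 4, 26, 0, 0, 0, 0, time.UTC)
        for h := 0; h < 30; h++ { // snapshots arrive hourly; only 24 are retained
            log.Take(fmt.Sprintf("vm-state@%02dh", h), start.Add(time.Duration(h)*time.Hour))
        }
        if s, ok := log.RollBack(); ok {
            fmt.Println("restore to:", s.state, "taken at", s.taken)
        }
    }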

Ben Jones concludes: "We are so confident in DataCore that we have based our entire environment upon it. Using DataCore, Fera now has a single management interface across our hundreds of disparate devices. It has become the cornerstone to our data centre."
More info: www.datacore.com 

Exertis offers DataCore Software-defined Storage solutions to resellers

CIO Review: DataCore and Exertis UK

Exertis announced the availability of DataCore's leading range of Software-Defined Storage (SDS) solutions. Resellers will gain access to DataCore SANsymphony-V and DataCore Virtual SAN through Exertis' enterprise division.

DataCore Expands in UK & Eire with New Distribution Partnership with Exertis and Expanded Sales Team

Brett Denly, Regional Director, DataCore Software, commented, "Partnering with Exertis allows us to target partners selling Fujitsu, Huawei, Dell and Lenovo solutions who would understand the massive performance gains that can now be achieved using DataCore, and will allow us to address new resellers who are keen to harness the growth in the software-defined storage market. Exertis will support the sale of DataCore storage solutions through its experienced and knowledgeable enterprise team."

DataCore's Software-Defined Storage solutions enable enterprises and organisations of all sizes to benefit from end-to-end management of their server and storage estate through a virtualised software layer that sits above their hardware investment, simplifying the complexities of most traditional storage systems, dramatically enhancing performance and reducing the total cost of ownership. Gareth Bray, Head of Commercial – Enterprise, said, "DataCore will provide our resellers with unquestionable storage value and efficiencies by reducing storage costs, improving performance, simplifying storage management through a single user interface and providing ongoing business continuity including disaster recovery. SDS is an increasingly popular method for data management and we are delighted to be partnering with the established market leader."

DataCore's solutions are targeted at IT data centres typically serving 250 employees and above, although for smaller Hyper-V environments DataCore offers an entry-level, simple two-node hyper-converged solution, DataCore Virtual SAN, providing an easy-to-manage, low-cost, high-performing entry point into hyper-convergence.

As SDS proof points, DataCore recently achieved a new world record for price-performance, audited by the Storage Performance Council (SPC-1), of 5 pence per SPC-1 IOPS recorded on Lenovo System x3650 servers. On the same platform, the company recorded the fastest response times ever attained, even compared to the many all-flash arrays and multi-million dollar systems that have published SPC-1 results.

Sunday 24 April 2016

DataCore Reports the Fastest Response Time and Best Price-Performance Among Top 10 SPC-1 Leaders

Parallel I/O Technology Drives More than 1.5M SPC-1 IOPS™ at a 100 Microsecond Response Time While Simultaneously Running Enterprise-class Database Workloads; Delivers SPC-1 Price-Performance™ of 9 Cents per SPC-1 IOPS™


DataCore announced that its second SPC-1 result has catapulted the company into third place among the SPC's Top 10 absolute performers, while achieving the best price-performance and fastest response times among those Top 10. DataCore again leapfrogged the field and now holds the top two positions in the SPC-1 Price-Performance™ category. The DataCore™ Parallel Server software at the heart of the hyper-converged configuration delivered 1,510,090.52 SPC-1 IOPS™. Notably, the number one and number two systems in absolute performance are very large footprint, multimillion-dollar systems that are 14 times more costly than the compact 4U-sized DataCore-based solution.
"There is no magic in what we are doing," states Ziya Aral, Chairman of DataCore Software. "Yes, we use a standard 2U server but it is a server with 36 cores and 72 logical CPUs. At 2.5 GHz clock speed that multiplies out to the equivalent of 180 GHz, provided only that we use those CPUs concurrently. Even if the CPUs don't scale perfectly, we have an 'embarrassment of riches' in compute power. If they scaled at only 60% - and they do much better than that - we effectively have access to over 100 GHz of CPU power. Frankly, we would have been disappointed if we hadn't been able to put up these kinds of I/O numbers with a 100 GHz CPU."
DataCore's initial results showcasing the power of parallel I/O were first published in late 2015. The new results, which tripled the previous performance achievements, were attained on the same server platform hardware to demonstrate the potential and the pace of advancement possible from the company's new software and parallel I/O architecture. And, there is more to come.
Record-Breaking Performance
To illustrate the system's I/O power in demanding database environments, DataCore chose the Storage Performance Council's SPC-1 benchmark – the Gold Standard used by all major storage manufacturers to measure top end I/O performance, price-performance and response time. For the benchmark, DataCore used an off-the-shelf Intel-based Lenovo System x3650 M5 server.
The 1,510,090.52 SPC-1 IOPS™ were attained with the total cost for hardware, software and three years of support totaling $136,758.88. This yielded an SPC-1 Price-Performance™ result of $0.09 per SPC-1 IOPS™, less than one-eighth that of the top performing high-end systems that have achieved over one million SPC-1 IOPS™.
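Spelled out, the quoted price-performance figure is simply the total three-year system cost divided by the measured throughput:

    $136,758.88 ÷ 1,510,090.52 SPC-1 IOPS™ ≈ $0.0906 per SPC-1 IOPS™

which rounds to the published $0.09.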
The DataCore Parallel Server configuration placed third overall in SPC-1 IOPS™ behind two systems costing over $2 million. Only the Huawei OceanStor 18800V3 at a total price of $2,370,760 and the Hitachi VSP G1000 system at $2,003,803 had higher SPC-1 IOPS™ numbers than the $136,759 solution from DataCore. Unlike those two storage systems which only provide external SAN functions, the DataCore Parallel Server also ran the computational enterprise-class database and OLTP workloads inside the same compact package.
Most remarkably, the DataCore configuration delivered the fastest SPC-1 response time ever recorded (100 microseconds at 100% load), besting all systems, including multi-million dollar systems and all-flash arrays, by seven times or more. From a real estate standpoint, the entire system takes up only 4U of standard 19" rack space (seven vertical inches: a 2U server plus 2U of disks). In stark contrast, other systems reaching the million SPC-1 IOPS™ mark occupy multiple 42U cabinets, consuming considerably more data center space, power and cooling.
DataCore now holds the two top positions in the SPC-1 Price-Performance™ category (the previous DataCore™ SANsymphony™ system, running on a hyper-converged configuration using a similar Lenovo System x server, attained an SPC-1 Price-Performance™ record of $0.08/SPC-1 IOPS™). "Essentially the only major difference between our first and second SPC-1 results was our software," notes Ziya Aral, who continued by answering the obvious question - how is that possible? "The truth is that the hardware platform matters, multiprocessing matters, and I/O craft matters, but what matters most of all is software architecture. DataCore was designed from the outset for parallel architectures...but the definition of 'parallel' at the time was 4, 8, maybe 12 CPUs. Today, we are running in standard platforms with 72, 144 or even 288 logical CPU cores, and that will double with the next few ticks of the clock - because Moore's law now advances in multiples."
Aral explains further, "Parallel Server is designed to take advantage of that evolution in computer architectures - not just for the present but into the future. This software inverts our previous understanding: what was once a precious commodity now exists in surplus and the software must take advantage of it."
Tested Product: DataCore™ Parallel Server for Hyper-Converged and Server Systems
DataCore certified its results using DataCore Parallel Server software on a compact 2U Lenovo System x3650 M5 multi-core server featuring Intel® Xeon® E5-2600 v3 series processors with a mix of flash SSD and disk storage.
DataCore Parallel Server is a software product that transforms standard servers into parallel servers targeted for applications where extremely high IOPS and low latency are the primary requirements. DataCore's parallel I/O technology executes many independent I/O streams simultaneously across multiple CPU cores, significantly reducing the latency to service and process I/Os. This technology removes the serialized I/O limitations and bottlenecks that restrict the number of virtual machines (VMs), virtual desktops (VDI) and application workloads that can be consolidated on a server or a hyper-converged platform – and instead enables them to process far more work per server and significantly accelerate I/O-intensive applications.
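The core idea, many independent I/O streams serviced concurrently across cores rather than through one serialized queue, can be illustrated with a simple fan-out worker pool. The Go sketch below is an analogy for the architecture described above, with invented names (IORequest, serviceIO), not DataCore's engine:

    // Go sketch of the fan-out idea described above: many independent I/O
    // streams serviced concurrently, one worker per logical CPU, instead of
    // a single serialized queue. An analogy, not DataCore's engine.
    package main

    import (
        "fmt"
        "runtime"
        "sync"
    )

    type IORequest struct{ lba int }

    func serviceIO(r IORequest) { _ = r } // stand-in for the real device work

    func main() {
        workers := runtime.NumCPU() // e.g. 72 logical CPUs on the benchmark server
        queue := make(chan IORequest, 1024)
        var wg sync.WaitGroup

        for w := 0; w < workers; w++ {
            wg.Add(1)
            go func() { // one independent I/O stream per core
                defer wg.Done()
                for r := range queue {
                    serviceIO(r)
                }
            }()
        }
        for i := 0; i < 100000; i++ {
            queue <- IORequest{lba: i}
        }
        close(queue)
        wg.Wait()
        fmt.Println("serviced 100000 I/Os across", workers, "parallel workers")
    }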
DataCore Parallel Server software is now available to DataCore OEM partners and is currently being evaluated by server and system vendors. General availability is planned for Q2 2016.
Hyper-Consolidation and Next Generation Productivity with DataCore Parallel I/O Technology
The practical significance and business advantages of DataCore Parallel Server's record-breaking results can be appreciated from several perspectives:
  • Servers are the new storage: I/O-intensive workloads which had previously required enormous investments in exotic SAN hardware or enterprise-class external arrays can now be addressed with relatively inexpensive, compact, off-the-shelf hardware equipped with DataCore Parallel Server software.
  • One machine is simpler than many: Organizations no longer need to split I/O-intensive problems across hundreds of servers to reduce their dependency on exotic equipment. They can run these programs unaltered inside a few low-cost servers without undue complexity, delay and expense.
  • Hyper-consolidation versus server sprawl: Several years into virtualization initiatives, serial I/O processing inside servers remains singularly responsible for poor virtual machine densities. By putting multiple CPU cores to work on I/O, DataCore helps customers do the work of 10 servers on one or two.
DataCore's Parallel Server software enables industry-standard x86 servers to fully harness their untapped parallel computation power and gain the essential I/O functionality needed to drive today's demanding tier-1 business application requirements. In this way, companies benefit from dramatically higher productivity and huge server consolidation savings. To learn more visit: www.datacore.com/products/parallel-io.
About the Storage Performance Council
The Storage Performance Council (SPC) is a vendor-neutral standards body focused on the storage industry. The SPC created the first industry-standard performance benchmark targeted at the needs and concerns of the storage industry. From component level evaluation to the measurement of complete distributed storage systems, the SPC benchmark portfolio provides independently audited, rigorous and reliable measures of performance, price-performance and power consumption. For more information about the SPC and its benchmarks, please visit: http://www.StoragePerformance.org.

Wednesday 13 April 2016

TechValidate Research on DataCore Customers and Software-Defined Storage Confirms Performance, High Availability and Lower Cost of Ownership are the Primary Business Drivers

Data from Nearly 2,000 DataCore Customers Uncovers Impact of Software-Defined Flexibility on Acquisitions, Refreshes and Migrations; Productivity Gains from Increasing Performance Up to 10x and Reducing Total Cost of Ownership
DataCore has announced the results of a new research study conducted by TechValidate. The study focused primarily on the experience of DataCore customers in terms of performance, availability/reliability and total cost of ownership (TCO). Overall, participants reported faster applications with up to 10x performance increases; higher availability with a 90% or greater reduction in storage-related downtime; substantial cost reductions; and greater productivity, with the majority of respondents reporting a 50-90% decrease in time spent on routine tasks.
“We are so confident in DataCore that we have based our entire environment upon it. Using DataCore, Fera now has a single management interface across our hundreds of disparate devices. It has become the cornerstone to our data centre.”- Ben Jones, Data Centre Manager at Fera
Highlights from the findings include:
  • 47% of customers reported a 50% or more reduction in storage-related spending; over 80% of customers reported at least 25% savings.
  • The majority of customers reported that they were able to defer or skip multiple refresh cycles, and over 60% saved by deferring storage hardware acquisitions by using DataCore to extend the life and enhance the productivity of current investments.
  • 79% of customers reported performance improvements of at least 3x, and nearly half of the DataCore customers surveyed reported performance improvements of between 5x and 10x.
  • 60% reduced storage-related downtime with DataCore by 90% or more; the majority of customers who had systems deployed for two years or more reported no storage-related downtime whatsoever.
  • 72% of respondents reported a 50% or more decrease in time spent on managing routine storage tasks, with some noting a reduction as high as 90%.
  • All respondents reported a positive ROI with DataCore in the first year; 50% reported a positive ROI in six months or less.
These findings complement new data recently published by the Storage Performance Council (SPC), further supporting DataCore's claim to the industry's best performance and lowest TCO. In a series of recently released SPC-1 benchmarks, DataCore's current SANsymphony and hyper-converged Virtual SAN software achieved the industry's best price-performance, coming in at just $0.08 per SPC-1 IOPS™. The results also measured remarkably fast response times of just 0.32 milliseconds [1], achieved while running the full load of the demanding enterprise-class application and database benchmark. At 0.32 milliseconds, the results are 3x-10x better than all other reported results, including those from all-flash arrays and million-dollar-plus systems.
To highlight the full impact of parallel I/O on performance, DataCore recently announced the results of new software that will enable servers to utilize multicores to multiply performance. The software, available in Q2, has demonstrated an incredible result of more than 1.5 million SPC-1 IOPS™ with a new world record response time of just 0.10 milliseconds at 100 percent load [2].
“This is a game changer which is ahead of the trend and a key element in helping organizations truly achieve a software-defined data center,” DataCore customer Irvin Nio, IT Architect at Capgemini, noted during the survey.
The new level of enterprise-class high availability and reliability proved by TechValidate research can be attributed to DataCore’s features including hardware interoperability, hardware-independent storage services, data migration capabilities, and more. DataCore’s latest addition to its technology portfolio, parallel I/O, uniquely takes advantage of today’s advanced multi-core server platforms to execute many independent I/O streams simultaneously across multiple CPU cores – supporting the I/O needed to run more VMs and application workloads faster and at a much lower cost. It significantly reduces the latency to service and process I/Os while enabling companies to benefit from dramatically higher productivity and huge server consolidation savings.
A total of 1,984 responses were recorded from DataCore customers globally. TechValidate research data is sourced directly from verified business and technology professionals. The full findings of the study can be viewed at: https://www.techvalidate.com/product-research/datacore-sansymphony-v.

Tuesday 12 April 2016

"Parallel I/O is essentially like a multi-lane superhighway with “EZ pass” on all the lanes.

How Can the Software-Defined Data Center Reach its True Potential?

Data Center Knowledge (DCK)
"Parallel I/O is essentially like a multi-lane superhighway with “EZ pass” on all the lanes. It avoids the bottleneck of waiting on a single toll booth and the wait time. It opens up the other cores (all the “lanes” in this analogy) for I/O distribution so that data can continue to flow back and forth between the application and the storage media at top speed."
By George Teixeira, President and CEO, DataCore Software
In the software-defined data center (SDDC), all elements of the infrastructure, including networking, compute and storage, are virtualized and delivered as a service. Virtualization at the server and storage level is a critical component on the journey to an SDDC, since it enables greater productivity through software automation and agility while shielding users from the underlying complexity of the hardware.
Today, applications are driving the enterprise – and these demanding applications, especially within virtualized environments, require high performance from storage to keep up with the rate of data acquisition and unpredictable demands of enterprise workloads. The problem is that in a world that requires near instant response times and increasingly faster access to business-critical data, the needs of tier 1 enterprise applications such as SQL, Oracle and SAP databases have been largely unmet. For most data centers the number one cause of these delays is the data storage infrastructure.
Why? The major bottleneck has been I/O performance. Most commodity servers already provide a wealth of powerful multiprocessor capability cost-effectively, yet most of that capability sits parked in idle mode, unexploited. This is because current systems still rely on device-level optimizations tied to specific disk and flash technologies, and lack the software intelligence to fully harness these more powerful multicore server architectures.
While the virtual server revolution became the "killer app" that drove up CPU utilization and, to some degree, exploited multicore capabilities, the downside is that virtualization and the move to greater server consolidation created a workload blender effect in which more and more application I/O workloads were concentrated and had to be scheduled on the same system. All of those VMs and their applications easily become bottlenecked going through a serialized "I/O straw." As processors and memory have dramatically increased in speed, this I/O straw continues to throttle performance, especially for the critical business applications driving databases and on-line transaction workloads.
Many have tried to address the performance problem at the device level by adding solid-state storage (flash) to meet the increasing demands of enterprise applications, or by hard-wiring these fast devices to virtual machines (VMs) in hyper-converged systems. However, improving the performance of the storage media, which is what replacing spinning disks with flash attempts to do, only addresses one aspect of the I/O stack. Hard-wiring flash to VMs also contradicts the concept of virtualization, in which technology is elevated to a software-defined level above the hard-wired, physically aware level; it also adds complexity and vendor-specific lock-in between the hypervisor and device levels.
Multi-core processors are up to the challenge. The primary element that is missing is software that can take advantage of the multicore/parallel processing infrastructure. Parallel I/O technology enables the I/O processing to be done separately from computation and in parallel to improve I/O performance by building on virtualization’s ability to decouple software advances from hardware innovations. This method uses software to drive parallel I/O across all of those CPU cores.
Parallel I/O technology can schedule I/O from virtualization and application workloads effectively across readily available multicore server platforms. It can overcome the I/O bottleneck by harnessing the power of multicores to dramatically increase productivity, consolidate more workloads and reduce inefficient server sprawl. This will allow much greater cost savings and productivity by taking consolidation to the next level and allowing systems to do far more with less.
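A toy experiment makes the consolidation argument tangible: with a fixed per-request delay standing in for device latency, draining the same request queue with one worker versus one worker per core shows the difference between the "straw" and the superhighway. Everything in this Go sketch is simulated and illustrative; the timings say nothing about any real product:

    // Toy measurement contrasting the serialized "I/O straw" with per-core
    // parallel dispatch. The fixed sleep stands in for device latency;
    // timings are illustrative only.
    package main

    import (
        "fmt"
        "runtime"
        "sync"
        "time"
    )

    func simulateIO() { time.Sleep(50 * time.Microsecond) }

    func run(workers, requests int) time.Duration {
        queue := make(chan int, requests)
        for i := 0; i < requests; i++ {
            queue <- i
        }
        close(queue)

        start := time.Now()
        var wg sync.WaitGroup
        for w := 0; w < workers; w++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                for range queue {
                    simulateIO()
                }
            }()
        }
        wg.Wait()
        return time.Since(start)
    }

    func main() {
        const n = 2000
        fmt.Println("serialized (one lane):", run(1, n))
        fmt.Println("parallel lanes:       ", run(runtime.NumCPU(), n))
    }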
Parallel I/O is essentially like a multi-lane superhighway with “EZ pass” on all the lanes. It avoids the bottleneck of waiting on a single toll booth and the wait time. It opens up the other cores (all the “lanes” in this analogy) for I/O distribution so that data can continue to flow back and forth between the application and the storage media at top speed.
The effect is that more data flows through the same hardware infrastructure in the same amount of time than with legacy storage systems. The traditional three-tier infrastructure of servers, network and storage benefits from storage systems that respond to and service existing I/O requests faster, and can therefore support significantly more applications and workloads on the same platforms. The efficiency of a low-latency parallel architecture is potentially even more critical in hyper-converged architectures, which are a "shared-everything" infrastructure: if the storage software is more efficient in its use of computing resources, it returns more available processing power to the other processes alongside which it runs.
By taking full advantage of the processing power offered by multicore servers, parallel I/O technology acts as a key enabler for a true software-defined data center. This is due to the fact that it avoids any special hardwiring that impedes achieving the benefits of virtualization while it unlocks the underlying hardware power to achieve a dramatic acceleration in I/O and storage performance – solving the I/O bottleneck problem and making the realization of software-defined data centers possible.

Saturday 9 April 2016

ESG’s Senior Lab Analyst shares his hands-on experiences with DataCore Hyper-converged Virtual SAN software and SANsymphony Software-defined Storage platform

ESG’s Senior Lab Analyst, Tony Palmer, shares his hands-on experiences with DataCore™ Hyper-converged Virtual SAN software and SANsymphony™ Software-defined Storage platform.  See how the products fared under a comprehensive battery of tests, exercising many of their enterprise-class features. Learn why these capabilities matter, especially to IT organizations tasked with non-stop operations, latency-sensitive workloads and cost-reduction mandates.

Get a glimpse of self-provisioning storage with the desired SLAs during virtual machine creation, thanks to DataCore's deep integration with VMware VVols. Observe the performance acceleration and savings that cross-array auto-tiering brings. And witness active-active High Availability in action for failover clusters.

Tony also puts into perspective the significance of DataCore Parallel I/O technology – key to the company’s record-shattering results for I/O response and price-performance under heavy transactional database processing.

Download the complete Lab Validation Report.