Tuesday 15 November 2016

The magic of DataCore Parallel I/O Technology

DataCore Parallel I/O technology seems like a kind of magic, and too good to be true… but you only need to try it once to understand that it is real and has the potential to save you loads of money!
Benchmarks vs. the real world
Frankly, I was skeptical at first and totally underestimated this technology. The benchmark posted a while ago looked incredibly good (too good to be true?!). And even though those numbers weren’t false, you can sometimes work around the limits of a benchmarking suite and build specific, unrealistic configurations that produce very good-looking numbers which are hard to reproduce in real-world scenarios.

When DataCore briefed me, they convinced me not with sterile benchmarks but with real workload testing! In fact, I was particularly impressed by a set of amazing demos I had the chance to watch, where a Windows database server equipped with Parallel I/O Technology processed data dozens of times faster than the same server without DataCore’s software… and the same happened with a cloud VM instance. The cloud result is theoretically equivalent, since this is a software technology, but it matters more than you might think, especially when you look at how much money you could save by adopting it.
Yes, dozens of times faster!
I know it seems ridiculous, but it isn’t. DataCore Parallel Server is a very simple piece of software that changes the way I/O operations are performed. It takes advantage of the large number of CPU cores and the RAM available on a server and organizes all the I/Os in a parallel fashion instead of a serial one, achieving microsecond-level latency and, consequently, a very large number of IOPS.
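To get a feel for why parallelizing I/O helps, here is a minimal sketch (purely illustrative; DataCore’s actual implementation is proprietary and far more involved). It issues the same set of simulated I/O requests first serially, then concurrently across threads, and compares wall-clock time:

```python
import time
from concurrent.futures import ThreadPoolExecutor

IO_LATENCY = 0.02   # seconds; a stand-in for one storage round trip
NUM_IOS = 8

def do_io(_):
    # Simulate a single I/O request completing after a fixed device latency.
    time.sleep(IO_LATENCY)

# Serial: each I/O waits for the previous one to finish.
start = time.perf_counter()
for i in range(NUM_IOS):
    do_io(i)
serial = time.perf_counter() - start

# Parallel: keep all requests in flight at once across worker threads,
# the way a multi-core scheduler can drive many I/Os simultaneously.
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=NUM_IOS) as pool:
    list(pool.map(do_io, range(NUM_IOS)))
parallel = time.perf_counter() - start

print(f"serial:   {serial:.3f}s")
print(f"parallel: {parallel:.3f}s")
```

With eight requests in flight, the concurrent run finishes in roughly one device latency instead of eight, which is the basic intuition behind driving many I/Os from many cores at once.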

This kind of performance makes it possible to build smaller clusters, or to get results much faster with the same number of nodes… and without changing the software stack or adding expensive in-memory options to your DB. It is ideal for Big Data Analytics use cases, but there are other scenarios where this technology can be of great benefit too!
Just software
I don’t want to downplay DataCore’s work by saying “just software”, quite the contrary indeed! The fact that we are talking about a relatively simple piece of software makes it applicable not only to your physical server but also to a VM or, better, a cloud VM.
If you look at cloud VM prices, you’ll realise that it is much better to run a job on a small set of large-CPU, large-memory VMs than on a large number of SSD-based VMs, for example… which simply means you can spend less to do more, faster. And, again, when it comes to Big Data Analytics this is a great result, isn’t it?
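As a back-of-the-envelope illustration of that trade-off (every price and runtime below is invented for the example, not a real cloud list price): a few big instances that finish the job quickly can beat a fleet of smaller SSD-backed instances that grind longer.

```python
# Hypothetical hourly prices and job durations, for illustration only.
large_vm_price = 1.00   # $/hour for a large-CPU, large-memory instance
ssd_vm_price = 0.25     # $/hour for a smaller SSD-backed instance

# Suppose the job needs 2 large VMs for 3 hours,
# versus 16 SSD VMs for 5 hours to complete the same work.
cost_large = 2 * large_vm_price * 3
cost_ssd = 16 * ssd_vm_price * 5

print(f"large-VM cluster: ${cost_large:.2f}")   # $6.00
print(f"SSD-VM cluster:   ${cost_ssd:.2f}")     # $20.00
```

The exact break-even point depends entirely on real prices and on how well the workload scales, but the arithmetic shows why "fewer, faster nodes" can cost less overall.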
Closing the circle
DataCore is one of those companies that has been successful and profitable for years. Last year, with the introduction of Parallel I/O, they demonstrated that they are still able to innovate and bring value to their customers. Now, thanks to an evolution of Parallel I/O, they are entering a totally new market with a solution that can easily enable end users to save loads of money and get faster results. It’s not magic, of course, just a much better way to use the resources available in modern servers.

Parallel Server is perfect for Big Data Analytics and makes it accessible to a larger audience, and I’m sure we will see other interesting use cases for this solution over time…

Monday 14 November 2016

DataCore Hyperconverged Virtual SAN Speeds Up 9-1-1 Dispatch Response

Critical Microsoft SQL Server-based Application Runs 20X Faster

"Response times are faster. The 200 millisecond latency has gone away now with DataCore running," stated ECSO IT Manager Corey Nelson. "In fact, we are down to under five milliseconds as far as application response times at peak load. Under normal load, the response times are currently under one millisecond."

DataCore Software announced that Emergency Communications of Southern Oregon (ECSO) has significantly increased performance and reduced storage-related downtime with the DataCore Hyper-converged Virtual SAN.

Located in Medford, Oregon, ECSO is a combined emergency dispatch facility and Public Safety Answering Point (PSAP) for the 9-1-1 lines in Jackson County, Oregon. ECSO wanted to replace its existing storage solution because its dispatch application, based on Microsoft SQL Server, was experiencing latencies of 200 milliseconds multiple times throughout the day, impacting how fast fire and police could respond to an emergency. In addition to improving response time, ECSO wanted a new solution that could meet other key requirements, including higher availability, remote replication, and an overall more robust storage infrastructure.

After considering various hyper-converged solutions, ECSO IT Manager Corey Nelson decided that the DataCore Hyper-converged Virtual SAN was the only one that could meet all of his technology and business objectives. DataCore Hyper-converged Virtual SAN lets users put the internal storage capacity of their servers to work as a shared resource while also serving as an integrated storage architecture. Now ECSO runs DataCore Hyper-converged Virtual SAN on a single tier of infrastructure, combining storage and compute on the same clustered servers.

Performance Surges with DataCore
Prior to DataCore, performance, and specifically latency, was a problem at ECSO: the organization's previous disk array took 200 milliseconds on average to respond. DataCore has solved the performance issues and fixed the real-time replication issues ECSO was previously encountering, because its Hyper-converged Virtual SAN speeds up response and throughput with its innovative Parallel I/O technology combined with high-speed caching that keeps the data close to the applications.
ECSO's critical 9-1-1 dispatch application must interact nearly instantly with the SQL Server-based database. Therefore, during the evaluation and testing period, understanding response times was a vital criterion. To test this, Nelson ran a SQL Server benchmark against the current environment as well as the DataCore solution. The benchmark used a variety of block sizes as well as a mix of random/sequential and read/write operations to measure performance. The results were definitive: the DataCore Hyper-converged Virtual SAN solution was 20X faster than the current environment.
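The case study does not name the exact benchmark tool Nelson used, but the core idea (timing I/O at several block sizes) can be sketched in a few lines of Python. This is a simplified, hypothetical micro-benchmark for random reads only, not the actual test methodology:

```python
import os
import random
import tempfile
import time

# Create a scratch file to read back; 8 MiB keeps the demo quick.
FILE_SIZE = 8 * 1024 * 1024
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(os.urandom(FILE_SIZE))

def bench_random_reads(block_size, iterations=200):
    """Return the average latency of random reads at a given block size."""
    with open(path, "rb") as f:
        start = time.perf_counter()
        for _ in range(iterations):
            f.seek(random.randrange(0, FILE_SIZE - block_size))
            f.read(block_size)
        return (time.perf_counter() - start) / iterations

# Measure a few block sizes, as a real storage benchmark would.
results = {bs: bench_random_reads(bs) for bs in (4096, 65536, 1048576)}
for bs, latency in results.items():
    print(f"{bs // 1024:5d} KiB random read: {latency * 1e6:8.1f} us")

os.remove(path)
```

A production test would also mix in sequential access and writes, run against the raw device rather than a cached file, and sustain the load long enough to see steady-state behavior.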

Unsurpassed Management, Performance and Efficiency
Before DataCore, storage-related tasks were labor intensive at ECSO. Nelson was continuously accessing and reviewing documentation to ensure that no essential storage administration step was overlooked. He knew that if he purchased a traditional SAN, it would be yet another point to manage.
"I wanted as few 'panes of glass' to manage as possible," noted Nelson. "Adding yet another storage management solution would just add unnecessary complexity."
The DataCore hyper-converged solution was exactly what Nelson was looking for. DataCore has streamlined the storage management process by automating it and enabling IT to gain visibility into the overall health and behavior of the storage infrastructure from a central console.

"DataCore has radically improved the efficiency, performance and availability of our storage infrastructure," he said. "I was in the process of purchasing new hosts, and DataCore Hyper-converged Virtual SAN fit perfectly into the budget and plan. This is a very unique product that can be tested in anyone's environment without purchasing additional hardware." 

The full case study on Emergency Communications of Southern Oregon is available from DataCore.

Wednesday 2 November 2016

Performance, Availability and Agility for SQL Server

Introduction - Part 1

IT organizations must maintain service level agreements (SLAs) by meeting the demands of applications that run on SQL Server. To meet these requirements they must deliver superior performance and continuous uptime for each SQL Server instance. Furthermore, applications dependent on SQL Server, such as agile development, CRM, BI, or IoT, are increasingly dynamic and require faster adaptability to performance and high-availability challenges than device-level provisioning, analytics and management can provide.
In this blog, the first of a three-part series, we will discuss the challenges IT organizations face with SQL Server and a solution that helps them overcome these challenges.
All these concerns can be traced to a common root cause: the storage infrastructure. Did you know that 62% of DBAs experience latency of more than 10 milliseconds when writing to disks [1]? Not only does this slowdown impact the user experience, it also has DBAs spending hours tuning the database. That is the impact of storage on SQL Server performance; so what about its impact on availability? Well, according to surveys, 50% of organizations don’t have an adequate business continuity plan because of expensive storage solutions [2]. When it comes to agility, DBAs have agility at the SQL Server level, but IT administrators don’t have the same agility on the storage side, especially when they have to depend on heterogeneous disk arrays. Surveys show that a majority of enterprises have two or more types of storage, and 73% have more than four types [3].
A common IT trend to solve the performance issue is to adopt flash storage [4]. However, moving the entire database to flash significantly increases cost. To save on cost, DBAs end up with the burden of having to pick and choose the instances that require high performance. The other option is to tune the database and change the queries. This requires significant database expertise, demands time, and means changes to the production database. Most organizations either don’t have dedicated database performance tuning experts, don’t have the luxury of time, or are sensitive to making changes to the production database. This common dilemma makes tuning the database a far-fetched approach.
For higher uptime, DBAs rely on Failover Cluster Instances (formerly Microsoft Cluster Service) for server availability, but clustering alone cannot overcome storage-related downtime. One option is to upgrade to SQL Server Enterprise, but that puts a heavy cost burden on the organization (Figure 1). This leaves them with the choice of either not upgrading to SQL Server Enterprise or upgrading only a few SQL Server instances. The other option is to use storage-based or third-party mirroring, but neither guarantees a Recovery Point Objective (RPO) and Recovery Time Objective (RTO) of zero.
Figure 1
DataCore’s advanced software-defined storage solution addresses both the latency and uptime challenges of SQL Server environments. It is easy to use, delivers high performance and offers continuous storage availability. DataCore™ Parallel I/O and high-speed ‘in-memory’ caching technologies increase productivity by dramatically reducing SQL Server query times.
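The caching half of that claim is easy to illustrate. Below is a toy read cache in Python; it is only a sketch of the general principle (repeated reads served from RAM instead of disk), not DataCore's actual cache management, which is proprietary:

```python
import time

class CachedStore:
    """A toy read cache in front of a slow backing store."""

    def __init__(self, backend_latency=0.01):
        self.backend_latency = backend_latency  # simulated disk round trip
        self.cache = {}
        self.hits = 0
        self.misses = 0

    def read(self, key):
        if key in self.cache:               # served from RAM: microseconds
            self.hits += 1
            return self.cache[key]
        self.misses += 1
        time.sleep(self.backend_latency)    # simulate going to disk
        value = f"block-{key}"
        self.cache[key] = value
        return value

store = CachedStore()
# A repeated query touches the same 5 blocks three times over.
for _ in range(3):
    for key in range(5):
        store.read(key)

print(f"hits={store.hits} misses={store.misses}")
```

Only the first pass pays the disk latency; the two repeat passes are served entirely from memory, which is why repetitive database query patterns benefit so much from a large in-memory cache.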

Next blog
In the next blog, we will touch more on the performance aspect of DataCore.