
Monday, 9 March 2015

Hyper-Converged and Software-defined Storage: Why They Go Together


Sushant Rao of DataCore Software details real-world scenarios of how a hyper-converged infrastructure is the right approach to combine storage, compute, networking and virtualisation in one unit
One of the fundamental requirements for virtualising applications is the underlying shared storage. Applications can move to different servers as long as those servers have access to the storage holding the application and its data. Typically, shared storage is provided over a Storage Area Network (SAN). However, SANs often have issues in virtualised environments. The first is providing consistent, reliable I/O performance where it is needed. As different applications start, stop and process data, the load on the SAN varies greatly. If a database starts a large data-processing job, the SAN may become overwhelmed, which starts to impact the performance of other applications that are operating normally.

Applications that are performance-sensitive are particularly susceptible to this issue, including databases (Oracle, Microsoft SQL Server); applications and ERP systems based on databases (SAP, Oracle Applications, Microsoft SharePoint, Microsoft Dynamics); VDI (VMware and Citrix); and communications systems (Microsoft Exchange, VoIP). In addition, as the number of applications in the environment grows, IT needs to be able to scale out infrastructure seamlessly and quickly. Any time maintenance is done on a SAN, the storage needs to go offline, leading to a disruption. Another issue, especially in smaller environments such as remote sites, is the reliability and complexity of SANs. When remote or regional locations (retail shops, bank branches, manufacturing plants, call centres, distribution centres, surgeries, etc.) have applications on-site, IT needs to address issues with availability and management of the infrastructure. In the simplest case, an office has two servers to ensure high availability at the compute layer.

However, the servers are connected to a SAN (typically a low-end storage array and network connections), which itself is a single point of failure. If the SAN goes offline for any reason, it doesn't matter that there are two servers; the applications suffer an outage, which disrupts the business. Usually there are no IT staff on-site, so simplicity of management and reduced complexity are very important. Because of the challenges of using SANs in a virtual environment, organisations are now looking for new options. Hyper-converged infrastructure is a solution that seems well suited to addressing these issues.

WHY HYPER-CONVERGED?
To provide consistently high performance, IT can create application-specific clusters. By running the same type of application on a cluster (e.g. databases), IT can manage performance and identify and resolve bottlenecks more effectively. In addition, to avoid the performance limitations of a SAN, hyper-converged storage uses the Direct-Attached Storage (DAS) inside the servers as shared storage, moving data closer to the applications. This architecture delivers better I/O performance and faster response times, with less complexity and lower cost.

RELIABLE APPLICATION PERFORMANCE
A hospital recently used DataCore Software's Virtual SAN software to create a hyper-converged system and achieve better, more consistent application performance. The hospital had 12 physical servers running its PBX system. The organisation wanted to virtualise this application (into 12 VMs), but it was essential to provide the same level of reliable performance as the physical servers, since voice communication is vital in a hospital environment. The hospital knew it wanted a dedicated cluster for the virtualised PBX application. However, its IT staff were not satisfied with the available options, such as VMware Virtual SAN, which required a minimum of three physical servers (and, as they later learned, four servers were actually recommended).

The consensus was that using three servers to run 12 VMs was wasteful and unnecessarily expensive. Instead, the hospital chose DataCore's Virtual SAN software. This solution required only two servers for failover, which reduced costs by 33% from the outset. In addition, DataCore Virtual SAN uses adaptive RAM caching to accelerate I/O. RAM is generally 10x faster than Flash storage, so the performance of the virtualised PBX was "through the roof". Furthermore, the RAM caching meant that Flash storage was optional, reducing costs for the hospital even further.
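To make the role of the RAM cache concrete, the sketch below shows a read-through cache sitting in front of a slower block device: reads that hit RAM are served immediately, while misses fall back to the disk or Flash tier and populate the cache. This is only a minimal illustration of the general technique, not DataCore's implementation; the class names and the LRU eviction policy are assumptions made for the example.

# Minimal sketch of read-through RAM caching in front of a slower block tier.
# Illustrative only; not DataCore's implementation.
from collections import OrderedDict

class DiskBackend:
    """Stand-in for the slower tier; a real system would issue block I/O here."""
    def read_block(self, lba):
        return b"\x00" * 4096

class RamReadCache:
    def __init__(self, backend, capacity_blocks=1024):
        self.backend = backend            # slower tier (disk/Flash)
        self.capacity = capacity_blocks   # how many blocks to keep in RAM
        self.cache = OrderedDict()        # LRU: least-recently-used block evicted first

    def read_block(self, lba):
        if lba in self.cache:             # RAM hit: served without touching the disk
            self.cache.move_to_end(lba)
            return self.cache[lba]
        data = self.backend.read_block(lba)   # miss: fall back to the slower tier
        self.cache[lba] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)    # evict the coldest block
        return data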

The last consideration was the ability to scale. With a converged architecture, compute and memory scale with storage capacity. If more storage capacity is needed but additional compute and memory are not, the options are less than desirable: IT can either swap the drives in the servers for higher-capacity ones or add another server. However, DataCore Virtual SAN, with its Integrated Storage Architecture, can use a central SAN to complement the direct-attached storage inside the servers. This means that additional storage capacity is made available from the central SAN and data resides on the tier that best matches its performance requirements. For example, "hot" data remains close to the server tier and "cold" data remains on the SAN. This option gave the hospital the ability to optimise application performance and the flexibility to scale storage and compute/memory independently as needed.
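As a rough illustration of the hot/cold placement described above, the following sketch keeps recently accessed blocks on the server-local tier and demotes blocks that have gone untouched to the central SAN. The one-hour threshold and the function names are hypothetical; this is not DataCore's actual tiering algorithm.

# Illustrative auto-tiering policy: recently accessed ("hot") blocks stay on the
# server-local tier; blocks untouched for a while ("cold") are demoted to the SAN.
import time

HOT_WINDOW_SECONDS = 3600   # assumption: anything touched in the last hour is "hot"

def choose_tier(last_access_ts, now=None):
    now = now or time.time()
    return "server-local DAS" if (now - last_access_ts) <= HOT_WINDOW_SECONDS else "central SAN"

def rebalance(volume_metadata):
    """volume_metadata: {block_id: last_access_timestamp} -> placement plan."""
    return {block: choose_tier(ts) for block, ts in volume_metadata.items()}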

REGIONAL SUPPORT
For regional sites that need to run a mixture of workloads on a highly available infrastructure, the logical solution is to turn the local storage in the servers into redundant shared storage, thereby increasing availability. In addition, reducing the amount of hardware needed for availability reduces the physical footprint of the infrastructure (which may be limited in a remote branch) as well as the costs. Lastly, by combining compute, network and storage into one infrastructure, the complexity of managing separate pieces is removed.
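The sketch below illustrates, in simplified form, how local storage in two servers can be turned into redundant shared storage: a write is acknowledged to the application only after both nodes have it on their own disks, so either server can continue serving the data if the other fails. This is a conceptual example only; a real product also handles write ordering, partial failures and resynchronisation.

# Sketch of synchronous mirroring across two servers' direct-attached storage.
# Simplified illustration; not a production design.

class Node:
    def __init__(self, name):
        self.name = name
        self.blocks = {}          # stands in for the node's local disks

    def write_block(self, lba, data):
        self.blocks[lba] = data
        return True               # pretend the local write completed

def mirrored_write(lba, data, node_a, node_b):
    ok_a = node_a.write_block(lba, data)
    ok_b = node_b.write_block(lba, data)
    if not (ok_a and ok_b):
        raise IOError("write not acknowledged by both nodes; volume degraded")
    return True                   # only now acknowledged to the application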

SDS BRINGS IT ALL TOGETHER
There is a downside to hyper-converged storage. Each deployment becomes a separate storage system to manage and maintain. To ensure that yet another, separate data island isn't created with hyper-converged infrastructure, it needs to be integrated into the overall storage infrastructure and management. This is where DataCore's Software-defined Storage platform comes in.

By augmenting hyper-converged infrastructure with the capacity of existing SANs - and the investments already made in them - DataCore can scale storage capacity and performance easily and efficiently. More importantly, the DataCore SDS platform unifies storage systems from different vendors and provides one set of comprehensive storage services across the entire storage infrastructure - under a single pane of management - so it is easy to administer the storage infrastructure and unify separate data islands.
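As a conceptual sketch of what "one set of storage services across the entire storage infrastructure" can look like, the example below pools arrays from different vendors behind a single provisioning call. All class and method names are hypothetical; they illustrate the single-pane-of-management idea rather than DataCore's API.

# Conceptual sketch: heterogeneous arrays presented as one pool with a common
# service (here, just "create volume"). Names are hypothetical.

class ArrayBackend:
    def __init__(self, vendor, free_gb):
        self.vendor = vendor
        self.free_gb = free_gb

    def create_volume(self, size_gb):
        self.free_gb -= size_gb
        return f"{self.vendor}-vol-{size_gb}GB"

class UnifiedPool:
    def __init__(self, backends):
        self.backends = backends   # arrays from different vendors, plus server DAS

    def create_volume(self, size_gb):
        # one management point: place the volume on the backend with most free space
        target = max(self.backends, key=lambda b: b.free_gb)
        if target.free_gb < size_gb:
            raise RuntimeError("no backend has enough free capacity")
        return target.create_volume(size_gb)

pool = UnifiedPool([ArrayBackend("vendorA", 500), ArrayBackend("vendorB", 2000)])
print(pool.create_volume(100))   # provisioned from whichever array has the most space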
More info: www.datacore.com

"To provide consistent high-performance, IT can create application-specific clusters. By running the same type of application on the cluster (e.g. databases), IT is able to manage performance and identify/resolve bottlenecks more effectively. In addition, to avoid the performance limitations of a SAN, hyper-converged storage utilises Direct-Attached Storage (DAS) within servers as shared storage, moving data closer to the applications. This architecture provides better I/O performance closer to the application (therefore creating better response times), resulting in less complexity and lower cost."


