What’s New in Windows 8 for Hyper-V Based Cloud Computing (Part 5) – Hyper-V Storage Scenarios

If you would like to read the other parts in this article series, please go to:

Introduction

This article is the fifth in an 11-part series that provides a comprehensive look at the new features in Windows Server 8 and the Windows 8 client that support virtualization and cloud computing. In this article, you learn about new storage scenarios in Windows Server 8 that Hyper-V can leverage to provide highly scalable and highly available cloud environments.

Hyper-V Storage Scenarios

When building a cloud infrastructure, there are two main storage models to consider for deployment. In the first model, the cloud compute and storage resources are tightly bound: the Hyper-V hosts have directly connected storage devices such as an internal SATA array or an external SAS array. In the second model, compute and storage resources are loosely bound: they can be scaled independently, and the storage devices are connected through some type of network fabric (e.g., an iSCSI or fibre channel SAN) or, as is now possible with Windows Server 8, through a file server connection. The storage architecture that is required depends on many factors, including the size of the cloud deployment, its feature requirements (Live Migration, Storage Live Migration, and so on), performance requirements, service-level requirements, budget, and the skill set of the IT staff.

Traditional Enterprise Storage Deployment

For initial virtualization deployments, such as the server consolidation efforts that are mainstream today on Windows Server 2008 R2, Hyper-V hosts are often connected to new or existing SANs through iSCSI or through one or more fibre channel HBAs. The number of network adapters configured in the host for iSCSI SAN connections is based on the number of virtual machines and the bandwidth required to support each workload. These network adapters are dedicated to iSCSI traffic and are not shared with other host-based traffic. For single-host deployments, virtual machine files can be stored on individual volumes on the SAN.
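For reference, the kind of dedicated iSCSI connection described above can be scripted with the in-box iSCSI initiator cmdlets. The following is a minimal sketch rather than a production configuration; the portal address, initiator address, and target IQN are placeholders for values from your own SAN.

    # Start the Microsoft iSCSI Initiator service and set it to start automatically
    Set-Service -Name MSiSCSI -StartupType Automatic
    Start-Service -Name MSiSCSI

    # Register the SAN's target portal, binding to the NIC dedicated to iSCSI traffic
    # (the addresses below are placeholders)
    New-IscsiTargetPortal -TargetPortalAddress 192.168.50.10 -InitiatorPortalAddress 192.168.50.21

    # Connect to the target and make the connection persistent across reboots
    # (the IQN below is a placeholder)
    Connect-IscsiTarget -NodeAddress "iqn.1991-05.com.contoso:storage-target01" -IsPersistent $true

    # Confirm the session and list the disks surfaced over iSCSI
    Get-IscsiSession
    Get-Disk | Where-Object BusType -eq iSCSI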

If high availability and dynamic resource management are required, Hyper-V hosts are clustered with up to sixteen cluster nodes. For environments with demanding SLAs, multiple iSCSI network adapters or fibre channel HBAs (and redundant switching fabric components) are used to provide multiple storage paths. Virtual machines are stored on Cluster Shared Volumes (CSV), allowing all cluster nodes to have simultaneous access to the file system and simplifying the movement of workloads between cluster nodes with features such as Live Migration.
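The clustered variant can be sketched in a similar way. The example below assumes three hypothetical hosts (HVHost1 through HVHost3), a placeholder cluster name, IP address, and cluster disk name, and shared storage already presented to every node; MPIO claims the iSCSI devices so that the redundant storage paths are actually used.

    # Install the failover clustering and multipath I/O features on each host
    Install-WindowsFeature Failover-Clustering, Multipath-IO -IncludeManagementTools

    # Claim iSCSI devices for MPIO so that redundant storage paths are used
    Enable-MSDSMAutomaticClaim -BusType iSCSI

    # Form the cluster from the Hyper-V hosts (names and IP address are placeholders)
    New-Cluster -Name HVCluster1 -Node HVHost1, HVHost2, HVHost3 -StaticAddress 192.168.1.50

    # Add an available cluster disk to Cluster Shared Volumes so every node can
    # access the VM file system simultaneously ("Cluster Disk 1" is a placeholder)
    Add-ClusterSharedVolume -Name "Cluster Disk 1"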

Optimized Storage Deployment

With Windows Server 8, you can implement traditional storage deployments, but you can also leverage new configurations that inject a higher degree of availability, scalability, and performance as you build up your private cloud infrastructure. For example, with the emergence of 10 Gb Ethernet NICs, you can address storage access bottlenecks (and even lower storage infrastructure complexity and cost) by considering a move from fibre channel to iSCSI over standard Ethernet. In this configuration, you can either dedicate the 10 Gb network adapter to storage access for demanding workloads such as SQL Server, Exchange Server, or other LOB applications, or, if there is available bandwidth, consolidate the storage traffic with other traffic streams originating from Live Migration, Failover Clustering, and host management functions. Consolidating these host traffic streams from multiple dedicated 1 Gb NICs onto 10 Gb NICs is referred to as a converged network, a term Microsoft uses in the context of achieving high performance and continuous availability in the private cloud. Standardizing on Ethernet for data transport also reduces the cost of deployment in small and medium environments, where fibre channel may be cost prohibitive or where the IT skill sets needed to support the complexity of the deployment are lacking.
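One way to approach such a converged configuration is to bind a Hyper-V virtual switch to the 10 Gb adapter and carve out host virtual NICs with weight-based minimum bandwidth, as sketched below. The adapter name, virtual NIC names, and weights are illustrative only, and this sketch uses the software (weight-based) bandwidth mode rather than the hardware-level DCB approach covered next.

    # Create an external virtual switch on the 10 Gb adapter, using weight-based
    # minimum bandwidth so the different traffic classes can share the link
    New-VMSwitch -Name ConvergedSwitch -NetAdapterName "10GbE-1" -MinimumBandwidthMode Weight -AllowManagementOS $false

    # Add host (management OS) virtual NICs for each traffic stream
    Add-VMNetworkAdapter -ManagementOS -Name "Management"    -SwitchName ConvergedSwitch
    Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName ConvergedSwitch
    Add-VMNetworkAdapter -ManagementOS -Name "Cluster"       -SwitchName ConvergedSwitch

    # Reserve a relative share of the 10 Gb link for each stream; unused
    # capacity remains available to the other streams
    Set-VMNetworkAdapter -ManagementOS -Name "Management"    -MinimumBandwidthWeight 10
    Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 30
    Set-VMNetworkAdapter -ManagementOS -Name "Cluster"       -MinimumBandwidthWeight 10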

Along with leveraging standard Ethernet technology with 10 Gb NICs to transport these data streams, you can also make use of advanced networking features in Windows Server 8 to implement bandwidth control and to enhance storage access performance. For example, because Windows Server 8 supports Data Center Bridging (DCB), a form of hardware-level quality of service (QoS), you can implement bandwidth reservations for your storage access and other network traffic. DCB optimizes use of the network bandwidth by making reserved capacity available to other traffic when it is not in use. In tandem with DCB, you can use Receive Side Scaling (RSS) to enhance iSCSI-based storage access performance. Without RSS, all received network packets are processed by a single CPU (identified as CPU 0), regardless of the number of CPUs present in the physical host. If CPU 0 reaches 100% utilization, it cannot process any more data and therefore becomes a bottleneck under high network load. With RSS, higher levels of performance are attained by allowing the processing of received network packets to scale with the number of processors available in the physical host: a hash computed over the source and destination IP addresses and TCP ports spreads the processing of received packets across the available CPUs. Isolation of storage network traffic from other data streams is also possible using VLANs.
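A minimal sketch of both techniques follows, assuming a DCB- and RSS-capable adapter whose name ("10GbE-1") is a placeholder; the 40 percent reservation and the 802.1p priority value are illustrative, and DCB also requires matching configuration on the physical switches.

    # Install the Data Center Bridging feature
    Install-WindowsFeature Data-Center-Bridging

    # Tag iSCSI traffic (TCP port 3260) with 802.1p priority 4
    New-NetQosPolicy -Name "iSCSI" -iSCSI -PriorityValue8021Action 4

    # Reserve 40% of the link for that priority using Enhanced Transmission Selection;
    # the reservation is a minimum, so unused capacity stays available to other traffic
    New-NetQosTrafficClass -Name "iSCSI" -Priority 4 -BandwidthPercentage 40 -Algorithm ETS

    # Apply the DCB settings to the 10 Gb adapter ("10GbE-1" is a placeholder name)
    Enable-NetAdapterQos -Name "10GbE-1"

    # Enable Receive Side Scaling so inbound packet processing is spread across CPUs
    Enable-NetAdapterRss -Name "10GbE-1"
    Get-NetAdapterRss -Name "10GbE-1"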

File Server Storage Deployment

With Windows Server 8, Microsoft provides the ability to abstract storage systems from virtualization hosts by connecting the hosts to file servers that manage the connections to the storage devices. This configuration offers the benefits of storage access across a standard Ethernet network and supports the convergence of host storage traffic with other Hyper-V host network traffic. One or more 10 Gb NICs can be used to connect the Hyper-V hosts to the Windows file servers, and DCB and RSS can be used to moderate bandwidth usage and to scale network packet receive processing, respectively. In addition, because Windows Server 8 file servers support a wide range of storage devices (from JBODs managed through Storage Spaces to traditional SANs), they allow more flexible and cost-targeted storage configurations to be deployed while achieving a high level of scalability and performance. File server scalability and performance are delivered by the new SMB 2.2 Multichannel capability, which supports multiple TCP or Remote Direct Memory Access (RDMA) connections over one or more physical network interfaces. SMB 2.2 offers resiliency by allowing client and server components to recover fully from network connection faults and server failures. RDMA provides direct transfer of network buffers between two machines across the network, supporting high-speed data transfer with low latency and minimal processor utilization. As is the case for DCB and RSS, the NICs must be RDMA-capable. Windows Server 8 supports standards-based RDMA networks such as iWARP (RDMA over TCP/IP) and RoCE (RDMA over Converged Ethernet), as well as InfiniBand-based transports.
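As a sketch of what this looks like from the Hyper-V host, the example below places a new virtual machine's files on a hypothetical file server share (\\FS1\VMStore) and then uses the SMB and network adapter cmdlets to confirm that multiple connections, and RDMA where available, are being used.

    # Create a VHD and a virtual machine whose files live on the file server share
    # (\\FS1\VMStore and the VM name are placeholders)
    New-VHD -Path \\FS1\VMStore\Web01\Web01.vhdx -SizeBytes 60GB -Dynamic
    New-VM -Name Web01 -MemoryStartupBytes 2GB -Path \\FS1\VMStore -VHDPath \\FS1\VMStore\Web01\Web01.vhdx

    # Verify that SMB Multichannel has established connections across the
    # available interfaces, and whether RDMA-capable NICs are in use
    Get-SmbMultichannelConnection
    Get-SmbClientNetworkInterface
    Get-NetAdapterRdma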

Abstracting storage connectivity through file server connections has the added benefit of enabling the live migration of virtual machines across Hyper-V clusters. Connectivity from clustered Hyper-V hosts to clustered file servers is also supported, enabling configurations that are continuously available, highly scalable, and high performance.
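On the file server cluster side, one way to expose continuously available storage to the Hyper-V hosts is through a clustered scale-out file server role and a share created on a Cluster Shared Volume, as sketched below; the role name, share path, and computer accounts are placeholders.

    # On the file server cluster, create a scale-out file server role
    Add-ClusterScaleOutFileServerRole -Name SOFS1

    # Create a continuously available share on a Cluster Shared Volume and grant
    # the Hyper-V hosts' computer accounts full control (names are placeholders)
    New-Item -Path C:\ClusterStorage\Volume1\VMStore -ItemType Directory
    New-SmbShare -Name VMStore -Path C:\ClusterStorage\Volume1\VMStore `
        -FullAccess CONTOSO\HVHost1$, CONTOSO\HVHost2$ -ContinuouslyAvailable $true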

It is also possible to use Windows Server 8 NIC teaming to implement load-balanced or failover connections between Hyper-V hosts and file servers. However, NIC teaming and RDMA support are mutually exclusive, so the choice of configuration must be made on the basis of availability and performance requirements.
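If teaming is the right trade-off, a load-balancing and failover team can be created with the in-box NIC teaming cmdlets, as in the sketch below; the team and adapter names are placeholders, and adapters whose RDMA capability you intend to use should be left out of the team for the reason just noted.

    # Team two adapters for load balancing and failover between the Hyper-V host
    # and the file server (adapter and team names are placeholders).
    # Do not team adapters whose RDMA capability you intend to use, since RDMA
    # is not available through a teamed interface.
    New-NetLbfoTeam -Name "StorageTeam" -TeamMembers "10GbE-1", "10GbE-2" `
        -TeamingMode SwitchIndependent -LoadBalancingAlgorithm TransportPorts

    # Review the team and member state
    Get-NetLbfoTeam
    Get-NetLbfoTeamMember -Team "StorageTeam"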

Conclusion

In this article, you learned about specific storage scenarios that support continuously available, scalable, and high performance cloud deployments. These storage scenarios are enabled through a combination of new technology, such as high performance 10 Gb NICs, along with support in Windows Server 8 and Hyper-V for DCB, RSS, RDMA, and other performance-optimizing features. In Part 6 of this series, you will learn about new high-availability (or continuous availability) features offered in Windows Server 8 and supported by Hyper-V.
