If you would like to read the other parts in this article series, please go to:
• Hyper-V optimization tips (Part 1): Disk caching
• Hyper-V optimization tips (Part 2): Storage bottlenecks
• Hyper-V optimization tips (Part 3): Storage queue depth
• Hyper-V optimization tips (Part 4): Clustered SQL Server workloads
• Hyper-V optimization tips (Part 5): Power management
In the earlier articles of this series we’ve looked at various aspects of optimizing Hyper-V performance including disk caching settings on Hyper-V hosts and virtual machines, storage bottlenecks on host clusters, storage queue depth, clustered SQL Server workloads, and power management settings. In this article we’re going to examine some problems relating to the monitoring of network performance for Hyper-V hosts and host clusters in enterprise environments.
Review of basic concepts
Before we dig deeper into this topic, we’ll start off by reviewing a few basic networking concepts that are typically relevant when considering Hyper-V hosts deployed in enterprise environments. These concepts include Quality of Service (QoS), Data Center Bridging (DCB), Offloaded Data Transfer (ODX), Windows NIC Teaming, and Converged Network Adapters (CNAs). For more information about some of these topics, you can refer to two of my free Microsoft Press ebooks, Introducing Windows Server 2012 and Introducing Windows Server 2012 R2, both of which are available for download in PDF format. And for hands-on learning, you might want to get my book Training Guide: Installing and Configuring Windows Server 2012 R2 (MCSA), which is available for purchase from the Microsoft Press Store.
Quality of Service (QoS) – This refers generally to any technology used to manage network traffic in ways that meet SLAs and/or enhance the user experience in a cost-effective manner. By using QoS to prioritize different types of network traffic, you can ensure that mission-critical applications and services are delivered according to their SLAs and that user productivity isn’t degraded. Hyper-V in Windows Server 2012 lets you specify upper and lower bounds for the network bandwidth used by VMs. Windows Server 2012 R2 also adds Storage Quality of Service (Storage QoS), a new feature of file-based storage that is enabled at the VHDX layer and allows you to limit the maximum IOPS allowed for a virtual disk on a Hyper-V host. It also allows you to set triggers that send notifications when a specified minimum IOPS is not met for a virtual disk.
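To make this a bit more concrete, here is a minimal PowerShell sketch of both kinds of QoS. The VM name, controller location, and threshold values are placeholders for illustration only, and the minimum bandwidth weight setting assumes the virtual switch was created in Weight mode:

# Cap the VM's network adapter at 500 Mbps (the value is in bits per second)
Set-VMNetworkAdapter -VMName "SQLVM01" -MaximumBandwidth 500000000

# Give the same adapter a relative share of bandwidth
# (assumes a virtual switch created with -MinimumBandwidthMode Weight)
Set-VMNetworkAdapter -VMName "SQLVM01" -MinimumBandwidthWeight 50

# Storage QoS (Windows Server 2012 R2): cap a virtual disk at 1000 IOPS and
# flag when it falls below 200 IOPS (IOPS are measured in normalized 8 KB increments)
Set-VMHardDiskDrive -VMName "SQLVM01" -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 0 -MaximumIOPS 1000 -MinimumIOPS 200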
Data Center Bridging (DCB) – This is an IEEE standard that allows for hardware-based bandwidth allocation for specific types of network traffic, which makes DCB yet another QoS technology. DCB-capable network adapter hardware can be useful in cloud environments, where it enables storage, data management, and other kinds of traffic to all be carried on the same underlying physical network in a way that guarantees each type of traffic its fair share of bandwidth. Windows Server 2012 supports DCB provided that you have both DCB-capable Ethernet NICs and DCB-capable Ethernet switches on your network.
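As a rough sketch of what configuring DCB on a Windows Server 2012 host can look like, the commands below tag SMB traffic with an 802.1p priority, reserve a share of bandwidth for it, and apply the settings to a DCB-capable adapter. The priority value, bandwidth percentage, and adapter name are assumptions for illustration only:

# Install the Data Center Bridging feature
Install-WindowsFeature Data-Center-Bridging

# Tag SMB traffic with 802.1p priority 3
New-NetQosPolicy "SMB" -SMB -PriorityValue8021Action 3

# Reserve 40 percent of bandwidth for priority 3 using ETS and turn on priority flow control for it
New-NetQosTrafficClass "SMB" -Priority 3 -BandwidthPercentage 40 -Algorithm ETS
Enable-NetQosFlowControl -Priority 3

# Use the host's DCB settings rather than accepting them from the switch, then enable DCB on the adapter
Set-NetQosDcbxSetting -Willing $false
Enable-NetAdapterQos -Name "Ethernet 2"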
Offloaded Data Transfer (ODX) – This functionality in Windows Server 2012 enables ODX-capable storage arrays to bypass the host computer and directly transfer data within or between compatible storage devices. The result is to minimize latency, maximize array throughput, and reduce resource usage, such as CPU and network consumption on the host computer. For example, by using ODX-capable storage arrays accessed via iSCSI, Fibre Channel, or SMB 3.0 file shares, virtual machines stored in the array can be imported and exported much more rapidly than they could without ODX capability being present.
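If you want to confirm whether ODX is turned on for a host (it is enabled by default in Windows Server 2012), one way is to check the FilterSupportedFeaturesMode registry value. Note that this is only a host-side check and it assumes your storage array actually supports ODX:

# 0 = ODX enabled (the default), 1 = ODX disabled
Get-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem" -Name "FilterSupportedFeaturesMode"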
Windows NIC Teaming – This feature, also known as load balancing and failover (LBFO), enables multiple network interface cards (NICs) on a server to be grouped together into a team. The purpose is to help ensure availability by providing traffic failover in the event of a network component failure and to enable aggregation of network bandwidth across multiple NICs. Prior to Windows Server 2012, implementing NIC teaming required third-party solutions from independent hardware vendors (IHVs), but it’s now an in-box solution that works across different NIC hardware types and manufacturers. For a detailed examination of this feature, see my series of articles titled Windows NIC Teaming using PowerShell on WindowsNetworking.com.
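To illustrate how simple the in-box solution is, here is a minimal example of creating and checking a team with PowerShell. The team and NIC names are placeholders, and the Dynamic load-balancing algorithm requires Windows Server 2012 R2:

# Create a switch-independent team from two physical NICs
New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

# Verify the team and its member adapters
Get-NetLbfoTeam
Get-NetLbfoTeamMember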
Converged Network Adapters (CNAs) – This refers to networking hardware that combines Ethernet networking and Fibre Channel over Ethernet (FCoE) storage connectivity on a single adapter. The goal of this combination is to reduce the cost and space used for hardware, particularly in datacenter and cloud environments where blade servers are being used. For a good explanation of CNAs and their usage, see this blog post by Christian Edwards, which is part of a series of posts he did several years ago explaining various aspects of Hyper-V networking architectures.
Issues with monitoring Hyper-V networking
Monitoring your network is important in real-world environments because the network and/or network connectivity can become a bottleneck that negatively impacts how your virtualized workloads (applications and services) respond and perform. From talking with some of my colleagues who work in the field with customers that have Hyper-V hosts and host clusters deployed in enterprise environments, there are a number of different issues that may arise which can make it difficult to monitor Hyper-V networking performance.
For example, while Microsoft’s built-in Performance Monitor (perfmon.exe) is generally the go-to tool for capturing and analyzing network traffic to help you identify possible bottlenecks, there are certain common real-world situations where Perfmon can fall short. One of these scenarios is when you have a DCB converged fabric solution involving a Fibre Channel (FC) storage area network (SAN) that is being used for Hyper-V storage. If you are using third-party DCB-capable Ethernet network adapters and DCB-capable switches and have the Windows Server 2012 Data Center Bridging feature installed (PowerShell command: Install-WindowsFeature Data-Center-Bridging), then you should be aware that Perfmon has no awareness of DCB and is unable to monitor the network traffic of the different traffic classes involved in such scenarios.
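While Perfmon can’t break traffic down by DCB traffic class, you can at least confirm from PowerShell what DCB configuration the host and its adapters think they are running, which is usually a sensible first step when troubleshooting a converged fabric. The adapter name below is just a placeholder:

# List the traffic classes and priority flow control settings defined on the host
Get-NetQosTrafficClass
Get-NetQosFlowControl

# Show the ETS/PFC settings that a specific DCB-capable adapter reports as operational
Get-NetAdapterQos -Name "Ethernet 2" | Format-List *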
It’s also important to be aware that Windows NIC Teaming has no awareness or understanding of DCB on your host. As a result, if you create any NIC teams on your host that use DCB-enabled NICs as team members, the result may be poorer network performance than you anticipated. You also cannot configure the DCB feature of Windows Server on NICs that are used for Hyper-V, since enabling DCB doesn’t affect any NICs that are bound to the virtual switch on the host. The situation becomes even more complicated when CNA cards are being used, as we will see in the next article in this series.
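In the meantime, a quick way to see where the host’s DCB settings will and will not take effect is to list which physical NICs are bound to a virtual switch and which are members of a team, for example:

# Physical adapters bound to a Hyper-V virtual switch (enabling DCB won't affect these)
Get-VMSwitch | Select-Object Name, NetAdapterInterfaceDescription

# Physical adapters that are members of a NIC team
Get-NetLbfoTeamMember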
Got more questions about Hyper-V?
If you have any questions about Microsoft’s Hyper-V virtualization platform, the best place to ask them is the Hyper-V forum on Microsoft TechNet. If you don’t get the help you need there, you can try sending your question to us at [email protected] so we can publish it in the Ask Our Readers section of our weekly newsletter WServerNews, and we’ll see whether any of the almost 100,000 IT pro subscribers to our newsletter have any suggestions for you.