What’s New in Windows 8 for Hyper-V Based Cloud Computing (Part 9) – Hyper-V Networking Scenarios

Windows Server 2012 Network Scenarios

In Windows Server 2012, Microsoft enhances networking features to provide better reliability, availability, scalability, security, and extensibility for customers in private and hybrid cloud scenarios. Understanding how to leverage these features may be a more complex endeavor than in past versions of the operating system, but the features also make it possible to adapt more easily to changing requirements in private or hybrid cloud infrastructures while tuning network performance, scalability, and security.

NIC Technologies

Server network interfaces for the data center range in performance and capability. Most servers that you purchase today arrive with one or two 1GbE (Gigabit Ethernet) interfaces integrated into the server hardware. You can expand the number of network interfaces by adding additional 1GbE ports or by leveraging faster 10GbE NICs. These come in single-, dual-, or quad-port cards and provide great expandability, but the more ports consolidated on a single card, the greater the impact to services if that card fails.

Both 1GbE and 10GbE offer similar technologies to increase performance and reduce server processor utilization (e.g., RDMA, QoS, and offload technologies). A 1GbE network card is very inexpensive at both the NIC and the switch port level, thanks to its wide availability and maturity in the marketplace. A typical 1GbE managed switch with 24 ports costs around $500 (~$21/port). On the other hand, 10GbE technology provides a large increase in throughput, but it does so at a higher price for both the NIC and the switch port. A typical 10GbE managed switch with 24 ports costs about $10,000 (~$417/port) today.

Another, lesser-known network technology, InfiniBand, provides 32 Gb, 40 Gb, and 56 Gb interfaces. InfiniBand provides very high performance and low-latency connections, and it can be used for both LAN and SAN traffic. It supports similar technologies, like RDMA and QoS, but it does not support teaming, and its network management is different from that of Ethernet. Even though InfiniBand performs far better than 10GbE, it typically comes at a lower cost. A 32 Gb InfiniBand managed switch with 36 ports costs around $8,000 (~$222/port).

Typical Implementation

Designing networking solutions for Hyper-V hosts should be based on some basic network design approaches:

  1. Ensure that the host management interface is dedicated
  2. Do not allow network management traffic on the public network interfaces for the virtual machines
  3. Add the appropriate number of NICs to handle virtual machine network workload
  4. Isolate cluster traffic: heartbeat, Live Migration, iSCSI
  5. Leverage vendor teaming solutions to provide NIC reliability

These guidelines rely on the physical NIC as the basis for traffic isolation. This can result in design issues and can also restrict potential server hardware choices, based on the number of network interfaces required.

For example, a clustered Hyper-V host using iSCSI shared storage could require a minimum of six 1GbE NICs (management, iSCSI, heartbeat, Live Migration, and two for virtual machine network traffic). In addition, if SAN-based storage is used, two host bus adapters (HBAs) are typically installed to mitigate HBA failure. The number of NICs and HBAs limits the types and sizes of servers that can be used, based on the number of PCI slots required.
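As a rough illustration of this per-NIC layout, here is a minimal PowerShell sketch. The adapter names ("Ethernet" through "Ethernet 6") and switch names (VMSwitch1, VMSwitch2) are hypothetical placeholders; renaming each NIC after its role simply makes the design easier to audit and script against.

  # Rename physical adapters after their roles (names are examples only).
  Rename-NetAdapter -Name "Ethernet"   -NewName "MGMT"
  Rename-NetAdapter -Name "Ethernet 2" -NewName "iSCSI"
  Rename-NetAdapter -Name "Ethernet 3" -NewName "Heartbeat"
  Rename-NetAdapter -Name "Ethernet 4" -NewName "LiveMigration"
  Rename-NetAdapter -Name "Ethernet 5" -NewName "VM-Traffic-1"
  Rename-NetAdapter -Name "Ethernet 6" -NewName "VM-Traffic-2"

  # Bind only the virtual machine NICs to external virtual switches;
  # the remaining adapters stay dedicated to host traffic.
  New-VMSwitch -Name "VMSwitch1" -NetAdapterName "VM-Traffic-1" -AllowManagementOS $false
  New-VMSwitch -Name "VMSwitch2" -NetAdapterName "VM-Traffic-2" -AllowManagementOS $false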

Convergence: Management and Virtual Machine Traffic on a Single NIC

For situations where you have a limited number of PCI slots or where you want to leverage faster NICs, you can converge traffic from multiple NICs down to a single NIC, combined with traffic isolation technologies, to meet the recommended design approach. In this configuration, you converge the iSCSI, management, heartbeat, Live Migration, and virtual machine traffic from separate 1GbE NICs onto a single 10GbE or InfiniBand network card.

This approach may require leveraging VLANs for traffic isolation and QoS to ensure that bandwidth is balanced across the converged network.
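As a minimal PowerShell sketch of this fully converged layout, assume a single 10GbE adapter named 10GbE-1; the VLAN IDs and bandwidth weights below are examples only and should be adapted to your environment.

  # Create one external virtual switch on the single 10GbE NIC, using
  # relative bandwidth weights so QoS can balance the converged traffic.
  New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "10GbE-1" `
      -MinimumBandwidthMode Weight -AllowManagementOS $false

  # Add one host virtual NIC per traffic class on the same switch.
  foreach ($vnic in "Management","iSCSI","Heartbeat","LiveMigration") {
      Add-VMNetworkAdapter -ManagementOS -Name $vnic -SwitchName "ConvergedSwitch"
  }

  # Isolate each traffic class on its own VLAN (IDs are examples).
  Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Management"    -Access -VlanId 10
  Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "iSCSI"         -Access -VlanId 20
  Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Heartbeat"     -Access -VlanId 30
  Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "LiveMigration" -Access -VlanId 40

  # Guarantee each class a minimum share of the 10GbE bandwidth.
  Set-VMNetworkAdapter -ManagementOS -Name "Management"    -MinimumBandwidthWeight 10
  Set-VMNetworkAdapter -ManagementOS -Name "iSCSI"         -MinimumBandwidthWeight 30
  Set-VMNetworkAdapter -ManagementOS -Name "Heartbeat"     -MinimumBandwidthWeight 10
  Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 20

Because the weights are relative, the unreserved share remains available to virtual machine traffic connected to the same switch.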

Pros

  • Reduces the number of switch ports and NICs required for your implementation
  • Maintains traffic isolation
  • Provides QoS for bandwidth balancing

Cons

  • Single point of failure with only a single converged NIC for all traffic

Table 1: Pros and cons of converging all traffic onto a single NIC

The major drawback of this approach is that a NIC failure can be disastrous. In short, converging traffic onto a single NIC can be considered for hosting non-critical services, but it is a bad idea for a data center, where a loss of network connectivity to one or more critical services can result in large financial losses.

Convergence: Management and VM Traffic Isolation using 2 NICs

Taking a step back from the complete convergence approach, you can also implement a convergence solution that leverages two network adapters to separate host and virtual machine traffic. In this configuration, you converge the iSCSI, management, heartbeat, and Live Migration host traffic, moving it from separate 1GbE NICs onto a single 10GbE NIC. You do the same for the virtual machine traffic, converging it onto a second 10GbE NIC.

In this approach, you may still need to leverage VLANs for traffic isolation and QoS to ensure that network throughput is guaranteed across the converged networks.
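A minimal PowerShell sketch of the two-NIC variant might look like the following, assuming two 10GbE adapters with the hypothetical names 10GbE-Host and 10GbE-VM.

  # Host traffic classes converge on one switch; virtual machines use the other.
  New-VMSwitch -Name "HostSwitch" -NetAdapterName "10GbE-Host" `
      -MinimumBandwidthMode Weight -AllowManagementOS $false
  New-VMSwitch -Name "VMSwitch" -NetAdapterName "10GbE-VM" -AllowManagementOS $false

  # Host virtual NICs attach only to HostSwitch; VLANs and bandwidth
  # weights are then applied as in the single-NIC example.
  foreach ($vnic in "Management","iSCSI","Heartbeat","LiveMigration") {
      Add-VMNetworkAdapter -ManagementOS -Name $vnic -SwitchName "HostSwitch"
  }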

Pros

  • Reduces the number of switch ports and NICs required for your implementation
  • Maintains traffic isolation
  • Reduces the impact of a NIC failure
  • Provides QoS for bandwidth balancing

Cons

  • Still has single points of failure: one converged NIC for all host traffic, and another for virtual machine traffic

Table 2: Pros and cons of separating host and virtual machine traffic on two converged NICs

This scenario, while shielding virtual machines from the effect of a host-traffic NIC failure, should still be considered only for hosting non-critical services that can sustain a service interruption, and again is not recommended for hosting critical services.

Convergence: Increasing Reliability

In order to make the two-NIC convergence scenario more viable for critical services, NIC reliability must be introduced. This can be accomplished by teaming two 10GbE NICs using Load Balancing and Failover (LBFO). Teaming combines the bandwidth of multiple NICs into a single effective network trunk and mitigates against NIC failure. This implementation may still require the use of VLANs for traffic isolation and QoS for traffic balancing.
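A minimal PowerShell sketch of this teamed configuration, assuming two 10GbE adapters with the hypothetical names 10GbE-1 and 10GbE-2:

  # Team the two physical NICs with LBFO so the loss of either adapter
  # no longer interrupts the converged traffic.
  New-NetLbfoTeam -Name "ConvergedTeam" -TeamMembers "10GbE-1","10GbE-2" `
      -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort -Confirm:$false

  # Bind the converged virtual switch to the team interface instead of a
  # single physical adapter; VLANs and bandwidth weights apply as before.
  New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "ConvergedTeam" `
      -MinimumBandwidthMode Weight -AllowManagementOS $false

The HyperVPort load-balancing algorithm distributes traffic across team members per virtual switch port, which suits hosts carrying many virtual NICs and virtual machines.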

Pros

  • Reduces the number of switch ports and NICs required for your implementation
  • Maintains traffic isolation
  • Eliminates the NIC as a single point of failure
  • Provides QoS for bandwidth balancing

Cons

  • Eliminates the use of InfiniBand since NIC teaming is not a supported configuration with that technology

Table 3: Pros and cons of teaming converged NICs with LBFO

Convergence: Increasing Efficiency

After improving reliability, you may want to increase the efficiency of a converged network with two 10GbE NICs in a teamed configuration. On the host side, you can leverage Receive Side Scaling (RSS) to increase efficiency and scale. You can also leverage Receive Segment Coalescing (RSC) to reduce the number of headers to process and the I/O overhead of processing packets. On the virtual machine side, you can leverage Virtual Machine Queue (VMQ) to increase efficiency and scale.

RSS, RSC, and VMQ are enabled by default when using Windows Server 2012 10GbE drivers. These technologies can also be managed and configured using PowerShell or WMI, providing varied management options. You may be familiar with RSC under the names Large Receive Offload (LRO) or Generic Receive Offload (GRO), as it is called in other operating systems and technologies.
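The following PowerShell sketch shows how these offloads can be inspected and toggled, assuming a physical adapter with the hypothetical name 10GbE-1; since all three are on by default, the Get-* cmdlets are mainly useful for verification and tuning.

  # Inspect the current offload settings.
  Get-NetAdapterRss -Name "10GbE-1"    # RSS state and processor range
  Get-NetAdapterRsc -Name "10GbE-1"    # RSC state for IPv4/IPv6
  Get-NetAdapterVmq -Name "10GbE-1"    # VMQ state and queue configuration

  # Re-enable any offload that has been switched off (or use the
  # corresponding Disable-* cmdlet for troubleshooting).
  Enable-NetAdapterRss -Name "10GbE-1"
  Enable-NetAdapterRsc -Name "10GbE-1"
  Enable-NetAdapterVmq -Name "10GbE-1"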

Pros

  • Reduces the overhead of packet decoding and the number of packets to process
  • Provides queuing performance improvements for virtual machines
  • Works with NIC teaming
  • Enabled by default

Cons

  • Can be tricky to configure correctly if not using the default settings

Table 4: Pros and cons of the receive-side offloads (RSS, RSC, VMQ)

Convergence: Increasing Performance with SR-IOV

In some situations, even if you leverage the offloads that increase efficiency, processor utilization from network operations will start to affect the performance of the Hyper-V hosts. In these situations, leveraging technologies like Single Root I/O Virtualization (SR-IOV) can offload processing from the host CPU, reduce latency, and increase network throughput. SR-IOV frees up processor cycles and allows that processing power to be used for additional application workloads. It accomplishes this by bypassing the host networking stack and allowing a virtual machine to interface directly with the network adapter.
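A minimal PowerShell sketch of enabling SR-IOV, assuming an SR-IOV capable adapter named 10GbE-SRIOV and a virtual machine named VM01 (both hypothetical names). Note that IOV support must be enabled when the virtual switch is created; it cannot be turned on afterwards.

  # Create an IOV-enabled external virtual switch on the SR-IOV capable NIC.
  New-VMSwitch -Name "SRIOVSwitch" -NetAdapterName "10GbE-SRIOV" -EnableIov $true

  # Connect the VM and give its network adapter a non-zero IOV weight so it
  # is assigned a virtual function on the physical NIC.
  Connect-VMNetworkAdapter -VMName "VM01" -SwitchName "SRIOVSwitch"
  Set-VMNetworkAdapter -VMName "VM01" -IovWeight 100

  # Verify that virtual functions are being allocated.
  Get-NetAdapterSriov -Name "10GbE-SRIOV"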

Pros

  • Reduces the CPU overhead of processing network traffic
  • Reduces latency
  • Increases network performance
  • Can be teamed at the virtual machine level

Cons

  • Requires NICs with SR-IOV support
  • Cannot be teamed at the host level

Table 5: Pros and cons of SR-IOV

Planning ahead when you purchase network adapters and making sure they support SR-IOV will prevent you from having to replace them in the future if you need this level of performance.

Conclusion

With new and enhanced network features and supported hardware, Windows Server 2012 makes it possible to build reliable, scalable, and high-performance cloud environments, offering a variety of options that allow the infrastructure to grow in the ways and at the pace required while folding in support for emerging technologies that are poised to become mainstream. In Parts 10 and 11 of this series, we will round out this wide-ranging overview of Windows Server 2012 by reviewing cloud disaster recovery technologies and scenarios.
