Windows NIC Teaming using PowerShell (Part 3)

If you would like to read the other parts in this article series please go to:

Introduction

Windows NIC Teaming, the built-in load-balancing and failover (LBFO) solution included in Windows Server 2012 and Windows Server 2012 R2, allows you to aggregate network bandwidth across multiple network adapters to increase throughput. It can also help ensure application availability by providing traffic failover support in the event of a network component failure. What you might not have realized, however, is that Windows NIC Teaming can provide benefits in both physical and virtual environments. In other words, you can create NIC teams both from physical network adapters in a physical server and from virtual network adapters in a virtual machine.

We’ve already examined some of the scenarios where you might implement Windows NIC Teaming using different teaming modes and load-balancing modes. These modes can be configured both from the UI and by using Windows PowerShell, and it’s important to choose the modes that meet the needs of your particular scenario. But before you jump in and start creating NIC teams, either on your physical servers or on virtual machines running on Hyper-V hosts, make sure you understand the various considerations and limitations involved in implementing Windows NIC Teaming in physical or virtual environments.
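
As a quick preview of the PowerShell side (which we’ll cover in detail later in this series), the following sketch shows how both the teaming mode and the load-balancing mode can be specified when a team is created. The team and adapter names here are placeholders for illustration only:

  # Create a switch-independent team that uses the Address Hash (TransportPorts) algorithm
  New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm TransportPorts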

Considerations for physical servers

If you simply create a NIC team from a couple of on-board Gigabit network adapters in one of your physical servers, you probably won’t have any problems. Windows NIC Teaming will work just as you expect it to.
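
For example, teaming two on-board adapters can be as simple as the sketch below. The adapter names shown are placeholders; run Get-NetAdapter first to find the actual names on your server:

  # List the physical adapters, then team two of them using the default settings
  Get-NetAdapter
  New-NetLbfoTeam -Name "LAN-Team" -TeamMembers "Ethernet","Ethernet 2"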

But if you’ve got a high-end server that you’ve bought a couple of expensive network adapter cards for, and you’re hooking the server up to your 10 GbE backbone network, and you want to ensure the best performance possible while taking advantage of advanced capabilities like virtual LAN (VLAN) isolation and Single-Root I/O Virtualization (SR-IOV) and so on, then Windows NIC Teaming can be tricky to set up properly.

Supported network adapter capabilities

To help you navigate what could be a minefield (after all, if your server suddenly lost all network connectivity, your job might be on the line), let’s start by summarizing some of the advanced capabilities found in more expensive network adapter hardware that are fully compatible with Windows NIC Teaming:

  • Data Center Bridging (DCB) – An IEEE standard that allows for hardware-based bandwidth allocation for specific types of network traffic. DCB-capable network adapters enable storage, data, management, and other kinds of traffic to be carried on the same underlying physical network in a way that guarantees each type of traffic its fair share of bandwidth.
  • IPsec Task Offload – Allows processing of IPsec traffic to be offloaded from the server’s CPU to the network adapter for improved performance.
  • Receive Side Scaling (RSS) – Allows network adapters to distribute kernel-mode network processing across multiple processor cores in multicore systems. Such distribution of processing enables support of higher network traffic loads than are possible if only a single core is used.
  • Virtual Machine Queue (VMQ) – Allows a host’s network adapter to use direct memory access (DMA) to deliver packets directly into the memory of individual virtual machines. The net effect is to make the host’s single network adapter appear to the virtual machines as multiple NICs, so that each virtual machine effectively has its own dedicated NIC.

If the network adapters on a physical server support any of the above advanced capabilities, these capabilities will also be supported when these network adapters are teamed together using Windows NIC Teaming.
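
If you are not sure which of these capabilities your adapters actually expose, the NetAdapter module includes query cmdlets for most of them. The sketch below assumes an adapter named "NIC1"; substitute the name of your own adapter:

  # Query the advanced capabilities of a physical adapter before teaming it
  Get-NetAdapterQos -Name "NIC1"                  # Data Center Bridging (DCB) settings
  Get-NetAdapterRss -Name "NIC1"                  # Receive Side Scaling
  Get-NetAdapterVmq -Name "NIC1"                  # Virtual Machine Queue
  Get-NetAdapterAdvancedProperty -Name "NIC1"     # Other driver-exposed features such as IPsec Task Offload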

Unsupported network adapter capabilities

Some advanced networking capabilities are not supported however when the network adapters are teamed together on a physical server. Specifically, the following advanced capabilities are either not supported by or not recommended for use with Windows NIC Teaming:

  • 802.1X authentication – Can be used to provide an additional layer of security to prevent unauthorized network access by guest, rogue, or unmanaged computers. 802.1X requires that the client be authenticated prior to being able to send traffic over the network switch port. 802.1X cannot be used with NIC teaming.
  • Remote Direct Memory Access (RDMA) – RDMA-capable network adapters can function at full speed with low latency and low CPU utilization. RDMA-capable network adapters are commonly used for certain server workloads such as Hyper-V hosts, servers running Microsoft SQL Server, and Scale-out File Server (SoFS) servers running Windows Server 2012 or later. Because RDMA transfers data directly to the network adapter without passing the data through the networking stack, it is not compatible with NIC teaming.
  • Single-Root I/O Virtualization (SR-IOV) – Enables a network adapter to divide access to its resources across various PCIe hardware functions and reduces processing overhead on the host. Because SR-IOV delivers data directly to the network adapter without passing it through the host’s networking stack, it is not compatible with NIC teaming.
  • TCP Chimney Offload – Introduced in Windows Server 2008, TCP Chimney Offload transfers the entire networking stack workload from the CPU to the network adapter. Because of this, it is not compatible with NIC teaming.
  • Quality of Service (QoS) – This refers to technologies used for managing network traffic in ways that can meet service level agreements (SLAs) and/or enhance user experiences in a cost-effective manner. For example, by using QoS to prioritize different types of network traffic, you can ensure that mission-critical applications and services are delivered according to SLAs while optimizing user productivity. Windows Server 2012 introduced a number of new QoS capabilities, including Hyper-V QoS, which allows you to specify upper and lower bounds for the network bandwidth used by a virtual machine, and new Group Policy settings that implement policy-based QoS by tagging packets with an 802.1p value to prioritize different kinds of network traffic. Using QoS with NIC teaming is not recommended because it can degrade network throughput for the team.
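
Before you team adapters on a physical server, it can therefore be worth checking whether any of them are relying on these capabilities. The following sketch only reports RDMA and SR-IOV capability and status for the adapters in the server; it changes nothing:

  # Report RDMA and SR-IOV status so you can keep such adapters out of the team
  Get-NetAdapterRdma
  Get-NetAdapterSriov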

Other considerations for physical servers

Some other considerations when implementing Windows NIC Teaming with network adapters in physical servers include the following:

  • A team can have a minimum of one physical network adapter and a maximum of 32 physical network adapters.
  • All network adapters in the team should operate at the same speed (for example, 1 Gbps). Teaming physical network adapters of different speeds is not supported.
  • Any switch ports on Ethernet switches that are connected to the teamed physical network adapters should be configured in trunk (promiscuous) mode.
  • If you need to configure VLANs for teamed physical network adapters, do it in the NIC teaming interface (if the server is not a Hyper-V host) or in the Hyper-V Virtual Switch settings (if the server is a Hyper-V host).
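
For the non-Hyper-V case, a VLAN-specific team interface can be added on top of an existing team roughly as follows; the team name, interface name, and VLAN ID below are placeholders:

  # Add a team interface bound to VLAN 42 on an existing team
  Add-NetLbfoTeamNic -Team "Team1" -VlanID 42 -Name "Team1-VLAN42"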

Considerations for virtual machines

You can also implement Windows NIC Teaming within virtual machines running on Hyper-V hosts in order either to aggregate network traffic or to help ensure availability by providing failover support. Once again however, there are some considerations you need to be aware of before trying to implement such a solution:

  • Each virtual network adapter must be connected to a different virtual switch on the Hyper-V host on which the virtual machine is running. These virtual switches must all be of the external type; you cannot team together virtual network adapters that are connected to virtual switches of either the internal or private type.
  • The only supported teaming mode when teaming virtual network adapters together is the Switch Independent teaming mode.
  • The only supported load balancing mode when teaming virtual network adapters together is the Address Hash load balancing mode.
  • If you need to configure VLANs for teamed virtual network adapters, make sure that the virtual network adapters are either each connected to different Hyper-V virtual switches or are each configured using the same VLAN ID.
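
Putting these points together, here is a rough sketch of the two halves of the configuration; the VM name and adapter names are placeholders. The first command runs on the Hyper-V host, and the second runs inside the guest operating system:

  # On the Hyper-V host: allow the VM's virtual network adapters to be teamed inside the guest
  Set-VMNetworkAdapter -VMName "VM01" -AllowTeaming On

  # Inside the virtual machine: create the team using the only supported mode combination
  New-NetLbfoTeam -Name "GuestTeam" -TeamMembers "Ethernet","Ethernet 2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm TransportPorts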

We’ll revisit some of these considerations for teaming physical and virtual network adapters in later articles in this series. But now it’s time to dig into the PowerShell cmdlets for managing Windows NIC Teaming and that’s the topic of the next couple of articles in this series.
