Windows Server NIC teaming: Does it really boost performance?

I have been using the Windows Server NIC teaming feature in my lab and production environments ever since the release of Windows Server 2012. I had always assumed that NIC teaming would give my servers a performance boost, although admittedly I had never taken the time to do any benchmark comparisons. Recently, however, I began to notice that NIC teaming might not be doing what I always thought that it did. Let me explain.

What is Windows Server NIC teaming?

For those who might not be familiar with Windows Server NIC teaming, it is a mechanism that allows multiple physical NICs to be bound together into a single logical NIC. That logical NIC therefore has the capabilities of all the underlying physical hardware.
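
If you want to set up a similar team in your own lab, the built-in LBFO cmdlets will do it. Here is a minimal sketch; the member adapter names below are placeholders rather than the actual names on my server, so check yours with Get-NetAdapter first:

    # Create a switch-independent NIC team named MyTeam from three physical
    # adapters. Replace the member names with the ones Get-NetAdapter reports.
    New-NetLbfoTeam -Name "MyTeam" `
                    -TeamMembers "Ethernet", "Ethernet 2", "Ethernet 3" `
                    -TeamingMode SwitchIndependent `
                    -LoadBalancingAlgorithm Dynamic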

You can see the end result in the figure below. This figure shows an administrative PowerShell window on a lab server that is running Windows Server 2012 R2. As you can see, I used the Get-NetAdapter cmdlet to list all of the network adapters that are installed in the server.

[Figure: Get-NetAdapter output listing the lab server's network adapters]
The cmdlet output shown in the figure above lists five different network adapters. The first one on the list (vEthernet) is a Hyper-V virtual Ethernet adapter, and it has a link speed of 10Gbps. I don’t actually have any 10Gbps hardware installed, but it doesn’t matter because this NIC is purely virtual.

The next three NICs that are listed are physical NICs running at 1Gbps each. The last network adapter on the list is named MyTeam. This one is a NIC team that is made up of the three physical network adapters that are listed by the cmdlet. As previously mentioned, those three NICs run at 1Gbps each. The NIC team, therefore, has a listed speed of 3Gbps, which is the aggregate speed of the physical NICs that make up the NIC team. So far, so good, right?
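
If you want to reproduce that kind of listing yourself, something along these lines will do; the column selection here is simply one way of trimming the output:

    # List every adapter with its reported link speed. The NIC team shows up
    # as a single logical adapter whose LinkSpeed is the sum of its members.
    Get-NetAdapter | Sort-Object Name |
        Format-Table Name, InterfaceDescription, Status, LinkSpeed -AutoSize

    # Show the team itself, including its members, teaming mode, and
    # load balancing algorithm.
    Get-NetLbfoTeam -Name "MyTeam"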

3Gbps throughput?

But have I really created a 3Gbps NIC? Think about that one for a moment: The Ethernet standards have traditionally defined link speeds in powers of 10. The Fast Ethernet standard that everyone was using in the 1990s ran at 100Mbps, and today, of course, we have 1Gbps and 10Gbps Ethernet. Yes, there are standards at other speeds, both above and below 10Gbps, but the point is that there is no defined 3Gbps standard. So how is it that we have a 3Gbps link?

The simple explanation is that we don’t really have a 3Gbps link. Instead, we have three separate 1Gbps links, and 1Gbps is a clearly defined Ethernet standard. If that’s true, however, then the next logical question is whether Windows is really giving us 3Gbps connectivity. PowerShell says that we have a 3Gbps link, but are we really getting 3Gbps throughput?

This is where things start to get a little bit weird. In the time that has passed since the release of Windows Server 2012 (when NIC teaming was introduced), I have read numerous articles, blog posts, etc. that indicated that NIC teams aggregated the available bandwidth. Indeed, that is what PowerShell seems to indicate is happening. However, the idea that Windows Server NIC teaming provides aggregate bandwidth is only partially true. Let me show you what I mean.

The figure below shows what happened when I ran a utility called LAN Speed Test (Lite) on the desktop computer that I am using to write this article right now. Incidentally, this is a free and completely portable utility that you can easily use to try this experiment out for yourself.

[Figure: LAN Speed Test results from the desktop PC with a single 1Gbps NIC]
At any rate, my desktop computer does not have a NIC team installed. It uses a single 1Gbps adapter. I configured the utility to transfer a file to a file share residing on a computer that is configured to use a NIC team. In other words, the file transfer speed would have been limited to 1Gbps because of the speed of the NIC in my desktop computer.

The test evaluated both the write (upload) speed and the read (download) speed. I won’t bore you with all of the statistics, but the upload speed was roughly 711Mbps and the download speed was about 732Mbps.

Next, I ran the test again. This time, I connected to the same share from a server that had NIC teaming enabled. Both systems had a 3Gbps NIC team, so the transfer speeds should theoretically not have been limited to 1Gbps if Windows were performing true bandwidth aggregation. Here are the results.

[Figure: LAN Speed Test results for a transfer between two servers, each with a 3Gbps NIC team]
As you can see, the upload speed was only about 458Mbps, significantly slower than when I ran the test from a machine with no NIC team. The download speed was about 764Mbps, which was a little bit faster than the previous test.

Admittedly, this was not a scientific test. Although no major workloads were running on either machine, I did not go to great lengths to ensure that everything was configured identically on both machines. Even so, the important takeaway here is that neither machine exceeded 1Gbps in spite of having a 3Gbps NIC team.
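
If you would rather script a rough check than use a GUI utility, timing a large file copy to the share gives a ballpark single-stream number. This is only a sketch with placeholder paths, and disk speed and SMB caching will skew the result, but it is good enough to see whether a transfer ever breaks the 1Gbps barrier:

    # Time a large file copy to the teamed server and convert the result to
    # megabits per second. Use a file that is several gigabytes in size so
    # that caching has less of an effect. Paths are placeholders.
    $source = "C:\Temp\testfile.bin"
    $dest   = "\\TEAMEDSERVER\Share\testfile.bin"

    $bytes   = (Get-Item $source).Length
    $seconds = (Measure-Command { Copy-Item $source $dest -Force }).TotalSeconds
    $mbps    = [math]::Round(($bytes * 8) / $seconds / 1e6)
    "Approximate write throughput: $mbps Mbps"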

So, what do we really have?

Based on my observations, Windows Server NIC teaming does not seem to provide true bandwidth aggregation. What it does seem to do, however, is perform load balancing. While a NIC team will not distribute a single traffic stream across multiple NICs, it can perform load balancing by assigning different traffic streams to different NICs. NIC teams are also useful from a fault-tolerance standpoint. In fact, it is possible to designate a standby NIC within a NIC team. The standby NIC is automatically brought into service in the event of a NIC failure.
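
As a quick illustration of that standby option, a team member can be flipped into standby mode with a single cmdlet. The adapter and team names here are the same hypothetical ones used in the earlier sketch:

    # Reserve one member of the team as a hot standby; it carries no traffic
    # until one of the active members fails.
    Set-NetLbfoTeamMember -Name "Ethernet 3" -Team "MyTeam" -AdministrativeMode Standby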

11 thoughts on “Windows Server NIC teaming: Does it really boost performance?”

  1. Brien Posey, don’t take this as mean-spirited, but the only thing this article and your test show is that you don’t understand how Ethernet NIC Teaming (Link Aggregation) works.

    In order to see an increase in throughput when only sending data between two systems you’ll have to do one of the following:

    1. Use an Ethernet Switch that supports LACP, and then configure the Switch and NIC Team on both test systems to utilize LACP.

    OR

    2. Use the Switch Independent mode (which is default and what you’re already using) but then add additional IP Addresses to the NIC team on each system. Configure the NIC Team to use a load balancing algorithm that factors in IP Address pairs. And then perform simultaneous tests between IP Address pairs.

    OR

    3. Since you’re using Hyper-V, set things up as in #2 however instead of assigning multiple IP Addresses to the host system, you could initiate simultaneous transfers from several VMs to various other systems.

    You can control the NIC team settings using the LoadBalancingAlgorithm and TeamingMode parameters of the Set-NetLbfoTeam PowerShell cmdlet.
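
    For instance, a minimal sketch, assuming the team is named "MyTeam" and the switch ports are already configured in a matching LACP port channel:

        # Hypothetical example: convert an existing team to LACP with an
        # address/port-hash load balancing algorithm.
        Set-NetLbfoTeam -Name "MyTeam" -TeamingMode Lacp -LoadBalancingAlgorithm TransportPorts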

    LACP is usually the correct answer in a production enterprise environment, provided you have control of all the hardware.

    1. Hi Robert,
      No offence taken. You make some valid points, and of course I would have used one of those techniques or some variation of them if this were a real world situation. However, that was not the point of the article. What the article was designed to illustrate is that Windows by itself does not provide a boost in throughput to a SINGLE traffic stream using NIC teaming. Therefore, this is not a true multi-gigabit connection. There would obviously be a performance boost if multiple parallel traffic streams existed, and if load balancing was being used, but that wasn’t the point of the article. The article was intended to dispel one of the common myths about the way that NIC teaming works.

  2. From what I see, NIC Teaming won’t show improvements on a single stream/user.

    What I see as the advantage, is offering 3Gbps of available bandwidth spread out over multiple users.

    For example, 2 users may only saturate 2Gbps of bandwidth because they only have a 1Gbps link each, or 3 users could max the connection out at 3Gbps with each having a 1Gbps NIC, but a single user won’t be able to saturate the entire 3Gbps connection.

    Or 5 users:

    Users 1 and 2 share NIC1 at 500Mbps each
    User 3 saturates NIC2 at 1Gbps
    Users 4 and 5 share NIC3 at 500Mbps each

    Add a 6th user and NIC Teaming automatically splits NIC2, sharing 500Mbps each between users 3 and 6, since NIC2 would be the logical choice for bandwidth balancing/sharing.

  3. Hi Zachary,
    It has been a while since I wrote the article, and I cannot remember how I set it up. I’m pretty sure I used link aggregation, but I’m not positive.

  4. I’ve just completed commissioning a small Server 2016 cluster (HPE) and had initially configured it with teaming (10GbE with Mellanox SN2000 series switches). I ended up disabling teaming because it was costing us performance (dedicated backend for a high-bandwidth data generator, so not a lot of users but a LOT of data)... our max read/write rates dropped from about 900MB/s to about 500MB/s with teaming enabled. If the back-end storage had been fast enough to saturate a 10GbE link I’d have spent more time trying to figure it out... but since 900MB/s is not maxing out a single 10GbE interface, I didn’t dig too deeply into why teaming was such a performance problem.

  5. If using LACP, make sure to only use a team of 2, 4, or 8 NICs. The algorithm makes no sense otherwise. Still surprised it supports it to this date.
