QoS in Windows Server 2012 (Part 2)


Introduction

In my previous article, I explained that Quality of Service (QoS) is a networking standard, and that Microsoft has offered QoS support within the Windows operating system since Windows 2000. That being the case, it is easy to dismiss Windows Server 2012's support for QoS as nothing more than a legacy feature that is still being supported. However, QoS has evolved to meet today's bandwidth reservation needs.

Legacy Bandwidth Management

In order to truly appreciate how QoS has been improved in Windows Server 2012, you have to understand some of the QoS limitations in previous versions of the Windows Server operating system. In the case of Windows Server 2008 R2, QoS could only be used to enforce maximum bandwidth consumption. This type of bandwidth management is also sometimes referred to as rate limiting.
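Rate limiting of this sort is commonly implemented with a token bucket algorithm: tokens accumulate at the permitted rate, and data may only be sent when enough tokens are available. As a rough, hypothetical illustration (not how Windows implements it internally), a minimal token bucket might look like this in Python:

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter, the classic mechanism behind
    bandwidth caps. Purely illustrative; not Windows' implementation."""

    def __init__(self, rate_bytes_per_sec, burst_bytes):
        self.rate = rate_bytes_per_sec   # refill rate (the cap)
        self.capacity = burst_bytes      # maximum short-term burst
        self.tokens = burst_bytes        # start with a full bucket
        self.last = time.monotonic()

    def try_send(self, nbytes):
        """Return True if nbytes may be sent now, False if over the limit."""
        now = time.monotonic()
        # Refill tokens for the time elapsed, never beyond capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False
```

A send that exceeds the accumulated tokens is delayed or dropped, which is exactly why a capped virtual machine cannot exploit idle capacity on the link.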

With careful planning it was often possible to achieve effective bandwidth management even in Windows Server 2008 R2. However, in the case of Hyper-V it was impossible to achieve granular bandwidth management for an individual virtual machine.

Granular Bandwidth Management

The reason why granular bandwidth management is so important within a virtual datacenter is that virtual machines produce at least four different types of traffic. Limiting bandwidth consumption for all four types of network traffic in the same way can sometimes be counterproductive.

To show you what I mean, here are the four main types of network traffic that can be produced by virtual machines in a Hyper-V environment:

  • Normal network traffic – This is network traffic that flows between the virtual machine and other servers or workstations on the network. These machines can be both physical and virtual.
  • Storage traffic – This is the traffic that is generated when virtual hard disk files reside on networked storage rather than directly on the host server that is running the virtual machine.
  • Live migration traffic – This is the traffic that is created by the live migration process. It typically involves storage traffic and traffic between two host servers.
  • Cluster traffic – There are several different forms of cluster traffic. Cluster traffic can be the traffic between a cluster node and a cluster shared volume (which is very similar to storage traffic). It can also be inter-node communication such as heartbeat traffic.

The point is that network traffic within a virtual datacenter can be quite diverse. Because of this, the type of bandwidth management provided by QoS in Windows Server 2008 R2 simply does not lend itself well to virtual datacenters.

There are two reasons why the concept of bandwidth rate limiting doesn't work so well for virtual machines. First, limiting a virtual machine to a fixed amount of bandwidth can cause unnecessary performance problems. Suppose, for instance, that a host server had a 10 gigabit connection and you limited a particular virtual machine to consuming 1 gigabit of bandwidth. Doing so would prevent the virtual machine from robbing bandwidth from other virtual machines, but it would also prevent the virtual machine from using surplus bandwidth. Imagine that at a given moment seven gigabits of bandwidth were sitting idle. The virtual machine could still use only one gigabit, even though the additional bandwidth could have been provided at that moment without taking anything away from the other virtual machines.

Of course the opposite is also true. Without proper planning, limiting bandwidth can lead to bandwidth deprivation for specific virtual machines. Suppose, for example, that a host server is running twelve virtual machines and that those virtual machines all share a single 10 gigabit network adapter. Now let's suppose that you were to configure each virtual machine so that it can never consume more than 1 gigabit of network bandwidth.

Given that the host server is running twelve virtual machines, the server's bandwidth has actually been overcommitted at that point. During a period of high demand, each virtual machine will try to use up to 1 gigabit of network bandwidth. Because the physical hardware cannot provide a full twelve gigabits of bandwidth, some of the virtual machines could end up suffering from poor performance because they are unable to get the bandwidth that they need.
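To make the arithmetic behind both problems concrete, here is a small Python sketch (my own illustration, not anything Windows does internally) of how hard per-VM caps behave, assuming the oversubscribed link is simply shared in proportion to what each capped VM asks for:

```python
def capped_allocation(demands_gbit, cap_gbit, link_gbit):
    """Allocate link bandwidth among VMs that are each hard-capped.
    When total capped demand exceeds the physical link, scale every
    VM back proportionally (one simple contention model)."""
    wanted = [min(d, cap_gbit) for d in demands_gbit]
    total = sum(wanted)
    if total <= link_gbit:
        return wanted
    # Link is oversubscribed: every VM gets a proportional share.
    return [w * link_gbit / total for w in wanted]

# Twelve busy VMs, each capped at 1 Gbit, on a 10 Gbit adapter:
alloc = capped_allocation([1.5] * 12, cap_gbit=1, link_gbit=10)
# Each VM wants its full 1 Gbit cap but gets only ~0.83 Gbit.

# One busy VM on an otherwise quiet link still cannot exceed its cap,
# even though most of the adapter's capacity is sitting unused:
idle = capped_allocation([7, 0.1, 0.1], cap_gbit=1, link_gbit=10)
```

The first call shows the overcommitment problem (12 Gbit of permitted demand on a 10 Gbit link); the second shows the surplus problem (a cap wastes idle capacity).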

QoS in Windows Server 2012

As I previously explained, the Windows Server 2008 R2 implementation of QoS isn't exactly a bandwidth reservation system (even though QoS was designed as a bandwidth reservation standard). Instead, it can be thought of more as a bandwidth throttling solution. In other words, Windows Server 2008 R2's QoS implementation allows an administrator to dictate the maximum amount of bandwidth that a virtual machine can consume. This is similar to the technology that Internet Service Providers (ISPs) use to offer various rate plans. For example, my own ISP offers a seven megabit, a ten megabit, and a fifteen megabit package. The more you pay, the faster the Internet connection you get.

Even though the concept of bandwidth throttling still exists in Windows Server 2012, Microsoft is also introducing a concept known as minimum bandwidth. Minimum bandwidth is a bandwidth reservation technology that makes it possible to ensure that various types of network traffic always receive the bandwidth they need. This is really what QoS was designed for in the first place.

Obviously the biggest benefit to using this approach is that the concept of minimum bandwidth makes it possible to reserve bandwidth in a way that ensures that each virtual machine receives enough bandwidth to do its job. However, that is not the only benefit.

A second benefit is that Windows Server 2012 will make it possible to differentiate between the various types of network traffic that are produced by virtual machines. For example, an administrator could theoretically reserve more bandwidth for storage traffic than for regular virtual machine traffic.

Arguably the greatest benefit, however, is that minimum bandwidth reservations are different from bandwidth caps. Although it is still possible (and sometimes necessary) to set bandwidth caps, minimum bandwidth settings do not cap bandwidth consumption.

Let’s assume for example that you wanted to reserve 30% of your network bandwidth for virtual machine traffic, and the remaining 70% of bandwidth for things like live migration and storage traffic. If you don’t have any live migrations happening at the moment then you might not need any bandwidth for live migrations at all. It would be silly to lock up that bandwidth to prevent it from being used for other types of network traffic.

In this type of situation, the virtual machine traffic receives the 30% of the network bandwidth that has been reserved for it. If the virtual machine traffic could benefit from additional bandwidth, and the other services that hold reservations are not currently consuming theirs, that surplus bandwidth is made available to the virtual machine traffic until one of the other traffic types needs it to fulfill its own minimum reservation. Of course I am only using virtual machine traffic as an example. The concept applies to any type of traffic.
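The redistribution behavior described above can be sketched in a few lines of Python. This is only an illustrative model of minimum-bandwidth semantics (the traffic classes, weights, and redistribution policy are my assumptions, not Windows Server 2012's actual scheduler), and it assumes every traffic class has a nonzero reservation:

```python
def min_bandwidth_allocation(reservations, demands, link_gbit):
    """Minimum-bandwidth semantics: each traffic class is guaranteed its
    reserved share of the link, and any reservation it is not using right
    now is loaned to classes that want more, weighted by reservation."""
    # Satisfy each class up to the smaller of its demand and its guarantee.
    alloc = [min(d, r * link_gbit) for d, r in zip(demands, reservations)]
    spare = link_gbit - sum(alloc)
    while spare > 1e-9:
        hungry = [i for i, a in enumerate(alloc) if demands[i] - a > 1e-9]
        if not hungry:
            break  # all demand satisfied; leftover capacity stays idle
        total_w = sum(reservations[i] for i in hungry)
        handed_out = 0.0
        for i in hungry:
            extra = min(spare * reservations[i] / total_w,
                        demands[i] - alloc[i])
            alloc[i] += extra
            handed_out += extra
        spare -= handed_out
    return alloc

# 30% reserved for VM traffic, 40% for storage, 30% for live migration,
# on a 10 Gbit link. No live migration is running; VM traffic wants 6 Gbit:
alloc = min_bandwidth_allocation([0.3, 0.4, 0.3], [6, 4, 0], 10)
# VM traffic gets its 3 Gbit guarantee plus 3 Gbit of idle capacity,
# storage gets its full 4 Gbit, and live migration uses nothing.
```

Under full contention the loaned bandwidth flows back: if all three classes demand the whole link, each falls back to exactly its guaranteed share, which is the behavior a cap alone cannot provide.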

Conclusion

Now that I have introduced the concept of bandwidth reservation, I want to turn my attention to QoS implementation, which I will talk about in the next article in this series.
