Throttling Bandwidth through QoS (Part 1)

If you would like to read the other parts in this article series please go to:
Throttling Bandwidth through QoS (Part 2).

If you would like to be notified when Brien M. Posey releases the next part of this article series please sign up to the Real time article update newsletter.

One of the biggest trends in networking today is the convergence of voice and video onto traditional data networks. The challenge with this type of convergence is that voice and video packets must be delivered to the recipient quickly and reliably, without jitter or excessive latency, and yet this traffic must not interfere with the delivery of more traditional data packets.

One solution to this problem is to use QoS. QoS, or Quality of Service, is a packet prioritization technology. Essentially, QoS allows you to treat time sensitive packets with a higher priority than other packets.

QoS is an industry standard, not one of Microsoft’s proprietary standards. Even so, Microsoft first introduced QoS with Windows 2000. Microsoft’s version of QoS has evolved quite a bit since then, but still conforms to industry standards.

In Windows XP Professional, QoS works primarily as a mechanism for reserving bandwidth. When QoS is enabled, an application is allowed to reserve up to 20% of the total network bandwidth provided by each of the machine’s network adapters. The amount of bandwidth that an application can reserve is adjustable though. I will show you how to change the amount of reserved bandwidth in Part 3.

To see how the reserved bandwidth is used, suppose that you have a video conferencing application that requires high priority bandwidth in order to function properly. Assuming that this application is QoS enabled, it can reserve 20% of the machine’s overall bandwidth, leaving 80% of the bandwidth for the rest of the network traffic.

Applications other than the video conferencing application use what is known as best effort delivery, meaning that their packets are sent on a “first come, first served” basis. The video conferencing application’s traffic, on the other hand, always takes priority over the other traffic, but the application is never allowed to consume more than 20% of the total bandwidth.
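The distinction between best effort delivery and prioritized delivery can be modeled as a simple priority queue. The sketch below is purely illustrative (the packet tuples and function name are my own invention, not a Windows API): high-priority packets are dequeued first, and packets of equal priority fall back to first come, first served order.

```python
import heapq

def transmit_order(packets):
    """Return payloads in the order a QoS-style scheduler would send them.

    packets: list of (priority, payload) tuples in arrival order,
    where a lower priority number means more urgent (0 = high priority).
    """
    queue = []
    for arrival, (priority, payload) in enumerate(packets):
        # The arrival index breaks ties, preserving "first come,
        # first served" ordering among best-effort packets.
        heapq.heappush(queue, (priority, arrival, payload))
    order = []
    while queue:
        _, _, payload = heapq.heappop(queue)
        order.append(payload)
    return order

# Video frames (priority 0) jump ahead of interleaved data packets (priority 1).
packets = [(1, "data-1"), (0, "video-1"), (1, "data-2"), (0, "video-2")]
print(transmit_order(packets))  # ['video-1', 'video-2', 'data-1', 'data-2']
```

Note that this models ordering only; it says nothing about how much bandwidth each class may consume, which is where the 20% reservation cap comes in.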

However, just because Windows XP sets aside bandwidth for high priority traffic, it doesn’t mean that normal priority applications can’t also use the reserved bandwidth. After all, a video conferencing application definitely benefits from high priority, reserved bandwidth, but the chances of a video conferencing application being in use at all times are pretty slim. That being the case, Windows allows the other applications to use both the reserved bandwidth and the non-reserved bandwidth for best effort delivery, so long as the application that the bandwidth is reserved for is not in use.

As soon as the video conferencing application is launched Windows begins to enforce the reservation. Even then, the reservation is not absolute. Suppose that Windows reserved 20% of the network bandwidth for a video conferencing application, but that the application doesn’t need all 20%. In such cases, Windows will allow the other applications to use any leftover bandwidth, but will constantly monitor the high priority application’s bandwidth needs. Should the application require more bandwidth, the bandwidth will be assigned to it, up to the full 20%.
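The lending behavior described above can be summarized in a small allocation model. This is a hypothetical sketch of the policy, not actual Windows code: the priority application gets bandwidth up to its reservation, and whatever it doesn’t use is available to best effort traffic.

```python
def allocate(total_bw, reserved_fraction, priority_demand, best_effort_demand):
    """Model XP-style reservation (all values in the same units, e.g. Mbps).

    The priority application may use up to its reservation; any reserved
    bandwidth it leaves idle is lent to best-effort traffic.
    """
    reserved = total_bw * reserved_fraction
    # The priority app is capped at its reservation, even if it wants more.
    priority_share = min(priority_demand, reserved)
    # Best effort gets everything the priority app isn't actually using.
    best_effort_share = min(best_effort_demand, total_bw - priority_share)
    return priority_share, best_effort_share

# 100 Mbps link with a 20% reservation. The video app only needs 12 Mbps,
# so best effort may use the remaining 88 Mbps, including leftover reserve.
print(allocate(100, 0.20, 12, 95))   # (12, 88)
# When the video app is idle, best effort can use the full link.
print(allocate(100, 0.20, 0, 120))   # (0, 100)
```

The key property is in the last line of the function: the reservation is a ceiling for the priority application and a floor that shrinks on demand for everyone else, which matches the behavior the paragraph above describes.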

As I mentioned earlier, QoS is an industry standard, not one of Microsoft’s proprietary technologies. As such, QoS is implemented within Windows, but Windows can’t do the job by itself. In order for QoS to work, every hardware component between the sender and the receiver must also support QoS. This means that NICs, switches, routers, and anything else that might be in use must all be QoS aware, as must the sender and the receiver’s Operating Systems.

In case you are wondering, you don’t have to implement some kind of crazy, exotic network infrastructure in order to use QoS. Asynchronous Transfer Mode (ATM) is an ideal networking technology for use with QoS, because it is a connection oriented technology, but you can use QoS with other technologies such as Frame Relay, Ethernet, and even Wi-Fi (802.11).

The reason why ATM is such an ideal choice for QoS is because it is able to enforce bandwidth reservations and allocate resources at the hardware level. These types of resource allocations are beyond the capabilities of Ethernet and other similar networking technologies. That doesn’t mean that QoS can’t be used. It only means that QoS has to be implemented differently than it would be in an ATM environment.

In an ATM environment, resources are allocated on the fly, at the hardware level. Since Ethernet and similar technologies can’t allocate resources in this way, these types of technologies rely on prioritization rather than on true allocation. What this means is that bandwidth reservations take place at a higher level within the OSI model. Once the bandwidth has been reserved, the higher priority packets are transmitted first.
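One common way that prioritization is expressed above the hardware level is by marking each packet’s IP header with a DSCP value, which QoS-aware switches and routers can use to queue marked traffic ahead of best effort traffic. The sketch below shows this marking on an ordinary UDP socket; it is a general illustration of DSCP marking (using the standard `IP_TOS` socket option, which may require elevated privileges or be ignored on some platforms), not the specific API Windows applications use for QoS.

```python
import socket

DSCP_EF = 46              # "Expedited Forwarding" - commonly used for voice/video
TOS_VALUE = DSCP_EF << 2  # DSCP occupies the top six bits of the IP TOS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Every datagram this socket sends will now carry the EF marking,
# which QoS-aware network gear can use to prioritize it.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)
print(hex(sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)))  # 0xb8
sock.close()
```

The marking itself is cheap; the prioritization only happens if the devices along the path actually honor the DSCP field, which is exactly why every component between sender and receiver must be QoS aware.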

One important thing to keep in mind if you are thinking about implementing QoS over Ethernet, Wi-Fi, or something similar is that these technologies are connectionless. The sender has no way of monitoring the state of the recipient or the state of the network between the sender and the recipient. As a result, the sender can guarantee that higher priority packets are transmitted before lower priority packets, but it cannot guarantee that the packets will arrive within a specific amount of time. In contrast, QoS is able to make these types of guarantees on an ATM network because ATM is connection oriented.

Windows 2000 vs. Windows Server 2003

Earlier I mentioned that Microsoft first introduced QoS in Windows 2000, and that Microsoft’s QoS implementation has evolved significantly since then. That being the case, I wanted to wrap things up by talking a little bit about the differences between QoS in Windows 2000 and in Windows XP and Windows Server 2003 (which share a similar implementation).

The Windows 2000 implementation of QoS was based on the Intserv architecture, which is not supported by Windows XP or Windows Server 2003. The reason why Microsoft chose to abandon this architecture was that the underlying API was difficult to use, and the architecture had scalability problems.

Some organizations still use Windows 2000, so I wanted to give you a little bit of information about how the Windows 2000 QoS architecture works. Windows 2000 uses a protocol called RSVP to reserve bandwidth resources. Once bandwidth has been requested, Windows must determine when the packets can be sent. To accomplish this, Windows 2000 uses a signaling protocol called SBM (Subnet Bandwidth Manager) to notify the sender that it is ready to receive the packets. The Admission Control Service (ACS) verifies that sufficient bandwidth is available and then either grants or denies the request for bandwidth.

The overall process is a little bit more involved than this, but these are the primary areas in which Windows 2000 differs from Windows Server 2003 and Windows XP. Windows 2000, 2003 and XP all use similar traffic control mechanisms, which I will discuss in Part 2.


In this article, I have explained that packets associated with voice and video transmissions must typically be delivered more quickly and predictably than normal data packets in order to prevent jitter. I then went on to explain how a technology called QoS can be used to help ensure that voice and video traffic are delivered smoothly and efficiently. In Part 2 of this series, I will explain how QoS works.

