Throttling Bandwidth through QoS (Part 4)

So far in this article series, I've talked about QoS and how it can be used to regulate the flow of traffic across a network. The articles to this point have dealt with QoS from the perspective of time-sensitive traffic, such as voice or video transmissions, flowing over a high-quality network. In this article, I want to turn my attention to the ways in which QoS is used to regulate traffic flowing over slow or unreliable links.

QoS and Modems

In this age of almost universal broadband availability, it may seem strange to be talking about modems. Even so, there are still a lot of small businesses and home users who use modems to connect to the Internet. Recently, I've even seen a large corporation using modems to communicate with satellite offices located in remote areas where broadband coverage is not available.

Obviously, the biggest problem with using modems is the limited amount of bandwidth that they provide. A less obvious, but equally important problem is that users typically do not change their online behavior when using a modem link. Sure, a user might be reluctant to download a large file when connected to the Internet via modem, but the rest of the user’s behavior often remains the same as when they are using a broadband connection.

Typically, users think nothing of leaving Microsoft Outlook open all the time, and surfing the Internet while they download a file in the background. Some users may even opt to have an instant messaging client open too. The problem with this type of behavior is that each of these applications or tasks consumes some amount of Internet bandwidth.

To see how QoS can help, let’s take a look at what happens under normal circumstances when QoS is not in use. Normally, the first application to attempt to access the Internet is granted exclusive use of the connection. This doesn’t mean that no other applications can use the connection, but rather that Windows assumes that no other applications will use the connection.

Once the connection has been established, Windows begins dynamically adjusting the TCP receive window size. The TCP receive window size refers to the amount of data that can be sent before the sender must wait for confirmation that the data was received. The larger the TCP receive window, the more data a sender can transmit before having to wait for an acknowledgment of successful receipt.

The TCP receive window size must be adjusted carefully. If the TCP receive window is set too small, efficiency will suffer because TCP requires very frequent acknowledgments of receipt. If the TCP receive window is set too large, though, a machine may transmit a lot of data before learning that there was a problem with the transmission. This results in the retransmission of large amounts of data, which also hurts efficiency.
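To put rough numbers on this trade-off, here is a minimal Python sketch that estimates the maximum throughput a given receive window allows. The window sizes and round-trip time used here are purely illustrative assumptions, not figures taken from Windows.

    # Rough illustration of how the receive window caps throughput.
    # The window sizes and round-trip time (RTT) are assumed values.

    def max_throughput_bps(window_bytes, rtt_seconds):
        # A sender can have at most one window of unacknowledged data in
        # flight per round trip, so throughput <= window / RTT.
        return (window_bytes * 8) / rtt_seconds

    # A dial-up link often has a round-trip time of roughly 300 ms.
    print(max_throughput_bps(8_192, 0.300))   # ~218 Kbps ceiling with an 8 KB window
    print(max_throughput_bps(65_535, 0.300))  # ~1.7 Mbps ceiling -- far more than a 56 Kbps modem can carry

A window far larger than the link can use does not improve throughput, but it does mean there is more unacknowledged data to retransmit when something goes wrong.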

When an application begins using a dial-up Internet connection, Windows dynamically adjusts the TCP receive window size as packets are sent. Windows' goal is to reach a steady state in which the TCP receive window size is set optimally.

Now, suppose that a user opens a second application that also requires Internet connectivity. Upon doing so, Windows initiates the TCP slow start algorithm, which is responsible for adjusting the TCP receive window size to an optimal value. The problem is that the connection is already in use by the first application. This impacts the second application in two ways. First, the second application takes much longer to reach an optimal TCP receive window size. Second, its data transmission rate will always be slower than that of the application that was opened first.
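For readers who want to see the shape of that ramp-up, here is a minimal sketch of slow start's window growth, assuming a fixed segment size, an arbitrary threshold, and no packet loss. (Strictly speaking, slow start grows the sender's congestion window; the article describes the behavior in terms of the receive window.)

    # Minimal sketch of TCP slow start window growth under the assumptions
    # above. The segment size and threshold are illustrative values.

    MSS = 1460            # maximum segment size in bytes (a typical Ethernet value)
    ssthresh = 16 * MSS   # slow start threshold (assumed)

    window = MSS
    round_trips = 0
    while window < ssthresh:
        # Each acknowledged segment grows the window by one MSS, which
        # roughly doubles the window once per round trip.
        window *= 2
        round_trips += 1
        print(f"after round trip {round_trips}: window = {window} bytes")

The point of the exercise is simply that reaching a good window size takes several round trips, and a second application has to start that climb from the bottom while competing with traffic that is already flowing.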

The good news is that you can get around this problem in Windows XP and Windows Server 2003 by simply enabling the QoS Packet Scheduler. Once you do, the QoS Packet Scheduler will automatically use a technique called Deficit Round Robin whenever Windows detects a slow link.

Deficit Round Robin works by dynamically creating a separate queue for each application that requires Internet access. Windows services these queues in round-robin fashion, which greatly improves the efficiency of all of the applications that need to access the Internet. In case you're wondering, Deficit Round Robin was also available in Windows 2000 Server, but it was not enabled automatically.
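To make the idea concrete, here is a minimal Python sketch of how a Deficit Round Robin scheduler services per-application queues. The application names, packet sizes, and quantum are illustrative assumptions; this is a sketch of the technique, not Windows' actual implementation.

    from collections import deque

    # Each application gets its own queue of pending packets (sizes in bytes).
    queues = {
        "outlook": deque([1500, 1500, 500]),
        "browser": deque([1500, 1200]),
        "messenger": deque([200, 200, 200]),
    }
    deficit = {name: 0 for name in queues}   # per-queue credit, in bytes
    QUANTUM = 1500                           # credit added to each queue per round

    while any(queues.values()):
        for name, q in queues.items():
            if not q:
                deficit[name] = 0            # empty queues do not bank credit
                continue
            deficit[name] += QUANTUM
            # Send packets while the queue has enough credit to cover them.
            while q and q[0] <= deficit[name]:
                packet = q.popleft()
                deficit[name] -= packet
                print(f"sent {packet}-byte packet from {name}")

Because every queue receives the same quantum on each pass, no single application can monopolize the link, which is exactly the unfairness problem described above.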

Internet Connection Sharing

In Windows XP and Windows Server 2003, QoS also assists with Internet Connection Sharing. As you probably know, Internet Connection Sharing is a simplified way of creating a NAT-based router. The computer to which the Internet connection is physically attached acts as both a router and a DHCP server for the other computers on the network, allowing them to access the Internet through that host. Internet Connection Sharing is typically used only on small, peer-to-peer networks that do not have a domain infrastructure in place. Larger networks typically use hardware-based routers or the Routing and Remote Access service.

In the section above, I explained how Windows dynamically adjusts the TCP receive window size. This dynamic adjustment can backfire when Internet Connection Sharing is used, though. The reason is that the connection between the computers on the local network is usually relatively fast. Typically, this connection might consist of 100 Mbps Ethernet or an 802.11g wireless link. Although these types of connections are far from the fastest available, they are far faster than most of the Internet connections available in the United States. Herein lies the problem.

The client computer needs to communicate across the Internet, but it can't do so directly. Instead, it uses the Internet Connection Sharing host as a proxy. When Windows calculates the optimal TCP receive window size, it does so based on the speed of the link between the local machine and the Internet Connection Sharing host. The difference between the amount of data that the local machine can actually receive from the Internet and the amount it thinks it can receive based on the speed of its connection to the Internet Connection Sharing host can cause problems. Specifically, the mismatch in link speeds can cause data to back up in the queue attached to the slow link.
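To get a feel for the size of that mismatch, here is a rough back-of-the-envelope sketch based on the bandwidth-delay product. The link speeds and round-trip times are assumptions chosen for illustration:

    # The window needed to keep a link full is roughly bandwidth x delay.
    # All link speeds and round-trip times below are assumed values.

    def window_for_link(bandwidth_bps, rtt_seconds):
        return int(bandwidth_bps * rtt_seconds / 8)   # bytes

    lan_window = window_for_link(100_000_000, 0.005)  # 100 Mbps LAN, ~5 ms RTT
    modem_window = window_for_link(56_000, 0.300)     # 56 Kbps modem, ~300 ms RTT

    print(lan_window)    # ~62,500 bytes: what the client sizes its window for
    print(modem_window)  # ~2,100 bytes: what the shared modem link can actually sustain

Everything in between those two figures simply piles up in the Internet Connection Sharing host's queue for the slow link.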

This is where QoS comes into play. If you install the QoS Packet Scheduler on the Internet Connection Sharing host, the host will override the TCP receive window size. What this means is that the Internet Connection Sharing host will set the local host's TCP receive window size to the same size that it would be if the local host were directly connected to the Internet. This alleviates the problems caused by mismatched network speeds.

Conclusion

In this article series, I have talked about QoS and how it can be used to shape traffic flows over various types of network links. As you can see, QoS can make a network perform much more efficiently by shaping traffic to take advantage of lulls in network activity, while guaranteeing fast delivery of high-priority traffic.
