Low Latency: 4 contributors to your network’s most important asset

Do you drive? I do… almost every day, taking my daughters to school and to daycare. Some days, I can take my time, but other days, we’re running late and I wish we could just teleport — with little to no latency — to the places I need to get them.

But we don’t live in a Star Trek world… at least not yet… that I know of.

Every morning, I have to get in my car with the girls and get on the road. There are a number of different roads that I have to drive on to get me from my house to my oldest daughter’s school.

There are backroads, main roads, highways, byways–

All kinds of roads!

That makes me think of a network.

You have your local area networks, wide area networks, metropolitan area networks, home networks, public networks…

It goes on and on!

Just like a network is important for the applications that run on top of it, the roads are important for people like you and me who drive on them.

The most important thing to me on those roads is how quickly and safely I can get my daughters where they need to be. It’s a big asset to have roads that get you where you need to go as fast, and as safely, as possible.

I believe that a network’s most important asset is the same.

How fast can packets get across a network from one location to the next?

The network needs low latency to do this. Packets need to be quickly processed, queued, serialized, and propagated across the network to reach their destination.

There are four components that contribute to a network’s latency. But let’s talk about latency a bit first…

Heeere’s latency!

Quantum teleportation near zero latency

So network latency is the time that it takes packets to get from one end of the network, going through all of the paths, to the other end.

Those packets go through a number of different devices along the various parts of a network, just like I go through a bunch of street lights and signs along the different roads every morning taking my kids where they need to be.

Ideally, you want network latency to be as close to zero as possible.

But like I said before, unless you know something I don’t, there’s no Star Trek teleportation yet (though I have read that IBM and others are working on this with quantum teleportation).

Interesting stuff!

Okay. So latency won’t be anywhere close to zero unless the other end of the network is right next to the user or you’re reading this centuries from now and using quantum teleportation.

So, with no teleportation, getting my daughters where they need to be takes some time, just like it takes some time for packets to cross the network.

How much time depends on four main delay components of latency.

#1: Processing

Packet processing is like navigating using road signs

The first of these components is processing delay.

This form of latency is the time that it takes for a device on your network to process a packet that has arrived on one of its interfaces. The device — be it a router or a switch — must figure out based on the packet headers where the data is going and how to get it to the next stop, moving the packet closer to its destination.

When driving my daughters in the morning, I need to get on the highway. The roads I take to the highway have a number of street signs and lights on them that help me process where I need to steer the car in order to get to the highway, and eventually to their destinations.

Some signs are simple, but others can be a little complex.

The more time it takes me to process the signs, the longer it takes me to get to the next road on the path to my kids’ destinations.

Your network devices need to have the right resources to process packets as quickly as possible and move them on to the next stop. Hardware and software should not be the bottleneck for your routers and switches.
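To make that concrete, here is a minimal sketch of the kind of lookup a router does for every packet: a longest-prefix match against its forwarding table. The prefixes, addresses, and next-hop names below are made up purely for illustration; the point is that every packet pays this processing cost, so the faster the device can do it, the lower the processing delay.

```python
import ipaddress

# A toy forwarding table: prefix -> next hop (made-up values for illustration).
FORWARDING_TABLE = {
    ipaddress.ip_network("10.0.0.0/8"): "core-router-1",
    ipaddress.ip_network("10.1.0.0/16"): "edge-router-2",
    ipaddress.ip_network("0.0.0.0/0"): "default-gateway",
}

def next_hop(dst_ip: str) -> str:
    """Longest-prefix match: pick the most specific route that contains dst_ip."""
    addr = ipaddress.ip_address(dst_ip)
    matches = [net for net in FORWARDING_TABLE if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return FORWARDING_TABLE[best]

print(next_hop("10.1.2.3"))   # edge-router-2 (the more specific /16 wins over the /8)
print(next_hop("192.0.2.1"))  # default-gateway
```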

#2: Queuing

The next key component of latency is queuing delay.

This form of latency is the time that a packet is waiting in a network device’s queue along the network path, before it can be transmitted.

Packet queuing also increases latency

On your network, there is likely a lot going on — emails being sent, files being downloaded, or websites being browsed. All of this can generate a lot of network traffic, and thus a lot of packets.

To avoid congestion on the network, not every packet can be sent at the same time. Otherwise, you’d also have a lot of dropped packets on your network.

And no one wants that!

So network devices must queue them up before they can go. This is done with queuing algorithms.

Two common types of algorithms found in network devices are first-in-first-out (FIFO) and fair queuing. I won’t go into the details of these algorithms, but the time that packets spend in a device’s queue under any queuing algorithm will add to the end-to-end latency experienced by applications.

So the way to reduce this latency contributor is to utilize faster and more efficient queuing algorithms for your network.
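As a rough illustration (not how any particular vendor implements queuing), here is a tiny FIFO model in Python. It assumes all packets arrive back-to-back, and the link speed and packet sizes are illustrative; each packet’s queuing delay is simply the time needed to drain everything ahead of it.

```python
# Rough FIFO queuing-delay model: each packet waits for every packet
# ahead of it to finish transmitting on the link. Numbers are illustrative.
LINK_BPS = 45_000_000                        # e.g. a ~45 Mbps T3 link
PACKET_SIZES = [1500, 1500, 64, 1500, 9000]  # bytes, in arrival order

def fifo_queuing_delays(sizes_bytes, link_bps):
    delays = []
    backlog_bits = 0
    for size in sizes_bytes:
        delays.append(backlog_bits / link_bps)  # wait for everything ahead of us
        backlog_bits += size * 8                # then we occupy the link ourselves
    return delays

for i, d in enumerate(fifo_queuing_delays(PACKET_SIZES, LINK_BPS)):
    print(f"packet {i}: queuing delay ~ {d * 1000:.3f} ms")
```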

#3: Transmission

The next component of latency is serialization delay.

This delay is also known as transmission delay because it is the time that it takes to put the bits of data onto a network’s link so it can be transmitted.

Data transmission

This is where the bandwidth capacity of a network link is important for an application. The speed of your link determines how fast the bits of a network packet can be put onto the link.

So let’s say you have a T3 WAN connection, which is about 45Mbps. And you have an application that has to send a 1MB file to a user.

Let’s do the math

The 1MB file will be sent on the network as a stream of about 667 packets, with each packet being about 1,500 bytes.

1,000,000 bytes/1,500 bytes per packet = 667 packets

Now you have 667 packets that need to be transmitted over this 45Mbps link.

Each packet is expected to take about 0.27 milliseconds to be put onto the link. And that’s without considering any of the previous latency contributors.

(1,500 bytes * 8 bits per byte) / 45,000,000 bits per second = 0.00027 seconds

So altogether, the 1MB application data will take 0.18 seconds just to be put onto the link.

0.00027 seconds per packet * 667 packets = 0.18 seconds
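If it helps, here is that same arithmetic as a short Python snippet, so you can plug in your own link speed and payload size:

```python
# Serialization (transmission) delay for the 1 MB-over-T3 example above.
LINK_BPS = 45_000_000      # ~45 Mbps T3 link
PACKET_BYTES = 1_500       # typical full-size packet
FILE_BYTES = 1_000_000     # 1 MB application payload

packets = FILE_BYTES / PACKET_BYTES            # ~667 packets
per_packet_s = (PACKET_BYTES * 8) / LINK_BPS   # ~0.00027 s per packet
total_s = packets * per_packet_s               # ~0.18 s for the whole file

print(f"{packets:.0f} packets, {per_packet_s * 1000:.2f} ms each, {total_s:.2f} s total")
```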

That may not seem like a lot, but when you consider that more and more users are expecting applications to respond within 1 or 2 seconds, 0.18 seconds can be as much as almost 20% of that time. This post from APM vendor Dynatrace discusses that.

How can you reduce this?

As you can probably see from this example, the two best ways to reduce serialization delay are sending less data and increasing your network’s bandwidth capacity.

I would recommend doing a combination of both if possible. But if you could only do one, send less data. That could mean changes in the applications that get used on the network or using various caching mechanisms.

#4: Propagation

The last and definitely not least contributor to latency is propagation delay.

Of all the four latency contributors, this is the only one that cannot be directly reduced.

Propagation delay is the time that it takes the bits of data sent to travel from one side of the network to the other side. This is simply the result of the distance between the client and the server.

If you have a server in New York City in the US and a user in Hyderabad, India, there’s nothing you can do to reduce the distance between them. We have these nuisances called oceans and continents that get in the way, which makes it kind of hard to bring the two closer together.

Unlike the other contributors, we cannot directly reduce the distance with better technology. And “directly” is the operative word here.

Propagation of packets

The propagation delay from New York City to Hyderabad is a function of the distance between the east coast of the United States and India. Better technology is not going to make the two countries be physically closer together.

With the speed of light in a vacuum traveling at about 186,000 miles per second — or 300,000 km per second — it will take roughly 10 milliseconds for light to travel 1,000 miles and back. That’s the theoretical value for propagation delay alone! It doesn’t account for the other forms of delay or anything else that can slow down a network packet from a client to a server.

So if there’s a user in Hyderabad using an application on a server located in New York City, which is about 8,047 miles (12,950 km) away, the propagation delay alone will be about 86 milliseconds to get from one side to the other and back.
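Here is a quick sanity-check script for the numbers above. It uses the speed of light in a vacuum; real fiber is roughly a third slower, so real-world figures will be higher:

```python
# Best-case propagation delay: distance divided by the speed of light in a vacuum.
SPEED_OF_LIGHT_KM_S = 300_000   # km per second (vacuum); fiber is ~30% slower

def round_trip_ms(distance_km: float) -> float:
    return (2 * distance_km / SPEED_OF_LIGHT_KM_S) * 1000

print(f"NYC <-> Hyderabad (~12,950 km): {round_trip_ms(12_950):.1f} ms round trip")
print(f"1,000 miles (~1,609 km):        {round_trip_ms(1_609):.1f} ms round trip")
```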

And that’s the best case scenario because network paths are rarely a straight line. Just look at that undersea cable map.

How many straight paths do you see?

Underwater telecommunication cables around the world – propagation latency is proportional to distance

So while you cannot directly reduce propagation delay, you can indirectly do something.

You can utilize content delivery networks (CDNs) from companies like Akamai, Amazon, Cloudflare, and many others. You can also use WAN optimization devices to reduce the amount of data going over that distance, or choose not to send the data at all because it was cached.

Or you can simply move the server from New York and put it in India. Moving the server itself closer to the users is always an option. It may not be the most cost-efficient choice, but it’s an option to always consider.

Summing it all up!

So whether you manage the whole network or administer a part of it, I always say that its biggest asset is low latency.

You should analyze your network to ensure that you are optimizing the four contributors to latency:

  • You should make sure that processing isn’t a bottleneck on any of your network devices, so you can keep processing delay as low as it can be.
  • Make sure that you’re using the queuing algorithm that best fits your network and the devices that run on it, so that queuing delay isn’t slowing down the network’s applications.
  • Ensure you have the appropriate level of bandwidth across the network that your applications run on, so serialization delay is as low as possible.
  • Verify that the distance between client and server is optimal, so that propagation delay minimally impacts applications.
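If you want to put rough numbers on all of this, here is a back-of-the-envelope sketch that simply adds the four contributors together. Every input is illustrative (including the assumed speed of light in fiber); measure your own network before drawing conclusions:

```python
# A rough, back-of-the-envelope end-to-end latency estimate that adds up all
# four contributors. All inputs are illustrative; measure your own network.
def one_way_latency_ms(
    hops: int,
    processing_ms_per_hop: float,
    queuing_ms_per_hop: float,
    packet_bytes: int,
    link_bps: float,
    distance_km: float,
    speed_km_s: float = 200_000,   # assumed ~speed of light in fiber
) -> float:
    serialization_ms = (packet_bytes * 8 / link_bps) * 1000 * hops
    propagation_ms = (distance_km / speed_km_s) * 1000
    return hops * (processing_ms_per_hop + queuing_ms_per_hop) + serialization_ms + propagation_ms

# Example: 10 hops, modest per-hop delays, a 1,500-byte packet, 45 Mbps links,
# and the ~12,950 km New York City to Hyderabad path from earlier.
print(f"{one_way_latency_ms(10, 0.05, 0.5, 1500, 45_000_000, 12_950):.1f} ms one way")
```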

With a network that has low latency across the board, applications will generally run better, and you’ll have happier users.

And fewer Helpdesk calls about the network being down.

Who doesn’t want that?!

So what has been your experience with latency? Leave a comment below.

==

Photo credits: Floydian, Hotrodnz, Geralt, Vegpuff, and Greg Mahlknecht
