Integrated Network Load Balancing (NLB) and Forefront Threat Management Gateway (TMG) 2010

Introduction

One of the high availability features supported by Forefront Threat Management Gateway (TMG) 2010 Enterprise edition is Network Load Balancing (NLB). NLB is a simple, yet highly effective solution for providing redundancy for network traffic handled by a TMG array. NLB also enables flexible scalability by making it easy to add nodes to an array to share the load of network communication. NLB is fundamentally a component of the operating system; Forefront TMG integrates tightly with Windows NLB, configures and manages it primarily through the Forefront TMG management console, and adds additional intelligence to manage load balancing in the event an array member is unable to process traffic. In addition to the load balancing and scalability benefits that NLB provides, NLB also improves availability by allowing for node maintenance or rolling upgrades while maintaining system uptime. When performing updates, a node can be removed from the load-balanced cluster and returned to the array once it has been serviced. During this time, the other nodes in the array remain online to service production traffic. Configuring and enabling NLB is quick and easy, and it integrates seamlessly into any network environment as it requires no hardware changes to deploy.

NLB vs. Round-Robin DNS

NLB has several distinct advantages over round-robin DNS. Although round-robin DNS is even simpler to configure than NLB, it lacks the necessary intelligence to determine if a node is online and able to service requests. If a node is offline, it is entirely possible, and indeed quite likely, that a client will attempt to send a request to an offline node. This can result in serious delays and potential connectivity failure. By contrast, NLB maintains availability awareness for all cluster nodes through the use of a cluster heartbeat. If a node is offline, no traffic will be delivered to that host.
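
To make the difference concrete, here is a minimal Python sketch (purely illustrative, not TMG or NLB code, and the node addresses are made up) contrasting a round-robin answer that ignores node health with a health-aware selection that skips offline nodes:

from itertools import cycle

# Hypothetical array members and their health state (False = offline).
nodes = {"192.168.1.11": True, "192.168.1.12": False}

def round_robin_dns(addresses):
    # Plain round-robin DNS has no health awareness; it hands out the next
    # address regardless of whether the node behind it is reachable.
    return cycle(addresses)

def health_aware_pick(node_health):
    # NLB-style behavior: only nodes that pass the heartbeat check are
    # eligible to receive traffic.
    return [ip for ip, online in node_health.items() if online]

rr = round_robin_dns(list(nodes))
print(next(rr), next(rr))        # the second answer is the offline node
print(health_aware_pick(nodes))  # the offline node is excluded automatically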

How NLB Works

In the default unicast operating mode, NLB works by creating a Virtual IP Address (VIP) and changing each node’s Media Access Control (MAC) address to a shared cluster MAC address. NLB also prevents the switch from learning this MAC address, which forces the switch to deliver frames destined for the VIP to all switch ports. This induces switch flooding by design, and it ensures that all nodes in the NLB cluster receive traffic destined for the VIP. NLB logic then determines which node will process the request, and the remaining nodes silently discard the frame. NLB also supports a multicast operating mode, in which each node receives a new multicast MAC address in addition to its original unicast MAC address. NLB keeps track of which nodes are online through the use of layer 2 broadcast heartbeats. These heartbeats occur every second, and if a node fails to respond for 5 seconds it is assumed to be offline and no traffic will be delivered to that node until it returns to service. The Forefront TMG 2010 firewall performs stateful network traffic inspection; NLB, however, is stateless. To accommodate this, TMG-integrated NLB is configured in single affinity mode to ensure that network sessions are always handled by the same array member.
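
The following Python sketch is a rough approximation of these two ideas, the heartbeat timeout and single affinity. It is not the actual NLB hashing algorithm, and the node names and timing values are hypothetical:

import hashlib

def converged_nodes(last_heartbeat, now, timeout=5):
    # A node that has not been heard from for roughly 5 seconds is treated
    # as offline and removed from the list of eligible members.
    return sorted(n for n, t in last_heartbeat.items() if now - t <= timeout)

def owner_for(source_ip, members):
    # Single affinity: hash only the client's source IP so the same client
    # always maps to the same array member while membership is unchanged.
    digest = hashlib.md5(source_ip.encode()).digest()
    return members[int.from_bytes(digest[:4], "big") % len(members)]

# Hypothetical last-heartbeat timestamps (in seconds); TMG2 has gone quiet.
heartbeats = {"TMG1": 9.0, "TMG2": 3.5}
members = converged_nodes(heartbeats, now=10.0)
print(members, owner_for("10.0.0.42", members))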

Enabling NLB

In the Forefront TMG 2010 management console, expand the Arrays node in the navigation tree, then expand the array and highlight Networking. In the Tasks pane, click Enable Network Load Balancing Integration.

Figure 1

Network Load Balancing is configured on a per-network basis. When the Network Load Balancing wizard opens, click Next, select the network on which to enable NLB, and then click Configure NLB Settings.

Figure 2

Enter an IP address and subnet mask for the Primary VIP. This IP address must be on the same subnet as the node’s dedicated IP address. Optionally, you can add additional VIPs if required. Leave the Cluster operation mode at the default setting of Unicast for now; we’ll talk more about operation modes later. Click OK, then apply the changes.
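
If you want to double-check the requirement that the VIP and the dedicated IP share a subnet, a quick Python sketch like the following works. The dedicated IP shown here is a made-up example, while 172.16.1.240 is the VIP used throughout this article:

import ipaddress

# The dedicated IP below is a hypothetical example; 172.16.1.240 is the
# primary VIP used in this walkthrough.
dedicated_ip = ipaddress.ip_interface("172.16.1.241/24")
primary_vip = ipaddress.ip_address("172.16.1.240")

if primary_vip in dedicated_ip.network:
    print(f"{primary_vip} is on the same subnet as {dedicated_ip}")
else:
    print(f"{primary_vip} is NOT on the same subnet as {dedicated_ip}")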

Figure 3

Once the configuration has been synchronized and the TMG services have restarted, each node in the array will have its original MAC address overwritten with the cluster MAC address. You can view this behavior by comparing the value of ClusterNetworkAddress in the output of the command nlb display with the value of Physical Address in the output of ipconfig /all. To ensure uniqueness, the cluster MAC address is derived from the primary VIP. The cluster MAC will always begin with 02-BF and end with the primary VIP converted from decimal to hexadecimal, which for our example of 172.16.1.240 is AC-10-01-F0.
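
If you’d like to verify the math yourself, a few lines of Python reproduce the derivation described above. This simply mirrors the 02-BF-plus-hex-VIP pattern; it is not code taken from NLB:

def unicast_cluster_mac(vip: str) -> str:
    # 02-BF followed by each octet of the VIP converted to hexadecimal.
    octets = [int(o) for o in vip.split(".")]
    return "-".join(["02", "BF"] + [f"{o:02X}" for o in octets])

print(unicast_cluster_mac("172.16.1.240"))  # -> 02-BF-AC-10-01-F0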

Figure 4

In addition, when you ping the VIP from an internal client and then examine the client’s ARP cache, you’ll see that the VIP resolves to the cluster MAC address.

Figure 5

However, if you look at the MAC address table on the switch that the TMG servers are connected to, you’ll see that the cluster MAC address of 02-BF-AC-10-01-F0 is conspicuously absent. Again, this is by design. To prevent the switch from learning the cluster MAC address, the NLB driver modifies the source MAC address for all outbound traffic using a unique MAC address for each node. This MAC address is derived from the NLB host priority assigned to that node and replaces 02-BF with 02-<NLB host priority>. In this example, TMG1 has a host priority of 3, so the NLB driver uses a MAC address of 02-03-AC-10-01-F0 as the source MAC address for frames originating from this node. TMG2 has a host priority of 4, so its source MAC address is 02-04-AC-10-01-F0.
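
The same derivation can be sketched for the per-node source MAC addresses, again just mirroring the pattern described above rather than reproducing the NLB driver’s logic:

def node_source_mac(vip: str, host_priority: int) -> str:
    # Same hex-encoded VIP, but the BF in the second octet is replaced with
    # the node's NLB host priority.
    octets = [int(o) for o in vip.split(".")]
    return "-".join(["02", f"{host_priority:02X}"] + [f"{o:02X}" for o in octets])

print(node_source_mac("172.16.1.240", 3))  # TMG1 -> 02-03-AC-10-01-F0
print(node_source_mac("172.16.1.240", 4))  # TMG2 -> 02-04-AC-10-01-F0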

Figure 6

You can see this behavior by monitoring the network traffic with a protocol analyzer. In this frame you’ll see that the destination MAC address is 02-BF-AC-10-01-F0, which is the cluster MAC address.

Figure 7

However, when you look at the reply to this frame you’ll notice that the source MAC address is 02-04-AC-10-01-F0, indicating that the node with a host priority of 4 actually responded to this frame.

Figure 8

NLB Cluster Operation Modes

As I mentioned earlier, NLB has several cluster operation modes – unicast (default), multicast, and IGMP multicast. When NLB is configured to operate in multicast mode, each node retains its original MAC address and the VIP is assigned a unique multicast MAC address. Multicast MAC addresses are derived the same way as unicast MAC addresses with the exception that they begin with 03-BF.
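
Extending the earlier sketch, the multicast cluster MAC for our example VIP can be computed the same way; only the 03-BF prefix changes. This is also the address you would map the VIP to in a static ARP entry, as discussed later:

def multicast_cluster_mac(vip: str) -> str:
    # Identical to the unicast derivation except for the 03-BF prefix.
    octets = [int(o) for o in vip.split(".")]
    return "-".join(["03", "BF"] + [f"{o:02X}" for o in octets])

print(multicast_cluster_mac("172.16.1.240"))  # -> 03-BF-AC-10-01-F0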

Figure 9

Looking at the network behavior with a protocol analyzer in multicast mode you’ll see that the destination MAC address for this frame is the cluster MAC address.

Figure 10

The source MAC address for the reply to this frame is the MAC address assigned to the physical network adapter in the TMG server.

Figure 11

Choosing a Cluster Operation Mode

The default cluster operation mode is unicast. Forefront TMG also supports multicast mode. The drawback to using multicast mode is that it doesn’t work well with many routers and layer 3 switches: ARP requests for the unicast VIP result in a reply from a multicast MAC address, which many routers and layer 3 switches refuse to accept. You can work around this issue by adding a static ARP entry on the router or layer 3 switch that maps the VIP to the multicast cluster MAC address. Taking it one step further, you can select IGMP multicast and enable IGMP snooping on the switch. Use caution when selecting this mode, as IGMP snooping can consume significant resources on the switch.

Recommendations and Best Practices

The generally accepted guidance for choosing a cluster operation mode is to keep the default unicast setting unless you have a specific reason to change it. Issues with switch flooding can be mitigated by following Forefront TMG deployment best practices, which dictate that TMG network interfaces should reside on dedicated subnets using isolated VLANs. If you’re still concerned about switch flooding or noisy cluster heartbeat broadcast traffic, you can choose multicast mode to help alleviate this. To ensure that both directions of a network session are handled by the same array member, it is recommended to enable NLB on all networks, with the exception of the intra-array network, if one is used. Web proxy clients can be configured to use the VIP to deliver requests to the array, but additional configuration is required to leverage Kerberos authentication in this scenario. Using the VIP for Firewall Clients can cause connectivity issues and is explicitly not supported; machines with the Firewall Client installed can only leverage DNS round robin to provide high availability.

Summary

Network Load Balancing is an effective, low-cost solution for providing high availability for a Forefront TMG enterprise array. It provides essential redundancy, improves system uptime, and can be deployed without making changes to the underlying network infrastructure. NLB has several operation modes that can be used to tune network behavior based on your requirements. Although unicast mode is fine for most deployments, the multicast operation modes can be used to address concerns about switch flooding. It’s important to remember that although web proxy clients can use the VIP for their requests, Firewall Clients cannot, so plan accordingly. If you’re using Forefront TMG 2010 Enterprise edition, enable and configure NLB today to get the highest availability for your web proxies and firewalls.
