Hybrid Network Infrastructure in Microsoft Azure (Part 7)


In part 1 of this series, I began the discussion about hybrid network infrastructure with some thoughts regarding what hybrid clouds are about, and then talked about some of the networking functionality that you get when you adopt Azure Infrastructure Services. There was also an introduction to the Datacenter Extension Reference Architecture Diagram that Tom put together with Jim Dial and several other people at Microsoft. In part 2, Tom joined in as co-author and we went over site to site VPNs and point to site VPNs. In part 3 we took a look at the Azure dedicated WAN link service, which goes by the name of ExpressRoute, and also discussed the Azure Virtual Gateway, which is located on the edge of your Azure Virtual Network and enables you to connect your on-premises network to an Azure Virtual Network.

Then in part 4, we spent the bulk of our time discussing what Azure Virtual Networks are and how they compare to the virtual networks we use in traditional on-premises Hyper-V installations. Part 5 went into a little more detail on Azure Virtual Networks and some special considerations you need to take into account. In Part 6, we discussed Azure Virtual Networks and external load balancers. We will now continue working our way through our list of network capabilities that are available in Azure at the time this article was written (always keeping in mind that Azure is constantly changing and growing and adding new functionalities):

√ Site to site VPNs

√ Point to Site VPNs

√ Dedicated WAN links

√ Virtual network gateways

√ Azure Virtual Networks

√ Inter-virtual Network connectivity

√ External load balancers

> Internal load balancers

  • Network Security Groups
  • Virtual machine ACLs
  • Third party proxy firewalls
  • Dual-homed
  • Dedicated public IP addresses
  • Static IP addresses on virtual machines
  • Public addresses on virtual machines
  • DNS

Let’s move on now to the next item on the list: in this, Part 7, we will talk for a while about Internal Load Balancers. Since this is a fairly complicated topic that logically divides into two parts, we’ll address the first part here in this installment: how internal load balancing works and how to configure it for virtual machines using Azure PowerShell. Then we’ll look at the second part, how to configure ILB for Cloud Services by modifying the cloud service configuration file (.cscfg file).

Internal Load Balancers

In the immediately preceding installment of this series, we spent a bit of time discussing the ins and outs of the Azure external load balancers. The key takeaway from that article is what the external load balancing functionality does: it allows you to load balance incoming connections from the Internet. You would typically do this for web front-end servers so that you can maintain high availability for your services. If one of the front-ends becomes unavailable, another front-end in the load balanced set can take over and accept the incoming connections. Of course, it’s a good deal more complicated than that in practice, but that’s the gist of it.

Load balancing for incoming connections from the Internet is nice, but you might have been feeling as if you were only getting part of the story, and wondering: what about virtual machines and services that don’t allow incoming Internet connections? Can you load balance those in the same way? Well, you’ll be happy to know that the short answer is “yes” (although you probably won’t be surprised to learn that, as with most IT matters, there are some limitations and caveats).

But in general, you can place internal load balancers on an Azure Virtual Network and load balance the services that are running on any of your virtual machines. Let’s take a look at how the internal load balancing feature works in Azure and then how you go about setting it up to work on your particular Azure deployment.

How internal load balancing works

The internal load balancing feature (ILB) allows you to load balance virtual machines that are on the same Azure Virtual Network or those that are on the same cloud service. The requests to the load balanced machines can come from other virtual machines located in Azure, or if you have a site-to-site VPN that connects your on-premises networks to an Azure Virtual Network, the requests can come from those on-premises machines.

There are a few scenarios where you might find the internal load balancing feature useful:

  • The first scenario is when you have a multi-tier application with a front-end web tier, a middle tier that does the application processing, and a back-end database tier. In this case, you could use the external (Internet-facing) load balancer to load balance incoming connections to the front-end web tier. Then you would use the internal load balancer to load balance connections coming from the front-end web tier to the middle tier. And if your application supports it, you can then also load balance the connections coming from the middle tier to the database tier. That’s a whole lot of load balancing going on.
  • A second scenario is one that is a possibility when you have a hybrid network connection that connects your on-premises enterprise network to the Azure Virtual Network. The connection could be a site-to-site VPN connection, or more likely, you would be using a high-speed dedicated WAN link, such as the one that you get when you use Azure ExpressRoute. In this scenario, you might have migrated one or more on-premises line of business applications to Azure Infrastructure Services. The virtual machines that used to be on-premises are now on an Azure Virtual Network. However, the clients remain on premises. You can use the Azure internal load balancer to load balance connections coming from the clients on the on-premises network to the virtual machines in Azure. This is a nice way to spread the workload.
  • A third, albeit admittedly unlikely, scenario is similar to the second one. The main difference is that instead of having a client connection that comes from an on-premises network through a site-to-site VPN or through ExpressRoute, the connection would be from a single client system that’s connected to the Azure Virtual Network through a remote access VPN client connection (this is what Microsoft calls a “point-to-site” connection). These point-to-site connections are typically used for management purposes only, so this use case probably isn’t going to be a common one. Nevertheless, you should know that it’s available if you need it.

The article Getting Started Configuring an Internal Load Balancer on the Microsoft Azure web site shows you, step by step, how to configure the Internal Load Balancer. Take note that there are two different ways to do this; the one you need depends on whether you’re using Internal Load Balancing with Azure virtual machines or with Cloud Services. The following is an overview of how to proceed in each instance, but please see the article for more information.

Using PowerShell to configure ILB for virtual machines

The first way of configuring ILB is to use Azure PowerShell. Many IT pros today enjoy the speed and power of the command line interface, and if you’re one of those, you’ll be happy to know that you can use Azure PowerShell, provided the virtual machines are contained within an Azure Virtual Network.

To get information about the PowerShell cmdlets that are used to configure ILB, run these commands at the Azure PowerShell prompt:

  • Get-Help New-AzureInternalLoadBalancerConfig -Full
  • Get-Help Add-AzureInternalLoadBalancer -Full
  • Get-Help Get-AzureInternalLoadBalancer -Full
  • Get-Help Remove-AzureInternalLoadBalancer -Full

Note that if you have an existing virtual network that has been configured for an affinity group, you won’t be able to use ILB with it.
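
If you’re not sure whether your virtual network was created with an affinity group, you can check its properties in Azure PowerShell. Here’s a minimal sketch, assuming the classic (service management) Azure PowerShell module and a virtual network named "MyVNet" (the name is just a placeholder):

  # Look up the virtual network and inspect its AffinityGroup property
  $vnet = Get-AzureVNetSite -VNetName "MyVNet"
  if ($vnet.AffinityGroup) {
      # The VNet is tied to an affinity group, so ILB can't be used with it
      Write-Output "VNet uses affinity group '$($vnet.AffinityGroup)' - not usable with ILB."
  } else {
      # A regional VNet; ILB is an option here
      Write-Output "VNet is regional - ILB can be used."
  }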

There are three basic steps that are involved in setting up internal load balancing for your Azure virtual machines:

  1. First you will need to create an ILB instance to function as the endpoint for incoming traffic that you want to be load balanced across the servers of your set.
  2. Next you must add those endpoints that correspond to the VMs that you want to receive the incoming packets.
  3. Finally, you’ll have to configure the servers that are going to be sending the traffic to the virtual IP address of your ILB instance.

This procedure is simple in concept, but a little tedious in practice since you’ll be using PowerShell cmdlets, and as with any command line or coding process, absolute accuracy is essential. Luckily, you can easily copy the PowerShell commands for each of the steps from the article on the Microsoft web site that is linked above. There are also a couple of sample scenarios described in the article, one for load balancing with an Internet-facing multi-tier application and one for load balancing with a line-of-business application that is hosted in Azure.
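
To give you a rough feel for what those three steps look like, here’s a minimal sketch using the classic (service management) Azure PowerShell cmdlets discussed above. The service name, ILB name, subnet, IP address, VM names and ports are placeholders for illustration only, not values taken from the Microsoft article:

  # Step 1: create an ILB instance on an existing cloud service deployment
  Add-AzureInternalLoadBalancer -ServiceName "MyCloudService" `
      -InternalLoadBalancerName "MyILB" `
      -SubnetName "Subnet-1" -StaticVNetIPAddress "10.0.1.10"

  # Step 2: add a load-balanced endpoint to each VM that should receive the traffic
  "AppVM1", "AppVM2" | ForEach-Object {
      Get-AzureVM -ServiceName "MyCloudService" -Name $_ |
          Add-AzureEndpoint -Name "AppPort" -LBSetName "ILBSet1" -Protocol tcp `
              -LocalPort 80 -PublicPort 80 -ProbePort 80 -ProbeProtocol tcp `
              -InternalLoadBalancerName "MyILB" |
          Update-AzureVM
  }

  # Step 3: verify the ILB and note its virtual IP address (10.0.1.10 here);
  # the servers or on-premises clients that send traffic are then pointed at that address
  Get-AzureInternalLoadBalancer -ServiceName "MyCloudService"

If you’re creating a brand-new deployment rather than adding ILB to an existing one, New-AzureInternalLoadBalancerConfig is used instead to build the ILB configuration that is passed in when the deployment is created; again, see the Microsoft article for the exact commands.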

Summary

In this, Part 7 of this comprehensive coverage of the hybrid network infrastructure in Microsoft Azure, we focused on the subject of internal load balancing and on how to use PowerShell to configure ILB for virtual machines that are contained within an Azure Virtual Network. In Part 8, we’re going to move on to how to configure ILB for Cloud Services by editing the .cscfg file, and then we’ll move on to the topic of Network Security Groups.
