Hybrid Network Infrastructure in Microsoft Azure (Part 5)

Introduction

In part 1 of this series, Deb began the discussion about hybrid network infrastructure with some thoughts regarding what hybrid clouds are about, and then talked about some of the networking functionality that you get when you adopt Azure Infrastructure Services. There was also an introduction to the Datacenter Extension Reference Architecture Diagram that Tom put together with Jim Dial and several other people at Microsoft.

In part 2 of the series, Tom joined in as co-author and we went over site to site VPNs and point to site VPNs. In part 3 we took a look at the Azure dedicated WAN link service, which goes by the name of ExpressRoute, and also discussed the Azure Virtual Gateway, which is located on the edge of your Azure Virtual Network and enables you to connect your on-premises network to an Azure Virtual Network.

Then in Part 4, we spent the bulk of our time discussing what Azure Virtual Networks are and how they compare to the virtual networks we use in traditional on-premises Hyper-V installations. We’ll continue that discussion as we keep working our way through our list of the network capabilities that were available in Azure at the time this article was written:

  • Site to site VPNs
  • Point to Site VPNs
  • Dedicated WAN links
  • Virtual network gateways
  • Azure Virtual Networks
  • Inter-virtual Network connectivity
  • External load balancers
  • Internal load balancers
  • Network Security Groups
  • Virtual machine ACLs
  • Third party proxy firewalls
  • Dual-homed virtual machines
  • Dedicated public IP addresses
  • Static IP addresses on virtual machines
  • Public addresses on virtual machines
  • DNS

However, before we move on to the next topic on the list, we need to talk a bit more about Azure Virtual Networks, because they are the keys to the kingdom not only for Azure IaaS deployments but, increasingly, for Azure PaaS deployments as well.

Diving Deeper into Azure Virtual Networks

In part 4 of the series, we provided an overview of the advantages of Azure Virtual Networks, and how they work. We also talked about the concept of “standalone” virtual machines, which aren’t connected to an Azure Virtual Network.

Now that you understand those basics, it’s time to delve a little deeper. There are a number of issues you need to be aware of when working with Azure Virtual Networks, and the following are some of the things you’ll want to know before you start designing the IaaS services you plan to put into Microsoft Azure.

IP Addressing Issues

As we mentioned in the last article, when you create an Azure Virtual Network you choose one of the three primary RFC 1918 private address blocks. Once you choose the block you want to use, you then create subnets and put virtual machines onto those subnets. There is no service-level limit on the number of subnets you can create; the only limits are those imposed by the subnetting math itself (the size of the address block and the subnet masks you choose).
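
To make that subnetting math concrete, here is a small, Azure-agnostic sketch in Python that uses only the standard library’s ipaddress module; the 10.20.0.0/16 block, the /24 subnet size, and the role names are arbitrary choices for illustration:

```python
import ipaddress

# Pick one of the RFC 1918 private blocks for the virtual network
# (10.20.0.0/16 is an arbitrary example), then carve it into subnets.
vnet_space = ipaddress.ip_network("10.20.0.0/16")

# Split the address space into /24 subnets; the only limit is the math.
subnets = list(vnet_space.subnets(new_prefix=24))
print(f"{vnet_space} yields {len(subnets)} /24 subnets")  # 256 subnets

# Assign the first few to roles you might use in a datacenter extension.
plan = {
    "frontend": subnets[0],    # 10.20.0.0/24
    "backend": subnets[1],     # 10.20.1.0/24
    "management": subnets[2],  # 10.20.2.0/24
}
for role, subnet in plan.items():
    print(f"{role:<12} {subnet}  ({subnet.num_addresses} addresses)")
```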

The default is to use one of the private IP address blocks. However, if you are currently using public IP addresses on your corporate network and you want to continue to use public addresses as part of your datacenter extension into Azure, you can do that. For more information on how to do this, please see the article “Public IP address space in a Virtual Network (VNet)”.

Note that there are some IP address ranges that you cannot use. These include the following (a quick overlap check follows the list):

  • Multicast addresses (224.0.0.0/4)
  • Broadcast (255.255.255.255/32)
  • Loopback (127.0.0.0/8)
  • Autonet/link-local (169.254.0.0/16)
  • 168.63.129.16/32 (Azure’s internal DNS and health-probe address)
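
If you want a quick sanity check that a candidate address range doesn’t collide with any of these reserved ranges, a few lines of Python will do it. This is just an illustrative sketch using the standard ipaddress module; the candidate ranges at the bottom are made-up examples:

```python
import ipaddress

# Ranges that cannot be used in an Azure Virtual Network (from the list above).
RESERVED = [
    ipaddress.ip_network("224.0.0.0/4"),         # multicast
    ipaddress.ip_network("255.255.255.255/32"),  # broadcast
    ipaddress.ip_network("127.0.0.0/8"),         # loopback
    ipaddress.ip_network("169.254.0.0/16"),      # Autonet/link-local
    ipaddress.ip_network("168.63.129.16/32"),    # Azure internal DNS/health probe
]

def usable(candidate: str) -> bool:
    """Return True if the candidate range overlaps none of the reserved ranges."""
    net = ipaddress.ip_network(candidate)
    return not any(net.overlaps(reserved) for reserved in RESERVED)

# Made-up examples for illustration.
print(usable("10.0.0.0/16"))      # True  - fine to use
print(usable("169.254.10.0/24"))  # False - falls inside link-local space
```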

Another important issue related to IP addressing is IPv6. Now that the IPv4 address space has been exhausted, it’s likely that many organizations will soon feel the pinch and finally start planning a move to IPv6 addressing. Adoption of IPv6 has been painfully slow thus far, primarily because the benefits of moving to IPv6 didn’t seem to outweigh the cost in time and money required to make the change. That cost/benefit calculation seems to be moving in the other direction now, which is a good thing.

Here’s the kicker, though: at this time, Azure Virtual Networks do not support IPv6. But remember, most organizations are not on IPv6 yet, so this state of affairs shouldn’t have a negative impact on too many people. That said, given the rapid pace of change in Azure as a whole and the frequent updates that come to Azure Virtual Networks, there’s a good chance that we’ll see IPv6 support in the not-too-distant future.

But let’s get back to how IP addressing works now. Remember that by default, virtual machines are assigned IP addresses through DHCP. However, if you want a virtual machine to keep a dedicated internal address of your choosing, you can reserve one. These are referred to as “static internal IP addresses,” and you can learn more about them in Microsoft’s documentation.
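
If you script your deployments, reserving a static internal IP looks roughly like the sketch below. Note that this assumes the newer Resource Manager model and the azure-mgmt-network and azure-identity Python packages rather than the classic (Service Management) model this series generally describes, and the subscription ID, resource group, NIC name, and address are all hypothetical placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

# Hypothetical names used for illustration only.
subscription_id = "<subscription-id>"
resource_group = "datacenter-extension-rg"
nic_name = "app-vm-nic"

client = NetworkManagementClient(DefaultAzureCredential(), subscription_id)

# Fetch the VM's network interface and pin its primary IP configuration
# to a specific address inside the subnet's range.
nic = client.network_interfaces.get(resource_group, nic_name)
ip_config = nic.ip_configurations[0]
ip_config.private_ip_allocation_method = "Static"
ip_config.private_ip_address = "10.20.1.10"

client.network_interfaces.begin_create_or_update(
    resource_group, nic_name, nic
).result()
```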

The internal addresses that virtual machines receive from DHCP are also known as “DIP” (dynamic IP) addresses. There are two other types of addresses that you might be interested in as well:

  • VIP – these are the virtual public IP addresses that are used to connect to virtual machines from the Internet. They’re called virtual IP addresses because they belong to the cloud service and its load balancer rather than to any single virtual machine, which is what makes them load balanced and highly available.
  • PIP – this is the term for “instance-level public IP addresses” – now, why the acronym PIP was used for something that should have been called ILPIP, we don’t know 😉. A PIP lets you associate a public IP address directly with a particular virtual machine, which is helpful when you want to make sure that connections to that virtual machine go into and out of the same interface (which is needed to support complex protocols, such as FTP and some streaming media protocols).

Routing Behavior with Azure Virtual Networks

In the past, routing table configuration was largely a black box: you had to work within the confines of the default routing behavior of an Azure Virtual Network. However, recent updates to Azure Virtual Network capabilities let you add custom routes, also known as user-defined routes. This was done to support multihomed virtual network appliances that you can deploy in Azure; you need to be able to control routing behavior if you want to make sure that packets are routed to and through those virtual appliances.

For more information on routing and IP forwarding in Azure, please see the article “User Defined Routes and IP Forwarding”.
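
To give you a feel for what a user-defined route looks like in practice, here’s a rough sketch that creates a route table with a default route pointing at a virtual appliance’s internal address. As with the earlier sketch, it assumes the Resource Manager model and the azure-mgmt-network Python SDK, all of the names and addresses are hypothetical, and associating the route table with a subnet is a separate step that isn’t shown:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import Route, RouteTable

# Hypothetical names used for illustration only.
subscription_id = "<subscription-id>"
resource_group = "datacenter-extension-rg"

client = NetworkManagementClient(DefaultAzureCredential(), subscription_id)

# Send all Internet-bound traffic through a firewall appliance at 10.20.2.4.
route_table = RouteTable(
    location="eastus",
    routes=[
        Route(
            name="default-via-firewall",
            address_prefix="0.0.0.0/0",
            next_hop_type="VirtualAppliance",
            next_hop_ip_address="10.20.2.4",
        )
    ],
)

client.route_tables.begin_create_or_update(
    resource_group, "firewall-routes", route_table
).result()
```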

A check that almost all networking professionals use to verify that a gateway is viable is to ping the gateway for the subnet on which the device is located. Unfortunately, you won’t be able to do this on an Azure Virtual Network. You could use the arp -g command to determine whether the virtual machine’s assigned gateway is online, though.

Name Resolution

We talked a little bit about name resolution in the previous article. As a reminder, there are two primary types of name resolution services that you can take advantage of in an Azure Virtual Network:

  • The default name resolution system, which provides some basic DNS services for virtual machines located on the virtual network. This allows virtual machines on the same virtual network to communicate with one another without the help of any other DNS servers.
  • “Bring Your Own DNS” (BYODNS), which is what you get when you install your own DNS servers on the Azure Virtual Network and point your virtual machines to those servers. In most corporate installations you’ll use the BYODNS option, because you’ll want to be able to resolve names not only within the Azure Virtual Network but also on the corporate network.

If you do bring your own DNS servers, you need to define them at the Azure Virtual Network level. The good news is that you can assign up to 12 DNS servers to an Azure Virtual Network, and they will then be handed out to the virtual machines that live on that virtual network. You don’t have to assign them all at once, either; you can assign some when you create the Azure Virtual Network and add more later.
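
If you manage the virtual network with the azure-mgmt-network Python SDK (again, the Resource Manager model rather than the classic model the series generally describes), assigning your own DNS servers at the virtual network level looks roughly like this sketch; the names and addresses are hypothetical, and existing virtual machines may need to renew their DHCP lease or restart before they pick up the new servers:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import DhcpOptions

# Hypothetical names used for illustration only.
subscription_id = "<subscription-id>"
resource_group = "datacenter-extension-rg"
vnet_name = "datacenter-extension-vnet"

client = NetworkManagementClient(DefaultAzureCredential(), subscription_id)

# Read the existing virtual network, set the BYODNS servers, and write it back.
vnet = client.virtual_networks.get(resource_group, vnet_name)
vnet.dhcp_options = DhcpOptions(dns_servers=["10.20.2.10", "10.20.2.11"])

client.virtual_networks.begin_create_or_update(
    resource_group, vnet_name, vnet
).result()
```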

One thing to keep in mind is that you can’t add your own custom DNS suffixes. I know that many of you like to tweak your name resolution settings based on operating system capabilities in Windows, but at this time you won’t be able to do that in an Azure Virtual Network. Again, like all other things, if there’s enough demand for this capability, I suspect you’ll see it in Azure in the future. You never know.

Another caveat is that while you can assign specific IP addresses to virtual machines, you can’t do this for DNS server assignment. DNS server assignment has to be done at the cloud service level or the Azure Virtual Network level. That’s just how it works.

Summary

In this, Part 5 of our multi-part article series, we spent a little more time on the finer points of using Azure Virtual Networks, covering some of the more interesting details, including key issues in IP addressing and name resolution. Next time we’ll move down the list and talk about other Azure networking capabilities and features. See you then!
