Hybrid Network Infrastructure in Microsoft Azure (Part 10)


Introduction

In part 1 of this series, I began the discussion about hybrid network infrastructure with some thoughts regarding what hybrid clouds are about, and then talked about some of the networking functionality that you get when you adopt Azure Infrastructure Services. There was also an introduction to the Datacenter Extension Reference Architecture Diagram that Tom put together with Jim Dial and several other people at Microsoft. In part 2, Tom joined in as co-author and we went over site-to-site VPNs and point-to-site VPNs. In part 3, we took a look at the Azure dedicated WAN link service, which goes by the name of ExpressRoute, and also discussed the Azure virtual network gateway, which sits on the edge of your Azure Virtual Network and enables you to connect your on-premises network to it.

Then in part 4, we spent the bulk of our time discussing what Azure Virtual Networks are and how they compare to the virtual networks we use in traditional on-premises Hyper-V installations. Part 5 went into a little more detail on Azure Virtual Networks and some special considerations you need to take into account. In Part 6, we discussed Azure Virtual Networks and external load balancers. In Part 7, we began our discussion of internal load balancing and how to use PowerShell to configure ILB for virtual machines contained within an Azure Virtual Network. In Part 8, we moved on to how to configure ILB for Cloud Services by editing the .cscfg file, and then talked about Network Security Groups.

In part 9 we talked about virtual machine ACLs, which you can use to allow selective remote access to virtual machines you place on Azure Virtual Networks. We also talked about some alternatives to virtual machine ACLs that in many ways can be more secure than configuring ACLs. In addition, virtual machine ACLs can’t be used on Azure Resource Manager (ARM)-based virtual machines.

We will now continue working our way through the list of network capabilities that were available in Azure at the time this article was written (always keeping in mind that Azure is constantly changing, growing, and adding new functionality):

√ Site-to-site VPNs
√ Point-to-site VPNs
√ Dedicated WAN links
√ Virtual network gateways
√ Azure Virtual Networks
√ Inter-virtual network connectivity
√ External load balancers
√ Internal load balancers
√ Network Security Groups
√ Virtual machine ACLs
> Dual-homed (multi-NIC) virtual machines
> Third-party firewalls
• Dedicated public IP addresses
• Static IP addresses on virtual machines
• Public addresses on virtual machines
• DNS

In this article we’ll pick up the next two items on our list: dual-NIC (or, in this cloudified age, more likely multi-NIC) virtual machines and third-party firewalls in the context of an Azure-based hybrid network infrastructure. Because these two topics are interrelated, we’ll be talking about both together throughout this discussion, which will carry over into Part 11.

Multi-NIC virtual machines

One of the big problems that we had with Azure in the past was that you couldn’t really segment your Azure Virtual Networks in the way you’re used to doing on-premises. That doesn’t mean you weren’t able to divide them into subnets. Sure, you could create subnets within the address space that you assigned to your Azure Virtual Network, but once you had done that, you couldn’t do much to control traffic between those subnets. This was frustrating to admins who were used to having that sort of control, but most accepted it as “the way it is,” since going to the cloud pretty much always involves giving up some of your control.

Still, this was a definite sticking point for some IT pros and might even have influenced a few to recommend against migrating to Azure. Well, the good news is that things have changed quite a bit since then, and in the right direction; this transformation began when Microsoft started offering Network Security Groups.

If you’ve been working with Azure recently – or even if you haven’t but you read Part 8 of this article series – then you know that a Network Security Group is like a very basic stateful packet inspection firewall that uses 5-tuple rules (source address, source port, destination address, destination port, and protocol) to control inbound and outbound traffic to and from subnets or specific virtual machines. NSGs were definitely better than nothing, but also far from the ideal solution. After all, simple stateful packet inspection firewalls are so 1990s. We need more than that to protect our twenty-first century virtual networks and all the mission critical applications and resources that reside on them.
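To make the 5-tuple idea concrete, here is a minimal sketch of defining an NSG rule with the ARM-based Azure PowerShell cmdlets. The resource group, rule name, region, and address ranges here are hypothetical, and depending on when you read this your module may use a different cmdlet prefix:

# A minimal sketch (hypothetical names and address ranges).
# Each rule is a classic 5-tuple -- source/destination address,
# source/destination port, and protocol -- plus an allow/deny action.
$rule = New-AzureRmNetworkSecurityRuleConfig `
    -Name "Allow-Web-To-App" `
    -Description "Permit HTTP from the web subnet to the app subnet" `
    -Access Allow -Protocol Tcp -Direction Inbound -Priority 100 `
    -SourceAddressPrefix "10.0.1.0/24" -SourcePortRange "*" `
    -DestinationAddressPrefix "10.0.2.0/24" -DestinationPortRange "80"

New-AzureRmNetworkSecurityGroup -Name "App-Subnet-NSG" `
    -ResourceGroupName "HybridRG" -Location "West US" `
    -SecurityRules $rule

Once created, the NSG can be associated with a subnet or with an individual NIC, which is what finally gave us some measure of per-subnet traffic control.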

In addition to Network Security Groups, we also had proxy-based web application “firewalls.” I put the word “firewalls” in quotation marks because despite the name, they weren’t anything like our modern robust network firewalls. These so-called “firewalls” were just single-NIC devices that were actually proxies rather than firewalls. Web proxy devices are typically single-NIC machines, and if you remember back in the day when Tom talked about “hork mode” ISA firewalls, you’ll know what I mean. Of course, proxies and firewalls do have some commonalities, and we know that in fact the ISA firewall was a direct descendant of Microsoft Proxy Server. However, we also know that one of the important differences between proxies and firewalls is that the former are easily bypassed because they don’t enforce network segmentation and isolation.

So at that point in the evolution of Azure, we were pretty much stuck with simple Network Security Groups and single-NIC proxies. But that was then, and things are quite a bit better now. Why? Because Azure now supports multi-NIC virtual machines. This capability was first announced at TechEd Europe in 2014. Now you can have a VM with two or more NICs, one of which is designated as the default (primary) NIC.

The ability to use multiple NICs is a critical capability for any network security device that you want to use to create strong logical security segmentation of your Azure Virtual Network. Multi-NIC support is also a prerequisite for the third-party network security appliances that you might want to use as firewalls. That means the addition of support for multiple NICs was a huge step forward in providing better control of traffic on Azure Virtual Networks.
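As a rough illustration, the following ARM PowerShell sketch attaches two NICs to a VM configuration, with the first flagged as the primary. The names and VM size are hypothetical, and it assumes $nic1 and $nic2 were already created with New-AzureRmNetworkInterface on separate subnets:

# A rough sketch (hypothetical names; assumes the two NICs already exist).
# Keep in mind that only the larger VM sizes support multiple NICs, so
# check the current VM size documentation before choosing one.
$vm = New-AzureRmVMConfig -VMName "fw-appliance" -VMSize "Standard_DS3_v2"

# The first NIC is flagged as the primary (default) NIC.
$vm = Add-AzureRmVMNetworkInterface -VM $vm -Id $nic1.Id -Primary
$vm = Add-AzureRmVMNetworkInterface -VM $vm -Id $nic2.Id

# OS, image, and storage settings would be added to $vm here before
# calling New-AzureRmVM to actually create the machine.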

That said, there are a number of things you need to be aware of when thinking about multi-NIC virtual machines. It’s easy to do this on your on-premises virtualization infrastructure, and we’ve been doing it for years without missing a beat. As you might already have discovered about many aspects of moving to the cloud, while we love the many benefits of embracing Azure and Azure Virtual Networks, it’s a different ball game in the cloud. And as always when you start playing a brand new game, you need to know the rules.

In the following sections, we’ll attempt to get you acquainted with some of the guiding principles and the limitations that are involved in using multiple NICs in Azure. Here are a couple of them to get you started, and we’ll pick up the rest in Part 11:

All VMs in the same cloud service or resource group need to have the same NIC setup

Regardless of the deployment model (ASM or ARM), all of the machines that reside in the same cloud service (ASM) or resource group (ARM) have to be either all single-NIC or all multi-NIC.

Note:
You might recall that ASM refers to Azure Service Management and ARM refers to Azure Resource Manager, which are two different REST APIs that can be used to enable access to Azure services and platform features, including Azure Virtual Machines and Azure Virtual Networks. Each has its advantages and disadvantages, but ARM is the newer model and has more or less replaced ASM. You might also hear/read ASM referred to as the “classic” deployment model. Microsoft recommends using ARM for most new deployments.

All multi-NIC VMs need to be on an Azure Virtual Network

Multi-NIC VMs have to be on Azure Virtual Networks (I don’t know why you would want to have a multi-NIC VM that isn’t on an Azure Virtual Network, but I’m sure there are some people out there who would want to do this). You might have virtual machines that aren’t in VNets, but multiple NICs on non-VNet virtual machines are not supported.

VNets are easy to create in the Azure portal. When you create a VNet, you can add it to an existing resource group or you can create a new resource group in the process of creating the VNet. The advantage of VNets is that they are isolated from one another, so you can create separate VNets for different purposes. Note, however, that a VNet can’t span more than one Azure region. Remember, too, that VMs can connect to one another within the same VNet even if they are in different subnets; that’s where the need for more control over traffic comes in. You can also connect VNets to one another, provided their CIDR blocks don’t overlap.
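For reference, here is a minimal PowerShell sketch of creating a VNet with two subnets carved out of its address space; the names and address ranges are ours, not anything prescribed:

# A minimal sketch (hypothetical names and address ranges).
$frontend = New-AzureRmVirtualNetworkSubnetConfig -Name "Frontend" `
    -AddressPrefix "10.0.1.0/24"
$backend = New-AzureRmVirtualNetworkSubnetConfig -Name "Backend" `
    -AddressPrefix "10.0.2.0/24"

# Both subnets fall within the VNet's 10.0.0.0/16 address space.
New-AzureRmVirtualNetwork -Name "HybridVNet" `
    -ResourceGroupName "HybridRG" -Location "West US" `
    -AddressPrefix "10.0.0.0/16" -Subnet $frontend, $backend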

Summary

In this, Part 10 of our series on creating and managing a hybrid network infrastructure in Microsoft Azure, we delved into the wonderful world of multiple NIC support that was added to Azure in 2014, and began discussing some guidelines and caveats and “gotchas” that apply to using this feature. In Part 11, we will continue and wrap up that discussion, so be sure to join us then.
