Windows Server 2012 Network Virtualization and Infrastructure as a Service (IaaS) in an On-Premises and Hosted Private Cloud Infrastructure (Part 1)

If you would like to read the next part of this article series please go to Windows Server 2012 Network Virtualization and Infrastructure as a Service (IaaS) in an On-Premises and Hosted Private Cloud Infrastructure (Part 2).


Infrastructure as a Service (IaaS) is one of the three common cloud service models. Most companies that are thinking about entering the private cloud space will think about deploying Infrastructure as a Service first, as it is the foundation of the other cloud service models, Platform as a Service (PaaS) and Software as a Service (SaaS). And rightly so: if you can't get Infrastructure as a Service right, any attempt at getting Platform as a Service or Software as a Service right in the private cloud is going to fail.

The core goal of IaaS is to provide something akin to "virtual machines for rent". When you deploy IaaS, business groups in your organization can request virtual machines and the core compute, network and storage they require to run their own platforms and services. To the consumers of the cloud service, the infrastructure is transparent: they won't know which hosts their workloads run on, where (or even on what type of) storage their data lives, or how the networking within the private cloud is configured. On the other hand, they will still be responsible for the development platform and the finished software services that run on that infrastructure.

Note that a key element of the "virtual machines for rent" concept is that, sooner or later, each consumer is expected to release some or all of the infrastructure they are using back into the shared pool of resources that comprises the private cloud. This means they will be taking advantage of the private cloud principle of elasticity, whereby they acquire the resources they need for the time they need them and then release those resources back into the shared pool when they are finished with them.

Private cloud is not necessarily on-premises

Another thing to keep in mind when it comes to IaaS in a private cloud is that, while by definition private cloud means that all the infrastructure is under the command and control of a single organization, the infrastructure does not have to be on the organization’s physical premises. This is an important point to emphasize when you’re talking about or considering deploying private cloud computing. Many times I have talked to people about cloud computing and they have told me that a private cloud is “cloud on-premises” and public cloud is “cloud off-premises.” That is not true, or at least is not the whole truth. If you wanted to, you could run a public cloud on premises; there is nothing to prevent you from hosting a cloud infrastructure on-premises and then opening it up to multiple organizations. Of course, it’s not very likely that you’re going to do that unless you’re thinking about going into the public cloud service provider business.

Likewise, a private cloud can be hosted both on-premises and with a hoster. This enables you to use your on-premises private cloud for development and then move the services that are designed and created there to your private cloud infrastructure on an off-site hoster network. In addition, you could use your private cloud infrastructure at the hoster's facility as a disaster recovery site, so that all of your on-premises cloud services can keep running there in the event of a disaster that takes down the primary site.

Traffic isolation and VLAN issues

This is where we run into a problem with networking. While there are a lot of advantages to using this mixed on-premises and off-premises model for private cloud deployments, there are some difficult networking issues that we will need to hammer out. From a security perspective, one of the greatest challenges for the cloud service provider is to enable isolation at all levels, including the compute, storage and networking divisions of the cloud infrastructure. At the networking layer, cloud service providers (and on-premises network providers) have traditionally depended on VLANs and 802.1q VLAN tagging to isolate the different customer networks from one another. While VLANs have a lot to recommend them, there are some difficult problems we encounter with VLANs when thinking about private cloud infrastructures of massive scale:

  • There is a hard limit on the number of VLANs available to any network. The VLAN ID (VID) defined by the 802.1Q standard is 12 bits long, allowing the identification of 4096 (2^12) VLANs. Of the 4096 possible VIDs, VID 0 identifies priority frames and VID 4095 (0xFFF) is reserved, so at most 4,094 VLANs can be configured. That's not a very big number when we're talking about hosted private cloud infrastructures of massive scale.
  • While the theoretical number of VLANs is over 4,000, switches below carrier grade that support more than about 1,000 VLANs are hard to find, which makes it even more challenging for the private cloud hoster to provide the scale required to isolate traffic for each consumer of the cloud service.
  • VLANs are tied to specific subnets, which makes them inflexible when workloads need to move across the physical network.
  • There is a lot of management overhead with VLANs. If you add or remove a VLAN, you have to make sure that it’s configured correctly throughout the networking infrastructure. This is one of the most common reasons for massive network outages that we’ve seen in the last few years in the public cloud provider space, and it has traditionally been a vexing problem in the traditional IT datacenter management arena.
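The 12-bit VLAN ID arithmetic in the first bullet above can be verified in a few lines of Python:

```python
# 802.1Q reserves 12 bits in the tag for the VLAN ID (VID).
VID_BITS = 12
total_vids = 2 ** VID_BITS   # 4096 possible values

# VID 0 identifies priority frames; VID 4095 (0xFFF) is reserved.
reserved_vids = {0, 0xFFF}
usable_vlans = total_vids - len(reserved_vids)

print(total_vids)    # 4096
print(usable_vlans)  # 4094
```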

IP addressing issues

As you can see, the issue of traffic isolation introduces the first very real challenge to the cloud service provider who wishes to host a private cloud infrastructure for consumers of the cloud service. That is the first problem we need to solve. The second is even thornier: how to deal with each customer's IP addressing scheme.

When organizations put together solutions in their on-premises private clouds, they will assign IP addressing information to each component and each tier of the solution, typically based on whatever conventions they have already established on their networks. These conventions are not very flexible, since the on-premises private cloud environment must work together with the on-premises network addressing scheme so that the developers can access the private cloud services and other services provided on the corporate network.

However, there will come a time when the on-premises development environment is going to need to go into production, or there’ll be a time when you are going to need to use the secondary site (on your private cloud infrastructure at the hoster site) because a disaster takes place on your own network. When these times come, you don’t want to have to move the virtual machines to the hoster private cloud infrastructure and then go back and reconfigure all the IP addresses on all the machines that are now being hosted on the private cloud hoster network.

Why would you have to reconfigure the IP addresses? Because there's a very high chance that other consumers of the private cloud at the hoster network will be using the same IP addressing scheme that you are! Most organizations use private IP address ranges, so it's inevitable that your IP addressing scheme is going to collide with someone else's. And that is where the fun begins, because then you will need to work with your cloud service provider to find out which addresses are available to you, and then you'll have to figure out how to make the appropriate address changes to the virtual machines in your hosted private cloud.
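The collision problem is easy to demonstrate. Using Python's standard `ipaddress` module with two hypothetical tenant address ranges (the subnets here are illustrative, not taken from any real deployment):

```python
import ipaddress

# Two hypothetical tenants, both following the common RFC 1918
# convention of carving networks out of 10.0.0.0/8.
tenant_a = ipaddress.ip_network("10.0.0.0/16")
tenant_b = ipaddress.ip_network("10.0.1.0/24")

# overlaps() returns True when the two ranges share any addresses,
# which is exactly the collision the hoster has to resolve.
print(tenant_a.overlaps(tenant_b))  # True
```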

While changing IP addresses is relatively easy, several issues can complicate matters when addresses change:

  • In many cases, organizations tie the IP addresses in with a particular geographical location.
  • Many network management tools are “hard coded” or targeted based on specific IP addresses and this “homing” based on IP address will need to be reconfigured if you change the IP addresses of the services after they move to the hosted private cloud.
  • Security management tools are also very IP-address-aware and are also targeted to specific IP addresses. If they see addresses changing and have no knowledge of the reason for this, they will fire off an alert and perhaps respond inappropriately.
  • There are also routing issues that you’ll need to consider, and these can be very complicated, especially if there is some collision between the addresses the cloud service provider wants you to use and those you already are using on your on-premises private cloud network.

The issues of isolation and IP address changes are big ones that could make hosted private clouds impractical – or at least ruin the vision of private cloud computing, in which workloads should be entirely mobile and the services running in virtual machines should be completely decoupled from the cloud infrastructure. If we cannot ensure this decoupling, the attraction and value of cloud computing will be severely diminished, and arguably lost entirely.

So what is the solution (at least from a networking viewpoint)? Network virtualization. If we can virtualize the network in the same way that we do for the compute component (by using server virtualization technologies), then we can move our virtual machines from the on-premises environment to the hoster’s private cloud environment without needing to change the IP addresses on the virtual machines. This might sound like magic, but this is what the new Windows Server 2012 Network Virtualization feature is all about! And that’s what we’ll get into in Part 2.
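The core idea behind network virtualization – letting each tenant keep its own "customer" addresses while the provider routes on its own "provider" addresses – can be sketched as a per-tenant lookup table. This is a simplified illustration of the concept, not the actual Windows Server 2012 implementation, and all names and addresses are hypothetical:

```python
# Hypothetical sketch of a customer-address (CA) to provider-address (PA)
# mapping. Two tenants reuse the exact same CA without conflict because
# every lookup is keyed by (tenant, CA), not by the address alone.
ca_to_pa = {
    ("TenantA", "10.0.0.5"): "192.168.10.21",  # PA on the hoster fabric
    ("TenantB", "10.0.0.5"): "192.168.10.34",  # same CA, different tenant
}

def provider_address(tenant: str, ca: str) -> str:
    """Resolve the provider address used to deliver a tenant's packet."""
    return ca_to_pa[(tenant, ca)]

print(provider_address("TenantA", "10.0.0.5"))  # 192.168.10.21
print(provider_address("TenantB", "10.0.0.5"))  # 192.168.10.34
```

Because the virtual machines only ever see their customer addresses, they can move between the on-premises cloud and the hoster's cloud without being reconfigured; only the mapping changes.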


In this article, we discussed two key issues that could make it difficult to run a hosted private cloud network of massive scale. The first is the security issue of isolation: we need to make sure that each organization's traffic in the private cloud is kept separate and inaccessible to other organizations. The second is IP addressing: we need to make sure that the services and servers used on the on-premises private cloud infrastructure can retain their original IP addresses when they're moved to the hoster's private cloud environment. The solution to both of these problems is the Windows Server 2012 Network Virtualization feature. In the second article of this series, we'll discuss that feature and explain how it works and how it solves these two critical problems in private cloud networking. See you then! –Deb.

