Considerations for Distributed Applications in Virtual Environments (Part 1)

If you would like to read the next part in this article series please go to Considerations for Distributed Applications in Virtual Environments (Part 2).

I recently heard someone at Microsoft talking about some of the reasons behind the way that the Exchange Server 2013 architecture was designed. As you may know, Exchange Server 2007 and Exchange Server 2010 each supported five different server roles. When Microsoft created Exchange Server 2013 however, they abandoned most of these roles and went back to using an architecture that is more similar to that found in Exchange Server 2003. In fact, Exchange Server 2013 only uses two different server roles – the mailbox server role and the client access server role.

Obviously there were a lot of different reasons for making such a radical design change. However, one of the reasons really caught my attention. It was stated that one of the reasons why there had been such a high degree of separation in previous versions of Exchange Server was because some of the server hardware of the time was inadequate for handling all of the Exchange Server roles at the same time. Separating various functions out into separate roles made it possible to deploy Exchange Server on more modest hardware. Today however, hardware is much more powerful than it used to be, so hardware limitations were allegedly much less of a factor in designing Exchange Server 2013.

While this particular discussion was fascinating in its own right, it got me thinking about the larger subject of running distributed applications on virtual machines. Is it better to distribute an application by placing its roles on separate virtual machines, or to combine the roles onto a single virtual machine, or does it even matter?

On the surface this seems like a moot point. Take Exchange Server 2010 for example. Suppose that you separate out the various server roles, and you have a virtual machine that is acting as a client access server, another virtual machine that is acting as a hub transport server, and yet another virtual machine that is acting as a mailbox server. Assuming that all of these virtual machines are running on the same host, there is really no advantage from a hardware standpoint to keeping the server roles separate. If anything, Exchange Server is consuming slightly more hardware resources by operating in a distributed manner than it would if all of the roles were combined onto a single virtual machine. The reason for that is that each virtual machine has its own operating system, and there is a certain degree of resource consumption associated with the operating system.

When you examine the question of whether it is better to distribute an application across multiple virtual machines or consolidate it onto a single virtual machine purely in this way, it seems as though consolidation would be the best option. However, things are not quite as cut and dried as they might at first seem. Like so many other things in IT, application distribution or consolidation is all about trade-offs. There are advantages and disadvantages to each approach, and you must use the approach that makes the most sense for your own environment. My goal in writing this article series is to point out some of the issues that you must consider as you determine how an application should be deployed.

Licensing Costs

Like it or not, cost is often one of the main factors in determining the outcome of IT related decisions. When it comes to the manner in which applications are deployed, there can be huge differences in cost depending upon the type of deployment that you choose to perform. To show you what I mean, let’s go back to my earlier example of Exchange Server. The same concept holds true for pretty much any distributed enterprise class application, but Exchange Server makes it a particularly good example.

Imagine for a moment that you decided to deploy Exchange Server 2010 and you wanted to separate out the mailbox server role, the hub transport role, the client access server role, and the edge transport server role onto separate virtual machines. This means that you would be creating four separate virtual machines. In the real world, there might even be additional virtual machines so as to accommodate database availability groups and other fault tolerant solutions, but we'll leave fault tolerance out of the picture for now as a way of keeping things simple.

Typically, licensing works the same way on virtual hardware as it does on physical hardware. If you are planning on deploying four instances of Exchange Server, then you are going to need four Exchange Server licenses. You are also going to need four Windows Server licenses. If you happen to be installing the virtual machines on top of a Hyper-V server that has a Datacenter edition license, then the guest operating system licensing requirement goes away. In any case, however, you must still purchase an Exchange Server license for each instance. On the other hand, if you were to simply consolidate all of the Exchange Server roles onto a single virtual machine, then you would only need one operating system license and one Exchange Server license. The client access license requirements remain the same in either situation.
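To make the cost difference concrete, here is a minimal sketch of the arithmetic. The prices and function names are purely illustrative assumptions of mine, not actual Microsoft pricing:

```python
# Hypothetical license prices -- illustrative only, not real Microsoft pricing.
EXCHANGE_LICENSE = 700
WINDOWS_LICENSE = 900


def deployment_cost(vm_count, datacenter_host=False):
    """Rough server-license cost for running Exchange roles across vm_count VMs.

    A Datacenter-licensed Hyper-V host covers the guest OS licenses,
    so in that case only the Exchange Server licenses remain.
    """
    os_cost = 0 if datacenter_host else vm_count * WINDOWS_LICENSE
    return vm_count * EXCHANGE_LICENSE + os_cost


distributed = deployment_cost(4)   # four roles on four separate VMs
consolidated = deployment_cost(1)  # all roles on a single VM
print(distributed, consolidated)   # the gap grows with every additional role VM
```

Even with made-up numbers, the pattern holds: every additional role VM adds another Exchange license (and, without a Datacenter host license, another Windows license) while the client access licenses stay constant.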

Another reason why you might choose to consolidate a distributed application into a single virtual machine is that doing so makes the application easier to manage. Unlike physical servers, virtual machines tend to be anything but stationary. Whether you're using Microsoft's Hyper-V or VMware, there is always the possibility that a virtual machine will be live migrated to a different host server. Live migration isn't a big deal in and of itself, but some organizations require a bit more control over the live migration process. For example, some organizations have a requirement for virtual machine affinity. In other words, all of the virtual machines that are associated with a particular application must always reside on the same host server.

Similarly, some applications are distributed as a way of providing fault tolerance. In a virtual environment, this often means building guest clusters. The effectiveness of a guest cluster can be completely undermined if all of the virtual machines that make up the guest cluster reside on a common host. Think about it for a moment. If that host were to fail, and something happened to keep the virtual machines from failing over to another host (which does sometimes happen), then the guest cluster would also fail. This completely negates the benefit of building the guest cluster in the first place.

In situations in which a distributed application is consolidated into a single virtual machine, the whole need for monitoring application affinity goes away. You don’t have to worry about making sure that virtual machines always remain on the same host together, or that they never reside on a common host. In this regard, application consolidation does simplify management.
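The affinity and anti-affinity bookkeeping described above can be sketched as a simple placement check. This is a minimal illustration of the concept, assuming hypothetical VM and host names; the rule structure and function name are my own, not a Hyper-V or VMware API:

```python
def check_placement(vm_to_host, affinity_groups=(), anti_affinity_groups=()):
    """Validate a VM-to-host placement against affinity rules.

    affinity_groups:      sets of VMs that must all share one host
    anti_affinity_groups: sets of VMs that must each be on a different host
    Returns a list of human-readable violations (empty list == compliant).
    """
    violations = []
    for group in affinity_groups:
        hosts = {vm_to_host[vm] for vm in group}
        if len(hosts) > 1:
            violations.append(
                f"affinity broken: {sorted(group)} span hosts {sorted(hosts)}")
    for group in anti_affinity_groups:
        hosts = [vm_to_host[vm] for vm in group]
        if len(set(hosts)) < len(hosts):
            violations.append(
                f"anti-affinity broken: {sorted(group)} share a host")
    return violations


# Hypothetical placement: a guest cluster whose nodes ended up on one host.
placement = {"cas1": "hostA", "mbx1": "hostA", "mbx2": "hostA"}
problems = check_placement(
    placement,
    affinity_groups=[{"cas1", "mbx1"}],        # keep these together
    anti_affinity_groups=[{"mbx1", "mbx2"}],   # keep cluster nodes apart
)
# -> one violation: mbx1 and mbx2 share hostA
```

Consolidating the application onto one VM makes both rule lists empty, which is exactly the management simplification described above.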

Conclusion

This article may make it sound as though there is never a time when it is a good idea to deploy an application in a distributed manner in a virtual data center. However, nothing could be further from the truth. I would go so far as to say that in large environments, it is usually best to distribute applications whenever possible. There are a number of different benefits to distributing applications as opposed to consolidating them. I will talk about the benefits of application distribution in the second part of this article series.

If you would like to read the next part in this article series please go to Considerations for Distributed Applications in Virtual Environments (Part 2).
