Determining Guest OS Placement (Part 2)

If you missed the first part in this article series, please read Determining Guest OS Placement (Part 1).

In the previous article in this series, I began discussing some of the techniques used for matching virtual servers to physical hardware. Although that article covers the basics fairly well, there are still a few other issues that you may have to consider. In this article, I want to conclude the series by giving you a couple more things to think about.

Step Three: Establish Performance Thresholds

The first thing that I want you to think about is individual virtual machine performance. I already talked about resource allocation in the previous article, but performance is, in a way, a separate issue.

One of the main reasons for this is that in a virtualized environment, all of the guest operating systems share a common set of physical resources. In some cases it is possible to reserve specific resources for a particular virtual machine. For example, you can structure the memory configuration in a way that guarantees that each virtual machine will receive a specific amount of physical memory. Likewise, you can use processor affinity settings to control which processor cores each virtual machine has access to. While these are all good steps to take, they do not actually guarantee that a guest operating system will perform in the way that you might expect.
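To make the idea of guaranteed resources a bit more concrete, here is a minimal sketch of the arithmetic involved. It is written in Python rather than against any particular hypervisor's management interface, and the host size, virtual machine names, and reservation figures are all hypothetical: before placing guests on a host, you would verify that the sum of the per-VM memory guarantees still fits within the host's physical memory.

```python
# A minimal sketch (hypothetical figures) of checking that per-VM
# memory guarantees fit within a host's physical memory.

HOST_MEMORY_MB = 32768      # total physical RAM on the host (example value)
PARENT_RESERVE_MB = 4096    # memory held back for the host itself (example)

# Memory guaranteed to each guest, in MB (names and values are made up)
guarantees = {
    "web01": 4096,
    "sql01": 8192,
    "app01": 6144,
}

available = HOST_MEMORY_MB - PARENT_RESERVE_MB
committed = sum(guarantees.values())

if committed > available:
    print(f"Overcommitted: {committed} MB guaranteed, {available} MB available")
else:
    print(f"OK: {committed} MB of {available} MB guaranteed "
          f"({available - committed} MB of headroom)")
```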

The reason for this is that sometimes there is an overlapping need for shared resources. In some cases this can actually work in your favor, but in other cases, overlapping resource requirements can be detrimental to a guest operating system’s performance.

The reason why I say this is that Microsoft usually recommends a one-to-one mapping of virtual processors to processor cores. Having said that though, it is possible to map multiple virtual processors to a single processor core. With that in mind, imagine what would happen if you tried to run six virtual machines on four physical CPU cores.

What would happen in this situation really just depends on how those virtual machines are being used, and how much CPU time they consume. For instance, if each virtual machine were only using about 25% of the total processing capacity of a physical core, then performance would probably not even be an issue (at least not from a CPU standpoint).
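The arithmetic behind that claim is worth spelling out. Here is a quick back-of-the-envelope calculation in Python, using the hypothetical figures from the example above: six virtual machines each averaging a quarter of a core add up to only about a core and a half of demand, which four physical cores can absorb with room to spare.

```python
# Back-of-the-envelope check for the six-VMs-on-four-cores example.
# The utilization figure is the hypothetical one from the text.

physical_cores = 4
vm_core_usage = [0.25] * 6            # each VM averages 25% of one core

total_demand = sum(vm_core_usage)     # 1.5 cores' worth of work
headroom = physical_cores - total_demand

print(f"Average demand: {total_demand:.2f} of {physical_cores} cores "
      f"({headroom:.2f} cores of headroom)")
```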

The problem is that most of the time the load that a virtual machine places on a CPU does not remain constant. If you have ever done any performance monitoring on a non-virtualized Windows server, then you know that even when a machine is running at idle, there are fluctuations in CPU utilization. Occasionally the CPU will spike to 100% utilization, but it also occasionally dips to 0% utilization.

You will recall that earlier I said sometimes shared resources can be beneficial to a virtual server, and sometimes detrimental to it. The reason why I say this is that in situations in which the other virtual machines are underutilizing shared resources, a virtual machine may be able to borrow some of those resources from the other virtual machines to help it to perform better. Of course this capability varies depending upon how the virtual servers are configured, and on which resources are needed. Conversely, if multiple virtual machines try to consume an abnormally large amount of resources at the same time, the physical hardware may not be able to keep up with the demand, and performance will suffer until the demand for resources returns to normal.

With this example in mind, the question that you have to ask yourself is whether or not it is acceptable for multiple virtual machines to lay claim to the same set of physical resources at the same time. Of course the only way that you can answer this question is to do some performance benchmarking and find out what level of resource consumption is normal for each virtual machine.
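As a sketch of what that benchmarking might look like once the numbers are collected, the following Python fragment summarizes per-VM CPU samples into an average and a peak, then asks whether the host could still cope if every guest hit its peak at once. The sample data here is fabricated purely for illustration; in practice you would gather it over time with a monitoring tool such as Performance Monitor.

```python
# A sketch of summarizing benchmark data into per-VM baselines.
# All sample values are fabricated for illustration.

from statistics import mean

physical_cores = 4

# Hypothetical CPU samples per VM, expressed as fractions of one core
samples = {
    "web01": [0.10, 0.15, 0.80, 0.12, 0.20, 0.95, 0.18],
    "sql01": [0.40, 0.55, 0.60, 0.90, 0.35, 0.50, 0.45],
    "app01": [0.05, 0.10, 0.08, 0.30, 0.07, 0.12, 0.09],
}

worst_case = 0.0
for vm, data in samples.items():
    peak = max(data)
    print(f"{vm}: average {mean(data):.2f} cores, peak {peak:.2f} cores")
    worst_case += peak

# If every guest peaked simultaneously, would the host keep up?
verdict = "fits" if worst_case <= physical_cores else "exceeds capacity"
print(f"Combined peak demand: {worst_case:.2f} of {physical_cores} cores "
      f"({verdict})")
```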

Step Four: Perform a Juggling Act

The final step in the process is to perform a juggling act. In some ways, this is not so much a step as it is a byproduct of working in the corporate world. The reason why I say that the last step is to perform a juggling act is that oftentimes you may find that what works best from an IT perspective does not mesh with the company’s business requirements. In these types of situations, you will have to find a balance between functionality and corporate mandates. Often this boils down to security concerns.

For example, one of the biggest security fears in regard to virtualization is something called an escape attack. The basic idea behind an escape attack is that an attacker is able to somehow escape from the constraints of the guest operating system, and then gain access to the host operating system. Once an attacker is able to do that, they could theoretically take control over every other guest operating system that is running on the host server.

To the best of my knowledge, nobody has ever successfully performed an escape attack in a Hyper-V environment. Even so, many organizations are still jumpy when it comes to the possibility. After all, zero-day exploits do occur from time to time, and Hyper-V has not really been around long enough to warrant total confidence in its security.

Do not get me wrong. I am not saying that Hyper-V is insecure. I am just saying that like any other new product, there may be security holes that have yet to be discovered.

Given the possibility that someone might eventually figure out how to perform an escape attack against Hyper-V, some organizations have begun to mandate that only virtual machines that are specifically designed to act as front-end servers can be placed on certain host servers. Front-end servers typically reside at the network perimeter, and are therefore the most prone to attack. By their very nature, front-end servers are designed to shield the back-end servers from attack.

Grouping all of the front-end servers together on a common host machine ensures that if someone ever does perform an escape attack, they will not gain access to anything other than a bunch of front-end servers. Since front-end servers do not typically contain any data, this approach helps to protect back-end servers from being compromised through an escape attack.

So what is wrong with this approach? Nothing, from a security standpoint. From a utilization standpoint, though, this approach can represent a colossal waste of server resources. In smaller organizations, front-end servers tend to consume very few hardware resources. If your goal were to get the most bang for your hardware buck, you would want to pair low-utilization virtual servers with high-utilization virtual servers. That way, the two balance each other out, and you can evenly distribute the virtual machine workload across your hardware. In this particular case, however, the organization’s security requirements take precedence over making the most effective use of the organization’s hardware.
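To illustrate the utilization-only approach that such a security mandate overrides, here is a toy Python sketch, with fabricated virtual machine names and figures, that pairs the busiest guests with the idlest ones so that each pair places a roughly even load on its host.

```python
# A toy sketch of the "pair low with high" idea: rank the VMs by
# measured utilization and pair the busiest with the idlest so each
# pair places a roughly even load on its host. All figures are made up.

utilization = {
    "sql01": 0.90, "exch01": 0.75, "web01": 0.20,
    "web02": 0.15, "app01": 0.55, "dns01": 0.05,
}

ranked = sorted(utilization, key=utilization.get)   # idlest first

# Walk inward from both ends of the ranked list to form pairs
pairs = [(ranked[i], ranked[-1 - i]) for i in range(len(ranked) // 2)]

for low, high in pairs:
    combined = utilization[low] + utilization[high]
    print(f"{low} + {high}: combined load {combined:.2f}")
```

With these sample numbers, the pairings come out at roughly comparable combined loads, which is exactly the even distribution the approach is after.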

Conclusion

In this article series, I have explained that while a single physical server is usually capable of hosting multiple virtual servers, it is important to group virtual servers together in a way that makes the most efficient use of hardware resources without overtaxing the hardware in the process. I then went on to explain that sometimes an organization’s business needs or security policy may prevent you from making the absolute best possible use of your server hardware.

