Server and desktop hypervisors (Part 2)

If you would like to be notified of when Scott Lowe releases the next part in this article series please sign up to our Real Time Article Update newsletter.

If you would like to read the first part in this article series please go to Server and desktop hypervisors (Part 1).

A new level of availability

In part 1, I discussed high availability – vMotion, Live Migration, DRS, Storage vMotion and others – and how these features are critical to a server-based hypervisor’s ability to reduce administrative overhead while, at the same time, providing organizations with powerful availability capabilities.

This whole concept can be taken to an even greater level, too. Think back to high availability and compare that capability with DRS and PRO, which extend high availability to new levels. However, the primary limitation of the features discussed up to this point is that everything is planned: a host is taken down for maintenance, or rules are created governing the placement of virtual machines on appropriate hosts.

Unfortunately, the real world isn’t quite so neat.

At some point, a virtual host will simply fail and take with it all of its running workloads. While DRS and PRO are great when it comes to managing planned workloads, neither one is fully adept at having the rug pulled out from under it. That’s where even more powerful availability mechanisms must come into play. As is the case with DRS, not all editions of every product have this more advanced capability, so organizations need to choose editions based on whether or not these kinds of features are needed.

In VMware, this is the job of the Fault Tolerance feature in vSphere, but Citrix XenServer also boasts this capability. At present, Hyper-V doesn’t have this highest of high availability features. vSphere goes so far as to automatically create a new secondary virtual machine once a host fails and Fault Tolerance kicks in, ensuring continuous protection and availability. In VMware’s lineup, the Advanced, Enterprise and Enterprise Plus editions carry this feature.


Scalability
The virtualization promise includes lower cost along with all of the other technical goodies, such as high availability and more. Since virtualization has to support huge environments, and organizations want to do so with as little hardware as possible, scalability is key. Beyond just supporting large overall environments, virtualized servers need to be able to access the same kinds of resources that they can on physical hardware. Whether this means being able to use 2, 4 or even 8 virtual processors, the hypervisor has to allow this level of individual workload scalability.

All hypervisor products carry different levels of scalability and you need to separately consider both host and individual virtual machine scalability limits. Your choice of hypervisor product may very well be informed by some of the limits that you find. See Table 1 below for some overall limits implemented in the (currently) newest versions of these enterprise hypervisor products.




                 vSphere 4.1    Hyper-V R2    XenServer 5.6
Max RAM/host     No limit       1 TB          256 GB
RAM overcommit   Yes            No            Yes
Max VMs/host     320            384
Max vCPUs/VM     8              4             8
Max RAM/VM       255 GB         64 GB         32 GB
Max disk/VM      2 TB           2 TB          2 TB

Table 1: Hypervisor limits

How can this information help you make a hypervisor choice? Well, suppose you’re running a high-end SQL Server machine and you’d like to move it to your virtual environment. Next, suppose that this SQL Server workload requires 4 virtual CPUs and 48 GB of RAM. This high RAM need surpasses the per-VM limit for Citrix XenServer 5.6, so that product would be eliminated as a contender. You’ll also notice that only VMware and Citrix support more than four virtual CPUs per virtual machine. If you need more than four vCPUs in a VM, Hyper-V isn’t an option.
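This kind of sizing check is easy to express in a few lines of code. The sketch below is purely illustrative – it is not any vendor’s tool, and the per-VM RAM limits come from Table 1 while the 8-vCPU ceilings for vSphere 4.1 and XenServer 5.6 are an assumption based on the "more than four vCPUs" comparison in the text:

```python
# Hypothetical per-VM limits, taken from Table 1 and the discussion above.
LIMITS = {
    "vSphere 4.1":   {"max_vcpus": 8, "max_ram_gb": 255},
    "Hyper-V R2":    {"max_vcpus": 4, "max_ram_gb": 64},
    "XenServer 5.6": {"max_vcpus": 8, "max_ram_gb": 32},
}

def candidates(vcpus_needed, ram_gb_needed):
    """Return the hypervisors whose per-VM limits satisfy the workload."""
    return [name for name, lim in LIMITS.items()
            if lim["max_vcpus"] >= vcpus_needed
            and lim["max_ram_gb"] >= ram_gb_needed]

# The SQL Server example from the text: 4 vCPUs and 48 GB of RAM.
print(candidates(4, 48))  # XenServer's 32 GB/VM limit rules it out
```

Running the check for the 4 vCPU / 48 GB workload leaves only vSphere 4.1 and Hyper-V R2 in contention, exactly as the paragraph above concludes.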

Notice also the limits in the top half of the table. With the exception of the number of virtual machines that can run on a single host, VMware vSphere wins the day in a head-to-head comparison. Of course, these limits are often academic; it would be pretty tough to get to 384 virtual machines on a single Hyper-V host anyway. At some point, there would be resource – processor, RAM, disk – exhaustion well before this limit was hit.

There is one feature of note that I want to delve into a bit deeper – RAM overcommit. Over the years, this has been a relatively controversial topic in some circles. RAM overcommit allows a hypervisor administrator to assign more RAM to virtual machines than is physically available on the host. At first glance, this might seem like a dangerous thing to do, but when you peek beneath the hood a bit, you’ll find that it’s actually a pretty powerful feature that can help you further scale your environment.

Before moving on, I want to highlight some of the memory management capabilities that are found in vSphere 4.1:

  • Transparent Page Sharing (TPS). TPS is a de-duplication feature used by vSphere to store a single copy of multiple, identical memory pages. This is one method by which vSphere is able to use more physical memory than is available in the server. The more that can be shared, the more RAM that remains available.
  • Memory Ballooning. Memory swapping to disk can be an intensive task. Therefore, vSphere implements a feature called ballooning as an intermediary step before taking that intensive action. Both ballooning and swapping take place only in low memory conditions. Ballooning takes advantage of a driver that is installed along with VMware Tools inside a virtual machine. When vSphere needs more RAM, it tells the balloon driver to inflate, which requests RAM from the guest operating system so the host can reclaim it. This technique is most useful when you’ve allocated more RAM to a guest virtual machine than it actually needs, but it can also be used in other conditions.
  • Memory Compression. Rather than just swap memory out to disk, memory compression allows memory pages to be compressed and still stored in memory. Decompressing a memory page is orders of magnitude faster than reclaiming the page from a swap file on disk, so this is another method by which vSphere memory management does everything possible to ensure that virtual machines run well.
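The idea behind Transparent Page Sharing can be sketched in a few lines. This is a simplified illustration, not VMware’s implementation – the real feature hashes pages as a hint and then verifies candidate matches bit-by-bit before sharing them copy-on-write:

```python
import hashlib

PAGE_SIZE = 4096  # typical x86 page size in bytes

def shared_footprint(pages):
    """Return (physical_pages_needed, pages_saved) if identical pages
    are stored once and shared, TPS-style. `pages` is a list of bytes."""
    unique = {hashlib.sha256(p).digest() for p in pages}
    return len(unique), len(pages) - len(unique)

# Three VMs, each holding a zero-filled page plus one distinct page:
zero = bytes(PAGE_SIZE)
pages = [zero, b"a" * PAGE_SIZE, zero, b"b" * PAGE_SIZE, zero, b"c" * PAGE_SIZE]
print(shared_footprint(pages))  # (4, 2): six virtual pages, four physical
```

Zero-filled pages, common library code and identical guest operating systems are the big winners here, which is why consolidation ratios improve as guests become more homogeneous.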

I believe that advanced RAM management that allows for memory overcommitment is a key feature in an enterprise-grade hypervisor. In general, RAM is the first resource to be exhausted on a virtual host. Processing power has increased by orders of magnitude over the years and, although today’s servers have more RAM than ever before, RAM capacity has definitely grown at a slower rate. RAM overcommit allows administrators to deploy more virtual machines than might otherwise be possible.
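The arithmetic of overcommitment is simple, and a quick calculation (a generic illustration, not tied to any one product) shows why the feature matters:

```python
def overcommit_ratio(vm_ram_gb, host_ram_gb):
    """Ratio of RAM configured across VMs to physical host RAM.
    A value above 1.0 means the host is overcommitted."""
    return sum(vm_ram_gb) / host_ram_gb

# Twenty VMs configured with 4 GB each on a 64 GB host:
print(overcommit_ratio([4] * 20, 64))  # 1.25
```

Without overcommit, that 64 GB host tops out at sixteen 4 GB guests; with page sharing, ballooning and compression reclaiming idle memory, the extra four can often run without swapping.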

Networking and scalability

Even the most robust virtual server environment will fall flat if the network it’s connected to is weak. As such, hypervisors provide a number of network features designed to ease the integration of virtual servers into the wider infrastructure. What are some of the features that an enterprise-grade hypervisor should support in order to make this process seamless?

  • 802.1q VLAN tagging. From both a security and an operational perspective, not every virtual machine should be on the same network. Hence, the need to be able to tag networks in a very granular way. All of the major hypervisors support this feature.
  • 802.3ad link aggregation. As is the case with physical servers, network connectivity needs to be able to scale to the levels supported in a physical environment. As such, hypervisors need to support 802.3ad-based link aggregation in order to scale bandwidth beyond a single network uplink. Both VMware and Microsoft support link aggregation.
  • IPv6. The world is out of IP addresses. At some point, a full transition to IPv6 is inevitable. The virtual infrastructure must support this protocol in a complete way.
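To make the 802.1Q point concrete, the tag itself is just four bytes inserted into the Ethernet header: a 0x8100 type identifier (TPID) followed by a 16-bit control field holding a 3-bit priority, a drop-eligible bit, and a 12-bit VLAN ID. A small sketch of parsing one:

```python
import struct

TPID_8021Q = 0x8100  # EtherType value identifying an 802.1Q tag

def parse_vlan_tag(tag_bytes):
    """Parse a 4-byte 802.1Q tag into (priority, dei, vlan_id)."""
    tpid, tci = struct.unpack("!HH", tag_bytes)
    if tpid != TPID_8021Q:
        raise ValueError("not an 802.1Q tag")
    return tci >> 13, (tci >> 12) & 1, tci & 0x0FFF

# Build a tag for VLAN 100 at priority 5, then parse it back:
tag = struct.pack("!HH", TPID_8021Q, (5 << 13) | 100)
print(parse_vlan_tag(tag))  # (5, 0, 100)
```

The 12-bit VLAN ID is what gives you up to 4094 usable segments per trunk – plenty of granularity for separating virtual machine traffic by role or tenant.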

VMware vSphere 4 also introduced what the company calls the vNetwork Distributed Switch (vDS). A vDS eases the burden of per-host virtual switch configuration by turning the network into a single, aggregated resource managed as a single unit. With the traditional method, each host maintains its own virtual switch, managed on a per-host basis. By aggregating these disparate standard switches into a single distributed switch, administrative overhead is eased and new capabilities are added, including a feature called Network vMotion. Network vMotion takes networking to a new level by tracking individual virtual machine network state, such as per-port counters and statistics, as the VM migrates between hosts on a vDS. This kind of capability makes network troubleshooting and monitoring much easier.

The implementation of a vDS also enables bidirectional traffic shaping, which allows you to limit traffic both to and from virtual machines.
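Traffic shapers of this kind are commonly built on a token-bucket scheme, where transmit credit accumulates at a configured average rate up to a burst ceiling. The following is a generic sketch of that mechanism, not VMware’s implementation:

```python
class TokenBucket:
    """Minimal token-bucket shaper: a packet is allowed only if enough
    tokens (bytes of credit) have accumulated at `rate` bytes/sec,
    capped at `burst` bytes."""
    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens, self.last = burst, 0.0  # start with a full bucket

    def allow(self, size, now):
        # Refill credit for the time elapsed since the last decision.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size:
            self.tokens -= size
            return True
        return False

bucket = TokenBucket(rate=1000, burst=1500)   # ~1 KB/s, one-packet burst
print(bucket.allow(1500, now=0.0))  # True  (spends the burst credit)
print(bucket.allow(1500, now=0.5))  # False (only 500 bytes rebuilt)
print(bucket.allow(1500, now=1.5))  # True  (full credit again by t=1.5)
```

vSphere’s shaping policies expose essentially these knobs – average bandwidth, peak bandwidth and burst size – per port group.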

Robust management tools

No matter how many features are present in the hypervisor, the choice of management tools is an incredibly important one. Having the ability to easily monitor and manage the entire virtual environment quickly and easily should be a primary consideration.

  • VMware. VMware offers vCenter Server in Foundation and Standard editions. Foundation can manage up to three vSphere hosts, while the Standard edition allows you to manage larger environments and carries with it more powerful features. vCenter provides a ton of management features as well as some monitoring capability.
  • Microsoft. Microsoft’s Hyper-V can be managed with System Center Virtual Machine Manager (VMM) 2008 R2. Further, with Hyper-V and VMM 2008 R2, you can leverage System Center Operations Manager 2007 to provide comprehensive monitoring of your virtualization environment.

One thing is clear: there is no shortage of tools available for monitoring your virtualized environment. A great many third-party tools are also available to help you keep your virtual environment in good working order.

Virtual machine operating system choice

This factor can make or break your hypervisor selection: what guest operating systems are fully supported by your hypervisor? If you’re primarily a Windows shop, pretty much any hypervisor will do the trick, since Windows is well supported across the board. Of course, if you need support for really old versions of Windows, such as Windows NT, only vSphere supports those. As you look at non-Windows guest operating system support – such as for NetWare, FreeBSD or Linux – hypervisor selection gets a bit more challenging. Hyper-V, as you might expect, does a good job with Windows but, beyond that, you’re limited to SUSE Linux Enterprise Server or Red Hat. Other guest operating systems don’t enjoy full support.

In this race, vSphere is the clear winner, followed by Citrix and then Microsoft.


Summary
Between parts 1 and 2 of this article series you received an overview of many features that should be considered critical in your hypervisor selection efforts. You also learned the reasons why these features are so important. Although the features presented are not exhaustive, they are significant. In part three of this series, we’ll take a similar approach and discuss desktop virtualization software.

