Maximizing Your Virtual Machine Density in Hyper-V (Part 3)

Introduction

In the previous article in this series, I explained that the key to achieving the highest possible virtual machine density in Hyper-V was to monitor and carefully allocate hardware resources to VMs. In this article, I want to discuss some methods for allocating hardware resources to Hyper-V virtual machines.

Storage Resources

The first resources that I want to talk about are storage resources. When it comes to providing storage resources to virtual machines, you have to consider both capacity and disk I/O. Oftentimes, however, there is a tradeoff between achieving the desired I/O performance and the desired storage capacity. Thankfully, this tradeoff may soon be going away. I will explain why that is the case a bit later on.

Capacity

If your goal is to maximize your virtual machine density, then one of the things that you will have to do is ensure that you have sufficient storage capacity for all of your virtual machines. One of the easiest ways of doing so is through thin provisioning.

When you create a new virtual hard disk for Hyper-V, you are given the choice between creating a fixed size virtual hard disk and a dynamically expanding virtual hard disk, as shown in Figure A. The difference between the two options is that the fixed size option claims the specified amount of physical storage right away, whereas the dynamically expanding option only consumes space on an as-needed basis.

Figure A: You can either create a fixed size or a dynamically expanding virtual hard disk.
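
If you prefer to script disk creation instead of using the wizard, the same choice can be made with the New-VHD cmdlet. Here is a minimal sketch; the paths and sizes are placeholders rather than values from my lab:

# Create a dynamically expanding (thinly provisioned) 127 GB virtual hard disk.
# The VHDX file starts out tiny and grows only as data is written to it.
New-VHD -Path 'D:\VHDs\ThinDisk.vhdx' -SizeBytes 127GB -Dynamic

# Create a fixed size virtual hard disk of the same size.
# The full 127 GB of physical space is claimed immediately.
New-VHD -Path 'D:\VHDs\FixedDisk.vhdx' -SizeBytes 127GB -Fixed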

Dynamically expanding virtual hard disks are considered to be thinly provisioned. To give you a more concrete example, take a look at the VHDX file shown in Figure B. This virtual hard disk is only consuming about 4 MB of physical disk space even though Hyper-V treats it as a 127 GB virtual hard disk. Of course, the virtual hard disk's file size will increase as data is added to it. In contrast, if I had created a 127 GB fixed size virtual hard disk, it would immediately consume 127 GB of disk space.

Figure B: This 127 GB virtual hard disk is only consuming about 4 MB of physical storage space.
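
You can confirm this behavior for yourself with the Get-VHD cmdlet, which reports both the virtual size and the physical file size of a virtual hard disk. A quick sketch, again using a placeholder path:

# Compare the virtual size with the physical space actually being consumed.
Get-VHD -Path 'D:\VHDs\ThinDisk.vhdx' | Select-Object VhdType, Size, FileSize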

Obviously, thin provisioning is a great way to get the highest possible VM density from your physical storage space. In fact, you might be wondering why you would ever create a fixed size virtual hard disk. There are actually at least two reasons for doing so.

The first reason is performance. Fixed size virtual hard disks perform better than dynamically expanding virtual hard disks because fragmentation tends to be less of an issue and there is no overhead from expanding the file as data is written.

The other reason has to do with a theme that I will be discussing a lot throughout this article series: resource overcommitment. Resource overcommitment refers to provisioning virtual machines with more physical resources than are actually available. For example, the volume on which I created the virtual hard disk in the previous figure is roughly 3 TB in size. Because a dynamically expanding virtual hard disk starts out consuming only about 4 MB of storage space, I could easily create ten 2 TB virtual hard disks. In doing so, I would initially consume only about 40 MB of physical storage space. However, if I started adding lots of data to those virtual hard disks, the underlying volume would run out of space at some point. The volume only has about 3 TB of storage, which would obviously be insufficient for accommodating 20 TB of data.

When you create thinly provisioned disks, Hyper-V does not question whether or not the underlying storage actually exists. The nice part of this is that it allows you to create virtual hard disks that are a lot larger than what you actually need, so that you don't have to worry about trying to expand a virtual hard disk later on. The downside, however, is that unless you carefully monitor your physical storage resources, it is easy to accidentally run out of disk space.
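
One simple way of keeping an eye on this is to compare the total provisioned size of your virtual hard disks against the free space on the volume that hosts them. The sketch below assumes that all of the VHDX files live in a single folder on the D: volume, which is purely an example layout:

# Add up the provisioned (virtual) size of every VHDX file in the folder.
$provisionedGB = (Get-ChildItem 'D:\VHDs' -Filter *.vhdx |
    ForEach-Object { Get-VHD -Path $_.FullName } |
    Measure-Object -Property Size -Sum).Sum / 1GB

# Check how much free space remains on the hosting volume.
$freeGB = (Get-Volume -DriveLetter D).SizeRemaining / 1GB

Write-Output ("Provisioned: {0:N0} GB  Free on volume: {1:N0} GB" -f $provisionedGB, $freeGB)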

Storage Performance

Virtual machine performance is directly affected by the number of IOPS that the storage subsystem can deliver. In many cases, it is the storage subsystem's IOPS capacity that limits the overall virtual machine density.

One of the problems that has long plagued Hyper-V is that of virtual machines competing for IOPS. If multiple virtual machines use the same physical storage device, then those virtual machines compete for storage IOPS. This means that if a virtual machine generates a large IOPS load, it can impact the performance of the other virtual machines with which it shares physical storage, unless the underlying storage can deliver enough performance to keep pace with the combined IOPS demand.
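
Figuring out which virtual machines are generating the heaviest load is largely a matter of measurement. Hyper-V's built-in resource metering, available since Windows Server 2012, is one way of gathering that data. A minimal sketch is shown below; the exact counters that appear in the report depend on your Hyper-V version (the Windows Server 2012 R2 preview adds aggregated IOPS figures):

# Turn on resource metering for every VM on the host.
Get-VM | Enable-VMResourceMetering

# After the VMs have run under a typical workload for a while,
# collect the accumulated usage report for each one.
Get-VM | Measure-VM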

For right now, the only way to avoid this issue is to place high-demand VMs onto dedicated storage so that their IOPS loads are isolated from those of other VMs. However, this is where the tradeoff between performance and capacity comes into play.

Imagine, for instance, that a particular VM needs 8,000 IOPS and that each of your drives can deliver 1,000 IOPS. You could provide the required IOPS to the VM by building a stripe set out of eight drives. The problem is that when you dedicate physical disks in this way, you dedicate the disks' full capacity. If each of those eight drives is 3 TB in size, then you have allocated 24 TB of physical storage to the VM (which may be far more than it needs) just to achieve the necessary performance.

Thankfully, Windows Server 2012 R2 Hyper-V is going to provide a way of getting around this problem. Rather than requiring you to isolate VMs in order to achieve a specific level of performance, you will be able to use a new feature called Storage QoS.

QoS (Quality of Service) is a networking concept that is used to prioritize traffic and reserve network bandwidth. Microsoft has borrowed the idea (and the QoS name) and created a mechanism for reserving storage IOPS. In Windows Server 2012 R2, it will be possible to reserve a specific number of IOPS for a virtual machine. Similarly, it will also be possible to limit high-demand virtual machines so that they do not consume an excessive amount of IOPS.

As you can see in Figure C, Storage QoS works on a per-virtual-hard-disk basis. This means that it will be possible to set different limits and reservations even within a single virtual machine.

Figure C: Storage QoS allows you to reserve or limit IOPS on a per virtual hard disk basis.
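
The same settings shown in the dialog can also be applied from PowerShell. In the Windows Server 2012 R2 preview, the Set-VMHardDiskDrive cmdlet exposes minimum and maximum IOPS parameters for this purpose. The sketch below assumes a VM named SQL-VM with the disk in question attached to the first SCSI controller; both the name and the controller location are examples:

# Reserve 500 IOPS for this disk and cap it at 2,000 IOPS.
# IOPS are measured in normalized 8 KB increments.
Set-VMHardDiskDrive -VMName 'SQL-VM' `
    -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 0 `
    -MinimumIOPS 500 -MaximumIOPS 2000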

Although Windows Server 2012 R2 has not yet been released, the Storage QoS feature should make it possible for virtualization admins to guarantee storage-related performance for virtual machines without wasting physical storage capacity in the process, as has so often been necessary in the past.

Conclusion

In this article, I have discussed some considerations for physical storage with regard to achieving the highest possible virtual machine density. In the next article in this series, I will discuss the allocation of other physical resources.
