Maximizing Your Virtual Machine Density in Hyper-V (Part 7)

Throughout this article series, I have been talking about ways to increase the number of virtual machines that can be accommodated on a Hyper-V server. In this article, I want to conclude the discussion by talking about a new feature in Windows Server 2012 R2 Hyper-V.

The new feature that I am talking about is called Quality of Service Management (formerly known as storage QoS). This feature, which you can see in Figure A, enables you to specify a minimum or a maximum number of IOPS on a per virtual hard disk basis. I touched on this feature briefly in Part 3 of the series, but I want to revisit the feature in a little more depth.

Figure A: The Quality of Service Management feature allows you to throttle IOPS consumption on a per virtual hard disk basis.

As you can see, this feature is relatively simple to use. If you are concerned about a virtual hard disk not receiving sufficient disk I/O, then you can set a minimum IOPS level. On the other hand, if the virtual hard disk is accommodating a very I/O-intensive application, then you may want to populate the Maximum IOPS field as a way of limiting the total number of IOPS that the virtual hard disk can consume.
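If you prefer scripting to the Hyper-V Manager dialog shown in Figure A, the same values can also be applied with PowerShell. The sketch below is a minimal example; the virtual machine name and controller location are hypothetical placeholders that you would replace with values from your own environment.

# Hypothetical VM name and controller location -- substitute your own values.
# Guarantee this virtual hard disk at least 100 IOPS and cap it at 500 IOPS.
Set-VMHardDiskDrive -VMName "SQLVM01" `
    -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 0 `
    -MinimumIOPS 100 -MaximumIOPS 500

# Check the values that are currently in effect.
Get-VMHardDiskDrive -VMName "SQLVM01" |
    Select-Object VMName, Path, MinimumIOPS, MaximumIOPS

Keep in mind that Hyper-V counts these figures as normalized IOPS, measured in 8 KB increments, so very large or very small requests are not counted one-for-one.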

Obviously, striking a balance in a way that prevents any virtual hard disk from consuming excessive disk I/O can go a long way toward helping you to improve your overall virtual machine density. However, there is more to the story than that.

Although the Quality of Service Management feature is new to Windows Server 2012 R2, the concept of storage QoS is not unique to Microsoft. A number of different storage vendors are beginning to include storage QoS features in their products. It seems as if the storage industry is convinced that QoS-based storage throttling is going to be the next big thing for getting a handle on storage IOPS.

While storage QoS may certainly have its place, it will likely yield the biggest benefit when combined with other storage performance features. Allow me to elaborate.

Without a doubt, one of the biggest problems that storage administrators face is the trade-off between performance and capacity. Suppose for a moment that an administrator needs to ensure that a high-demand SQL database receives a sufficient amount of disk I/O. The conventional solution to this problem is to combine the required number of spindles to create a logical disk structure that meets the database's I/O requirements.

But here is where the capacity problem rears its ugly head. Imagine for a moment that delivering a sufficient amount of disk I/O to the database required twenty disk spindles. Now suppose that each of these disks is a modest 1 TB in size. If you take all of the overhead out of the picture for the sake of simplicity, that means 20 TB of disk space has been allocated to the database. So what happens if the database only needs 1 TB of disk space? The other 19 TB has been wasted just so that adequate performance can be delivered.
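To put rough numbers behind that example, the quick calculation below uses a common rule-of-thumb figure of roughly 175 IOPS per 15K spindle. Both the database's IOPS requirement and the per-spindle figure are illustrative assumptions; your own drives and workloads will differ.

# Illustrative, rule-of-thumb numbers only -- real per-spindle IOPS vary by drive and workload.
$requiredIops   = 3500   # assumed I/O requirement of the database
$iopsPerSpindle = 175    # rough figure for a single 15K spindle
$spindleSizeTB  = 1

$spindles = [math]::Ceiling($requiredIops / $iopsPerSpindle)   # 20 spindles
$capacity = $spindles * $spindleSizeTB                         # 20 TB of raw capacity
"Spindles needed: $spindles   Capacity allocated: $capacity TB"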

In a virtual data center, administrators sometimes try to solve this problem by creating cluster shared volumes that are specifically designed to accommodate certain virtual hard disks. Such a cluster shared volume might accommodate one very high-demand virtual hard disk, such as the one described above. To keep all of the unused disk space from going to waste, the cluster shared volume might also accommodate virtual hard disks that require a large amount of physical disk space, but not a lot of disk I/O. Of course, this approach also means that a few extra spindles need to be used so that the logical disk structure can deliver sufficient IOPS to accommodate both the high-demand and the low-demand virtual hard disks.

Even though this approach works, it is far from being ideal. The problem is that there is a certain amount of administrative burden associated with identifying the best virtual hard disks to include on such a volume. Never mind the fact that storage requirements for virtual hard disks tend to change over time.

So with that in mind, we are back to the original storage paradox. How can you provide a virtual hard disk with the IOPS that it needs, without wasting an excessive amount of physical disk space?

If you approach the problem solely from a Windows perspective, then the best solution may be to combine two separate technologies that are found in Windows Server 2012 R2. The first of these technologies is obviously Quality of Service Management.

Even if you have a virtual hard disk containing a high-demand application, the demand for disk I/O probably isn't going to be uniform at all times. There are going to be peaks and dips in demand. That being the case, you can use Quality of Service Management to ensure that all of the virtual hard disks that share physical storage are guaranteed a certain minimum amount of disk I/O. That way, if the high-demand virtual hard disk receives an extreme usage spike, it won't choke out the other virtual hard disks' I/O. Remember, when it comes to I/O requests, Windows normally processes the requests on a first-come, first-served basis by way of a disk queue. The Quality of Service Management feature essentially reserves positions in the disk queue when minimum levels of disk I/O are mandated.
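As a rough sketch of what such a floor might look like, the loop below applies a modest minimum to every virtual hard disk attached to the virtual machines on a host. The host name and the 50 IOPS floor are assumptions for the sake of illustration; the value would need to be sized to your own workloads.

# Give every virtual hard disk on this host a small guaranteed slice of the disk queue.
# "HV-HOST01" and the 50 IOPS floor are placeholder values.
Get-VM -ComputerName "HV-HOST01" |
    Get-VMHardDiskDrive |
    Set-VMHardDiskDrive -MinimumIOPS 50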

The other new Windows Server 2012 R2 feature that can come in handy in this type of situation is tiered storage. The basic idea behind this feature is that Windows Storage Spaces will allow you to create virtual disks that place commonly used disk blocks on a high-speed storage tier consisting of SSD storage. However, a tiered space also includes a write-back cache that is used to absorb spikes in write operations. This cache can help virtual machines continue to perform well, even under heavy I/O loads. By doing so, it may make it more practical to allow an ever-increasing number of virtual machines to share a common storage pool.
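To give a rough idea of how the Storage Spaces side of this fits together, here is a minimal sketch that builds a tiered space with a write-back cache. The pool name, tier sizes, resiliency setting, and cache size are all assumptions chosen for illustration.

# Assumes an existing storage pool named "Pool01" that contains both SSD and HDD physical disks.
$ssdTier = New-StorageTier -StoragePoolFriendlyName "Pool01" -FriendlyName "SSD_Tier" -MediaType SSD
$hddTier = New-StorageTier -StoragePoolFriendlyName "Pool01" -FriendlyName "HDD_Tier" -MediaType HDD

# Create a tiered virtual disk with a 1 GB write-back cache to absorb write spikes.
New-VirtualDisk -StoragePoolFriendlyName "Pool01" -FriendlyName "TieredSpace01" `
    -StorageTiers $ssdTier, $hddTier -StorageTierSizes 100GB, 900GB `
    -ResiliencySettingName Mirror -WriteCacheSize 1GB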

Some have insisted that this two-pronged approach to I/O management is obsolete even though it is brand new, because SSD storage is getting cheaper and will remove the need for storage throttling. Even though SSDs are undeniably fast, the performance gains that SSDs provide will only remain ahead of the curve for so long. Eventually workload demand will catch up to SSD performance and we will be right back where we started.

One last thing that I want to mention is that the problem of balancing disk I/O with disk capacity becomes even more important as organizations transition to a private cloud environment. In a regular Hyper-V based virtual data center, administrators ultimately have control over virtual machine density and virtual machine performance. In a private cloud environment, this is not necessarily the case. Workloads become much less predictable because virtual machines are generated by end users, not always by the administrator. As such, mechanisms like Quality of Service Management and write-back caching on SSD storage will become essential for keeping workloads in check.

Conclusion

As you can see, there are a great number of issues to consider when it comes to maximizing virtual machine density in Hyper-V. Keep in mind, though, that it is not always wise to completely max out a host server. You must leave room for future growth and for failover situations.
