Storage planning for Hyper-V Hosts (Part 4)

Have you ever had multiple virtual machines running on a Hyper-V host and found that one of them was hogging all the available input/output operations per second (IOPS)? The result is often degraded performance for some of your business-critical virtualized applications and server workloads. In an ideal world where storage is cheap, you could simply add more storage until every server application's appetite was satisfied. Unfortunately, in the real world adding more storage is not only expensive but can also be counterproductive: a single insatiable virtual machine can devour whatever additional capacity you add to your hosts, starving the other virtual machines of the IOPS they need to do their job.

Another scenario where this problem arises is in cloud hosting environments. Let's say you're a cloud hosting provider with two tenants, Tenant 1 and Tenant 2, who use your cloud to host their virtualized applications. Tenant 1, however, is a "noisy neighbor" that consumes so many IOPS that Tenant 2's performance is seriously impacted. For the hoster this is obviously an undesirable situation: unhappy customers eventually leave, taking their income with them.

Storage QoS

The solution to preventing one virtual machine from hogging all the IOPS available on your Hyper-V host is Storage Quality of Service (QoS), a new feature of file-based storage introduced in Windows Server 2012 R2. Storage QoS is enabled at the virtual hard disk (VHDX) layer and allows you to limit the maximum IOPS allowed to a virtual disk on a Hyper-V host. It also allows you to set triggers that send notifications when a specified minimum IOPS threshold is not met for a virtual disk. Possible usage scenarios for this feature include:

  • Configuring different service-level agreements (SLAs) for different types of storage operations within your infrastructure. For example, a hoster can use this feature to configure Bronze, Silver, and Gold SLAs for storage performance available for different classes of tenants. You can even set alerts that trigger when virtual machines are not getting enough IOPS for storage access.
  • Restricting the disk throughput for overactive or disruptive virtual machines within your environment that are saturating the storage array. Hosting providers may appreciate this capability since it means they don’t have to worry about one tenant consuming excessive storage fabric resources at the expense of other tenants.
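As a rough sketch of the tiered SLA scenario above, a hoster might map each service tier to a maximum IOPS cap and apply it with the Set-VMHardDiskDrive cmdlet. The tier names, IOPS values, virtual machine name, and controller position below are all hypothetical, not prescribed values:

```powershell
# Illustrative only: tier values and VM name are hypothetical.
$tiers = @{ Bronze = 300; Silver = 1000; Gold = 5000 }

# Cap the first SCSI-attached virtual disk of a tenant VM at its tier's limit.
Set-VMHardDiskDrive -VMName 'Tenant1-Web01' `
    -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 0 `
    -MaximumIOPS $tiers['Bronze']
```

Keeping the tier-to-IOPS mapping in one table like this makes it straightforward to re-apply a tenant's cap if the VM is rebuilt or moved to a higher tier.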

As the screenshot below shows, Storage QoS can even be configured while the virtual machine is running. This allows organizations to have a lot of flexibility in how they manage access to the storage fabric from workloads running in their cloud environments:

Figure 1: You can configure Storage QoS for a virtual machine using Hyper-V Manager.

The above figure shows the Settings dialog for a running virtual machine in Hyper-V Manager. Advanced Features is selected for a hard drive attached to the SCSI Controller, and the checkbox has been selected to enable Storage QoS for the virtual machine.

You can also configure the Storage QoS maximum and minimum IOPS settings from Windows PowerShell using the Set-VMHardDiskDrive cmdlet. Note that you must identify the virtual machine and the disk's controller position, for example:

Set-VMHardDiskDrive -VMName <VM name> -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 0 -MaximumIOPS <integer value>

Set-VMHardDiskDrive -VMName <VM name> -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 0 -MinimumIOPS <integer value>

For both settings Hyper-V measures normalized IOPS: I/O is counted in 8 KB increments, so an operation of 8 KB or smaller counts as one normalized I/O, and larger operations count as multiple. See the Windows PowerShell cmdlet reference for more information.
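To make the 8 KB normalization concrete, the sketch below computes the normalized I/O count for a given transfer size. This is an illustration of the documented counting rule, not Hyper-V's internal code, and the function name is hypothetical:

```powershell
# Normalized I/O count for a given transfer size, rounded up to 8 KB units.
function Get-NormalizedIOCount([int]$ioSizeBytes) {
    [math]::Ceiling($ioSizeBytes / 8KB)
}

Get-NormalizedIOCount 32KB   # a 32 KB I/O counts as 4 normalized operations
Get-NormalizedIOCount 4KB    # a 4 KB I/O still counts as 1
```

This is why a workload issuing large sequential I/Os can hit a configured MaximumIOPS cap sooner than its raw operation count would suggest.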

For a quick high-level overview of Storage QoS for Hyper-V, see the feature overview on TechNet. For the rest of this article, however, we'll examine some considerations regarding its use for virtual machines running on Hyper-V hosts.

Storage QoS considerations

Both Microsoft Hyper-V and VMware vSphere now include some form of quality of service for regulating IOPS used by virtual machines. The Microsoft feature is called Storage QoS and was introduced in Windows Server 2012 R2. The VMware feature is called Storage IO Control (SIOC) and is included in VMware vSphere 5.5. Both technologies have a similar limitation in terms of the kind of storage devices to which they can be applied. Specifically, Storage QoS is not supported for pass-through disks and SIOC is not supported for RDM disks. This is documented in Keith Mayer’s helpful comparison of Microsoft and VMware technologies.

To enable Storage QoS and configure maximum and minimum IOPS values, you use the Hyper-V Manager console as shown in the figure above. But when you have many Hyper-V hosts with large numbers of virtual machines running on them, you'll likely be using System Center Virtual Machine Manager (VMM) 2012 R2 to manage your Windows Server 2012 R2 Hyper-V environment. Unfortunately, the current release of VMM 2012 R2 cannot enable or configure Storage QoS on Hyper-V hosts, so batch configuration and management of Storage QoS across a Hyper-V host farm using VMM is not yet possible. We can probably expect this to change in a future version of VMM; cloud hosting providers in particular will be looking for such an improvement so they can solve the "noisy neighbor" problem described earlier.
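Until VMM gains this capability, one workaround is plain PowerShell remoting: loop over your hosts and apply a cap to every VHDX-backed virtual disk. The host names and IOPS value below are placeholders for illustration, and pass-through disks are skipped because Storage QoS does not support them:

```powershell
# Hypothetical host names and cap; adjust for your environment.
$hyperVHosts = 'HV01', 'HV02', 'HV03'

Invoke-Command -ComputerName $hyperVHosts -ScriptBlock {
    # Cap every VHDX-backed virtual disk on this host at 500 IOPS.
    Get-VM | Get-VMHardDiskDrive |
        Where-Object { $_.Path -like '*.vhdx' } |
        Set-VMHardDiskDrive -MaximumIOPS 500
}
```

A blanket cap like this is a blunt instrument; in practice you would filter by tenant or tier before piping to Set-VMHardDiskDrive, but the same remoting pattern applies.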

Finally, Storage QoS as currently implemented in Windows Server 2012 R2 Hyper-V does not respond instantaneously. If a virtualized workload suddenly requests a huge amount of IOPS, it may take a few seconds before Hyper-V throttles the virtual machine's IOPS back down to the configured maximum. For a look at Storage QoS in action and to see this effect happening, see the following two articles by Microsoft MVP Aidan Finn:

And for some more good articles on Storage QoS, see the posts by Didier Van Hoye on his blog Working Hard In IT:
