Storage planning for Hyper-V Hosts (Part 6)

Introduction

A key consideration when planning storage for new Hyper-V hosts (or new storage for existing hosts) is security. All software has holes in it: software is built around trust boundaries, and there must be some mechanism for crossing those boundaries. Subverting such a mechanism exposes the host to malware infection or to exploits that let an attacker take control of the host, and from there launch an attack on your whole network and bring your business to its knees.

It’s important therefore to always follow best practices for securing your Hyper-V hosts and the VMs that run on them–the very VMs on which your business depends. Let’s consider some of the steps, both general and storage-related, that you should follow when planning storage for Hyper-V hosts in particular and host deployment in general. It’s also important to follow best practices for maintaining host systems so that planned or unplanned maintenance doesn’t interrupt the continuity of your business. Finally, before implementing new storage it’s important to verify that it will perform as expected under the demands of your VM workloads.

Trusted sources

Device drivers are trusted software that can do essentially anything once they are installed on a host. This includes not just storage drivers but also network drivers and any other drivers on the host. Any driver on the host can potentially intercept data in a VM’s data path and do whatever it wants with the VM. This is a general consideration that applies to every version of Hyper-V, and also to other hypervisors such as VMware ESXi, Xen and KVM.

So the bottom line is, make sure you have a trusted source for the storage drivers (and any other drivers) you’ll be installing on your hosts. In fact, best practice with Hyper-V is to not install any other software (including other server roles) on your hosts. You should also plan on deploying your Hyper-V hosts in minimal configurations, such as a Server Core installation or a Minimal Server Interface installation, not a Full installation.
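As a concrete illustration, here’s a minimal PowerShell sketch of both ideas on Windows Server 2012/2012 R2: trimming a Full installation down to the Minimal Server Interface (or Server Core), and checking the digital signature of a driver package before you install it. The driver path shown is hypothetical.

```powershell
# Trim a Full installation down to the Minimal Server Interface by
# removing the graphical shell (Server Manager and MMC consoles remain):
Uninstall-WindowsFeature Server-Gui-Shell -Restart

# Or go all the way to Server Core by also removing the management tools:
# Uninstall-WindowsFeature Server-Gui-Mgmt-Infra, Server-Gui-Shell -Restart

# Before installing a storage driver, verify it carries a valid signature
# from a publisher you trust (the path below is hypothetical):
Get-AuthenticodeSignature -FilePath 'C:\Drivers\vendor_storport.sys' |
    Format-List Status, SignerCertificate
```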

Updating device drivers

Updating device drivers on production servers can be a tricky thing. How many of us sysadmins have held our breath while doing so and then breathed a long sigh of relief when everything still worked fine afterwards? The last thing you want to do is break a production server by updating a driver on it.

This is also true of drivers for storage devices. I’ve heard of instances where an admin updated a driver for a storage device used by VMs running on a Hyper-V host, and afterwards the VMs couldn’t be restarted. Plan to thoroughly test new device drivers on lab hosts before rolling them out to your production hosts.
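One practical way to make that lab testing meaningful is to snapshot the host’s driver inventory before and after the update so you can see exactly what changed. Here’s a rough sketch using the built-in DISM PowerShell module; the report paths are placeholders.

```powershell
# Export the driver inventory of this host using the built-in DISM module
# (report paths are placeholders):
Get-WindowsDriver -Online |
    Select-Object Driver, ClassName, ProviderName, Version |
    Export-Csv 'C:\Reports\drivers-before.csv' -NoTypeInformation

# ...apply the driver update on the lab host, reboot, export again to
# drivers-after.csv, then diff the two inventories:
Compare-Object -ReferenceObject (Import-Csv 'C:\Reports\drivers-before.csv') `
               -DifferenceObject (Import-Csv 'C:\Reports\drivers-after.csv') `
               -Property Driver, Version
```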

Patching hosts

The same advice about testing in a lab applies to patching production hosts. For example, I know of one story where an admin patched a production host running Windows Server 2012 that had Fibre Channel host bus adapters (HBAs) for connectivity with a storage area network (SAN). Mysteriously, when the patches were applied the HBAs stopped working and were displayed as not supported in Hyper-V Manager. In other words, patching the host operating system caused SAN connectivity to fail, which brought down all the VMs running on the host. Fortunately, removing and then reinstalling the HBA drivers restored SAN connectivity, but the moral is clear: test any patches on hosts in your lab before you patch the hosts in your production environment.
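Before rolling patches out, it also helps to confirm that your lab host actually reflects the patch level you intend to push to production, so the lab test is representative. Here’s a simple sketch using Get-HotFix; the host names are placeholders.

```powershell
# List the updates installed on the lab host and on a production host
# (host names are placeholders):
$lab  = Get-HotFix -ComputerName 'HV-LAB01'  | Select-Object -ExpandProperty HotFixID
$prod = Get-HotFix -ComputerName 'HV-PROD01' | Select-Object -ExpandProperty HotFixID

# Updates that were tested in the lab but aren't in production yet:
Compare-Object -ReferenceObject $lab -DifferenceObject $prod |
    Where-Object { $_.SideIndicator -eq '<=' }
```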

iSCSI storage for hosts

If your Hyper-V host uses iSCSI disks (that is, remote block-based storage), then during the planning stage of your deployment you need to ensure that the network configuration of your host will provide sufficient bandwidth for each iSCSI initiator. If a single physical network interface card (NIC) can provide sufficient bandwidth for your host’s storage traffic, go with that, as it’s the simplest approach. If you need additional network bandwidth for storage traffic, you can instead layer a pair of virtual NICs (vNICs) with multipath I/O (MPIO) enabled on top of a physical NIC team. The advantage of using MPIO in this scenario is that it multiplexes the host’s storage traffic into multiple TCP streams, which the teaming layer can then load-balance across the physical NICs.
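To make this concrete, here’s a rough PowerShell sketch of that layered configuration on Windows Server 2012/2012 R2. All names and addresses (team, switch, vNICs, portal IP, target IQN) are placeholders, and your storage vendor’s guidance should take precedence.

```powershell
# Team two physical NICs and attach a virtual switch to the team
# (all names and addresses are placeholders):
New-NetLbfoTeam -Name 'StorageTeam' -TeamMembers 'NIC1','NIC2' `
    -TeamingMode SwitchIndependent
New-VMSwitch -Name 'StorageSwitch' -NetAdapterName 'StorageTeam' `
    -AllowManagementOS $false

# Create two host vNICs on the switch to carry the iSCSI traffic:
Add-VMNetworkAdapter -ManagementOS -Name 'iSCSI-1' -SwitchName 'StorageSwitch'
Add-VMNetworkAdapter -ManagementOS -Name 'iSCSI-2' -SwitchName 'StorageSwitch'

# Install MPIO, claim iSCSI devices, and connect the target with
# multipathing enabled (connect once per vNIC, using
# -InitiatorPortalAddress to bind each session to one vNIC's IP):
Install-WindowsFeature Multipath-IO
Enable-MSDSMAutomaticClaim -BusType iSCSI
New-IscsiTargetPortal -TargetPortalAddress '10.0.1.50'
Connect-IscsiTarget -NodeAddress 'iqn.1991-05.com.example:target1' `
    -IsMultipathEnabled $true -IsPersistent $true
```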

Pass-through disks

When planning storage for Hyper-V hosts running Windows Server 2012 or Windows Server 2012 R2, you should avoid using pass-through disks for any VM storage purposes, for four reasons. First, the performance of fixed VHDs (and especially VHDXs) in Windows Server 2012/2012 R2 is now as good as the performance of pass-through disks in previous versions of Windows Server. Second, a number of useful features of Windows Server 2012/2012 R2 don’t work with pass-through disks, including Hyper-V Replica and VSS backup from the host operating system; in addition, a VM that uses pass-through disks can only be moved within the same cluster when shared storage is used, and you can’t easily resize a pass-through disk the way you can a virtual disk. Third, pass-through disks add management complexity, since you deal directly with SAN LUNs instead of working through the abstraction layer of virtual disks. And fourth, pass-through disks waste storage capacity, since each one is exclusively owned by a single VM.
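For example, resizing a virtual disk is a one-cmdlet operation, while resizing a pass-through disk means working at the SAN level. A quick sketch (the path and sizes are illustrative):

```powershell
# Create a fixed-size VHDX for a VM's data volume
# (path and sizes are illustrative):
New-VHD -Path 'C:\VMs\SQL01\Data.vhdx' -SizeBytes 200GB -Fixed

# Growing it later is a single cmdlet. On Windows Server 2012 the VM must
# be shut down first; 2012 R2 adds online resize for VHDX files attached
# to a SCSI controller:
Resize-VHD -Path 'C:\VMs\SQL01\Data.vhdx' -SizeBytes 300GB
```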

CSVs and backups

There’s effectively no limit to the number of disks a Hyper-V cluster can be configured to use, and this includes both Cluster Shared Volume (CSV) and non-CSV disks. This gives you a great deal of flexibility in designing the storage architecture of Hyper-V host clusters.

In general, you should consider using a larger number of smaller CSVs instead of a few large ones for your Hyper-V host cluster. The rationale behind this recommendation is that if the disk space on a small CSV fills up, only the few VMs that use that CSV will be impacted instead of many.

An even bigger consideration, however, is backups. It’s generally faster to back up your host cluster if you have lots of smaller CSVs spread across the nodes of your cluster, because VSS backups can be parallelized. A general recommendation for planning CSV storage for a Hyper-V host cluster is to have one CSV per node of your cluster.
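Putting this into practice might look like the following sketch, which adds a disk as a CSV and then reviews how CSV ownership is spread across the nodes (the disk resource name is a placeholder):

```powershell
# Add an available cluster disk as a CSV (the resource name is a placeholder):
Add-ClusterSharedVolume -Name 'Cluster Disk 2'

# Review how CSV ownership is spread across the nodes; aiming for roughly
# one CSV per node lets each node's backups run in parallel:
Get-ClusterSharedVolume | Select-Object Name, OwnerNode, State
```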

Finally, if you need to back up your Hyper-V host cluster, be aware that the Windows Server Backup feature included in Windows Server 2012/2012 R2 has only limited support for CSVs. Most importantly, Windows Server Backup only supports file-based backup of CSV volumes; it doesn’t support VM-based backups. So instead of Windows Server Backup, you should use System Center Data Protection Manager or an equivalent third-party Hyper-V backup product for backing up your host clusters. If your Hyper-V hosts are non-clustered, however, Windows Server Backup will allow you to select a VM when you configure your backup schedule.
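For a standalone host, backing up an individual VM might look like the following wbadmin sketch (the VM name and target volume are placeholders):

```powershell
# On a non-clustered Windows Server 2012 R2 host with the Hyper-V role,
# Windows Server Backup can back up an individual VM via the Hyper-V VSS
# writer (VM name and target volume are placeholders):
wbadmin start backup -backupTarget:E: -hyperv:"SQL01" -quiet
```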
