I recently wrote an article about some of the best practices that I have personally adopted for Hyper-V hosts, and thought that it might be fun to write a similar article about my best practices for Hyper-V VMs. I wanted to write this article mainly because so much of the published Hyper-V guidance is contradictory, so I want to focus on what has actually worked for me. Of course, entire books have been written about Hyper-V, so I can’t even come close to covering every best practice, but I do want to discuss some of the ones that I consider to be the most important.
The first best practice that I want to talk about is the use of dynamic memory. Although I have nothing against dynamic memory, I personally don’t use it. The main benefit of dynamic memory is that it increases your virtual machine density, allowing you to run more virtual machines on a host than would otherwise fit. The flip side, however, is that you can end up in a situation in which host memory is over-provisioned.
Although memory over-provisioning can have issues of its own, the bigger reason why I tend to avoid using dynamic memory is because dynamic memory use can sometimes undermine an organization’s high availability efforts. If the hosts within a clustered Hyper-V deployment are running an excessive number of virtual machines, then it is possible to end up in a situation in which a host within the cluster fails and the remaining nodes lack the resources to absorb the virtual machines that had previously been running on the failed host.
My advice is to use dynamic memory if your goal is to maximize virtual machine density. However, I recommend avoiding the use of dynamic memory (or using it carefully) if you need to adhere to strict SLAs for workload availability.
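To make the two approaches concrete, the PowerShell below configures one VM with static memory and another with dynamic memory using the standard `Set-VMMemory` cmdlet. The VM names and memory sizes here are hypothetical placeholders, not a recommendation for any particular workload.

```powershell
# Static memory: the VM always owns a fixed 8 GB, which makes
# cluster failover capacity planning predictable.
Set-VMMemory -VMName 'SQL01' -DynamicMemoryEnabled $false -StartupBytes 8GB

# Dynamic memory: the VM starts at 2 GB and can float between
# 1 GB and 8 GB, maximizing density on the host.
Set-VMMemory -VMName 'Web01' -DynamicMemoryEnabled $true `
    -StartupBytes 2GB -MinimumBytes 1GB -MaximumBytes 8GB
```

Note that with the dynamic configuration, the host only commits what the VM is actually using, which is exactly why a heavily consolidated cluster can come up short on memory after a node failure.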
There is also an entire laundry list of best practices related to the use of checkpoints. Although it has been said many times, checkpoints should not be used as a replacement for regular backups. Remember, checkpoints do not actually create copies of your virtual machines. When you create a checkpoint, what you are actually creating is a differencing disk. Write operations are redirected to this differencing disk, leaving the contents of the VM’s original virtual hard disk unchanged. This is why checkpoints allow for instant recovery. If a VM is rolled back, then it simply goes back to using its original virtual hard disk in place of the differencing disk.
Although checkpoints do have their place, I recommend using them sparingly. There are several reasons for this, but one of the most important is that checkpoints negatively impact virtual machine read performance. If a virtual machine only has one or two checkpoints, you might not notice the performance impact, but checkpoints have a cumulative effect. The more checkpoints that exist for a virtual machine, the greater the impact those checkpoints will have on the VM’s performance.
If you are planning to create checkpoints for a virtual machine, then I recommend not only limiting the total number of checkpoints that exist at any one time, but also storing the checkpoints on physical storage that performs at least as well as the virtual machine’s primary storage. Storing checkpoints on a low-speed storage tier can greatly diminish virtual machine performance, even if a VM only has a couple of checkpoints.
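If you do use checkpoints, a little PowerShell makes both of these rules easy to keep in view. The commands below relocate checkpoint storage to a faster volume, create a checkpoint, count the existing ones, and remove a checkpoint once it is no longer needed. The VM name, checkpoint name, and path are all placeholders for illustration.

```powershell
# Point checkpoint (differencing disk) storage at a fast tier.
# This must be configured before the VM has any checkpoints.
Set-VM -Name 'App01' -SnapshotFileLocation 'D:\FastTier\Checkpoints'

# Create a named checkpoint ahead of a risky change.
Checkpoint-VM -Name 'App01' -SnapshotName 'Pre-patch'

# Count the existing checkpoints so they don't quietly accumulate.
(Get-VMSnapshot -VMName 'App01').Count

# Once the change proves successful, remove the checkpoint so the
# differencing disk is merged back into the parent virtual hard disk.
Get-VMSnapshot -VMName 'App01' -Name 'Pre-patch' | Remove-VMSnapshot
```

Removing stale checkpoints promptly matters as much as limiting their number, because the merge operation itself generates I/O that you would rather schedule than stumble into.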
Another best practice that I wanted to be sure to mention is that of generating virtual machines from a template, as opposed to setting up VMs manually. There are a few different reasons why it is important to use templates, even beyond the fact that using templates can save you a lot of time.
The first reason why I recommend generating VMs from templates is that doing so greatly reduces the possibility of human error. You won’t have to worry, for example, about a virtual machine accidentally getting connected to the management network instead of the virtual machine network, or being provisioned with the wrong amount of memory.
The bigger reason why you should create VMs from templates is that any VM created from a template will include the same patches that were applied to the template VM. This means that as long as you keep your templates up to date, the VMs created from those templates will already be fully patched at the time of creation. This helps to reduce the load on your Hyper-V host, since you won’t have to install large numbers of patches on your new VMs. More importantly, though, this approach ensures that you will not be placing unpatched VMs onto your production network.
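Outside of System Center Virtual Machine Manager (which has first-class template support), a common lightweight approach is to keep a patched, sysprepped “gold” VHDX and copy it for each new VM. The sketch below assumes that workflow; the paths, names, and virtual switch are purely illustrative.

```powershell
# Copy the patched, sysprepped gold image for the new VM.
$gold = 'D:\Templates\Win2022-Gold.vhdx'
$disk = 'D:\VMs\App02\App02.vhdx'
Copy-Item -Path $gold -Destination $disk

# Create the VM from the copied disk with a known-good
# configuration, attached to the intended VM network.
New-VM -Name 'App02' -MemoryStartupBytes 4GB -Generation 2 `
    -VHDPath $disk -SwitchName 'VM-Network'
```

Because the memory size and switch name are baked into the script rather than typed by hand each time, this also delivers the human-error protection described above.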
When it comes to creating Hyper-V VMs, one thing that you always have to think about is the issue of resource contention. Server virtualization is based around the idea that multiple virtual machines can share a finite pool of physical hardware resources. Virtual machines perform well so long as these resources are shared equitably, but performance begins to suffer when resources are stretched too thin.
Any type of resource shortage can lead VMs to perform poorly. However, the one resource that is the most likely (at least based on my own experience) to be the source of VM performance problems is storage. Simply put, the storage subsystem must be able to deliver a sufficient number of IOPS to accommodate the needs of the various virtual machines.
You will obviously have to base your storage architecture on your VMs’ IOPS requirements. As you do, however, it is a good idea to use a RAID 10 array. This array structure gives you the performance of disk striping combined with the redundancy of mirroring. Also, if storage performance is important to you, then consider using thick provisioning for your virtual hard disks.
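In Hyper-V terms, thick provisioning means creating a fixed-size virtual hard disk rather than a dynamically expanding one, so the VM never pays the block-allocation cost at write time. A minimal sketch, with a hypothetical path and size:

```powershell
# Create a fixed (thick-provisioned) 200 GB VHDX; all of the space
# is allocated up front, trading capacity for steadier performance.
New-VHD -Path 'D:\VMs\SQL01\Data.vhdx' -SizeBytes 200GB -Fixed
```

The trade-off is straightforward: fixed disks consume their full size immediately, but they avoid both the expansion overhead and the fragmentation that dynamically expanding disks can accumulate on a busy storage tier.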
Hyper-V VM best practices for best results
Countless best practices can be applied to Hyper-V VMs. The ones that I have discussed in this article are best practices that I have found to make the biggest differences in the performance and availability of my Hyper-V VMs.