Nested virtualization: Getting a competing hypervisor to work in Hyper-V

Ever since Microsoft began supporting virtual machine nesting for Hyper-V, I have occasionally received email messages from people asking whether it is possible to run VMware ESXi inside of a Hyper-V virtual machine. The short answer is that nesting a competing hypervisor can be done, but it is not quite as easy as you might assume. I considered writing an article explaining how to make VMware work inside of a Hyper-V virtual machine, but ultimately decided against it because there are already detailed blog posts outlining the process on various websites. What I thought might be more beneficial is to take a more general approach to nested virtualization and talk about some of the things to watch out for when you nest one hypervisor inside of another.

Consider your reason for using nested virtualization

The first thing that you should do before attempting to virtualize a hypervisor is to consider your reason for doing so. There are countless reasons why an organization might want to run one hypervisor inside of another. The problem is that doing so typically is not officially supported. I seriously doubt, for example, that VMware or Microsoft would officially support running ESXi inside of a Hyper-V virtual machine, even though it can be done. In fact, Microsoft explicitly states that “virtualization applications other than Hyper-V are not supported in Hyper-V virtual machines, and are likely to fail.”

As such, I would not recommend using such a nested virtualization architecture in a production environment. If, on the other hand, you are working in a lab environment and have limited hardware resources available, then multivendor hypervisor nesting may prove to be a viable option.

On the flip side, Microsoft does fully support virtualizing Hyper-V inside of a Hyper-V virtual machine. In fact, I saw a demo at Microsoft Ignite last year in which someone created a large VM in Azure, enabled nested virtualization, and then installed Hyper-V. This allowed for the creation of true Hyper-V virtual machines inside of an Azure environment.
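
In case it is helpful, here is a minimal PowerShell sketch of what that setup looks like. The VM name is just a placeholder, and the commands assume Windows Server 2016 or later (or an Azure VM size that supports nesting) with the Hyper-V PowerShell module installed:

    # Run on the physical (or Azure) Hyper-V host while the VM is powered off.
    # "NestedHost" is a hypothetical VM name.
    Set-VMProcessor -VMName "NestedHost" -ExposeVirtualizationExtensions $true

    # Then, inside the guest operating system, install the Hyper-V role.
    Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart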

Review the base hypervisor’s requirements

One of the most important things that you can do prior to creating a multivendor nested hypervisor deployment is to take the time to review the requirements for nesting the base hypervisor. If, for example, you are going to be using Hyper-V as the base hypervisor, be sure to read the requirements for nested Hyper-V. Those requirements can give you clues as to the issues that you may run into when virtualizing another vendor’s hypervisor. For example, when Hyper-V is running inside of a virtual machine, the virtual machines running on top of the virtualized hypervisor lose support for dynamic memory. Even though this is a Hyper-V limitation, you may find that the issue also affects other hypervisors that you virtualize.
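
Because dynamic memory effectively stops working once a hypervisor is running inside the VM, common practice is simply to give the VM that will host the nested hypervisor a fixed memory allocation up front. Here is a small, hedged PowerShell sketch; the VM name and sizes are placeholders:

    # Run on the outer Hyper-V host while the VM is powered off.
    # Disable dynamic memory and assign a static amount of RAM to the VM
    # that will run the nested hypervisor.
    Set-VMMemory -VMName "NestedHost" -DynamicMemoryEnabled $false -StartupBytes 16GB

    # Give the VM enough virtual processors to run its own guests.
    Set-VMProcessor -VMName "NestedHost" -Count 4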

Think about driver requirements

One of the biggest issues that you will typically have to work around when nesting hypervisors from different vendors is that of driver support. To show you what I mean, let’s go back to the example of running VMware ESXi inside of a Hyper-V virtual machine.

Hyper-V, like other hypervisors, presents virtual machines with an abstracted view of the system’s hardware. In the case of the network driver, for example, Hyper-V communicates with the physical network adapter using a normal Windows driver for that NIC. However, Hyper-V exposes the network adapter to the virtual machines as a virtual network adapter that is tied to a Hyper-V virtual network switch. This means that regardless of what type of network adapter is physically installed in the Hyper-V server, the virtual machines will need a driver for either the Microsoft Hyper-V Network Adapter or the Microsoft Legacy Network Adapter. You can see an example of this in the figure below.

[Figure: Nested virtualization]
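
If you want to see this abstraction for yourself, here is a small, hedged PowerShell sketch. The first command runs inside a Hyper-V guest and will typically list the adapter as a “Microsoft Hyper-V Network Adapter.” The remaining commands run on the host; the MAC address spoofing setting is one that is commonly needed so that traffic from nested virtual machines can pass through the outer virtual switch (the VM name is a placeholder):

    # Inside the guest: show how the virtual NIC is presented to the operating system.
    Get-NetAdapter | Select-Object Name, InterfaceDescription

    # On the Hyper-V host: list the VM's virtual adapters and allow MAC address
    # spoofing so that frames from nested VMs are not dropped by the virtual switch.
    Get-VMNetworkAdapter -VMName "NestedHost"
    Set-VMNetworkAdapter -VMName "NestedHost" -MacAddressSpoofing On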

The problem with this is that Type 1 hypervisors are designed to run on bare metal, and therefore do not typically include virtual network adapter drivers (aside from the ones required for self-nesting). Most of the nested virtualization blogs that I have read deal with this problem by creating an ISO file containing the hypervisor that needs to be virtualized and then injecting the required driver files into that ISO file.

Although not technically a requirement, some people have reported having fewer issues and better performance by taking advantage of Discrete Device Assignment. Discrete Device Assignment (DDA) was introduced in Windows Server 2016 as a way of enabling physical hardware pass-through for a specific virtual machine. In the case of nested virtualization, for example, a PCIe-based network adapter could be mapped directly to the VM that is running the virtualized hypervisor. The nice thing about using Discrete Device Assignment for network adapters is that doing so can potentially eliminate the need for a virtual device driver. According to Microsoft, “once the device is mounted inside the guest, the manufacturer’s device driver can now be installed like normal inside the guest virtual machine.”
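
As a rough illustration, the pass-through workflow looks something like the following PowerShell sketch, adapted conceptually from Microsoft’s Discrete Device Assignment guidance. The device filter, instance ID handling, and VM name are placeholders, and the exact steps can vary by device:

    # Run on the Hyper-V host. Locate the PCIe network adapter and its location path.
    $device = Get-PnpDevice -FriendlyName "*Ethernet*" | Select-Object -First 1
    $locationPath = (Get-PnpDeviceProperty -InstanceId $device.InstanceId -KeyName DEVPKEY_Device_LocationPaths).Data[0]

    # Disable the device on the host and dismount it so it can be assigned to a VM.
    Disable-PnpDevice -InstanceId $device.InstanceId -Confirm:$false
    Dismount-VMHostAssignableDevice -LocationPath $locationPath -Force

    # DDA requires the VM's automatic stop action to be TurnOff; then assign the device.
    Set-VM -VMName "NestedHost" -AutomaticStopAction TurnOff
    Add-VMAssignableDevice -LocationPath $locationPath -VMName "NestedHost"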

It is worth noting that the use of Discrete Device Assignment imposes some limitations on the Hyper-V virtual machine to which the device is assigned. Such virtual machines do not support save/restore, live migration, or the use of dynamic memory. Furthermore, the virtual machine cannot be added to a failover cluster.

Trial and error

Making a competing hypervisor work inside of a Hyper-V virtual machine is not a trivial process. Even after the virtualized hypervisor is installed, you will probably find that a significant degree of trial and error is required to make its virtual machines run reliably.

When creating multivendor nested hypervisor environments, some people have found that nesting also has to be enabled for the virtual machines that run on top of the nested hypervisor. In those cases, the nested hypervisor runs without issue, but its virtual machines fail to start until nesting is enabled at the VM level. This happens because those VMs are running on a hypervisor that is itself running on top of another hypervisor.
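
Each hypervisor exposes its own per-VM nesting switch, so consult the relevant vendor’s tooling. If the nested hypervisor happens to be Hyper-V, the setting is the same one shown earlier, and you can verify it with a quick, hedged check (the VM name is a placeholder):

    # Run on the Hyper-V host: verify that virtualization extensions are exposed to the VM.
    Get-VMProcessor -VMName "NestedHost" | Select-Object VMName, ExposeVirtualizationExtensions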
