Enabling Physical GPUs in Hyper-V

In recent months, I have noticed two Hyper-V trends that seem to be completely at odds with one another. On one hand, a lot of people are transitioning away from virtual servers that are set up to use the full-blown desktop experience and are moving to Server Core deployments instead. This transition makes a lot of sense, because it helps admins utilize hardware resources as efficiently as possible. My guess is that over the next year, this trend will go even further and we will see a lot of workloads migrated to Nano Servers or even to containers.

The other trend, which is completely at odds with the first trend, is that I have also been seeing organizations virtualizing workloads that are increasingly graphically intensive.

Hyper-V’s default configuration really isn’t well suited to running graphically intensive workloads. However, it is possible to configure Hyper-V to leverage a server’s physical GPUs. That way, graphical rendering can be offloaded to dedicated video hardware.

Before You Begin

Before I show you how to configure Hyper-V to use GPU acceleration, there are a few gotchas that I need to warn you about. First, GPU acceleration is based on RemoteFX, which is part of the Remote Desktop Services. Microsoft requires organizations using the Remote Desktop Services to deploy an RDS licensing server and to purchase the required number of Client Access Licenses. You can operate without a licensing server for a while, but the Hyper-V host will display this warning:

[Figure: The licensing warning displayed by the Hyper-V host]

The next thing that you need to be aware of is the fact that not every Hyper-V virtual machine can take advantage of GPU acceleration. Obviously, guest operating system support is required, but there is more to it than that. When you create a virtual machine in Hyper-V, you are asked whether you would like to create a Generation 1 virtual machine or a Generation 2 virtual machine. Generation 2 virtual machines do not include an option to add a RemoteFX 3D Video Adapter. The option exists only for Generation 1 virtual machines.
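
If you manage more than a handful of virtual machines, you can check eligibility quickly from PowerShell. The minimal sketch below simply lists each VM along with its generation; only the Generation 1 VMs that it returns can be given a RemoteFX 3D Video Adapter.

    # List every VM on the host along with its generation. Only
    # Generation 1 VMs can receive a RemoteFX 3D Video Adapter.
    Get-VM | Select-Object Name, Generation

    # Alternatively, show only the eligible Generation 1 VMs
    Get-VM | Where-Object { $_.Generation -eq 1 } | Select-Object Name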

Another consideration is live migration and failover clustering. If a virtual machine is configured to use GPU acceleration, then any Hyper-V host that could potentially host the VM must be equipped with similar video hardware. Furthermore, hosts must have a sufficient number of GPUs available to accommodate any inbound virtual machines.
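
Once each host has been configured as described later in this article, a quick sanity check from PowerShell can confirm that every potential cluster node reports comparable RemoteFX-capable video hardware. This is only a sketch, and HV01 and HV02 are placeholder host names; substitute your own.

    # Show the RemoteFX-capable physical video adapters on each
    # potential cluster node. HV01 and HV02 are placeholder names.
    foreach ($hostName in 'HV01', 'HV02') {
        "== $hostName =="
        Get-VMRemoteFXPhysicalVideoAdapter -ComputerName $hostName
    }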

Finally, some documentation indicates that once a Hyper-V virtual machine has been configured to use RemoteFX, then the VM becomes accessible only through an RDP session, and not through the Hyper-V Manager console. This limitation might have existed at one time, but does not appear to be an issue today. The figure below shows a Windows Server 2016 (preview 5) VM running on a Windows Server 2012 R2 Hyper-V host. As you can see in the figure, the VM is configured to use the RemoteFX graphics device, and yet I am able to view it through the Hyper-V Manager’s console.

[Figure: A Windows Server 2016 (preview 5) VM configured with the RemoteFX graphics device, displayed through the Hyper-V Manager console on a Windows Server 2012 R2 host]

Configuring Hyper-V

You can access the physical GPU settings by opening the Hyper-V Manager, right clicking on the Hyper-V host server, and choosing the Hyper-V Settings command from the shortcut menu. Upon doing so, you will be taken to the Hyper-V Settings dialog box for the selected host server. As you can see in the figure below, this dialog box contains a Physical GPUs container that you can use to enable a physical GPU for use with Hyper-V. As you look at this dialog box, however, you will notice that its configuration options are greyed out.

[Figure: The Physical GPUs container in the Hyper-V Settings dialog box, with its configuration options greyed out]

The first step in providing Hyper-V with GPU support is to check your video hardware configuration. In Windows Server 2012 R2, you can do this by right clicking on the Start button, and selecting the System option from the shortcut menu. When the System dialog box appears, click on the Device Manager link and expand the Display Adapters node. As you can see in the figure below, this server is configured to use the Microsoft Basic Display Adapter. This configuration is fairly common for server hardware, but does not provide good GPU support.

[Figure: Device Manager showing the Microsoft Basic Display Adapter]
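
If the server is running Server Core, or you simply prefer not to dig through Device Manager, the same information is available from PowerShell. This is just a quick sketch that queries the Win32_VideoController WMI class.

    # Enumerate the video adapters that Windows currently recognizes.
    # A server that is still using the generic driver will report
    # "Microsoft Basic Display Adapter" here.
    Get-CimInstance -ClassName Win32_VideoController |
        Select-Object Name, DriverVersion, AdapterRAM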

In this type of situation, it is necessary to determine the actual video hardware that is installed in your Hyper-V host server, make sure that the video adapter is equipped with a suitable GPU, and download a new driver if necessary. If you look at the figure below, for example, you can see that after installing the correct driver, Windows went from identifying the adapter as a generic Microsoft Basic Display Adapter to correctly identifying it as an NVIDIA GeForce GTX 750.

[Figure: After installing the correct driver, Device Manager identifies the adapter as an NVIDIA GeForce GTX 750]

Even with the correct driver installed, if you go back into the Hyper-V Manager you will find that Hyper-V still does not make the GPU available for use. If you look at the summary information in the dialog box below, however, you will notice that the Remote Desktop Virtualization Host role service must be installed.

[Figure: The Physical GPUs dialog box indicating that the Remote Desktop Virtualization Host role service must be installed]

You can install this role service by using PowerShell if you like, but if you prefer to use the GUI then it is easy enough to install the role service by using the Server Manager. To do so, open Server Manager, and select the Add Roles and Features option from the Manage menu. This will cause Windows to launch the Add Roles and Features Wizard.

Click Next to skip the wizard’s Before You Begin screen. You will now be taken to the Installation Type screen. Select the Role-Based or Feature-Based Installation option and click Next.

You will now be prompted to choose the server on which you wish to install the role. Choose the Select a Server from the Server Pool option. Make sure that the correct server is selected, and click Next.

You should now see the Select Server Roles screen. Select the Remote Desktop Services role, and click Next. Click Next again to bypass the Features screen, and once again to bypass the Remote Desktop Services introduction.

The next screen that you will see asks you to select the role services that you wish to install. Select the Remote Desktop Virtualization Host checkbox, as shown below. If prompted to install the Media Foundation and the Remote Server Administration Tools, be sure to click the Add Features button.

[Figure: The Remote Desktop Virtualization Host role service selected in the Add Roles and Features Wizard]

Click Next, followed by Install, and the required role services will be installed onto the server. When the process completes, click Close. You will need to reboot the server in order to finish the installation.
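
If you would rather take the PowerShell route mentioned earlier, a single command along the lines of the sketch below installs the same role service. Run it from an elevated PowerShell session.

    # Install the Remote Desktop Virtualization Host role service
    # (the RDS-Virtualization feature) and reboot to complete the
    # installation.
    Install-WindowsFeature -Name RDS-Virtualization -IncludeManagementTools -Restart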

After the machine reboots, you can go back into the Hyper-V Manager, right click on the host server, and choose the Hyper-V Settings command from the shortcut menu. When the Hyper-V Settings dialog box appears, select the Physical GPUs container. This time, you should see the GPU listed, as shown in the figure below.

[Figure: The GPU now listed in the Physical GPUs container of the Hyper-V Settings dialog box]
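
You can also perform this check, and enable the adapter, from PowerShell. This is a minimal sketch; the adapter name in the filter matches the GeForce GTX 750 used in this article, so substitute whatever name your own hardware reports.

    # List the physical GPUs that Hyper-V can use for RemoteFX
    Get-VMRemoteFXPhysicalVideoAdapter

    # Enable a specific adapter for RemoteFX. The adapter name below is
    # only an example; use the name reported by the command above.
    Get-VMRemoteFXPhysicalVideoAdapter |
        Where-Object { $_.Name -like "*GeForce GTX 750*" } |
        Enable-VMRemoteFXPhysicalVideoAdapter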

Now, click OK, and then right click on the virtual machine for which you want to enable GPU acceleration, and choose the Settings command from the shortcut menu. When Windows opens the Settings dialog box, select the Add Hardware container, select RemoteFX 3D Video Adapter as shown below, and click Add.

[Figure: Adding a RemoteFX 3D Video Adapter through the virtual machine's Settings dialog box]
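
This step can be scripted as well. Here is a minimal sketch, assuming a Generation 1 virtual machine with the placeholder name MyVM that is currently powered off.

    # Attach a RemoteFX 3D Video Adapter to the virtual machine.
    # "MyVM" is a placeholder name; the VM must be a Generation 1
    # machine and must be shut down before the adapter can be added.
    Add-VMRemoteFx3dVideoAdapter -VMName "MyVM"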

You will also need to set the number of monitors that will be supported by the VM and the maximum display resolution, as shown below.

[Figure: Setting the number of monitors and the maximum resolution for the RemoteFX 3D Video Adapter]
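
These settings can also be adjusted from PowerShell. The sketch below is only an example; the VM name and the values shown are placeholders.

    # Configure the monitor count and maximum resolution for the VM's
    # RemoteFX 3D Video Adapter. "MyVM" and the values are examples only.
    Set-VMRemoteFx3dVideoAdapter -VMName "MyVM" -MonitorCount 1 -MaximumResolution "1920x1200"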

As you can see, it is relatively easy to add GPU acceleration to a virtual machine. It is worth noting however, that RemoteFX acceleration incurs licensing costs and does not work for every virtual machine.

14 thoughts on “Enabling Physical GPUs in Hyper-V”

  1. Greetings! I’m looking for assistance with a challenge that represents the next step on this subject path. Feel free to point me to another thread as appropriate.

    Background: I’ve got a Host Laptop running Windows 10 X64 Pro w/ Hyper-V enabled. The laptop has a discrete nvidia 1060 graphics card.

    I’ve created a local Hyper-V Windows 10 Pro X64 Guest OS client, configured to use the RemoteFx Video Adapter

    Issue: I’m attempting to install software on the Guest OS that is looking for a ‘compatible’ video subsystem and is failing to run because it can’t find one. This same software runs on the Host without issue.

    Question: Is there a way to ‘spoof’ software installed in a Guest OS into believing it is communicating with the Host GPU type installed in the Host?

    Thanks in advance for any/all assistance!

  2. I have the same setup. Win10pro Guest running on top of Win10pro host. Problem: YouTube videos (480p and up) have quality problems when running on the Guest but run OK on the Host. But the same video runs OK in HD in a guest Windows 2016 MultiPoint Server (also running on top of the Windows 10 Pro host). All guests have the RemoteFX card activated. Does anyone know why the video quality on the Windows 10 Pro guest is lower than on the guest Windows 2016 server?

  3. That is really interesting Ruben. I have to confess that I am baffled by that one. I will ask around a bit. If I find out anything useful, I will post another comment.

  4. I am trying to configure Hyper-V on Windows Server 2016 with 4 Nvidia 1080 cards for deep learning. Is it possible to install 3-4 instances of Ubuntu on the machine for multiple research projects, and also to assign, for example, 2 GPUs to system1 and 1 each to system2 and system3?

  5. I haven’t tried subdividing a graphics card’s GPUs among multiple virtual machines, but I am pretty sure that I read somewhere that it doesn’t work. If you try it, please post a comment and let me know what happened. I would love to know.

  6. Sorry to inform you, but it was a failure.

    Findings:-

    1. The RemoteFX 3D settings have a limitation of 1024MB per VM.
    2. RemoteFX supported my Nvidia GeForce 1080 cards (all four of them).
    3. A very important thing to remember is that the OS installed on the VM must support RemoteFX for it to actually use the GPU.

    Next, I tried to install VMware ESXi 6.5.0 as well, but it didn’t support the 1080 GPU cards.

    Any Hypervisor suggestion for my requirement or blog/forum which can be of any help to me?

    1. Akhilesh, to do what you want you will need to buy enterprise graphics cards, Grid K2’s at least, if you want to split the RAM between systems. Grid cards are what are known as vGPU cards, and then you will have to shell out for the enterprise nVidia graphics driver as well.

      The first place I would point you to is the DDA Dev Blog from Microsoft.
      https://blogs.technet.microsoft.com/virtualization/tag/dda/

      Unfortunately, most GPUs have to have the enterprise drivers AND support the vGPU feature, like the nVidia Grid cards, to be divided amongst multiple machines.

      DDA works like GPU passthrough and would let you pass each card directly through to the VM. Right now I’m writing to you on an Ubuntu desktop with a 550 TI passed through using GPU passthrough on the Ubuntu host… I have 3x 40″ 4K TVs running: 2 on a Windows 10 Enterprise host with a 1050 TI and 1 on my old 550 TI 2GB card. I use Synergy to control all 3 screens from the Windows session.

  7. LMFAO LOL

    Why can’t people just say they are trying to play a Healer or a Tank in a Virtual Machine on Hyper-V so that they can bypass that 1 MMO client – 1 PC limitation?

    I am trying to VM a MMO and the damn Hyper-V won’t run the mmo game due to lack of 3D support, Nvidia driver/GPU?

    What I’ve read from your thread is I need:
    a RDP license
    Client license
    and once I do the RDP thingy I have to access the Hyper-V vm with the remote client application….

    GRRRRRRRRR WTF!

    LOL.

  8. I am most interested in being able to enable RemoteFX USB Device Detection on a VM on a Hyper-V host, which I access remotely from a local PC. Is this possible?
