Welcome to Hyper-V 3.0 (Part 2) – A Hyper-V 3.0 virtual machine deep-dive

If you would like to read the first part in this article series please go to Welcome to Hyper-V 3.0 (Part 1) – A Hyper-V 3.0 virtual machine deep-dive.


A few months ago, Microsoft made available the first preview of the upcoming Windows Server 8 product, which includes a first real look at Hyper-V 3.0. Much has been written about the new features and capabilities coming in Hyper-V 3.0, but I wanted to start seeing it in action. So, I bought a new server for my lab (a Dell PowerEdge R410), installed the Windows Server 8 Developer Preview and went to town.

My lab server has the following stats:

  • 2 x quad core Intel Xeon 5503 processors.
  • 32 GB RAM.
  • 2 x 146 GB 10K RPM SAS disks.
  • Connected to an EMC VNXe iSCSI-based storage array.

In part 1 of this series, you got to see the full process of creating a new virtual machine. At the end of part 1, we had a fully functioning VM. In this part of the series, we will walk through all of the configuration options that are at your disposal with regard to configuring the new virtual machine. We’ll go screen by screen through each configuration option. Sound good?

Virtual machine configuration

When you open up the properties page of a virtual machine, the first option you have is to add additional hardware to the virtual machine. You can see this screen in Figure 1. On this page, you can add any of the following:

  • SCSI controller. With Hyper-V 3.0, you can add virtual hard disks to a SCSI controller or to a Fibre Channel adapter in order to increase the amount of storage that’s available to the virtual machine. You cannot boot from a virtual disk that’s hosted on a SCSI controller. Only a hard disk attached to an IDE adapter can host an operating system.
  • Network adapter. In general, use the non-legacy network adapter option. Integration Services installs bits that are necessary for the non-legacy network adapter to operate under operating systems that do not provide automatic support for this virtual adapter. Windows Server 2008 and the R2 release already include support for the Network Adapter. The synthetic driver that supports the more modern and virtualization-friendly Network Adapter is orders of magnitude more efficient than the legacy network adapter option.
  • Legacy network adapter. Use the legacy network adapter if the virtual machine needs to boot using the Preboot eXecution Environment (PXE) or if your operating system requires access to the network before you’re able to install the Hyper-V Integration Services. Likewise, if you need to install your virtual machine over the network, you’ll need to use the legacy network adapter. The legacy network adapter uses software to emulate a well-supported network adapter (the DEC 21140). However, software emulation costs processing resources, making the legacy adapter less efficient than the high speed Network Adapter option. Note that many 64-bit operating systems do not provide native support for the legacy network adapter, but you may still need to use it for PXE purposes.
  • Fibre Channel adapter. The Fibre Channel adapter is available only after you’ve installed integration services in the virtual machine. You cannot boot from a virtual hard disk hosted on a Fibre Channel adapter. Only a hard disk attached to an IDE adapter can host an operating system.
  • RemoteFX 3D video adapter. If you’re planning to deploy Hyper-V for VDI, you may consider adding the RemoteFX 3D video adapter to your virtual machine in order to increase multimedia performance on VDI endpoints.

Figure 1: Add new hardware to a virtual machine
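Incidentally, the Windows Server 8 preview also ships with a Hyper-V PowerShell module, so each of these devices can be added from the command line as well. Treat the snippet below as a sketch rather than gospel; the VM name, switch name and SAN name are placeholders for your own environment:

```powershell
# Add devices to an existing virtual machine (all names are placeholders)
Add-VMScsiController         -VMName "LabVM"
Add-VMNetworkAdapter         -VMName "LabVM" -SwitchName "External"    # synthetic adapter
Add-VMNetworkAdapter         -VMName "LabVM" -IsLegacy $true           # legacy (PXE-capable) adapter
Add-VMFibreChannelHba        -VMName "LabVM" -SanName "ProductionSAN"  # requires a virtual SAN on the host
Add-VMRemoteFx3dVideoAdapter -VMName "LabVM"                           # requires the RemoteFX role
```

Note that the Fibre Channel and RemoteFX lines will only succeed if the corresponding host-side prerequisites (a virtual SAN definition and the RemoteFX role) are already in place.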

The BIOS configuration page gives you a place to change the way that the system boots. You can force the system to enable the Num Lock key, and you can change the order in which the system checks for boot media. If you’re planning to install an operating system from a virtual CD image, for example, CD needs to come before IDE, which is the default order. Note that this list has two other entries as well: you can boot from a legacy network adapter, which enables the system to boot via PXE, or you can boot from a virtual floppy image.

Figure 2: Configure the virtual machine’s BIOS options
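If you prefer to script the boot order rather than click through the BIOS page, the Hyper-V module’s Set-VMBios cmdlet covers the same settings. The VM name here is a placeholder:

```powershell
# Boot from CD first, then IDE, then PXE, then floppy
Set-VMBios -VMName "LabVM" -StartupOrder @("CD", "IDE", "LegacyNetworkAdapter", "Floppy")

# Force Num Lock on at boot
Set-VMBios -VMName "LabVM" -EnableNumLock
```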

You have to have RAM assigned to your virtual machine, and Hyper-V 3.0 adds a number of configuration options when it comes to RAM. With Hyper-V 3.0, Microsoft is working to close the feature gap between it and VMware. For years, VMware users have enjoyed the company’s memory overcommit feature – actually a group of features that includes transparent page sharing – which enables an administrator to assign more RAM to virtual machines than is actually present in the host. This is accomplished, in part, by having virtual machines share common memory pages, so that each shared page is stored only once.

With Hyper-V 3.0, you’re able to modify Dynamic Memory settings even while the virtual machine is running. This helps provide additional availability to running workloads.

In fact, there are quite a few options at your disposal when it comes to managing RAM in your new virtual machine:

  • Startup RAM. This is the amount of RAM that is initially assigned to the virtual machine.
  • Enable Dynamic Memory. If you’d like to subject the machine to dynamic memory, which allows the amount of RAM to grow and shrink as workload demands permit, select this checkbox.
  • Minimum RAM. This is the RAM floor. Hyper-V will not allow RAM to go below this lower limit.
  • Maximum RAM. Conversely, this is the maximum amount of RAM that will be assigned to the virtual machine.
  • Memory buffer. To keep performance at reasonable levels, you can configure a buffer – a percentage of extra RAM that Hyper-V keeps assigned beyond what the virtual machine is actively using – so that the allocation doesn’t swing too wildly.
  • Memory weight. If memory contention becomes an issue, how important is it that this virtual machine be granted the memory that it needs?

Figure 3: Modify the memory configuration
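These memory settings map directly onto the Set-VMMemory cmdlet in the Hyper-V PowerShell module. A hedged example follows; the VM name and all of the values are placeholders you would tune for your own workload:

```powershell
# Dynamic Memory: 1 GB at startup, a 512 MB floor, a 4 GB ceiling,
# a 20% buffer and a mid-range weight
Set-VMMemory -VMName "LabVM" `
    -DynamicMemoryEnabled $true `
    -StartupBytes 1GB `
    -MinimumBytes 512MB `
    -MaximumBytes 4GB `
    -Buffer 20 `
    -Priority 50
```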

Hyper-V also includes some settings that have to do with NUMA.

NUMA stands for non-uniform memory access. In simplistic terms, NUMA allows for greater levels of scalability than traditional hardware expansion options, such as SMP. NUMA accomplishes this goal by eliminating what is a single choke point in traditional computing architecture: the memory bus. By adding additional memory buses, a system can be scaled to greater heights. However, it also adds some design complexity. Each node in a NUMA architecture is generally considered to be a block of memory plus the processors and I/O devices that sit on the same physical bus as that memory. Individual nodes are then aggregated through the use of an interconnect bus. That’s about as far as I’m going to go on NUMA except to say that this architecture basically breaks down a larger system into smaller chunks, which allows for the scalability that I mentioned.

In Hyper-V 3.0, virtual machines can be configured to span multiple NUMA nodes. In this case, the virtual machine would consume RAM from both the local node as well as from remote nodes. This allows the virtual machine to access more memory than would otherwise be possible, but there is the potential for a performance impact since traffic will need to traverse multiple memory controllers and the interconnect bus.

Figure 4: Modify NUMA memory settings

In Figure 5, you’ll see the Processor configuration page showing that there is 1 virtual CPU assigned to this virtual machine.

  • Virtual machine reserve (percentage). Allows you to reserve a portion of the server’s total processing resources for this virtual machine. Consider this scenario: This virtual machine is running a mission-critical workload. You always want CPU resources to be available to serve this VM’s workload. By default, a virtual machine is guaranteed/reserved 0% of the host’s resources. As an administrator, you can set this to a non-zero value to reserve resources.
  • Virtual machine limit (percentage). On the flip side, you can also limit how much of a host’s processing resources can be consumed by a single virtual machine. This setting is useful for instances in which a virtual machine might be attempting to consume too many resources and you want to stop this behavior. It’s basically the opposite of the Virtual Machine Reserve setting: rather than guaranteeing a minimal level of CPU resources, it prevents the virtual machine from consuming an excessive amount of the available CPU resources. Note that the default value is 100%. Hyper-V does some math for you, though. Note below in Figure 5 that the machine is allowed only 25% of available host resources. This is because I’ve assigned one vCPU (one core of four) to the virtual machine, which equates to 25% of total resources.
  • Relative weight. If the two settings above are a bit too exacting, you can take a different approach to determining how much processing power should be consumed by the virtual machine. The relative weight option allows you to weigh the importance of this virtual machine against others. By default, every virtual machine gets a weight of 100. If a VM should have lower priority, provide a lower number.

Figure 5: Change the number of virtual processors
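For reference, the reserve, limit and weight knobs above correspond to parameters on the Set-VMProcessor cmdlet. A sketch, with placeholder values:

```powershell
# 1 vCPU, guaranteed 10% of host CPU, capped at 75%, default relative weight
Set-VMProcessor -VMName "LabVM" -Count 1 -Reserve 10 -Maximum 75 -RelativeWeight 100
```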

Hyper-V also allows you to specify some processor compatibility settings, as shown in Figure 6. These settings are especially useful if you’re running Hyper-V on different versions of the same processor family.

  • Migrate to a physical machine with a different processor version. This option should not be construed as enabling the ability to seamlessly migrate virtual machines between different processor platforms, such as from AMD to Intel and vice versa. Rather, it means that you can migrate a virtual machine from an older processor to a newer one as long as the processors are from the same vendor. When migrating between hosts with supported processors, no restart of the virtual machine is necessary.
  • Run an older operating system, such as Windows NT. The second checkbox, Run an older operating system, such as Windows NT, hides unsupported processor features from older operating systems so that they can run under Hyper-V. Bear in mind that this option simply gets these operating systems to function. It does not mean that Microsoft will support them.

Figure 6: Change processor compatibility settings
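Both compatibility checkboxes are also exposed through Set-VMProcessor. This sketch assumes the virtual machine (called "LabVM" here as a placeholder) is powered off while you change them:

```powershell
# Allow migration across different CPU versions from the same vendor,
# and hide newer CPU features from older guest operating systems
Set-VMProcessor -VMName "LabVM" `
    -CompatibilityForMigrationEnabled $true `
    -CompatibilityForOlderOperatingSystemsEnabled $true
```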

I previously discussed NUMA, so I’m not going to repeat it here, but, for completeness, wanted to include Figure 7.

Figure 7: Change processor NUMA settings

As mentioned earlier, you can’t boot a Hyper-V virtual machine from a SCSI- or Fibre Channel-based virtual hard disk, so IDE remains a critical component and gets a configuration page like the one shown below in Figure 8. You’re able to add either virtual hard drives or virtual DVD drives to your virtual machine. Do so by clicking the Add button on the configuration page. You can add up to two devices per IDE controller.

Figure 8: Configure the IDE controller
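The GUI’s Add button has a scripted equivalent as well. The sketch below creates a new dynamically expanding VHDX and attaches it to the second port of IDE controller 0; the path and size are placeholders:

```powershell
# Create a dynamic VHDX and attach it to IDE controller 0, location 1
New-VHD -Path "D:\VHDs\LabVM-data.vhdx" -SizeBytes 60GB -Dynamic
Add-VMHardDiskDrive -VMName "LabVM" -ControllerType IDE `
    -ControllerNumber 0 -ControllerLocation 1 -Path "D:\VHDs\LabVM-data.vhdx"
```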

If you have an OS installed inside your Hyper-V virtual machine, you probably have an IDE-based virtual hard disk installed like the one shown below in Figure 9.

  • Controller. To which virtual controller would you like to attach the device? You can choose from any installed IDE, SCSI or Fibre Channel controllers.
  • Location. On which controller port/location should the new device be attached?
  • Media. Do you want to connect virtual media – for example, a virtual hard drive – to this device or will this be configured as a pass-through to physical hardware?

Sometimes, a service needs direct access to physical disks and won’t support the use of a VHD/VHDX. Or, the service simply needs so much disk space that presenting a physical volume makes more sense. For example, while I was creating my Data Protection Manager 2010 course for Train Signal, I needed to use a storage volume that was huge in size and based on physical – not virtual – disks.

A pass-through disk allows you to mount a physical volume to a Hyper-V virtual machine. In my DPM course, I used a pass-through disk to act as the storage pool device that I needed for my lab scenarios. Before you go this route, though, weigh the one major benefit of pass-through disks against some serious drawbacks:

  • Performance. A pass-through disk carries a performance gain over virtual disks since there is no abstraction taking place; the virtual machine has direct access to the disk.
  • Portability. A pass-through disk is not portable at all. It’s tougher to move the storage to another server.
  • Snapshots. You cannot take snapshots of a pass-through disk.
  • Backup. The Hyper-V VSS writer cannot back up a pass-through disk. If your backup software uses this common protection technique, you’ll need to find alternative methods – such as installing a backup agent inside the virtual machine itself – for protecting the contents of the pass-through disk.

Figure 9: Change the settings for the selected hard drive
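If you do decide to use a pass-through disk, note that the physical disk must be offline on the host before Hyper-V will let you attach it. A sketch using the Storage and Hyper-V cmdlets; the disk number and VM name are placeholders:

```powershell
# Take the physical disk offline on the host, then pass it through to the VM
Set-Disk -Number 2 -IsOffline $true
Add-VMHardDiskDrive -VMName "LabVM" -ControllerType SCSI -DiskNumber 2
```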

I won’t say too much about the CD/DVD drive option shown in Figure 10. This is a lot like what you just saw regarding virtual hard drives.

Figure 10: Change the configuration of the DVD drive

If you have a virtual SCSI adapter installed inside your Hyper-V virtual machine, you can add hard disks to this adapter; up to 64 hard disks per adapter are allowed.

Figure 11: Make necessary changes to the included SCSI controller

The network component is another critical part of your virtual machine’s configuration and its properties page can be found in Figure 12.

  • Virtual switch. Choose the virtual switch to which this virtual network adapter should be connected. Virtual switches can be managed using the Hyper-V Manager.
  • VLAN ID. If you want to enable virtual LAN identification, select this checkbox and provide a VLAN ID in the box provided.
  • Bandwidth management. You can control how much bandwidth is consumed by this virtual machine by enabling the appropriate checkbox and then providing values for both minimum and maximum bandwidth in megabits per second.

Figure 12: Configure network settings
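These same three settings – switch, VLAN and bandwidth – can be scripted. In the sketch below, note that -MaximumBandwidth is specified in bits per second, so a value of 100MB works out to roughly 100 megabits per second; all names and values are placeholders:

```powershell
# Connect the adapter to a switch, tag VLAN 10, and cap bandwidth at ~100 Mbps
Connect-VMNetworkAdapter -VMName "LabVM" -SwitchName "External"
Set-VMNetworkAdapterVlan -VMName "LabVM" -Access -VlanId 10
Set-VMNetworkAdapter     -VMName "LabVM" -MaximumBandwidth 100MB
```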

There are also a number of hardware-based acceleration features you can choose when you’re working with the right hardware. These features are:

  • Virtual Machine Queue (VMQ). VMQ is a feature which, when enabled, creates a dedicated queue on a host’s physical network adapter for each virtual network adapter that has requested a queue. VMQ can reduce processing overhead in packet routing and make the networking process more efficient. VMQ is most useful for virtual machines that have a heavy network workload. Since hardware queues are a scarce resource, you shouldn’t enable VMQ for every virtual machine.
  • IPsec Task Offload. When a network adapter supports the feature, IPsec task offload can reduce some of the processor performance hit associated with IPsec encryption algorithms. This can improve overall server scalability.
  • Single-Root I/O Virtualization (SR-IOV). SR-IOV allows the system to partition PCIe-based hardware resources into virtual interfaces. In short, a single PCIe device can be made to appear as multiple devices. I’ll provide more information about SR-IOV in a future post.

Figure 13: Manage network adapter hardware acceleration features
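On hardware that supports them, these offloads can also be toggled per adapter via Set-VMNetworkAdapter. A sketch with placeholder values; SR-IOV in particular requires host-side support, so treat the last parameter as illustrative:

```powershell
# Enable VMQ, IPsec task offload (up to 512 security associations) and SR-IOV
Set-VMNetworkAdapter -VMName "LabVM" `
    -VmqWeight 100 `
    -IPsecOffloadMaximumSecurityAssociation 512 `
    -IovWeight 100
```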

In Figure 14, you’ll see that there are a number of advanced networking features at your disposal.

  • MAC address. You can choose between a dynamic and a static MAC address for this virtual machine. If you’re having MAC address issues, try using a static address.
  • Enable MAC address spoofing. In some configurations, such as when using Network Load Balancing, you may have to spoof the MAC address in order for the service to operate.
  • DHCP Guard. Allows administrators to protect Hyper-V virtual machines from rogue DHCP servers.
  • Router Guard. Allows administrators to protect Hyper-V virtual machines from rogue router advertisements.
  • Monitor Port. Enhance troubleshooting by monitoring the network traffic that enters and exits a virtual machine.

Figure 14: Manage network adapter advanced features
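All of these advanced features map to parameters on Set-VMNetworkAdapter as well. The MAC address below is a placeholder drawn from the Hyper-V dynamic range:

```powershell
# Static MAC plus the spoofing/guard/mirroring features
Set-VMNetworkAdapter -VMName "LabVM" `
    -StaticMacAddress "00155D010203" `
    -MacAddressSpoofing On `
    -DhcpGuard On `
    -RouterGuard On `
    -PortMirroring Source
```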

The serial port is certainly a legacy holdover, but is still important. In Figure 15, you can see the settings that are provided to manage serial ports. The virtual COM port can either be connected to nothing or connected via a named pipe to a remote computer.

Figure 15: Change serial port settings
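The named-pipe connection can be scripted too, which is handy for kernel debugging scenarios. The pipe name here is a placeholder:

```powershell
# Attach COM1 to a named pipe so a debugger on the host can connect
Set-VMComPort -VMName "LabVM" -Number 1 -Path "\\.\pipe\LabVM-com1"
```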

Finally, you have the option to manage how the floppy drive in the virtual machine will behave, if it’s installed at all. By default, the floppy drive has no media, but you can also create and use a virtual floppy disk if you need to do so.

First of all, you need a blank virtual floppy disk. To create a new blank virtual floppy disk, open the Hyper-V Manager. Go to the Actions menu and choose New > Floppy Disk (Figure 17).

This opens the Create Virtual Floppy Disk dialog box that you see below in Figure 18. In this window, provide a file name that you’d like to associate with the new blank floppy image.

With the new floppy image created, you now need to “insert” the image into one of your virtual machines. Open the settings page for the target virtual machine and choose the Diskette Drive option. On the properties page for that device, choose the Virtual floppy disk (.vfd) file option and locate the floppy image. Once you’ve done so, click the Open button.

You can now use the new floppy disk image just as you would a real blank floppy.
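The whole create-and-insert procedure above can also be collapsed into two cmdlets from the Hyper-V module; the paths and VM name are placeholders:

```powershell
# Create a blank virtual floppy image, then mount it in the VM's diskette drive
New-VFD -Path "D:\VFDs\blank.vfd"
Set-VMFloppyDiskDrive -VMName "LabVM" -Path "D:\VFDs\blank.vfd"
```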

Figure 16: Change the media that’s loaded in the virtual floppy drive

Figure 17: Create a new floppy image

Figure 18: Name the new floppy image


And that, folks, is a complete look at the configuration options you have with a typical Hyper-V 3.0 virtual machine!
