How to Successfully Create a Hyper-V Cluster Using Virtual Machine Manager (Part 2)


System Center Virtual Machine Manager (SCVMM) is a complex datacenter management product designed to handle almost all aspects of an IT datacenter. You can model physical networking components in SCVMM with the help of VM Networks, and SCVMM also allows you to manage SAN storage and file server clusters. The first part of this article series explained the requirements a VMM host group and its Hyper-V hosts must meet before you can build a Hyper-V cluster via VMM. This second part explains the requirements for shared storage and networking.

Hyper-V Shared Storage Requirements

Since a Hyper-V cluster requires shared storage, the shared storage requirements must be met before you proceed with cluster creation. VMM allows you to connect to iSCSI and Fibre Channel SANs and have these storage arrays managed by VMM. The storage pools from these arrays can then be used by the Hyper-V hosts that VMM manages. For example, you can assign storage pools when creating a private cloud, when deploying virtual machines to a host, and when deploying a Hyper-V cluster. There are two ways to allocate shared storage to Hyper-V hosts before you start the Hyper-V cluster creation wizard:

Shared storage managed and allocated by SCVMM: If you want to use shared storage, which is managed by SCVMM, please keep the following points in mind:

  • The required shared storage arrays must be discovered and classified in the Fabric workspace. If you have not done so, go to the Fabric workspace, right-click the “Storage” node and then click “Add Storage Devices” as shown in the figure below:

Figure 1

  • If you are using an iSCSI SAN or the Windows iSCSI Target, make sure the Microsoft iSCSI Initiator service is running on the VMM server. This is required in order to successfully create a connection to the iSCSI SAN server from within the VMM console. In this article, we are using the Windows iSCSI Target installed on a Windows Server 2012 R2 server. Once the storage array has been added to VMM, you will see all the available storage pools when you click the “Classifications and Pools” node in the Fabric workspace, as shown in the figure below:

Figure 2

The Windows iSCSI Target server examines all the local drives and adds them to the storage pools. In this case, I have created four local drives on the storage server, and each drive is considered a storage pool. You can also see the size of each storage pool and its available space.
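The iSCSI Initiator prerequisite mentioned above can be checked from an elevated PowerShell prompt. A minimal sketch, assuming it is run on the VMM management server:

```powershell
# The MSiSCSI (Microsoft iSCSI Initiator) service ships with Windows
# but is stopped by default; VMM needs it running to connect to an
# iSCSI target from the console.
Get-Service -Name MSiSCSI

# Start it now and have it start automatically on every boot
Start-Service -Name MSiSCSI
Set-Service  -Name MSiSCSI -StartupType Automatic
```

If the service is already running, the `Get-Service` output will show a status of `Running` and no further action is needed.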

  • Logical units must be created and allocated to the host group where the Hyper-V hosts reside. Let’s make this point clear. You can allocate storage pools and logical units only on a VMM host group. Allocated storage pools can only be used by VMM for virtual machine placement, whereas allocated logical units are used by both the cluster and virtual machines. It is not necessary to allocate the storage pools, but you must allocate the logical units in the properties of the VMM host group if you want the cluster creation wizard to display the available logical units; the wizard looks for the logical units, not the storage pools, allocated to the host group. Before you allocate the logical units, make sure that you have created them using the “Create Logical Units” ribbon button in the Fabric workspace. Once the logical units are created, right-click the VMM host group where the Hyper-V nodes reside and click “Allocate Logical Units” as shown in the figure below:

Figure 3
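The same logical units can also be carved out of a managed storage pool with the VMM PowerShell module instead of the ribbon. A hedged sketch, in which the pool name “iSCSI_Pool1”, the LUN name and the 50 GB size are examples only:

```powershell
# Requires the VMM console (and its PowerShell module) on this machine
Import-Module virtualmachinemanager

# Pick the managed storage pool the LUN should come from
$pool = Get-SCStoragePool -Name 'iSCSI_Pool1'

# Create a 50 GB logical unit in that pool (size is given in MB)
New-SCStorageLogicalUnit -StoragePool $pool -Name 'ClusterLUN1' -DiskSizeMB 51200
```

After creation, the new logical unit still has to be allocated to the host group, exactly as described above.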

The logical units which you create from the shared storage pool must not be assigned to any Hyper-V host. The Hyper-V cluster creation wizard in VMM can only detect logical units which are not assigned to any of the Hyper-V hosts managed by VMM. As you can see in the figure below, there are several logical units allocated to the VMM host group, but a few have been reserved for the cluster creation.

Figure 4

You can also confirm from the Fabric workspace that the logical units intended for the Hyper-V cluster are not assigned to any of the Hyper-V hosts by navigating to the “Classifications and Pools” node, as shown in the figure below:

Figure 5
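The same check can be done from PowerShell. A hedged sketch using the VMM module — the exact property names exposed on a logical unit object can vary between VMM versions, so inspecting all properties is the safest way to see whether a LUN is already registered to a host:

```powershell
Import-Module virtualmachinemanager

# Dump every property of each known logical unit; look for any host
# registration/assignment fields to confirm the LUNs reserved for the
# cluster are not attached to an individual Hyper-V host.
Get-SCStorageLogicalUnit | Format-List -Property *
```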

Host-managed shared storage: If you want to use shared storage that is managed by the Hyper-V hosts themselves, you must make sure that the logical units have been created on one of the Hyper-V hosts and formatted with NTFS. This is the traditional approach to follow when building a Hyper-V cluster using the Failover Cluster Manager. The shared storage in this case must be assigned and available on all the Hyper-V hosts before you start the Hyper-V cluster creation wizard via VMM.
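For the host-managed path, the standard Windows Storage cmdlets can prepare the LUN on one of the Hyper-V hosts. A sketch, assuming the shared LUN has just appeared as a raw, offline disk — the disk number 2 used here is an example, so confirm it with `Get-Disk` first:

```powershell
# List disks that have not been initialized yet to find the new LUN
Get-Disk | Where-Object PartitionStyle -Eq 'RAW'

# Initialize the disk, create a single partition and format it NTFS
Initialize-Disk -Number 2 -PartitionStyle GPT
New-Partition  -DiskNumber 2 -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel 'ClusterDisk1'
```

Run this on one node only; the other nodes simply need the LUN presented to them.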

Hyper-V Host Networking Requirements

Failover clustering requires that you add the necessary network adapters to each Hyper-V host. Apart from adding the network adapters, you must also understand how VMM treats networking on the Hyper-V hosts when creating a Hyper-V cluster. Besides assigning a cluster name, the Hyper-V cluster creation wizard also provides the opportunity to assign an IP address to the cluster. The IP Address page may or may not appear, depending on the following conditions:

  • If you are using static IP configuration on each Hyper-V host, make sure that at least one physical network adapter on every Hyper-V host belongs to the same IP subnet. These physical network adapters must also be configured with a default gateway for the IP Address page to appear during cluster creation.
  • If you have configured the Hyper-V hosts to obtain their IP configuration from a DHCP server, which is not recommended in a production environment, the cluster creation wizard will not give you an opportunity to assign an IP address to the cluster. In other words, the IP Address page will not be shown. Since Windows Server 2012 and later operating systems support assigning a cluster IP address from a DHCP server, the cluster creation wizard will automatically select an available IP address from the DHCP server rather than showing you the IP Address page.

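The static-IP conditions above can be verified quickly on each prospective node. A sketch using the built-in NetTCPIP cmdlets: it lists only the adapters that have a default gateway, which are the ones the wizard will consider:

```powershell
# On each Hyper-V host: show adapters with an IPv4 default gateway.
# At least one such adapter per host must sit in the same IP subnet
# across all hosts, or the wizard's IP Address page will not appear.
Get-NetIPConfiguration |
    Where-Object { $_.IPv4DefaultGateway } |
    Select-Object InterfaceAlias, IPv4Address, IPv4DefaultGateway
```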
Whichever method you use to assign IP addresses to the Hyper-V host network adapters, make sure these network adapters are available for placement. To confirm this, open the properties of each Hyper-V host, click the “Network” button and then click the network card to see its settings, as shown in the figure below, taken from a Hyper-V host:

Figure 6

Virtual Switch: The Hyper-V cluster creation wizard can create a virtual switch on each Hyper-V host automatically, so it is not necessary to have the virtual switches in place before you run the wizard. You can also create the virtual switches on each Hyper-V host beforehand, but creating them via VMM keeps the configuration consistent and identical across hosts. Before the Virtual Switch page can show you the VM networks, you must ensure that the physical network adapters are assigned to use a VM network in the properties of each Hyper-V host. To verify this, open the properties of each Hyper-V host, click the “Hardware” button and then click the physical network adapter, as shown in the figure below.

Figure 7

As you can see in the figure above, the physical network adapter is associated with the “Corp_Net” logical network in my test lab. If a physical network adapter is not associated with any of the logical networks, the Virtual Switch page will not allow you to create a virtual switch automatically on the Hyper-V hosts.
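The logical-network association can also be checked across all hosts at once from the VMM shell. A hedged sketch — property names such as `LogicalNetworks` may differ slightly between VMM versions, so adjust the `Select-Object` list to what your version exposes:

```powershell
Import-Module virtualmachinemanager

# For every managed Hyper-V host, list its physical adapters and the
# logical networks (e.g. "Corp_Net") each adapter is associated with
foreach ($vmHost in Get-SCVMHost) {
    Get-SCVMHostNetworkAdapter -VMHost $vmHost |
        Select-Object Name, ConnectionName, LogicalNetworks |
        Format-Table -AutoSize
}
```

Any adapter showing no logical network here is the one to fix in the host’s “Hardware” properties before running the wizard.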


As you can see, a fair amount of configuration is required before you can actually start the Hyper-V cluster creation wizard in VMM. This article explained how to configure shared storage and networking so that a Hyper-V failover cluster can be deployed successfully via VMM. In the next part of this series, I will walk you through the Hyper-V cluster creation wizard itself.
