If you would like to read the other parts in this article series please go to:
- Shared Storage Considerations for Hyper-V (Part 1)
- Shared Storage Considerations for Hyper-V (Part 3)
In the first part of this article series, I discussed Hyper-V's use of Direct Attached Storage. Direct Attached Storage is acceptable for implementing basic server virtualization in a small organization, but it is inadequate for medium-sized and large organizations.
The reason has to do with the very nature of server virtualization. Server virtualization uses a single physical server to host multiple virtualized workloads, and the problem with doing so is that the cost of failure goes way up. In a physical data center, for example, a server failure might be a big inconvenience, but it is rarely catastrophic. Server failures in a virtual data center are another story. If a host server fails, then every virtual server residing on that host fails with it. When you consider that a single host might contain dozens of virtual machines, you can begin to understand why it is so critically important to protect virtualization hosts.
So what does all of this have to do with storage? Well, the only way to protect against a server-level failure is to build a failover cluster. If a host within a failover cluster drops offline, the virtual machines are simply moved to another host that is still functioning. Of course, virtual machine migrations can also occur without a server failure. Oftentimes, for example, a virtual machine may be moved to another host in an effort to balance the workload or in preparation for taking a host offline for maintenance.
In the Windows Server 2008 and 2008 R2 versions of Hyper-V, the only way to provide virtual machine failover and live migration capabilities is to implement shared storage. Shared storage consists of a storage device that is treated as a local storage resource by all of the nodes in a failover cluster.
Unfortunately, shared storage can be expensive to implement. In fact, the cost is one of the major barriers to entry for smaller organizations. Thankfully, Windows Server 2012 will do away with the shared storage requirement for Hyper-V (although shared storage will still be supported).
In the case of Windows Server 2008 R2, building a failover cluster for Hyper-V typically means storing virtual machines on a cluster shared volume (Cluster Shared Volumes were introduced in 2008 R2; on the original Windows Server 2008, each clustered virtual machine generally requires its own dedicated LUN). As previously mentioned, a cluster shared volume is networked storage that is accessible to every node in the cluster. The reason why cluster shared volumes tend to be expensive to implement is that the storage must be seen as local storage by each cluster node. This rules out connecting cluster nodes to file server storage (although doing so will be supported in Windows Server 2012). For the time being, your only options for implementing shared storage are iSCSI and Fibre Channel.
As is the case for Direct Attached Storage, connectivity is far from the only consideration that should be taken into account with regard to the storage unit. Other important considerations include the number of IOPS that the storage unit is capable of delivering, its resilience to failure, and the bandwidth available for storage connectivity.
When it comes to storage bandwidth, higher is obviously better. However, it is important to keep in mind that raw throughput is not always an accurate reflection of storage performance. For example, iSCSI can be utilized over a 10 gigabit Ethernet connection. Likewise, there is a flavor of Fibre Channel called Fibre Channel over Ethernet (FCoE) that can also run over 10 gigabit Ethernet. If one were to look only at raw throughput, it would be easy to assume that FCoE and iSCSI could both outperform Fibre Channel, because Fibre Channel communications are currently limited to 8 gigabits per second. However, Fibre Channel is actually the faster medium in spite of its lower raw throughput. The reason is that FCoE and iSCSI both require storage transmissions to be encapsulated into Ethernet frames. There is quite a bit of overhead associated with the encapsulation process, and that overhead causes iSCSI and FCoE to be slower than native Fibre Channel. Network cards with TCP/IP offloading capabilities can help to narrow the gap between the various technologies, but Fibre Channel still comes out ahead.
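Since the comparison above hinges on protocol overhead rather than raw link speed, a quick back-of-envelope calculation can make the point concrete. The efficiency fractions in this Python sketch are illustrative assumptions (not measured values for any particular hardware), chosen only to show how encapsulation overhead can erode a nominally faster link:

```python
# Usable storage bandwidth after protocol/encapsulation overhead.
# The efficiency fractions below are ASSUMED, illustrative values only.

def effective_gbps(raw_gbps, efficiency):
    """Usable bandwidth = raw link speed x assumed protocol efficiency."""
    return raw_gbps * efficiency

links = {
    "8 Gb Fibre Channel": (8.0, 0.95),   # lightweight, purpose-built framing
    "10 GbE FCoE":        (10.0, 0.75),  # Ethernet encapsulation overhead
    "10 GbE iSCSI":       (10.0, 0.65),  # TCP/IP + Ethernet encapsulation
}

for name, (raw, eff) in links.items():
    print(f"{name}: {raw:.0f} Gbit/s raw -> ~{effective_gbps(raw, eff):.1f} Gbit/s usable")
```

Under these assumed overheads, the 8 gigabit Fibre Channel link edges out both 10 GbE options despite its lower raw speed; real-world results depend heavily on the NICs, offload support, and configuration involved.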
As previously mentioned, storage bandwidth is not the only consideration that must be taken into account when building a cluster shared volume. IOPS and resiliency to failure are also major concerns. The RAID level used by the storage array directly impacts both of these factors. As a general rule, RAID 10 (also called RAID 1+0, a stripe of mirrored sets) is the preferred RAID level because it delivers high IOPS while also protecting against hard drive failure.
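The effect of RAID level on IOPS can be estimated with the commonly used write-penalty rule of thumb (each logical write costs two physical I/Os on RAID 1/10, four on RAID 5, and six on RAID 6). In this Python sketch the disk count, per-disk IOPS, and read/write mix are assumed example values, not benchmarks:

```python
# Estimate effective host IOPS from raw spindle IOPS using the standard
# RAID write-penalty rule of thumb. Disk count and per-disk IOPS are
# ASSUMED example values, not measurements.

WRITE_PENALTY = {"RAID 0": 1, "RAID 1": 2, "RAID 10": 2, "RAID 5": 4, "RAID 6": 6}

def effective_iops(disk_count, iops_per_disk, read_fraction, raid_level):
    """Host-visible IOPS once the RAID write penalty is accounted for."""
    raw = disk_count * iops_per_disk
    write_fraction = 1.0 - read_fraction
    return raw / (read_fraction + write_fraction * WRITE_PENALTY[raid_level])

# Example: eight 15k rpm disks (~175 IOPS each, assumed), 60% read workload
for level in ("RAID 10", "RAID 5", "RAID 6"):
    print(f"{level}: ~{effective_iops(8, 175, 0.6, level):.0f} IOPS")
```

With the same spindles, RAID 10 comes out well ahead of the parity-based levels on any write-heavy workload, which is why it is the usual recommendation for virtualization storage.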
Implementing a Cluster Shared Volume
The process for creating a cluster shared volume differs depending on the type of storage medium you are using and on whether you are running Windows Server 2008 or Windows Server 2008 R2. As a general rule, however, you must begin by installing Windows onto each cluster node and then using the Server Manager to install the Failover Clustering feature. It is important that each cluster node be configured identically aside from its computer name and IP addresses.
Once Windows has been installed, the next step is to use an iSCSI initiator (or a Fibre Channel host bus adapter) to establish connectivity to the shared storage. Each cluster node must be able to see the same disk; bring the disk online, initialize it, and format it with NTFS from one of the nodes.
At this point you would open the Failover Cluster Manager and create the cluster. The cluster creation process is beyond the scope of this article since my primary focus is on storage.
Once the cluster has been created, you can select the cluster name within the Failover Cluster Manager and then click on the Enable Cluster Shared Volumes link. When you do, a new container named Cluster Shared Volumes will be created within the console tree. Now you must tell Windows to treat your shared storage as a cluster shared volume. To do so, simply select the Cluster Shared Volumes container and then click on the Add Storage link found in the Actions pane. Windows will ask you which disk you want to use as a cluster shared volume. Make your selection and click OK. The disk that you selected now appears as a cluster shared volume, mounted on every node beneath the C:\ClusterStorage folder rather than under a drive letter.
Configuring Virtual Machines to Use the Cluster Shared Volume
After the cluster shared volume is in place, the next step is to configure your virtual machines to use it. The first step in doing so is to install the Hyper-V role onto each cluster node. Once Hyper-V is up and running, you can begin creating virtual machines. As you create the virtual machines, you must tell Hyper-V to store each virtual machine and its associated virtual hard disk files on the cluster shared volume. Remember, the cluster shared volume appears under the same C:\ClusterStorage path on every node, so each node references the virtual machine files in exactly the same way.
Believe it or not, merely storing the virtual machine files and the virtual hard disks on the cluster shared volume will not make the virtual machines fault tolerant. To achieve fault tolerance, you must shut down the virtual machines (or place them in a saved state) and then take some steps to make the Failover Clustering service aware of them. To do so, open the Failover Cluster Manager and select the Services and Applications container in the console tree. Next, click the Configure a Service or Application link found in the Actions pane. This will cause Windows to launch the High Availability Wizard.
The wizard’s initial screen asks you which service or application you want to configure for high availability. Choose the Virtual Machines option and then click Next. On the following screen select the check boxes that correspond to the virtual machines that you want to add to the failover cluster and click OK. Now just click Next and Finish. When you are done the virtual machines should be listed in the Failover Cluster Manager.
As you can see, shared storage is essential to providing fault tolerance and live migration capabilities for Hyper-V in Windows Server 2008 and 2008 R2. In Part 3 I will conclude the series by discussing how the need for shared storage goes away in Windows Server 2012.