A First Look at Hyper-V's Virtual Fibre Channel Feature (Part 1)

If you would like to read the next part in this article series, please go to A First Look at Hyper-V's Virtual Fibre Channel Feature (Part 2).

Introduction

Despite the rapid adoption of server virtualization, some types of physical servers have historically proven difficult or impossible to virtualize. Among these servers are those that have dependencies on Fibre Channel storage. Although Hyper-V has long been able to connect to Fibre Channel storage at the host level, there has been no provision for directly connecting virtual machines to Fibre Channel storage. This has all changed in Hyper-V 3.0 thanks to a new feature called virtual Fibre Channel. This article discusses the benefits and features of virtual Fibre Channel.

Why Use Virtual Fibre Channel?

The greatest benefit to using virtual Fibre Channel is that it makes it possible to virtualize workloads that could not previously be virtualized due to their dependency upon Fibre Channel storage. Virtual machines are now able to directly access Fibre Channel storage in the same way that they could if the operating system were running on a physical server.

Of course, storage accessibility is not the only benefit of using virtual Fibre Channel. This technology also makes it much more practical to create guest clusters at the virtual machine level.

Some administrators might be understandably reluctant to use the virtual Fibre Channel feature. After all, Hyper-V has long supported SCSI pass-through, which is another mechanism for attaching a virtual machine to physical storage. Although SCSI pass-through works, it complicates things like virtual machine backups and migrations. That being the case, I want to say up front that, when properly implemented, virtual Fibre Channel is a first-class Hyper-V component. Contrary to rumors, you can perform live migrations on virtual machines that use virtual Fibre Channel.

Virtual Fibre Channel Requirements

There are a number of requirements that must be met prior to using the virtual Fibre Channel feature. For starters, your Hyper-V server must be equipped with at least one Fibre Channel host bus adapter (using multiple host bus adapters is also supported). Furthermore, the host bus adapter must support N_Port ID Virtualization (NPIV). NPIV is a virtualization standard that allows a host bus adapter's N_Port to accommodate multiple N_Port IDs. Not only does the host bus adapter have to support NPIV, but NPIV support must be enabled and the host bus adapter must be attached to an NPIV-enabled SAN.
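If you want a quick look at the Fibre Channel host bus adapter ports that Windows can already see on a host, the sketch below is one way to do it. It assumes a Windows Server 2012 host with the built-in Storage module available; note that whether NPIV is actually enabled on the adapter and on the attached switch is normally something you verify with the HBA vendor's management tool rather than from Windows itself.

# List the Fibre Channel initiator ports that Windows can see on this host.
# Run in an elevated PowerShell session on the Hyper-V host.
Get-InitiatorPort |
    Where-Object { $_.ConnectionType -eq 'Fibre Channel' } |
    Select-Object InstanceName, NodeAddress, PortAddress, ConnectionType |
    Format-Table -AutoSize

If this command returns no Fibre Channel ports, there is nothing for virtual Fibre Channel to build on, so it is a useful sanity check before you go any further.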

The requirements listed above are sufficient for allowing Hyper-V to access your Fibre Channel network. However, your virtual machines must also support Fibre Channel connectivity. This means that you will have to run a supported operating system within your virtual machines. If you want to connect a virtual machine to virtual Fibre Channel, the virtual machine must run Windows Server 2008, Windows Server 2008 R2, or Windows Server 2012. No other guest operating systems are currently supported for use with virtual Fibre Channel.

In case you are wondering, Hyper-V 3.0 does not allow virtual Fibre Channel LUNs to be used as boot media.

Live Migration Planning

As previously stated, the Hyper-V live migration feature is compatible with virtual Fibre Channel. However, facilitating live migrations does require a bit of planning. As you would expect, the main requirement is that each Hyper-V host to which a virtual machine could potentially be live migrated must have a compatible host bus adapter that is connected to the SAN.

The live migration process is made possible by the way that Hyper-V uses World Wide Names (WWNs). Hyper-V assigns two sets of WWNs (Set A and Set B) to each virtual Fibre Channel adapter. As you are no doubt aware, migrating a virtual machine to another Hyper-V host eventually requires Hyper-V to hand off storage connectivity to the destination host. If each adapter were configured with only a single WWN set, Fibre Channel connectivity would be broken during the handoff. However, the use of two distinct WWN sets on each adapter solves this problem.

When the live migration process is ready to hand over storage connectivity, the source host releases one WWN set, but not the other. The destination host establishes Fibre Channel connectivity for the virtual machine by using the WWN set that was released by the original host. At that point, both the source host and the destination host maintain Fibre Channel connectivity, but do so through two different WWN sets. Once connectivity has been established by both hosts, the source host can release its remaining WWN set. This approach allows the virtual machine to maintain Fibre Channel connectivity throughout the live migration process.
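You can see these two address sets on any virtual machine that already has a virtual Fibre Channel adapter attached. The command below is a minimal sketch; VM01 is a placeholder virtual machine name, and the Set A / Set B values shown are the pairs that Hyper-V alternates between during a live migration.

# Display the two WWN address sets (A and B) assigned to a VM's virtual
# Fibre Channel adapter. 'VM01' is a placeholder name for this example.
Get-VMFibreChannelHba -VMName 'VM01' |
    Select-Object VMName, SanName,
        WorldWideNodeNameSetA, WorldWidePortNameSetA,
        WorldWideNodeNameSetB, WorldWidePortNameSetB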

Multipath I/O

Another technology that is compatible with virtual Fibre Channel is multipath I/O (MPIO). Multipath I/O is a storage technology that is designed to provide continuous connectivity to a storage resource by routing storage traffic across multiple physical paths, such as redundant host bus adapters.

Multipath I/O is used as a way of providing fault tolerance and performance enhancements in SAN environments. The basic idea is that multipath I/O prevents any of the SAN components from becoming a single point of failure. For example, a server that is connected to SAN storage might use two separate host bus adapters. These adapters would typically be connected to two separate Fibre Channel switches, before ultimately being connected to a storage array. The array itself might even be equipped with redundant disk controllers.

Multipath I/O improves performance because storage traffic can be distributed across redundant connections. It also provides protection against component level failures. If a host bus adapter were to fail for example, the server would be able to maintain storage connectivity through the remaining host bus adapter.

Hyper-V 3.0 is actually very flexible with regard to the way that MPIO can be implemented. The most common implementation involves using MPIO at the host server level. Doing so provides the Hyper-V host with highly available storage connectivity. The virtual machines themselves would be oblivious to the fact that MPIO is in use, but would be shielded against a storage component level failure nonetheless.
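To give you an idea of what the host-level approach involves, here is a minimal sketch of enabling the built-in Windows MPIO feature on a Hyper-V host. This is only one common way of doing it; many storage vendors supply their own device-specific module (DSM) and management tools, and those vendor instructions should take precedence over the generic steps shown here.

# Install the built-in Multipath I/O feature on the Hyper-V host.
Install-WindowsFeature -Name Multipath-IO

# Ask the Microsoft DSM to claim all MPIO-capable storage devices.
# -n suppresses the automatic reboot; a restart is still needed before
# the claim takes effect. The empty string acts as a device ID wildcard.
mpclaim.exe -n -i -a ""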

Another way in which MPIO can be used is to configure virtual machines to use MPIO directly. Virtual Fibre Channel is based on the use of virtualized Fibre Channel adapters within virtual machines (I will talk more about these adapters in Part 2). That being the case, you can configure virtual machine level MPIO by creating multiple virtual Fibre Channel adapters within a virtual machine. Keep in mind, however, that the virtual machine's operating system must also be configured to support MPIO.
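As a rough sketch of what that configuration might look like, the commands below give a virtual machine one virtual Fibre Channel adapter on each of two fabrics. VM01, FabricA, and FabricB are placeholder names, and the virtual SANs are assumed to exist already (virtual SANs are introduced in the next section and covered in Part 2).

# Add one virtual Fibre Channel adapter per fabric to the virtual machine.
# The virtual machine must be turned off while the adapters are added.
Add-VMFibreChannelHba -VMName 'VM01' -SanName 'FabricA'
Add-VMFibreChannelHba -VMName 'VM01' -SanName 'FabricB'

# The MPIO feature must still be installed and configured inside the guest
# operating system before the two paths are actually used together.

With both adapters in place, the guest operating system sees two paths to the same LUN and its own MPIO stack handles the failover and load balancing.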

Virtual Fibre Channel Technology

In the previous section, I briefly mentioned the idea of virtual Fibre Channel adapters within a virtual machine. A related component that I want to introduce now, but will discuss in more detail in the next article in this series, is the virtual SAN.

Normally, a physical Fibre Channel port connects to a SAN. However, a single host server can contain multiple host bus adapters, and each host bus adapter can have multiple ports. This is where virtual SANs come into play. A virtual SAN is a named group of physical Fibre Channel ports that all attach to the same physical SAN. You can think of a virtual SAN as a mechanism for keeping track of port connectivity.
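To make the idea a little more concrete, here is a minimal sketch of defining a virtual SAN on a Windows Server 2012 host. The name FabricA and the world wide name values are placeholders for this example; in practice you would use the node and port names of one of the physical HBA ports reported by the host (for instance, via Get-InitiatorPort as shown earlier).

# Create a virtual SAN named 'FabricA' and bind it to one physical HBA port,
# identified by that port's world wide node name and world wide port name.
# The WWN values below are placeholders, not real addresses.
New-VMSan -Name 'FabricA' `
    -WorldWideNodeName 'C003FF0000FFFF00' `
    -WorldWidePortName 'C003FF5778E50002'

# Confirm the virtual SAN and the physical port(s) assigned to it.
Get-VMSan -Name 'FabricA'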

Conclusion

In this article, I have introduced you to the Hyper-V virtual Fibre Channel feature and have talked about its requirements and limitations. In the next article in this series, I plan to discuss virtual SANs in more detail and show you how to actually implement virtual Fibre Channel.
