Storage performance is an important consideration for virtualized workloads in both on-premises and cloud environments. Beginning with Windows Server 2012, the Hyper-V virtualization platform can provision virtual machines rapidly using technologies such as Offloaded Data Transfer (ODX), which uses capabilities built into storage systems to rapidly clone or create VMs and so enable workload elasticity. The built-in ODX support in Windows Server 2012 and later lets your virtual machines read and write SAN storage at performance levels matching those of physical hardware, while freeing up resources on the systems involved in the transfer. With storage a key resource for any cloud solution, these improvements make Hyper-V an effective platform for building clouds.
How ODX works
ODX is a key performance and scalability improvement introduced in Windows Server 2012 that revolves around storage, in particular around storing virtual machines on storage arrays. The following brief description of how ODX works has been adapted from my books “Introducing Windows Server 2012: RTM Edition” (Microsoft Press, 2012) and “Introducing Windows Server 2012 R2: Technical Overview” (Microsoft Press, 2013).
Offloaded Data Transfer is a feature of high-end storage arrays that enables ODX-capable arrays to bypass the host computer and transfer data directly within or between compatible storage devices. The result minimizes latency, maximizes array throughput, and reduces resource usage, such as CPU and network consumption, on the host computer. For example, when ODX-capable storage arrays are accessed via iSCSI, Fibre Channel, or SMB 3.0 file shares, virtual machines stored on the array can be imported and exported much more rapidly than they could without ODX capability being present.
ODX uses a token-based mechanism to read and write data within and between storage arrays. When ODX is used, a small token is copied between the source and destination servers instead of routing data through the host (see Figure 1). So when you migrate a virtual machine within or between storage arrays that support ODX, the only thing copied through the servers is the token representing the virtual machine file, not the underlying data in the file.
Figure 1: How offloaded data transfer works in a Hyper-V environment.
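The token exchange described above can be illustrated with a toy simulation. In real ODX the host issues the SCSI POPULATE TOKEN and WRITE USING TOKEN commands and the array returns a small (512-byte) opaque token; the `StorageArray` class and its method names below are invented purely for illustration:

```python
import secrets

class StorageArray:
    """Toy model of an ODX-capable array: it holds LUN data and issues
    opaque tokens that represent ranges of that data (hypothetical API)."""
    def __init__(self):
        self.luns = {}      # lun_id -> bytearray holding the LUN contents
        self.tokens = {}    # token -> (lun_id, offset, length)

    def populate_token(self, lun_id, offset, length):
        # Analogous to the SCSI POPULATE TOKEN command: the array hands
        # back a small token, not the data itself.
        token = secrets.token_hex(16)
        self.tokens[token] = (lun_id, offset, length)
        return token

    def write_using_token(self, token, dest_lun, dest_offset):
        # Analogous to WRITE USING TOKEN: the array moves the data
        # internally; the host never touches the payload.
        src_lun, offset, length = self.tokens[token]
        data = self.luns[src_lun][offset:offset + length]
        self.luns[dest_lun][dest_offset:dest_offset + length] = data
        return length

# A 1 MB "virtual machine file" on LUN 0, copied to LUN 1.
array = StorageArray()
array.luns[0] = bytearray(b"V" * 1_000_000)
array.luns[1] = bytearray(1_000_000)

token = array.populate_token(0, 0, 1_000_000)  # host receives only a token
moved = array.write_using_token(token, 1, 0)   # array copies data internally
print(moved, len(token))                       # → 1000000 32
```

The point of the sketch is the asymmetry: the only thing that crosses the host is the token (a few dozen bytes here), while the megabyte of payload moves entirely inside the array.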
ODX effects and use cases
The performance improvement when using ODX-capable storage arrays in cloud environments can be astounding. For example, instead of taking about three minutes to create a new 10 GB fixed VHD as was sometimes the case with Hyper-V hosts running Windows Server 2008 R2, the entire operation on a Hyper-V host running Windows Server 2012 R2 can be completed in less than a second.
Other VM operations that can benefit just as much from using ODX-capable storage hardware include:
- Expansion of dynamic VHDs
- Merging of VHDs
- Live Storage Migration
ODX can also provide benefits in nonvirtualized environments, such as when transferring large database files or video files between servers.
Finally, the R2 release of System Center 2012 (particularly Virtual Machine Manager 2012 R2) supports ODX-optimized virtual machine deployments.
Considerations when using ODX
The most important consideration when you are thinking about implementing ODX is to first make sure that your hardware and software meet all the necessary requirements. This basically means that:
- Your storage array supports ODX (check the Windows Server catalog at http://windowsservercatalog.com).
- The transfer path involves no more than two hosts and two storage arrays.
- Your hosts are running Windows Server 2012 or later.
- You are not using BitLocker Drive Encryption, Resilient File System (ReFS), Storage Spaces, Data Deduplication, or dynamic volumes.
On the host, ODX can be used with Fibre Channel (FC), Fibre Channel over Ethernet (FCoE), and iSCSI connections. Within the virtual machine, ODX can be configured for the virtual SCSI, synthetic Fibre Channel, or iSCSI controllers.
Additional hardware and software requirements are outlined at http://technet.microsoft.com/en-us/library/hh831628.aspx.
Once you’ve met the basic requirements for using ODX, however, there are some other considerations you should be aware of, which are described in the sections that follow.
ODX and file copy performance
Depending on the scenario, ODX won’t necessarily improve your file copy performance. The main boost ODX provides is that files can be copied with far less network traffic happening behind the scenes. If you simply want to copy files from one point to another within your infrastructure as fast as possible, the fastest way is often just to read the file on the source and write it to the target. Because ODX introduces additional round-trip checks to verify whether tokens for the data chunks already exist on the target, simple file copies can sometimes take longer when ODX is used.
So if you’re copying a very large file (in the range of gigabytes) between a single host or virtual machine and a direct-attached ODX-capable LUN on a storage array and you find it’s taking a long time, try temporarily turning ODX off (ODX can be disabled by setting the FilterSupportedFeaturesMode registry value under HKLM\SYSTEM\CurrentControlSet\Control\FileSystem to 1; see the “Windows Offloaded Data Transfers Overview” article for details). This will typically result in greater network I/O but faster file copy speeds. On the other hand, if you’re copying between two SMB shares on two separate hosts or virtual machines, then you will likely experience faster copy times with ODX enabled than with it disabled.
ODX and virtual disk creation
The reason that creating fixed VHDs or VHDXs with ODX happens almost instantaneously is that the process of creating a fixed virtual disk doesn’t involve copying a file; it involves zeroing one. ODX lets file zeroing happen much faster than file copying because a single zeroed chunk is simply replicated over and over instead of multiple chunks being copied. One of the big benefits of using ODX-capable hosts and storage arrays in Hyper-V environments, then, is that you can provision new virtual machines with blank terabyte-sized virtual disks almost instantaneously. This is particularly beneficial in hosted cloud environments, where virtual machines are created and destroyed almost continuously.
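The arithmetic behind this speedup can be sketched as a back-of-the-envelope comparison. The function names and the one-round-trip cost model below are illustrative assumptions, not real APIs; the idea is simply that offloaded zeroing describes a whole extent once instead of shipping every zeroed chunk through the host:

```python
def zero_by_writing(size, chunk=64 * 1024):
    """Naive zeroing: the host issues one write per chunk over the wire."""
    ios = 0
    for _ in range(0, size, chunk):
        ios += 1            # one host-mediated write per 64K chunk
    return ios

def zero_by_offload(size):
    """Offloaded zeroing (toy model): a single token describing the whole
    extent tells the array to fill it with zeros internally."""
    return 1                # one round trip, regardless of size

fixed_vhd = 10 * 1024**3    # a 10 GB fixed virtual disk
print(zero_by_writing(fixed_vhd))   # → 163840 host I/Os
print(zero_by_offload(fixed_vhd))   # → 1
```

Five orders of magnitude fewer host-mediated operations is why "about three minutes" collapses to "less than a second" in the example above.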
ODX and VMM
System Center 2012 R2 Virtual Machine Manager and later also support a new API called CopyFile2 (also called Fast File Copy or SAN Copy) that can automatically speed up file copies by offloading them using ODX if the attached storage array supports it. If the array doesn’t support ODX, VMM tries the SMB CopyChunk API, which is generally used when the host is a clustered Scale-Out File Server (SOFS) and the file copy happens within the cluster. If CopyChunk can’t be used, VMM automatically reverts to a traditional network file copy, and if that fails for some reason (for example, because of transient issues with the underlying physical network), VMM falls back to the Background Intelligent Transfer Service (BITS), which was the default method VMM 2012 SP1 used for transferring data over the network. For more information on SAN Copy, see this link.
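The fallback chain VMM walks through can be sketched as a simple try-in-order pattern. The method names below mirror the chain described above, but the callables themselves are stand-ins invented for illustration, not real VMM APIs:

```python
def copy_with_fallback(path, transports):
    """Try each transfer method in order, falling back to the next
    whenever one fails (toy model of VMM's transfer selection)."""
    for name, method in transports:
        try:
            return name, method(path)
        except RuntimeError:
            continue        # this transport failed; try the next one
    raise RuntimeError("all transfer methods failed")

def odx_copy(path):
    raise RuntimeError("array does not support ODX")       # simulated failure

def smb_copychunk(path):
    raise RuntimeError("not a Scale-Out File Server")      # simulated failure

def network_copy(path):
    return f"copied {path} over the network"

chain = [("ODX (CopyFile2)", odx_copy),
         ("SMB CopyChunk", smb_copychunk),
         ("network copy", network_copy),
         ("BITS", lambda p: f"copied {p} via BITS")]

print(copy_with_fallback("disk.vhdx", chain))
```

Here the first two transports fail, so the copy lands on the plain network path; BITS would only be reached if that failed as well.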
ODX and file allocation unit size
If you are using ODX on Hyper-V hosts with very large (terabyte-range) virtual disks (VHD or VHDX) stored on storage arrays (for example, on CSV volumes of a Windows Server 2012 R2 Hyper-V host cluster), make sure you format the NTFS file system on these virtual disks with the optimal file allocation unit size. The default allocation unit size when formatting an NTFS volume is 4K, but for very large virtual disks your best option is 64K. This is particularly important for arrays like those in the EMC VMAX enterprise storage series, because storage on these arrays is optimized around 64K boundaries. Using a 64K allocation unit in such cases aligns the file system in your virtual machine with both the file system on the host and the RAID stripe size on the storage array (all three are 64K). And because the internal block size of VHD is 2 MB and of VHDX is 32 MB, both multiples of 64K, everything lines up to ensure maximum file copy performance. So to ensure that ODX works properly in such scenarios, regardless of whether the virtual machines or hosts are clustered, be sure to select 64K as the allocation unit size when formatting the NTFS file system on your virtual disks.
Additional resources on ODX
For more information about ODX support in Windows Server 2012 and later, see the article titled “Windows Offloaded Data Transfers Overview” in the TechNet Library.
For additional information on ODX support in Windows Server 2012 and later, see the topic “Offloaded Data Transfers” in the Windows Dev Center on MSDN.
For an examination of ODX performance with an HP storage array, see “Notes From The Field Using ODX With HP 3PAR Storage Arrays” on the Hyper-v.nu blog.
For an examination of ODX performance with a Dell storage array, see “Some ODX Fun With Windows Server 2012 R2 And A Dell Compellent SAN” on the Working Hard in IT blog.