vSphere Storage (Part 1) – A VAAI Primer

If you would like to read the next part in this article series please go to vSphere Storage (Part 2) – A VAAI Primer.

Introduction

Storage provides critical foundational support for a vSphere environment. It also happens to be one of the most expensive single infrastructure investments made by an organization. As such, ensuring that this investment is maximized to its fullest potential is generally of significant interest to an organization. At the same time, a lot of storage processing power goes unused.

To help companies maximize their storage investment and to offload certain storage-related processing tasks from host servers, VMware added a feature called VAAI (vStorage APIs for Array Integration) with the release of ESX/ESXi 4.1. VAAI is part of VMware's broader family of vStorage APIs, and the feature was significantly improved in vSphere 5.0.

VAAI offloads certain processor-intensive tasks from hosts to VAAI-enabled storage arrays. Such offloading can have a number of benefits for the organization:

  • Improved performance of host servers.
  • Potential ability to increase the virtual machine density of host servers since more processing power is freed up.
  • Much improved performance for certain storage operations. VAAI can instruct the array to handle certain tasks directly, without involving the network or the host, and can therefore avoid any latency present on those resources.

VAAI offers a broad set of capabilities and storage vendors provide differing levels of support for the various features. As such, not all arrays are created equal when it comes to VAAI support.

So, what kind of functionality does VAAI actually bring, anyway? That’s what we’re going to discuss now, broken down by vSphere version. By the way, the various capabilities included in VAAI are known as primitives. You’ll see that terminology a lot as you read about VAAI.

VAAI limitations

Before we discuss the various primitives that comprise VAAI, let’s discuss some limitations of the technology:

  • The source and target VMFS volumes need to have the same block size.
  • The source VMDK file and the target VMDK file need to use the same provisioning type (thick, thin).
  • The source virtual machine can’t have any snapshots.
  • A source VMFS volume can’t be made up of extents that reside on different arrays.
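Before relying on these primitives, it's worth confirming what your array actually supports, since the host can report VAAI status per storage device. A sketch of the relevant commands, run on the host itself (the device identifier shown is hypothetical; substitute one reported by your own host, and note that the command namespace changed between releases):

```shell
# ESXi 5.x: show VAAI primitive support for a specific device.
# The naa identifier below is a placeholder -- list your devices first with:
#   esxcli storage core device list
esxcli storage core device vaai status get -d naa.60060160123456789

# ESX/ESXi 4.1 used a different namespace for the same information:
esxcli vaai device list
```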

In this article, we’re going to focus on the VAAI features included in vSphere 4.1. In Part 2, we’ll move on to discuss the features included in vSphere 5.

vSphere 4.1

For its initial full foray into the world of direct array-based integration, VMware included support for three primitives. These are the features that provide administrators with the biggest impact.

Full copy

Imagine this scenario: You need to copy a massive, multi-terabyte file (perhaps a full virtual disk) from one location on your non-VAAI-enabled storage array to another location on the same array. This could be the result of a virtual machine cloning operation or the result of a Storage vMotion operation. In this traditional scenario, the following would take place:

  • The hypervisor initiates the copy operation.
  • The hypervisor begins the file copy operation.
  • The file is copied over the network to the host and then back down the network to the storage array. The hypervisor needs to read and write each and every block during the transfer.
  • When done, the hypervisor ends the file copy operation.

Although the end result is that the file is copied, this process suffers from a lot of inefficiency. First of all, the hypervisor has to read and write each and every block as it comes across the wire, expending its resources on this workload while it continues to manage its running virtual machines.

Further, this copy operation has a significant network impact for the entire duration of the copy process. That can have a negative impact on the rest of the environment.

Now, let’s suppose that you have a VAAI-enabled array that supports the Full Copy primitive. With this support, the hypervisor simply sends a single SCSI XCOPY command to the storage array instructing it to copy the file from the source location to a designated second location. And that’s the entirety of the hypervisor’s role in the operation.

This has a lot of benefits:

  • Less load on the hypervisor.
  • Faster copy operation.
  • Practically no impact on the network since the entire copy operation is handled inside the array.
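Each of the VAAI primitives can be toggled through a host advanced setting, which is useful when troubleshooting offload behavior with your array vendor. A sketch of checking and toggling Full Copy, run on the host itself (the 4.1 commands use esxcfg-advcfg; the ESXi 5.x equivalent is shown for comparison):

```shell
# ESX/ESXi 4.1: check whether Full Copy (XCOPY offload) is enabled (1) or disabled (0).
esxcfg-advcfg -g /DataMover/HardwareAcceleratedMove

# Temporarily disable it (e.g., while troubleshooting), then re-enable it.
esxcfg-advcfg -s 0 /DataMover/HardwareAcceleratedMove
esxcfg-advcfg -s 1 /DataMover/HardwareAcceleratedMove

# ESXi 5.x equivalent for checking the same setting:
esxcli system settings advanced list -o /DataMover/HardwareAcceleratedMove
```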

Block zeroing

When a virtual machine is copied from one location to another, the entire virtual disk is copied, including blank space: vSphere writes out the blocks that actually contain data as well as the individual zeroed blocks that make up the blank space. For example, a virtual disk may be 250 GB in size but hold only 50 GB of data. Only that 50 GB truly needs to be copied; time and cycles are wasted writing out the remaining 200 GB of zeroes.

Block zeroing makes this process a bit more streamlined. Using the SCSI WRITE SAME command, the host simply instructs the storage array to handle the zeroed blocks on its own, without the host having to write each zero across the wire. Again, the result is improved efficiency and faster operations.
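Like Full Copy, block zeroing is controlled by a host advanced setting, here named HardwareAcceleratedInit. A sketch of checking and toggling it on an ESX/ESXi 4.1 host (run on the host itself):

```shell
# Check whether the block zeroing (WRITE SAME offload) primitive is enabled (1).
esxcfg-advcfg -g /DataMover/HardwareAcceleratedInit

# Disable or re-enable it if your array vendor recommends doing so.
esxcfg-advcfg -s 0 /DataMover/HardwareAcceleratedInit
esxcfg-advcfg -s 1 /DataMover/HardwareAcceleratedInit
```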

Hardware assisted locking

vSphere 4.1 includes a third VAAI primitive known as hardware-assisted locking (also called atomic test and set, or ATS). Hardware-assisted locking provides an alternative to traditional SCSI reservations, whose entire purpose is to protect the VMFS metadata from being modified inadvertently when multiple hosts access it at the same time.

With the introduction of hardware assisted locking through VAAI, the use of SCSI reservations can be basically eliminated, which also eliminates instances of SCSI reservation conflicts. SCSI reservation conflicts can result in significant performance issues as hosts are forced to wait until the lock is released before they can carry out certain operations.

With hardware assisted locking, vSphere is able to use locks at a much more granular level, thus reducing the chance of a conflict taking place and, as a result, improving the overall performance of the environment. Specifically, with hardware assisted locking, only the virtual machine files that are related to a target virtual machine are locked, so the remaining virtual machines are not impacted by the lock.

You may recall that, with older versions of vSphere, there was concern and guidance about the number of virtual machines that should be stored in one VMFS. A big part of this guidance was due to the fact that the entire volume would be locked when intensive operations, such as vMotion, took place, adding the potential for significant delay and negatively impacting the whole environment.

With newer versions of vSphere, which are more efficient to begin with, and with VAAI support, administrators don’t have to worry as much about the number of virtual machines stored in a VMFS since there is less chance of the entire volume being locked. This can help administrators make more efficient use of both their time and their storage.

Here are some other activities that result in metadata locking:

  • Creating a file.
  • Deleting a file.
  • Creating a template.
  • Creating a new virtual machine.
  • Creating a VMFS datastore.
  • Expanding a VMFS datastore.
  • Migrating a virtual machine with vMotion.

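Hardware-assisted locking has its own advanced setting, under the VMFS3 namespace rather than DataMover. A sketch of checking and toggling it on an ESX/ESXi 4.1 host (run on the host itself):

```shell
# Check whether hardware-assisted locking (ATS) is enabled (1) or disabled (0).
esxcfg-advcfg -g /VMFS3/HardwareAcceleratedLocking

# Disable or re-enable it; as with the other primitives, only change this
# when troubleshooting with guidance from VMware or your array vendor.
esxcfg-advcfg -s 0 /VMFS3/HardwareAcceleratedLocking
esxcfg-advcfg -s 1 /VMFS3/HardwareAcceleratedLocking
```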
Summary

With just these three primitives available in vSphere 4.1, administrators that have VAAI-enabled storage are able to truly push their storage investment to the limit and use it in a way that adds more efficiency to the overall virtual environment. In Part 2 of this series, we’ll discuss VAAI and vSphere 5.

