vSphere Storage (Part 2) – A VAAI Primer

If you would like to read the first part in this article series please go to vSphere Storage (Part 1) – A VAAI Primer.

Introduction

Storage provides critical foundational support for a vSphere environment. It also happens to be one of the most expensive single infrastructure investments an organization makes. As such, ensuring that this investment is used to its fullest potential is generally of significant interest. At the same time, a lot of storage processing power goes unused.

To help companies maximize their storage investment and to allow them to offload some storage-related processing tasks from host servers, VMware added a feature called VAAI (vStorage APIs for Array Integration), part of the broader vStorage API family, with the release of ESX/ESXi 4.1. This feature was significantly improved in vSphere 5.0.

In Part 1 of this series, you learned about the VAAI features that shipped with vSphere 4.1. In this article, you will learn about the VAAI features that shipped with vSphere 5.0.

Thin Provision Stun (block)

This API was actually added in vSphere 4.1, but it was undocumented and only a few storage vendors supported it at the time. With the release of vSphere 5, Thin Provision Stun is a fully supported primitive, ready to be used by storage vendors and customers to help keep their storage assets running in tip-top shape.

Thin provisioning is a way for storage administrators to make more efficient use of storage assets. Many administrators tend to overprovision storage in anticipation of future needs. In some cases, this storage is never fully used, and all that overprovisioned capacity simply goes to waste. With thin provisioning, an administrator can provision, for example, a 100 GB volume for a new Windows server. That server volume may require only 40 GB for now, though, which means 60 GB would sit unused for the time being.

With thin provisioning, that 60 GB of allocated – but empty – space can be used for other purposes. In this way, thin provisioning can be a huge boon for organizations that can’t perfectly plan storage workloads.
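
To make the arithmetic concrete, here’s a quick back-of-the-envelope sketch in Python. The numbers come from the example above, and the ten-server scenario is a hypothetical extension for illustration, not output from any vSphere API:

```python
# Numbers from the example above: a 100 GB thin volume, 40 GB actually written.
provisioned_gb = 100
used_gb = 40

# With thin provisioning, the unwritten space stays in the shared pool.
reclaimable_gb = provisioned_gb - used_gb
print(f"{reclaimable_gb} GB remains available to other workloads")  # 60 GB

# Hypothetical extension: ten such servers backed by 500 GB of physical disk.
servers, physical_gb = 10, 500
print(f"Overcommit ratio: {servers * provisioned_gb / physical_gb:.1f}:1")  # 2.0:1
```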

But there is a downside. With thin provisioning, out-of-disk-space conditions become a real possibility, since it is now possible to overallocate physical storage. When space runs out and a virtual machine requests additional space that’s allocated but not yet backed by physical storage, the VM will simply crash. Basically, the VM is attempting to write data to storage that it believes it has but doesn’t, and both the hypervisor and the VM handle the situation poorly.

Not good.

With Thin Provision Stun enabled, when one of these out-of-space conditions occurs, the affected VMs are “stunned” – or paused – while the administrator is presented with an error message requesting further instructions. This gives the administrator an opportunity to add physical space to the depleted volume in a way that maintains the integrity of the workload.
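
As an illustration, here’s a minimal sketch using pyVmomi (the open-source Python SDK for the vSphere API). It assumes an already-connected vim.VirtualMachine object named vm; the function name and the connection itself are assumptions for the example. A stunned VM surfaces a pending question that must be answered before it resumes:

```python
def report_if_stunned(vm):
    """Check whether a pyVmomi VirtualMachine is paused on a pending question."""
    q = vm.runtime.question            # non-None while the VM is stunned/paused
    if q is None:
        print(f"{vm.name}: running normally")
        return
    print(f"{vm.name} is paused: {q.text}")
    for choice in q.choice.choiceInfo:
        print(f"  option {choice.key}: {choice.label}")
    # Once the admin has grown the underlying datastore, the question can be
    # answered to resume the VM, for example:
    # vm.AnswerVM(questionId=q.id, answerChoice=q.choice.choiceInfo[0].key)
```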

This is what I consider a reactive safety net rather than a proactive process. If you’re carefully monitoring storage, this situation should never occur, but if it does, it’s nice to know that VAAI has you covered.

Thin Space Reclamation (block)

As mentioned in the previous example, when a thinly provisioned volume needs additional space, additional blocks are allocated to that volume as long as space is available. This process continues until the volume reaches the maximum size specified by the administrator when the virtual disk was created.

Thin provisioning space allocation isn’t just a one-way street, though. The previous example implies that servers just continue to grow unabated. In many cases, however, data may be removed from a volume. In these cases, unless specific steps are taken, that freed-up space goes to waste rather than being returned to the thin provisioning pool.

That’s where thin space reclamation comes in. When a virtual machine’s hard disk starts to use less space, the hypervisor can instruct the array to reclaim the freed blocks for other purposes.

Let’s consider a real world example in vSphere. vSphere performs a number of operations using its VMFS file system. Inside VMFS are individual files representing virtual hard disks, snapshots and more. Now, suppose you delete a virtual machine stored on a VMFS datastore. The space it occupied can be used by another virtual machine or for other purposes. With VAAI-enabled space reclamation, you’re assured that this freed up space is returned to the overall pool for reuse, with the work handled at the hardware level.

Note that this API is also known as UNMAP, after the SCSI command it uses.
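
To illustrate the bookkeeping involved, here’s a conceptual Python sketch of a thin pool with an explicit reclaim step. This is a toy model for illustration only; none of the names correspond to vSphere or array code:

```python
class ThinPool:
    """Toy model of a thin-provisioned storage pool (illustration only)."""

    def __init__(self, physical_blocks):
        self.free = set(range(physical_blocks))  # blocks available to any volume
        self.allocated = {}                      # volume name -> blocks in use

    def write(self, volume, blocks_needed):
        """Allocate physical blocks on first write, as a thin array would."""
        if blocks_needed > len(self.free):
            raise RuntimeError("thin pool exhausted")  # the out-of-space case
        grant = {self.free.pop() for _ in range(blocks_needed)}
        self.allocated.setdefault(volume, set()).update(grant)

    def unmap(self, volume, block_count):
        """Return freed blocks to the pool; without this, they stay stranded."""
        freed = {self.allocated[volume].pop() for _ in range(block_count)}
        self.free.update(freed)

pool = ThinPool(physical_blocks=100)
pool.write("vm1.vmdk", 40)   # guest writes 40 blocks
pool.unmap("vm1.vmdk", 10)   # guest deletes data; UNMAP hands 10 blocks back
print(len(pool.free))        # 70 blocks free for other volumes again
```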

Full Copy (NFS)

Although NFS has been growing in popularity for use in vSphere, none of the VAAI primitives introduced in vSphere 4.1 worked with NFS-based volumes. However, with vSphere 5, a number of NFS APIs were introduced, including the Full Copy API.

The Full Copy API for NFS is sometimes considered the NFS version of the block-level Full Copy primitive described in Part 1 of this series, and at a basic level that’s accurate, but there are some differences. Most notably, the NFS Full Copy API is not engaged during Storage vMotion operations, whereas the block-level Full Copy API is.

With vSphere 5.1, this API was updated to include support for array-based snapshots. This was added primarily to support VMware View.

Space Reservation (NFS)

In many ways, the VAAI primitives are about helping administrators manage thin provisioning and save space in the datastore. However, vSphere isn’t always about space conservation. In fact, in some cases, administrators want just the opposite. For example, when creating a virtual machine on block-based VMFS, there’s an option to create eager zeroed thick virtual disks. With this disk type, if an administrator specifies a 100 GB virtual disk, vSphere proactively zeroes out (prepares) all 100 GB of space ahead of time. This can result in slightly better performance over time.

The NFS Space Reservation API brings this ability to NFS volumes, allowing the full size of a virtual disk to be reserved on the array at creation time.
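
For context, here’s a minimal pyVmomi sketch of how an eager zeroed thick disk is requested. It assumes an existing connection, a vim.VirtualMachine object named vm, and a SCSI controller at key 1000 with unit 1 free (all assumptions for this example). On a VAAI-capable NFS array, the Space Reservation primitive is what lets the array honor a request like this:

```python
from pyVmomi import vim  # pyVmomi: the Python SDK for the vSphere API

# Build a device-change spec that adds a new eager zeroed thick disk.
disk_spec = vim.vm.device.VirtualDeviceSpec()
disk_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
disk_spec.fileOperation = vim.vm.device.VirtualDeviceSpec.FileOperation.create

disk = vim.vm.device.VirtualDisk()
disk.capacityInKB = 100 * 1024 * 1024      # 100 GB, as in the example above
disk.controllerKey = 1000                  # assumed SCSI controller key
disk.unitNumber = 1                        # assumed free slot on that controller

backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo()
backing.diskMode = "persistent"
backing.thinProvisioned = False            # thick, not thin
backing.eagerlyScrub = True                # zero every block up front
disk.backing = backing
disk_spec.device = disk

# Reconfigure the (assumed) vm object; all space is reserved at create time.
task = vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[disk_spec]))
```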

Extended Statistics (NFS)

VMware invented the block-level VMFS file system and, as a result, the vSphere hypervisor has complete visibility into volume-level details. This information can be critically important to the proper functioning of your vSphere environment. With NFS-based datastores, however, vSphere is simply a guest on an existing NFS volume. Another system is responsible for managing the volume details, making it more difficult for the hypervisor to get vital information from it.

That’s where the Extended Statistics for NFS API comes in.

With this API, the administrator is able to gain critical insight into, for example, thin provisioning status and levels for VMDK files stored on NFS mounts. Before this, getting that kind of information was a much more difficult process that required a lot of legwork.
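
As a rough illustration of the kind of per-VM detail this makes visible, here’s a short pyVmomi sketch, again assuming an already-connected vim.VirtualMachine object named vm:

```python
# For each datastore the VM touches, report committed vs. provisioned space.
for usage in vm.storage.perDatastoreUsage:
    committed_gb = usage.committed / (1024 ** 3)             # space actually written
    uncommitted_gb = (usage.uncommitted or 0) / (1024 ** 3)  # provisioned, unwritten
    print(f"{usage.datastore.name}: {committed_gb:.1f} GB committed, "
          f"{uncommitted_gb:.1f} GB still thin")
```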

Additional considerations

It should be pointed out that many of these APIs require VMFS 5 in order to operate. Further, bear in mind that if you’re running unpatched vSphere 5.0, you may run into minor issues. After all, VAAI is still software, and software can be buggy!

Summary

With that, our discussion of the various VAAI primitives is complete. Bear in mind that VAAI brings organizations the potential for significant business benefits through better space utilization and much faster operations (for example, full copy operations can be 5 to 10 times faster with VAAI than without).
