Volume and File System Considerations for Windows Server 2012 Hyper-V


Windows Server 2012 contains a staggering number of new features – more so than any Windows operating system in recent memory. Although it has been Hyper-V that has gotten most of the attention, Microsoft has also made a tremendous number of improvements to the way that Windows Server 2012 is able to work with your storage. This article lists some of the more noteworthy storage improvements and general considerations that should be taken into account when implementing Windows Server 2012 and/or Hyper-V.

In Windows Server 2012 and Hyper-V 3.0, Microsoft provides a dizzying array of storage options that you can use for your servers. That being the case, I thought it might be fun to break down the options that are available to you and the pros and cons of each option.


Partition Styles

As was the case with Windows Server 2008 and 2008 R2, there are two partition styles that can be written to physical disks: Master Boot Record (MBR) and GUID Partition Table (GPT).

At first, the fact that Windows Server 2012 offers exactly the same partition styles as its predecessors probably seems unworthy of consideration. However, the choice of a partition style is becoming far more important than it was in the days of Windows Server 2008 or even Windows Server 2008 R2.

The MBR partition table has been the de facto standard since the 1980s. The problem with this type of partition table, however, is that its structure limits the maximum size of a partition to roughly 2 TB. A few years ago, when Windows Server 2008 was released, a 2 TB drive was considered large. Today there are hard disks being sold that are twice that size, and the helium-filled drives that are soon to be available should increase disk capacity even further.

GPT partitions are ideally suited to these large drives, because GPT has a theoretical maximum raw disk capacity of about 18 EB.
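The arithmetic behind those two ceilings is simple enough to sketch. The figures below assume the traditional 512-byte sector size; the exact numbers shift with larger sectors, but the proportions stay the same:

```python
SECTOR = 512  # bytes -- the traditional sector size assumed here

# MBR records partition sizes in 32-bit sector counts,
# so a partition tops out at 2^32 sectors.
mbr_limit_bytes = (2 ** 32) * SECTOR
print(f"MBR limit: {mbr_limit_bytes / 10**12:.1f} TB")   # ~2.2 TB

# The 18 EB figure usually quoted for GPT corresponds to
# a 64-bit limit of 2^64 bytes.
gpt_limit_bytes = 2 ** 64
print(f"GPT limit: {gpt_limit_bytes / 10**18:.1f} EB")   # ~18.4 EB
```

In other words, GPT raises the ceiling by a factor of roughly eight billion, which is why it comfortably accommodates today's multi-terabyte drives.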

GPT support has actually existed in the Windows operating system ever since the days of Windows XP and Windows Server 2003. That being the case, it seems fair to ask why GPT has not been more widely adopted.

For a long time, there simply wasn’t a need for GPT. MBR was perfectly adequate for the hard disks of the time, and Windows defaulted to using MBR. More importantly, however, many versions of Windows are unable to boot from a hard disk that uses a GPT partition. GPT is a 64-bit partition style, so none of the 32-bit editions of Windows support booting from GPT disks. Microsoft first provided the ability to boot from GPT disks in Windows Server 2008 and Windows Vista SP1. Even then, however, a 64-bit Windows operating system was required, as was EFI firmware.

File Systems

When it comes to the Windows file system, NTFS has been the file system of choice for Windows servers from the very beginning. Even though NTFS has been updated several times over the years, there was still a lot of room for improvement. That being the case, Microsoft introduced a new file system in Windows Server 2012 called ReFS (Resilient File System).

The ReFS file system is actually based on NTFS (which still exists in Windows Server 2012 by the way), but is designed to improve reliability and resilience. There are several different ways in which the new file system accomplishes this increased reliability and resilience, but it is worth noting that the only way to get the full benefit of the new file system is to use it in conjunction with Windows Storage Spaces.

Windows Storage Spaces uses redundant disks to protect against read and write failures. For example, if a read failure were to occur then Windows is able to simply read an alternate copy of the data from a redundant disk. Likewise, if a write failure (or even a full-blown disk failure) were to occur then the write operation can be redirected to another disk.

Another way that the ReFS file system provides resiliency is that it actually protects against data loss resulting from power failures. To see how this works, imagine that an update is being made to a file and that the power goes out in the middle of the update. If the file system were NTFS then there would most likely be data loss as a result of the power failure. This data loss occurs because the write operation overwrites the existing data on the disk. When the power goes out, the write operation is incomplete and only fragments of the original data remain.

The ReFS file system does things differently. It uses an allocate-on-write function rather than directly overwriting existing data. The basic idea is that if metadata needs to be updated then the new metadata is written to an unused portion of the disk rather than overwriting the original metadata. That way, if a power failure occurs and the write operation is disrupted then the original metadata still remains because it has not been overwritten. This helps to protect the file system against corruption.
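The allocate-on-write idea can be illustrated with a toy sketch. The class and field names below are purely illustrative (not ReFS internals): the new version of the metadata goes to a free block first, and only then does a single pointer flip make it current, so a crash before the commit leaves the old copy untouched.

```python
# Toy allocate-on-write sketch (illustrative names, not ReFS internals).
class Volume:
    def __init__(self):
        self.blocks = {}          # block number -> data
        self.next_free = 0
        self.metadata_ptr = None  # which block holds the current metadata

    def _allocate(self, data):
        blk = self.next_free
        self.next_free += 1
        self.blocks[blk] = data
        return blk

    def write_metadata(self, data, crash_before_commit=False):
        new_blk = self._allocate(data)  # write to unused space first
        if crash_before_commit:
            return                      # power failure: old pointer untouched
        self.metadata_ptr = new_blk     # atomic pointer switch = commit

    def read_metadata(self):
        return self.blocks.get(self.metadata_ptr)

vol = Volume()
vol.write_metadata("v1")
vol.write_metadata("v2", crash_before_commit=True)  # simulated power loss
print(vol.read_metadata())  # prints "v1" -- the original metadata survives
```

Contrast this with an in-place overwrite, where the same simulated crash would have left block contents that are part old data and part new.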

When it comes to writing data to the disk, the ReFS file system uses an allocate-on-write method that is very similar to the one that it uses for writing metadata. The difference, however, is that the data is also checksummed as it is written to disk.

Microsoft has a name for this type of data write. They call it an integrity stream. It is worth noting however, that integrity streams tend not to be compatible with some database applications because some databases need control over the way that data is written to disk.
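A minimal sketch of the checksum-on-write, verify-on-read idea looks something like the following. This is only an illustration of the concept: ReFS actually stores its checksums in metadata rather than in a simple table like this, and the function names are made up.

```python
import zlib

# Illustrative integrity-stream sketch: checksum data as it is written,
# verify on read. Names and layout are hypothetical, not the ReFS design.
store = {}  # filename -> (data, checksum)

def write_with_integrity(name, data):
    store[name] = (data, zlib.crc32(data))

def read_with_integrity(name):
    data, checksum = store[name]
    if zlib.crc32(data) != checksum:
        raise IOError(f"{name}: checksum mismatch, data is corrupt")
    return data

write_with_integrity("report.txt", b"quarterly numbers")
print(read_with_integrity("report.txt"))

# Simulate bit rot: alter the stored bytes without updating the checksum.
data, checksum = store["report.txt"]
store["report.txt"] = (b"qXarterly numbers", checksum)
try:
    read_with_integrity("report.txt")
except IOError as err:
    print(err)  # the corruption is caught at read time
```

The key point is that a silent change to the data no longer goes unnoticed, because the stored checksum no longer matches what is read back.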

Another way that the ReFS file system is more reliable than NTFS is in the way that it automatically detects and repairs corruption. Metadata and integrity stream data are periodically checked by background processes for accuracy. Depending upon how the storage is set up, data may reside on multiple physical disks. These redundant copies of the data can be used to validate one another (checksums are also used for validation). If any data is found to be bad then redundant data is used to repair the bad data.
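The scrub-and-repair behavior described above can be sketched in miniature. Again, this is a conceptual illustration rather than Storage Spaces internals: each mirrored copy is validated against a checksum, and any copy that fails validation is rewritten from a copy that still passes.

```python
import zlib

# Illustrative scrub-and-repair over mirrored copies (hypothetical names,
# not the Storage Spaces implementation).
def checksum(data):
    return zlib.crc32(bytes(data))

good = b"customer database page"
mirror = [bytearray(good), bytearray(good)]  # two redundant copies
expected = checksum(good)

mirror[0][0] ^= 0xFF  # simulate corruption on the first disk

def scrub(copies, expected_sum):
    """Validate every copy; repair bad copies from a good one."""
    valid = [c for c in copies if checksum(c) == expected_sum]
    repaired = 0
    for i, c in enumerate(copies):
        if checksum(c) != expected_sum:
            copies[i] = bytearray(valid[0])  # repair from a good copy
            repaired += 1
    return repaired

print(scrub(mirror, expected))  # prints 1 -- one copy was repaired
print(mirror[0] == mirror[1])   # prints True -- the copies agree again
```

Because the repair happens in the background, the bad copy is fixed before an application ever has a reason to notice it.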

Finally, the ReFS file system protects your data through fault isolation. If volume corruption is detected then Windows will automatically isolate the corrupt portion of the disk, thereby making sure that it is not used again.

Virtual Hard Disks

Another area in which Microsoft has made changes with regard to storage technology is virtual hard disks. The VHD format is still around, but Microsoft has introduced a new virtual hard disk type known as VHDX.

VHDX offers all-around better performance than VHD-based virtual hard disks, especially when it comes to dynamic disks and differencing disks. More importantly, VHDX does not suffer from the 2 TB size limitation found in VHDs. Virtual hard disks based on VHDX can be up to 64 TB in size.


As you can see, Microsoft has been working hard to improve storage in Windows Server 2012. Windows Storage Spaces, the ReFS file system, and the new VHDX virtual hard disks should go a long way toward making storage more flexible and reliable.
