New licensing model
I’m going to lead this “what’s new” discussion with a short look at VMware’s new consumption-based licensing model. I wish to preface this section by saying that the rumor mill indicates that VMware may change this model in order to make it less onerous for customers. It’s entirely possible that, by the time you read this article, VMware will have made official changes to the licensing model. For now, however, here’s the skinny:
The good news
VMware has eliminated CPU core count as a licensing metric. So, if AMD happens to release a 1,024-core behemoth, you won’t need to worry about vSphere licensing as a processor-based limiting factor.
The bad news
VMware has replaced core count with a new metric called vRAM. Each edition of vSphere 5 includes an entitlement for a particular amount of vRAM per license. vRAM usage is the aggregate virtual memory configured across all powered-on virtual machines under vCenter management. So, if you have two vSphere 5 hosts, each with 96 GB of physical RAM, and a total of four running virtual machines, each with 4 GB of virtual RAM assigned, you have 192 GB of physical RAM but only 16 GB of vRAM in use.
Each license for each edition of vSphere includes certain vRAM entitlements, which are outlined below:
- Essentials: 24 GB
- Essentials Plus: 24 GB
- Standard: 24 GB
- Enterprise: 32 GB
- Enterprise Plus: 48 GB
It should be noted that VMware is also removing the physical RAM constraints that were present in all vSphere editions except Enterprise Plus. Previously, vSphere servers – except those running on Enterprise Plus – were limited to 256 GB of physical RAM. This is no longer the case and you can pack in as much physical RAM as you like without hitting a licensing barrier. However, as you actually begin to use that RAM, you will start counting vRAM – monitored by vCenter – and will need to ensure that you have enough vSphere licenses to cover the amount of vRAM in use.
You do not need to monitor vRAM usage on a per-host basis. vRAM is a pooled entitlement, meaning that, for example, if you have two hosts, each with 192 GB of physical RAM and two processor sockets, and you buy a total of four Enterprise Plus licenses (one for each socket), you can use up to 192 GB of vRAM across both hosts in aggregate. If you wanted to, you could assign all 192 GB to virtual machines running on one host or, more likely, split the workload between the hosts, as long as the total RAM assigned to your running virtual machines did not exceed 192 GB.
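To make the pooled arithmetic concrete, here is a minimal Python sketch. The entitlement figures come from the list above; the edition, license count and VM sizes are simply this section’s example values:

```python
# Per-license vRAM entitlements (GB) under the original vSphere 5 scheme.
VRAM_PER_LICENSE_GB = {
    "Essentials": 24,
    "Essentials Plus": 24,
    "Standard": 24,
    "Enterprise": 32,
    "Enterprise Plus": 48,
}

def vram_pool_check(edition, license_count, running_vm_ram_gb):
    """Compare the pooled vRAM entitlement against vRAM configured on running VMs."""
    pool = license_count * VRAM_PER_LICENSE_GB[edition]
    in_use = sum(running_vm_ram_gb)
    return pool, in_use, in_use <= pool

# Two dual-socket hosts, four Enterprise Plus licenses, four 4 GB VMs:
pool, in_use, compliant = vram_pool_check("Enterprise Plus", 4, [4, 4, 4, 4])
print(f"pool={pool} GB, in use={in_use} GB, compliant={compliant}")
# pool=192 GB, in use=16 GB, compliant=True
```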
The rumor mill
Rumor has it that VMware will soon announce some significant changes to the controversial vRAM licensing scheme introduced with the vSphere 5 release.
Here’s what the rumor mill is saying about what changes we might see (all values are per socket/license):
- Essentials: 32 GB vRAM entitlement (was 24 GB)
- Essentials Plus: 32 GB vRAM entitlement (was 24 GB)
- Enterprise: 64 GB vRAM entitlement (was 32 GB)
- Enterprise Plus: 96 GB vRAM entitlement (was 48 GB)
Further, rumors indicate that VMware will cap the amount of RAM on any single VM that counts against the vRAM pool at 96 GB. So, if you have a VM with 128 GB of RAM assigned to it, only 96 GB will count in the vRAM calculations used for licensing purposes. Under the original scheme, a massive VM was financially infeasible; this change brings things back to reality a bit. The original worry was that large companies looking to virtualize tier 1 applications would end up paying a substantial sum in VMware licenses to do so.
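If those rumors pan out, the arithmetic changes as in this short sketch. The 96 GB cap is the rumored figure and the VM sizes are hypothetical:

```python
RUMORED_VM_CAP_GB = 96  # rumored per-VM cap on vRAM that counts against the pool

def counted_vram_gb(vm_ram_gb_list, cap_gb=RUMORED_VM_CAP_GB):
    """vRAM charged against the pool under the rumored capped scheme."""
    return sum(min(ram_gb, cap_gb) for ram_gb in vm_ram_gb_list)

# A 128 GB monster VM plus two ordinary 8 GB VMs:
print(counted_vram_gb([128, 8, 8]))  # 112 -- the monster counts as only 96 GB
```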
Again, this is all rumor at this point. If it actually comes to fruition, I will update this article in a blog post here at virtualizationadmin.com.
ESX is really and truly gone
vSphere 5 is an ESXi-only release; there is no longer an ESX edition of the product. With previous releases, you could choose between ESX and the smaller, nimbler, more secure ESXi.
Increased virtual machine maximums
As is often the case when VMware releases a major new version of ESX/ESXi/vSphere, you can now build ever-larger virtual machines thanks to an increase in the maximum virtual hardware that can be assigned to a virtual machine. Again, some of the maximums depend on the specific edition of vSphere, so if you’re using Standard, you won’t be able to scale like you would with Enterprise Plus.
That said, vSphere 5 brings a lot to the table for virtualizing even the most intensive tier 1 applications, allowing up to 32 vCPUs to be assigned to a single virtual machine under Enterprise Plus. The Standard and Enterprise editions of vSphere 5 support up to eight vCPUs per virtual machine.
It’s useful to understand just how scalable things can be with vSphere 5. The table below shows some of the key maximums for individual virtual machines across three vSphere versions (4.0, 4.1 and 5.0), as well as key maximums for vSphere hosts themselves. I will greatly expand on this information in a future article in this series.
VM-based | 4.0 | 4.1 | 5.0
--- | --- | --- | ---
vCPUs | 8 | 8 | 32
RAM | 255 GB | 255 GB | 1 TB

Host-based | 4.0 | 4.1 | 5.0
--- | --- | --- | ---
Logical CPUs/host | 64 | 160 | 160
vCPUs/host | 512 | 512 | 2048
VMs/host | 320 | 320 | 512
RAM/host | 1 TB | 1 TB | 2 TB
Table 1: vSphere maximums
New virtual machine version
Many of the hardware improvements I mentioned are available because VMware has introduced a new version of its virtual machine hardware format (version 8). In addition to enabling a number of new maximums, version 8 also brings 3D graphics support, allowing Windows Aero, and support for high-speed USB 3.0 devices. These new hardware capabilities extend the potential use cases for vSphere, particularly in desktop scenarios.
Guest operating system support enhancements
While VMware has long been the leader in the breadth and depth of supported guest operating systems, vSphere 5 takes it to a whole new level by adding support for Mac OS X Server 10.6-based virtual machines on the platform.
But there’s a major catch.
You can only run Mac OS X-based workloads under vSphere when vSphere is running on Apple-labeled hardware (as per Apple’s pretty strict licensing). So, I guess this is a great thing if you want to run your data center on end-of-life Apple Xserve hardware? Otherwise, I’m not sure what kind of enterprise-grade data center hardware companies would expect to purchase from Apple.
VMFS 5
With vSphere 5, VMware has significantly overhauled the Virtual Machine File System (VMFS) and introduced some compelling new capabilities. First and foremost, anyone who has worked with VMFS for long will know that the maximum size of a single VMFS extent has been 2 TB for a very long time. If you needed to increase the amount of space in a VMFS datastore, you had to add extents, each one again limited to 2 TB. With VMFS 5, the maximum size of an extent is now 64 TB – a 32-fold increase.
In addition to making things easier to manage by increasing the extent size, VMware has removed one key decision from the VMFS volume creation process: block size. Under older versions of VMFS, administrators had to actively choose a block size for the VMFS datastore – 1, 2, 4 or 8 MB – and larger block sizes were necessary to support larger files, such as big VMDKs. Under VMFS 5, administrators can simply use 1 MB blocks for any size VMFS volume. This will prove to be a huge boon to administrators who have been burned in the past trying to grow virtual disks on volumes created with too-small block sizes. To work around this issue, many administrators (myself included) opted to simply use an 8 MB block size for all VMFS volumes, regardless of size.
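To see why that decision mattered, here is a quick Python sketch of the commonly cited VMFS 3 relationship between block size and maximum file size (values are approximate; each limit actually falls a few hundred bytes shy of the round number):

```python
# Approximate VMFS 3 limits: the block size chosen at datastore creation
# capped the largest single file (such as a VMDK) the volume could hold.
VMFS3_MAX_FILE_GB = {1: 256, 2: 512, 4: 1024, 8: 2048}  # block size (MB) -> max file (GB)

for block_mb, max_file_gb in sorted(VMFS3_MAX_FILE_GB.items()):
    print(f"{block_mb} MB blocks -> ~{max_file_gb} GB maximum file size")
```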
Likewise, VMFS 5 has introduced smaller sub-blocks, too. In previous versions of VMFS, sub-blocks were 64 KB in size, meaning that a 2 KB file would consume a minimum of 64 KB of disk space. With VMFS 5, the sub-block size has been reduced to 8 KB. Further, very small files – those that are 1 KB and smaller – don’t consume data blocks at all anymore. They’re actually stored in the file descriptor area of the metadata, and it’s not until a file grows beyond 1 KB in size that it uses block-based disk space.
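Here is a simplified Python model of that allocation behavior; the resident_limit_kb parameter is my own shorthand for the descriptor-resident storage of files of 1 KB and smaller, and real VMFS allocation is more involved:

```python
import math

def space_consumed_kb(file_kb, sub_block_kb, resident_limit_kb=0):
    """Disk space a small file consumes, rounded up to whole sub-blocks.

    resident_limit_kb models VMFS 5 keeping files of 1 KB or less in the
    file descriptor metadata instead of in data blocks.
    """
    if file_kb <= resident_limit_kb:
        return 0  # stored in the file descriptor, no sub-block consumed
    return math.ceil(file_kb / sub_block_kb) * sub_block_kb

print(space_consumed_kb(2, 64))    # 64 -- a 2 KB file on older VMFS
print(space_consumed_kb(2, 8, 1))  # 8  -- the same file on VMFS 5
print(space_consumed_kb(1, 8, 1))  # 0  -- a 1 KB file lives in the descriptor
```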
Individual VMDK files in a VMFS datastore are still limited in size to 2 TB minus 512 bytes; vSphere 5 does not allow you to create larger VMDK files. However, if you use a raw device mapping (RDM) in physical compatibility mode instead, you won’t have to worry about that limit and can address up to 64 TB of disk space under vSphere 5.
Finally, VMFS 5 allows you to store more individual files on a volume. Previous VMFS versions allowed around 30,000 files per volume; VMFS 5 increases this limit to more than 100,000 files.
Upgrading to VMFS 5
VMware has indicated that administrators can upgrade their existing VMFS datastores to VMFS 5 in place, in a completely non-disruptive way, but there are a couple of things to note:
- The block size will not change. If you upgrade an existing VMFS store, it will retain its existing block size (for example, 8 MB). It takes a full wipe and rebuild to change the block size.
- Many administrators will opt for Storage vMotion. Many are (rightly) loath to do an in-place upgrade of anything, lest it go horribly wrong. As such, many administrators will opt to create new VMFS 5 storage with 1 MB blocks and then use Storage vMotion to individually migrate virtual machines to the new VMFS, as sketched below. I strongly suspect that this method will be considered a best practice.
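For illustration, here is a minimal sketch of that migration using pyVmomi, the Python vSphere SDK, which is one way to drive Storage vMotion programmatically (most administrators would simply use the vSphere Client). The vCenter address, credentials, VM name and datastore name are all placeholders, and error handling is omitted:

```python
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

# Connect to vCenter. Depending on your pyVmomi version, you may need to
# pass an SSL context or disable certificate validation here.
si = SmartConnect(host="vcenter.example.com", user="administrator", pwd="secret")
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    """Walk the vCenter inventory and return the first object with this name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next((obj for obj in view.view if obj.name == name), None)
    finally:
        view.Destroy()

vm = find_by_name(vim.VirtualMachine, "app01")                # VM to migrate
target_ds = find_by_name(vim.Datastore, "vmfs5-datastore01")  # new VMFS 5 datastore

# Storage vMotion: relocate the VM's files to the new datastore while it runs.
spec = vim.vm.RelocateSpec(datastore=target_ds)
WaitForTask(vm.RelocateVM_Task(spec))

Disconnect(si)
```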
Summary
By now, you’re probably seeing a lot of new value in vSphere 5. I will continue this overview in Part 2 of this series.
If you would like to read the next part in this article series please go to What’s new in vSphere 5 (Part 2).