Building Virtual Environments Using HP’s Sizing Tool (Part 3)

If you would like to read the other parts in this article series please go to:


Sizing a virtual environment often seems like a combination of art and science. With enough data, however, sizing calculations can move quickly toward the science side of the equation and become much more accurate, especially if you can gather application performance details. That's where HP's Unified Sizer tool comes in. Once downloaded and installed on your Windows machine, the product provides a series of sizing utilities.

Server role options – continued

This part kicks off with a look at the various services that an administrator can select for inclusion on a server. Note in Figure 1 below that it’s possible to individually configure the servers that are identified as being a part of the overall solution. Here, you can choose to include additional tools and services that will ultimately make their way to the generated bill of materials.

Figure 1: Server software services options

Speaking of the bill of materials, that's what's shown in Figure 2 below. This is the part I find pretty cool, as it can help take some of the guesswork out of the scenario. You can see that the HP Sizer tool generates a bill of materials that includes list pricing for each component. While most customers end up paying far less than what's listed, the tool provides organizations with at least some kind of ballpark cost to pin to the board. I have more thoughts on the bill of materials and the pricing that I will share later.

Figure 2: A Bill of Materials for the calculated solution

Pretty much every component that HP recommends in the sizing tool can be tweaked to fully customize the solution to individual needs. As evidenced by Figure 3, an administrator can even choose the desired RAID level for the solution along with a protection level. In this case, you can see that the VMFS datastores alone would require more than 2,100 disks to achieve the capacity and performance goals. You can also choose the array type – in this case, a 3PAR P9500 – and the connectivity infrastructure.

Figure 3: Choose storage options

Even better, you can view additional details about the configured solution, as shown in Figure 4 below. Here, you can see the raw totals. This is one of the areas where I start to have concerns about the proposed solution; I'll talk about that in a bit. As a teaser, though: why would anyone buy 1,612 146GB disks these days to meet the need? There are far superior options, although they may require mixing HP and third-party equipment.

Figure 4: The proposed storage solution

Likewise, you’re able to get a look at the full host system configuration details, shown in Figure 5.

Figure 5: Host system configuration

Figure 6 below gives you a detailed look at the bill of materials for the recommended solution. As you can see, the bulk of the storage cost comes from the need to buy 2,138 146GB 15K RPM SAS disks at $1,285 each, for a total of more than $2.7 million.

Figure 6: The storage bill of materials for the recommended solution
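The headline storage figure is easy to verify for yourself. The quick sketch below simply re-runs the arithmetic using the disk count and unit list price quoted above:

```python
# Sanity check of the storage line item from the bill of materials.
# The disk count and unit list price come from the sizer's output;
# actual street pricing would be far lower than list.
disk_count = 2138
unit_list_price = 1285  # USD list price per 146GB 15K RPM SAS disk

total = disk_count * unit_list_price
print(f"Disk subtotal: ${total:,}")  # → Disk subtotal: $2,747,330
```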

When you’re done reviewing the proposed solution, you can reset it and start over, save it, view it, or view the bill of materials.

Figure 7: Post-configuration options

Not always the best solution

As I mentioned before, while I believe the HP sizing tool is a good tool, it's far from perfect, and I wouldn't use it as the be-all and end-all for designing a complete environment. At best, I would use it as a guideline or a framework for discussion when creating a virtual environment, particularly where storage is concerned. As I used the tool, I was reminded of the phrase, "When all you have is a hammer, everything looks like a nail." The tool obviously considers only HP hardware, although there is certainly some benefit to buying all of your equipment from a single vendor.

Let’s look at another example that takes a more manageable approach to building a virtual environment. In this case, our fictitious company is planning to migrate 50 physical servers into a virtual environment. For demonstration purposes, every server is identical, with dual quad-core Xeon 5600 processors running at 2 GHz. Each server experiences, on average, 15% processor utilization, uses around 1 GB of RAM to run its services, and typically requires 160 IOPS and 35 MB/s of disk transfer. Again, these figures are for demonstration only and are not real-world numbers.

Figure 8: The demonstration environment is very cookie-cutter
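Because every server in this example is identical, the aggregate demand is straightforward multiplication. Here's a quick sketch of that math using the per-server figures above (the 8-core count assumes two quad-core sockets per server):

```python
# Aggregate resource demand for the fictitious 50-server environment.
# Per-server figures come from the example above; they are for
# demonstration only, not real-world measurements.
servers = 50
cores_per_server = 8      # dual quad-core sockets (assumption: 2 x 4)
ghz_per_core = 2.0        # Xeon 5600 at 2 GHz
cpu_util = 0.15           # average processor utilization
ram_gb_per_server = 1     # active RAM per server
iops_per_server = 160     # disk IOPS per server
mbps_per_server = 35      # MB/s of disk transfer per server

total_ghz = servers * cores_per_server * ghz_per_core * cpu_util
total_ram = servers * ram_gb_per_server
total_iops = servers * iops_per_server
total_mbps = servers * mbps_per_server

print(f"CPU demand:  {total_ghz:.0f} GHz")   # 120 GHz of active compute
print(f"RAM demand:  {total_ram} GB")        # 50 GB of active memory
print(f"IOPS demand: {total_iops}")          # matches the sizer's 8,000 IOPS
print(f"Throughput:  {total_mbps} MB/s")     # matches the sizer's 1,750 MB/s
```

The IOPS and throughput totals line up exactly with what the sizer reports later, which is a useful confidence check on its arithmetic.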

For everything else in the sizing tool, I’ve allowed the tool to use its defaults for calculations. The tool will use its own algorithms to build out the recommended environment and present it… with one exception.

As you can see in Figure 9, I’ve overridden the number of LUNs for VMFS to be 10 instead of just 1. You can also see in this figure that the tool has calculated that the solution will require 8,000 IOPS of disk performance and a storage network that can support 1,750 MB per second of throughput.

I’m also allowing the sizing tool to use 80% peak utilization for all resource metrics. This is the default setting and my goal here is to see how the tool reacts with information as close to defaults as possible.

Figure 9: Solution storage configuration
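One plausible reading of that 80% peak-utilization setting is as a simple headroom rule: provision enough capacity that measured demand never exceeds 80% of it. The sketch below shows my interpretation of that rule, not HP's documented algorithm:

```python
# A headroom rule of thumb: size each resource so that the measured
# demand sits at or below the peak-utilization ceiling. This is an
# interpretation of the sizer's 80% setting, not its actual algorithm.
def provisioned_capacity(demand, peak_ceiling=0.80):
    """Capacity to provision so demand stays at or below the ceiling."""
    return demand / peak_ceiling

# Applied to the demands the tool calculated for this solution:
print(provisioned_capacity(8000))   # IOPS to provision (~10,000)
print(provisioned_capacity(1750))   # MB/s to provision (~2,187)
```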

In Figure 10, you can see the server results from the defaults I provided. The calculator calls for a pair of ProLiant BL620c servers, each with 32 GB of RAM and a pair of disks for the local operating system, which, in this case, is vSphere. Given the input parameters, this makes sense. Of course, for a real-world deployment, I’d add much more RAM, since RAM is cheap. The proposed configuration easily supports the workloads, but high availability would not be possible, since one host could not support all fifty virtual machines on its own. I’d bump the RAM to at least 96 GB per host and probably go well beyond that figure.

Figure 10: Physical server configuration details
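The HA objection above is easy to check with a simple N+1 test: after losing one host, can the survivor hold every VM's RAM? A minimal sketch, where the per-host hypervisor overhead figure is a rough assumption of mine for illustration:

```python
# Quick N+1 availability check for the proposed two-host configuration.
# VM count and per-VM RAM come from the example; the hypervisor
# overhead figure is an assumed round number, not a vSphere spec.
def ha_capable(hosts, ram_per_host_gb, vm_count, ram_per_vm_gb,
               hypervisor_overhead_gb=4):
    """True if all VMs still fit in RAM after losing one host (N+1)."""
    surviving_hosts = hosts - 1
    usable_ram = surviving_hosts * (ram_per_host_gb - hypervisor_overhead_gb)
    return usable_ram >= vm_count * ram_per_vm_gb

print(ha_capable(2, 32, 50, 1))   # False: 28 GB usable < 50 GB of VM RAM
print(ha_capable(2, 96, 50, 1))   # True: 92 GB usable covers 50 GB of VM RAM
```

With the proposed 32 GB hosts the check fails, while the 96 GB figure suggested above passes comfortably.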

So, the server configuration was reasonable, if a little underconfigured, but the storage array is where I think the solution is very weak. First, a 78-disk solution will support 8,000 IOPS, but only if you use RAID 10, which is the tool's recommendation. If you need additional capacity and instead try to use RAID 5 or RAID 6, write IOPS will quickly become an issue. Further, you risk buying a whole lot of disk capacity just to meet IOPS needs, which is not a good long-term strategy. If IOPS are what's needed, buy storage that meets that need. I'm a huge fan of the growing market for hybrid storage arrays, which provide plenty of capacity while also delivering enough IOPS to support pretty much any reasonable workload.

Figure 11: The sizer’s storage calculations
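The write-penalty effect is worth seeing in numbers. The sketch below estimates spindle counts for the 8,000 IOPS workload using the commonly cited RAID write penalties and a rough ~180 IOPS per 15K RPM disk; the 70/30 read/write split is my assumption, and the sizer's 78-disk figure likely differs because it bakes in its own workload mix, spares, and per-disk ratings:

```python
import math

# Commonly cited back-end write penalties per RAID level.
RAID_WRITE_PENALTY = {"RAID 10": 2, "RAID 5": 4, "RAID 6": 6}

def disks_needed(total_iops, read_ratio, disk_iops=180, raid="RAID 10"):
    """Spindles needed once the RAID write penalty inflates back-end IOPS."""
    penalty = RAID_WRITE_PENALTY[raid]
    backend_iops = (total_iops * read_ratio
                    + total_iops * (1 - read_ratio) * penalty)
    return math.ceil(backend_iops / disk_iops)

# Assumed 70/30 read/write mix for the 8,000 IOPS workload:
for level in RAID_WRITE_PENALTY:
    print(level, disks_needed(8000, 0.70, raid=level))
```

Under these assumptions RAID 10 needs 58 spindles, RAID 5 needs 85, and RAID 6 needs 112, which illustrates the point: moving to parity RAID for capacity roughly doubles the spindle count needed to sustain the same write-heavy IOPS.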


To close out this series: I see the sizing tool as an interesting exercise. I initially had high hopes for it, but these kinds of decisions must be made more broadly than a single-vendor tool allows.
