Server Virtualization Assessment using Microsoft Assessment and Planning 3.1 Toolkit (Part 1)

This is a three-part article on performing a server virtualization assessment using the Microsoft Assessment and Planning Toolkit 3.1 (MAP 3.1). Part 1 provides a high-level overview of a server virtualization assessment and describes the type of data that you need to collect in order to perform one. The information presented in this article assumes that you have application servers deployed in a Microsoft Windows environment.

What Is A Server Virtualization Assessment?

In order to design and plan an effective virtualization architecture, you must take the time to perform a thorough server virtualization assessment. This is a crucial step whether your server virtualization project has a smaller, local scope or encompasses your entire enterprise. A server virtualization assessment involves capturing current configuration, performance, and environmental data for the legacy infrastructure that you intend to virtualize, assessing that data against a set of requirements and limitations for the virtualization infrastructure, and producing key reports that will assist you in constructing a detailed blueprint for the conversion of the legacy infrastructure to the virtualization infrastructure. At the conclusion of a server virtualization assessment, you should have defined:

  • Workloads (i.e., operating system and application stack) that are candidates for virtualization.

  • Workloads that are not candidates for virtualization because of hardware, performance, or application compatibility limitations.

  • Potential combinations of candidate workloads that provide optimum resource utilization within the performance boundaries of the physical server hardware and virtualization software (e.g., Windows Server 2008 Hyper-V).

  • Physical servers that can be repurposed as virtualization hosts during the conversion from the legacy infrastructure to the virtualization infrastructure.

  • Preliminary estimate of the reduction of consumables such as power, cooling, and rack space.
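The third item above, combining candidate workloads within the capacity of a physical host, can be illustrated with a simple first-fit packing sketch. The host capacities, reserve figures, and workload numbers below are hypothetical placeholders, not measured values, and a real assessment would weigh far more dimensions (disk, network, peak coincidence):

```python
# Minimal first-fit packing of candidate workloads onto virtualization
# hosts, constrained by peak CPU (MHz) and memory (MB). All capacity
# and workload figures below are illustrative, not measured.

HOST_CPU_MHZ = 2 * 4 * 2600 * 0.80   # 2 sockets x 4 cores x 2.6 GHz, 80% utilization ceiling
HOST_MEM_MB = 32768 - 2048           # 32 GB minus a reserve for the host partition

def pack_workloads(workloads):
    """Assign each workload to the first host with room for its peak load."""
    hosts = []  # each host: remaining capacity plus assigned workloads
    for w in sorted(workloads, key=lambda w: w["mem_mb"], reverse=True):
        for h in hosts:
            if h["cpu"] >= w["cpu_mhz"] and h["mem"] >= w["mem_mb"]:
                h["cpu"] -= w["cpu_mhz"]
                h["mem"] -= w["mem_mb"]
                h["vms"].append(w["name"])
                break
        else:  # no existing host fits: provision a new one
            hosts.append({"cpu": HOST_CPU_MHZ - w["cpu_mhz"],
                          "mem": HOST_MEM_MB - w["mem_mb"],
                          "vms": [w["name"]]})
    return hosts

workloads = [
    {"name": "web01",  "cpu_mhz": 1800, "mem_mb": 4096},
    {"name": "sql01",  "cpu_mhz": 5200, "mem_mb": 16384},
    {"name": "file01", "cpu_mhz": 900,  "mem_mb": 2048},
    {"name": "app01",  "cpu_mhz": 3100, "mem_mb": 8192},
]
hosts = pack_workloads(workloads)
print(len(hosts), [h["vms"] for h in hosts])  # 1 [['sql01', 'app01', 'web01', 'file01']]
```

Sorting largest-first before packing is a common heuristic that reduces stranded capacity; the MAP toolkit performs a far more sophisticated version of this placement analysis.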

Once you complete a server virtualization assessment, you will also be in a position to quantify how much your virtualization project will decrease the number of physical servers deployed in your environment. You can use this information to estimate the potential return on investment (ROI) of the project and to justify it.

What Data Do I Need To Collect?

There are three major data sets that you need to support a server virtualization assessment. In particular, you need to collect:

  • Server hardware and software inventories

  • Server performance metrics

  • Environmental details

In addition, you will need to gather configuration details for your environment: workgroup and legacy domain membership; the Active Directory forest, domain, and site structure; and the mapping of IP subnets to physical locations.

Hardware and Software Inventory

Determining which existing workloads are good candidates for virtualization depends on collecting specific hardware and software information for each server that is within the scope of your project. The physical hardware information that you need to collect includes Basic Input/Output System (BIOS) parameters, processor type, number of processor cores, physical memory configuration, number and type of network interface cards (NICs), disk storage details, USB devices, serial and parallel port devices, as well as any other specialized hardware components with current workload dependencies. The software information that you need to collect consists of the operating system and installed applications, as well as any software updates, hotfixes, or service packs installed on the physical server. In addition, you need to capture the list of services running on each server as well as the configuration information associated with each running service.
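As a small, portable sketch of what an inventory record might look like, the snippet below gathers a handful of the fields listed above using only the Python standard library. A full assessment would instead query WMI on each Windows server (classes such as Win32_BIOS, Win32_Processor, and Win32_NetworkAdapter) to capture the complete hardware and software picture:

```python
import platform
import socket

def collect_basic_inventory():
    """Gather a small, portable subset of the inventory fields.

    This is only a sketch: a real assessment pulls BIOS, NIC, disk,
    service, and installed-software data from WMI on each server.
    """
    return {
        "hostname": socket.gethostname(),
        "os": platform.system(),
        "os_version": platform.version(),
        "machine": platform.machine(),
        "processor": platform.processor(),
    }

inv = collect_basic_inventory()
print(sorted(inv.keys()))
```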

Performance Metrics

Once you have compiled the hardware and software inventory data for the physical servers that fall within your project scope, you need to collect performance metrics for each server. Specifically, you need to capture processor, memory, network, and disk performance parameters. Performance metrics must be collected over a time span long enough to capture the cyclical peaks and troughs associated with normal application execution. The general recommendation is to capture data for a minimum of one month and to ensure that the capture period encompasses high-load events. In order to diminish the impact of capturing performance data on the server, set the collection interval to no shorter than five minutes; in other words, sample each counter no more than once every five minutes. Please note that the performance parameters mentioned below are Microsoft Windows operating system counters.
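A quick back-of-the-envelope calculation shows why the five-minute floor keeps the capture manageable. The counter and server counts used below are hypothetical:

```python
# At one sample every five minutes over a 30-day window, each counter
# yields 8,640 data points.
SAMPLE_INTERVAL_MIN = 5
CAPTURE_DAYS = 30

samples_per_counter = CAPTURE_DAYS * 24 * 60 // SAMPLE_INTERVAL_MIN
print(samples_per_counter)  # 8640

# With, say, 10 counters per server and 200 servers in scope (both
# figures illustrative), the data set remains modest:
total_samples = samples_per_counter * 10 * 200
print(total_samples)  # 17280000
```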

The primary processor performance metrics that you need to collect are the percent processor time and the percent interrupt time for each processor. The percent processor time counter reflects the processing power that the workload requires on the server over time. The percent interrupt time counter reflects the time spent processing interrupts associated with devices and peripherals. This information will be used to determine the allocation of virtual processors to each virtual machine and the workload combinations that optimize processor utilization on the virtualization host.

The memory performance metrics that you need to capture are the available memory bytes and pages per second parameters. The available memory bytes parameter represents the amount of physical memory in bytes available for allocation to a process; in other words, it is the amount of free physical memory. The pages per second parameter reflects the rate at which memory pages are read from or written to disk to resolve hard page faults. This information will be used to determine the allocation of memory for each virtual machine and workload combinations that optimize memory utilization on the virtualization host.

The network performance metrics that you need to collect are the total bytes per second for each physical network adapter. This information will affect the design of virtual networks and virtual machine connections to optimize the network load across multiple host adapters.

The storage performance metrics that you need to gather are the real-time and average values for read and write operations to each physical disk in the server. This information will affect the design of the storage subsystem to ensure that sufficient capacity and throughput is available to virtual machines.
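Whatever the counter, the assessment typically reduces each collected time series to two figures: the sustained average and the peak that the virtualization host must still accommodate. A minimal sketch, using hypothetical percent processor time samples:

```python
def summarize(samples):
    """Reduce a counter's time series to the figures the assessment
    uses: the sustained average and the peak it must accommodate."""
    return {"avg": sum(samples) / len(samples), "peak": max(samples)}

# Hypothetical '% Processor Time' samples taken at 5-minute intervals:
cpu = [12.0, 15.5, 48.0, 22.5, 9.0, 71.0, 18.0]
s = summarize(cpu)
print(round(s["avg"], 1), s["peak"])  # 28.0 71.0
```

Note how different the average (28%) and the peak (71%) are here; sizing hosts on averages alone is what causes performance problems when several co-located workloads peak together.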

Environmental Details

Environmental data that you collect as part of your server virtualization assessment includes information that can help you with project justification by demonstrating the cost reduction benefits of migrating to a virtualization infrastructure. The main data that you need to collect is essentially the power, cooling, and rack space cost associated with each physical server that is a virtualization target. Don’t forget to take into account storage, backup power devices, and any other peripherals associated with each server in your cost calculations. Once you have defined the virtualization candidates, the number of virtualization hosts, and the associated subsystems (such as storage) that are needed to implement the virtualization infrastructure, you can estimate the cost savings that should result from the reduction in physical servers and subsystems.
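To make the cost-reduction argument concrete, here is a rough annual power-and-cooling saving estimate. Every figure below (server and host wattages, cooling overhead, electricity tariff, server counts) is a placeholder that you would replace with measured values from your own environment:

```python
# Rough annual power-cost saving from consolidating legacy servers
# onto virtualization hosts. All constants are illustrative.
SERVER_WATTS = 350        # assumed average draw of a legacy server
HOST_WATTS = 550          # assumed average draw of a larger virtualization host
COOLING_FACTOR = 0.5      # assumed extra cooling load per watt of IT power
KWH_COST = 0.10           # assumed cost per kWh

def annual_saving(n_legacy, n_hosts):
    """Estimated yearly saving from retiring n_legacy servers in
    favor of n_hosts virtualization hosts, including cooling."""
    watts_saved = n_legacy * SERVER_WATTS - n_hosts * HOST_WATTS
    watts_saved *= 1 + COOLING_FACTOR          # add the cooling load
    return watts_saved / 1000 * 24 * 365 * KWH_COST

# e.g. consolidating 40 legacy servers onto 5 hosts:
print(annual_saving(40, 5))
```

Remember to extend the same arithmetic to storage arrays, UPS capacity, and reclaimed rack space, as the article notes.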

How Do I Use the Data?

In terms of the actual server virtualization assessment, the hardware and software inventory and performance metrics are evaluated against a set of limitations derived from the virtualization environment in order to identify candidate virtualization workloads, potential combinations of workloads that, when hosted concurrently, will optimize virtualization host resource utilization, and the number of virtualization hosts that will be required. From a cost, service, and management perspective, it is recommended that you define two or three standard server configurations to support small, medium, and large virtualization workloads. Defining the physical server configuration may take a couple of iterations in which you vary the hardware characteristics to optimize the number of virtualization hosts.

Given a starting set of physical virtualization host configurations, you will now be able to evaluate your physical server workload data against the hardware and performance limits derived from the virtual hardware capabilities of a Windows Server 2008 Hyper-V virtual machine, as well as the physical hardware configuration of a virtualization host. Hardware and performance limits are based on basic memory, disk space, processor, and network components. More advanced or specialized limits may include the presence of serial, parallel, or other hardware devices.

When undertaking the hardware assessment process, you will evaluate each targeted physical server against the virtual machine hardware limits. If a targeted physical server falls outside these limits, you will most likely exclude the associated workload from the virtualization candidate pool, and the assessment proceeds to the next targeted physical server. If a targeted physical server complies with all the limits, then you can evaluate the associated workload and performance metrics against the defined performance limits.
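The screening step described above can be sketched as a simple filter. The two numeric limits reflect the Windows Server 2008 Hyper-V guest maximums (four virtual processors and 64 GB of RAM per virtual machine); the device-based exclusion rule is deliberately simplified, since real exclusion criteria would cover more peripheral types:

```python
# Screen inventoried servers against virtual machine hardware limits
# before any performance analysis. Limits per Windows Server 2008
# Hyper-V guest maximums; device rules simplified for illustration.
VM_MAX_VCPUS = 4
VM_MAX_MEM_MB = 64 * 1024

def is_candidate(server):
    """Return True if the server's workload fits within VM limits."""
    if server["cores"] > VM_MAX_VCPUS:
        return False
    if server["mem_mb"] > VM_MAX_MEM_MB:
        return False
    if server["usb_devices"]:          # no USB pass-through available
        return False
    return True

servers = [
    {"name": "web01", "cores": 2, "mem_mb": 4096,  "usb_devices": []},
    {"name": "fax01", "cores": 1, "mem_mb": 1024,  "usb_devices": ["fax-modem"]},
    {"name": "hpc01", "cores": 8, "mem_mb": 32768, "usb_devices": []},
]
candidates = [s["name"] for s in servers if is_candidate(s)]
print(candidates)  # ['web01']
```

Servers that fail the screen, like the two excluded here, are exactly the "workloads that are not candidates" called out earlier in the article.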

It should be obvious by now that performing a server virtualization assessment manually would be cumbersome, and next to impossible for anything more than a handful of physical workloads. Luckily, the MAP 3.1 toolkit can help you automate the hardware, software, and performance data collection for your legacy environment, assess the data against defined virtualization limits, and produce reports of workload virtualization candidates.


In Part 2 of this article, you will learn about the features of the Microsoft Assessment and Planning 3.1 toolkit that can assist you in performing a server virtualization assessment.
