Back to basics (Part 1): Virtualization 101

Introduction

Today’s information technology infrastructure operates in a radically different way than it did ten years ago. Although the data center may look similar in some respects – you still have servers and storage, for example – as you peel back the layers, you’ll find that the way the data center is operated has evolved significantly.

Before the concept of virtualization was extended to the x86 server market, when organizations needed a new service, the deployment of that service started with the purchase, installation and configuration of what could be fairly expensive hardware. In those days, individual servers were sized to accommodate peak loads. After all, you didn’t want the end user or customer experience to suffer because a server had, for example, too little RAM or too few disks. However, although servers were sized for peak loads, typical utilization for most resource components – processors, RAM and storage – fell far short of the maximum provided by the hardware. In other words, organizations sized each individual workload for its peak, and did so across all services.


Figure 1: Each server’s resource usage follows a similar pattern

At the same time, the technology landscape was booming and new services were being brought online at a fast and furious pace. To accommodate new workloads, individual servers were purchased for each of them in order to avoid potential conflicts between software or resources. Thus, the phrase “server sprawl” was born.


Figure 2: Server sprawl results in a lot of waste

As you add things up, each of these servers carried a price tag that, at the time, was not insignificant. On top of that, as each server was added to the data center, the organization had to assume new power and cooling costs. Further, given the pace of growth, data center space was a scarce commodity. Racks were being added at a furious pace while companies struggled to keep up with demand.

When you step back and look at it, companies were deploying servers that would rarely run at peak capacity and, with every new service, adding new costs in two ways: First, each hardware device carried ongoing capital costs due to the need to ultimately replace that device; second, the company’s ongoing operating budget had to be adjusted to account for new power and cooling costs.

In 2001, a company that had been around for three years released its first data center-focused product. That company, VMware, released version 1.0 of a product it called ESX (an acronym for Elastic Sky X). Although virtualization had existed in other forms, ESX marked one of the first serious, successful attempts at x86 server virtualization. I’m not going to talk much about VMware specifically yet; that will come later in this series. Suffice it to say, ESX and the rise of other hypervisor products have transformed IT as we know it.

Types of virtualization

In this series, I’m going to focus primarily on the kind of virtualization that makes companies like VMware run – x86 server virtualization. However, there are a number of other virtualization options out there, and I’ll touch briefly on some of them here. Although these are distinct kinds of virtualization, they are generally folded into organizations’ x86 server virtualization plans.

  • Network virtualization. VLANs – virtual LANs – have been around for a long time. A VLAN is a group of systems that communicate in the same broadcast domain, regardless of the physical location of each node. By creating and configuring VLANs on physical networking hardware, a network administrator can place two hosts – one in New York City and one in Shanghai – on what appears to those hosts to be the same physical network. The hosts will communicate with one another under this scenario. This abstraction has made it easy for companies to move beyond defining networks purely by physical connections and to create less expensive, more flexible networks that meet ongoing business needs (a short frame-tagging sketch follows this list).
  • Application virtualization. Virtualization is all about abstraction. When it comes to application virtualization, traditional applications are wrapped up inside a container that allows the application to believe it is running on an originally supported platform. The application also believes that it has access to the resources it needs to operate. Although virtualized applications are not really “installed” in the traditional sense, they are still executed on systems as if they were.
  • Desktop virtualization. Desktop and server virtualization are two sides of the same coin. Both involve virtualizing entire systems, but there are some key differences. Server virtualization abstracts server-based workloads from the underlying hardware; those workloads are then delivered to clients as usual, and clients don’t see any difference between a physical and a virtual server. Desktop virtualization, on the other hand, virtualizes the traditional desktop and moves execution of that client workload to the data center, where it is accessed via thin clients or other means.
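
To make the VLAN idea from the first bullet a little more concrete, here is a minimal Python sketch that inserts an IEEE 802.1Q tag into a raw Ethernet frame. It is purely illustrative – the frame bytes and VLAN ID are made up – but it shows that VLAN membership is just a small piece of metadata carried in the frame, which is what lets switches group geographically distant ports into a single broadcast domain.

```python
import struct

TPID_8021Q = 0x8100  # EtherType value that marks an 802.1Q-tagged frame


def tag_frame(frame: bytes, vlan_id: int, priority: int = 0) -> bytes:
    """Insert an 802.1Q tag after the destination and source MAC addresses.

    A standard Ethernet header starts with 6 bytes of destination MAC and
    6 bytes of source MAC; the 4-byte VLAN tag goes right after them.
    """
    if not 0 < vlan_id < 4095:
        raise ValueError("VLAN ID must be between 1 and 4094")

    # Tag Control Information: 3-bit priority, 1-bit DEI (0), 12-bit VLAN ID
    tci = (priority << 13) | vlan_id
    tag = struct.pack("!HH", TPID_8021Q, tci)
    return frame[:12] + tag + frame[12:]


# Hypothetical example: an untagged broadcast frame placed on VLAN 10
untagged = bytes.fromhex("ffffffffffff" "aabbccddeeff") + b"\x08\x00" + b"payload"
tagged = tag_frame(untagged, vlan_id=10)
print(tagged.hex())
```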

What is x86 virtualization?

x86 virtualization encompasses the various methods by which hardware resources are either abstracted or emulated so that multiple virtual machine instances can share a common hardware platform. x86 virtualization generally relies on a software layer – called the hypervisor – that sits on top of the underlying hardware and controls access to its resources.

Think of it this way: When you install Windows Server 2008 R2 on a physical server, you’re installing an operating system that will host multiple applications. These applications expect the operating system to manage the hardware layer. After all, Microsoft Word isn’t going to be responsible for direct hardware access; instead, Word makes system calls that instruct the operating system to act on its behalf. Figure 3 gives you a look at this traditional computing model. The arrows in Figure 3 demonstrate the communications that take place.


Figure 3: Traditional computing model
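
As a small illustration of that traditional model, here is a minimal Python sketch of an application asking the operating system, rather than the hardware, to do its work. The file name is made up; the point is simply that the program issues system calls (open, write, close) and the OS handles the disk on its behalf.

```python
import os

# The application never touches the disk controller directly; it asks the
# operating system to do so through system calls.
fd = os.open("example.txt", os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
os.write(fd, b"Hello from an ordinary application\n")  # write() system call
os.close(fd)                                           # close() system call
```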

Regardless of how you proceed, applications are still going to expect to see a native operating system hosting them. You’re not, for example, going to be able to yank Windows out from under Word and expect the application to run. But there has to be a way to make better use of the underlying hardware. As I mentioned before, if we imagine that Figure 3 is a server, its average utilization is pretty low, sometimes around 10%. So, how do we increase that utilization while ensuring that applications still have their own boundaries within which to operate?

We deploy a hypervisor software layer, as shown in Figure 4.


Figure 4: The hypervisor

Look closely at Figure 4. You’ll notice that the original operating system layer has been replaced. Now, the hypervisor layer sits between the actual hardware resources and each individual running copy of an operating system. Each OS runs inside a separate virtual machine. These individual virtual machines gain access to the hardware only through calls to the hypervisor, which is responsible for resource allocation.
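
One observable consequence of this arrangement is that a guest operating system can usually tell it has been handed virtual hardware. The sketch below, assuming a Linux guest on x86, checks /proc/cpuinfo for the “hypervisor” CPU flag that hypervisors typically expose to their virtual machines; treat it as an illustration rather than a definitive detection method.

```python
def running_under_hypervisor(cpuinfo_path: str = "/proc/cpuinfo") -> bool:
    """Return True if the CPU flags advertise a hypervisor (Linux x86 guests)."""
    try:
        with open(cpuinfo_path) as f:
            for line in f:
                if line.startswith("flags"):
                    return "hypervisor" in line.split()
    except OSError:
        pass  # Not a Linux system, or /proc is unavailable
    return False


if __name__ == "__main__":
    print("Running in a virtual machine?", running_under_hypervisor())
```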

The key word here is abstraction. Virtualization in most forms involves some kind of abstraction. With network virtualization in the form of VLANs, VLAN-aware switches use software to abstract the broadcast domain away from physical connections. In x86 server virtualization, the hypervisor layer abstracts physical resources in order to enable the creation of multiple virtual machines that share those same resources.

Virtualization benefits

In a previous section, I hinted at some of the benefits that can be had with different kinds of virtualization, but I did not discuss the benefits you can gain from x86 server virtualization itself. There are many, and they are the reasons that x86 server virtualization is predicted to become the norm rather than the exception by the end of 2011. In other words, full virtualization will surpass the 50% mark. Here are two benefits of virtualization; more will be explored in Part 2.

Workload separation

One of the reasons that the term “server sprawl” was coined was that servers were popping up everywhere, each serving a single purpose. The reason: Windows doesn’t always play nice when multiple workloads are served from the same installation. The beauty of x86 virtualization is that workloads can still be separated, but rather than being placed on separate physical servers, those same applications are installed inside individual software containers known as virtual machines. Inside this container, administrators install an operating system such as Windows, just as they would on physical hardware. The hypervisor is responsible for ensuring that each software container has the resources that the operating system expects to see. For example, Windows needs access to RAM, disk-based storage and a processor in order to operate. Further, the hypervisor makes sure that the virtual machine container is led to believe that it has other critical resources as well, such as a keyboard, mouse and display adapter.

Once an OS is deployed into this virtual machine, an administrator can install an application just as if this VM were a physical server, thus maintaining separation of workloads.
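
To picture what the hypervisor presents to each container, here is a hypothetical Python sketch of the kind of virtual hardware inventory a virtual machine might be defined with. The field names are illustrative only – they do not correspond to any particular hypervisor’s API – but they capture the idea that each VM gets its own complete, isolated set of “hardware” while sharing the same physical host.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class VirtualMachineSpec:
    """Illustrative description of the virtual hardware one VM is given."""
    name: str
    vcpus: int                       # virtual processors scheduled by the hypervisor
    memory_mb: int                   # RAM the guest OS believes it owns
    disks_gb: List[int] = field(default_factory=list)  # virtual disk sizes
    nics: int = 1                    # virtual network adapters
    has_keyboard_mouse: bool = True  # emulated input devices the guest expects
    has_display_adapter: bool = True


# Two separated workloads that would once have required two physical servers
web_vm = VirtualMachineSpec("web01", vcpus=2, memory_mb=4096, disks_gb=[40])
db_vm = VirtualMachineSpec("db01", vcpus=4, memory_mb=8192, disks_gb=[40, 200])
```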

Resource utilization improvements

The ability to separate workloads into their own containers while sharing hardware resources leads directly to much improved hardware utilization. For example, suppose you have five servers, each averaging about 10% utilization. With virtualization, you can consolidate those five workloads, as five virtual machine containers, onto a single physical host and, assuming the benefits are direct and linear, expect about 50% utilization, all other things being equal.
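
A back-of-the-envelope calculation makes that consolidation math concrete. This is a simplified sketch with made-up utilization figures; it assumes the hosts are identically sized and that utilization adds linearly, which real capacity planning would not.

```python
# Average utilization of five lightly loaded physical servers (made-up figures)
server_utilization = [0.10, 0.10, 0.10, 0.10, 0.10]

# Naive consolidation: all five workloads move to one identically sized host
consolidated = sum(server_utilization)
print(f"Consolidated host utilization: {consolidated:.0%}")          # 50%
print(f"Physical servers decommissioned: {len(server_utilization) - 1}")
```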

Figure 5 gives you a look at how this can improve usage. Obviously, this is an exaggerated and normalized diagram: it shows equal workloads that never change, which is rarely the case. In reality, bear in mind that there are peaks and valleys, so under a virtualized infrastructure, care must be taken to spread workloads with competing demands across different hosts. That said, you still get a major benefit here. In the example I just used, you are able to decommission four servers and leave just one in place. That means less opportunity for hardware failure, lower ongoing power and cooling costs and lower ongoing costs related to server replacement. I call that a win-win!


Figure 5: Improve resource usage with virtualization

Summary

We’ve just scratched the surface in this first part of the Back to Basics series. You have started to gain an understanding of the what and why behind virtualization, a thought process that will continue into Part 2. Later in this series, we will begin to explore some vendor-specific products and features in depth in an effort to put together the puzzle that is virtualization.
