Managing small virtual environments (Part 1) – The Basics

Introduction

As I review the technical documentation landscape, it’s apparent that information is generally skewed toward the enterprise and upper midmarket space. Although there is information out there for smaller organizations, the very basics of virtualization are often glossed over.

In this series, I will provide you with ground-up information about virtualization.

Why virtualize?

Even today, although more workloads run on virtual machines than on physical, there are organizations that have yet to take the virtualization plunge or that have done so only for small workloads. Often, organizations take their first small steps into virtualization when it comes time to replace older servers. In these scenarios, organizations are focused on the cost savings potential of virtualization through the reduction in the need for physical hardware.

This is all well and good, but why does this work? In short, virtualization is all about workload abstraction. Thanks to the hypervisor, which might be VMware vSphere, Microsoft Hyper-V or Citrix XenServer, among others, workloads run on top of the hypervisor-based software layer rather than directly on the underlying hardware. This abstraction makes it simple to shift the thinking from the server to the workload. In other words, IT starts to think less about the hardware necessary to run particular services and more about the services themselves.

There are a number of arguments that I’ve heard over the years about reasons not to virtualize, but most of them can be reasonably refuted when the environment is using the right tools. Here are some of the reasons that some organizations continue to avoid moving too heavily into virtualization.

We don’t want “all our eggs in one basket”

When a server running a single workload fails, just that workload is affected. However, as more workloads are added to a single hardware device, failure of that device affects an increasing number of workloads. In this thinking, by keeping workloads running separately on separate hardware, the organization is better served from an application availability perspective.

This is definitely old-school thinking, especially when you start to think about some of the benefits that can be had with virtualization. First, mainstream enterprise hypervisor products and the services around them provide significant benefits from an availability perspective. Even for organizations that have just a handful of servers, the introduction of virtualization can have a positive availability impact.

Here’s how this magic works:

  • Automated workload migration. In a virtualized environment, when a host fails, workloads can be configured to be brought up on a different physical host. This works with as few as two hosts configured in a cluster. While there will be some disruption to the workload while it boots on another host, the disruption is minimal and is much less than would be incurred in a traditional physical server environment.
  • High availability techniques. Modern hypervisors have availability mechanisms by which critical workloads can run simultaneously on multiple hosts and, if one of the hosts fails, the workload remains operational on the second host and users never even know that a failure occurred.

Obviously, some applications have their own availability mechanisms, such as application-level clustering, that achieve similar goals. However, a hypervisor-based solution means that administrators can deploy high availability for multiple services in a consistent way, which can make the entire environment easier to manage.
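To make this concrete, here is a minimal sketch of turning on vSphere HA for an existing cluster using pyVmomi, VMware's Python SDK. The vCenter address, credentials and the cluster name "SmallBizCluster" are placeholder values, and the same setting takes only a few clicks in the vSphere client; the point is simply how little configuration the basic restart-on-failure behavior requires.

    # Minimal pyVmomi sketch: enable vSphere HA on an existing cluster.
    # Connection details and the cluster name are placeholders.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ssl_ctx = ssl._create_unverified_context()  # skip cert checks; lab use only
    si = SmartConnect(host="vcenter.example.com", user="administrator",
                      pwd="password", sslContext=ssl_ctx)
    try:
        content = si.RetrieveContent()
        # Find the cluster object by name.
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.ClusterComputeResource], True)
        cluster = next((c for c in view.view if c.name == "SmallBizCluster"), None)
        view.Destroy()

        if cluster is not None:
            # Turn on HA ("das" in the API) so that virtual machines are
            # restarted on a surviving host when one host in the cluster fails.
            spec = vim.cluster.ConfigSpecEx(
                dasConfig=vim.cluster.DasConfigInfo(enabled=True))
            cluster.ReconfigureComputeResource_Task(spec, modify=True)
    finally:
        Disconnect(si)

Microsoft Hyper-V reaches the same outcome through Windows Failover Clustering, where the cluster service restarts virtual machines on another node after a host failure.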

I can easily size my physical servers to meet my workloads’ needs

When physical servers are initially provisioned, most administrators configure them to meet the maximum needs that will be incurred in the lifetime of the server. As time goes on, administrators can adjust resourcing by adding and removing memory, disk space and processing power. However, how many organizations actually do that with physical hardware?

In a virtual environment, with the focus shifting from the server to the workload, administrators can instead decide which resources are necessary for an application and, as needs change, can adjust those resource allocations through simple software tools. No more is there a need to crack open a server to add memory. Now, with a few clicks of the mouse, an administrator can add memory, disk and processing power from a shared resource pool.

In short, resource allocation modification to meet changing needs can be accomplished in seconds in a virtual environment.
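As an illustration, here is a minimal pyVmomi sketch that reconfigures a virtual machine to 2 vCPUs and 8 GB of RAM. The connection details and the VM name "app01" are placeholders; whether the change applies while the machine is running depends on the guest OS and on whether CPU and memory hot-add are enabled, and otherwise the VM needs to be powered off first.

    # Minimal pyVmomi sketch: adjust a VM's CPU and memory allocation.
    # Host, credentials and the VM name "app01" are placeholders.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    def find_vm(content, name):
        """Return the first virtual machine with a matching name, or None."""
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], True)
        try:
            return next((vm for vm in view.view if vm.name == name), None)
        finally:
            view.Destroy()

    ssl_ctx = ssl._create_unverified_context()  # skip cert checks; lab use only
    si = SmartConnect(host="vcenter.example.com", user="administrator",
                      pwd="password", sslContext=ssl_ctx)
    try:
        content = si.RetrieveContent()
        vm = find_vm(content, "app01")
        if vm is not None:
            # Request 2 vCPUs and 8 GB of RAM for this workload.
            spec = vim.vm.ConfigSpec(numCPUs=2, memoryMB=8192)
            vm.ReconfigVM_Task(spec=spec)
    finally:
        Disconnect(si)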

Virtualization requires too many skills

It’s true that adding a hypervisor layer to an environment requires adding a skill set to manage that environment, but the configuration doesn’t need to be complex or onerous to implement. In fact, both VMware and Microsoft make it easy for organizations to dip their toes into the virtualization waters and ease into the technology. Ultimately, administrators will come to see that managing a virtual environment doesn’t have to be greatly different from managing a physical one. A server is still a server, after all, even if it’s just a software construct.

To be fair, as organizations seek to add more virtualization-provided capabilities to the environment, the need for an expanded skill set will grow, but getting off the ground doesn’t require massive effort.

Our services are too big for virtual machines

If you’re running applications that you feel are too big to run inside virtual machines, then the chances are good that these are critical applications that must stay operational. That said, even large applications are easily accommodated by today’s hypervisors, which scale quite well.

For example, in vSphere 5, a single virtual machine can support up to 32 virtual CPUs, 1 TB of RAM and terabytes of storage. Hosts can support 160 logical CPUs, 2 TB of RAM and up to 2,048 virtual disks. Accordingly, “scale” is not really an issue with virtualization these days.

In fact, given the importance of these large workloads, you may actually benefit from virtualizing them, since you can then use virtualization’s abstraction technologies to improve availability and provide more flexibility in the environment.

Today’s reality

While these lines of reasoning may have had merit a few years ago, today’s hypervisors are more than up to the task of running even the most significant workloads and, when used properly, can bring major benefits to the business using them. For organizations that have not yet made a move into virtualization, or that are still using it only for simple workloads, now is the time to take a broader approach.

What about small environments?

Small businesses don’t generally have a lot of servers. For argument’s sake, let’s assume that there are four servers in a small environment. There may be a file server, an application server, a database server and a mail server, for example. How would virtualization benefit such a small environment?

If these four workloads are running on individual servers, then each server is probably configured for peak performance. Further, as these servers are replaced, they will be replaced with servers that are sized for peak performance. Finally, if any one server fails, it will take days to recover while the organization awaits new hardware, rebuilds the services and recovers the data from backup.

In a virtual environment, the following would be possible:

  • No more need to buy four servers. Just two servers would provide the organization with the horsepower it needs and with the availability it desires. All workloads could likely run on just one virtual host, with the second host being a member of a virtual server cluster. In this scenario, if the first host fails, the workloads will be brought up on the second host, resulting in a much quicker return to service than would be possible in the physical environment. (A rough sizing check for this scenario appears in the sketch after this list.)
  • A new focus on the application. In this scenario, the business can focus on the application and making sure it remains available and operational rather than on the hardware that runs the environment. In essence, the hardware environment becomes just a utility.
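For the sizing question, a back-of-the-envelope check is usually enough. The figures below are made up for the four example workloads, and the host configuration is hypothetical; the point is that in a two-host design, a single host must be able to carry every workload (with some headroom for the hypervisor) while its partner is down.

    # Rough consolidation check for the four-workload example.
    # Peak demands and host specs are illustrative, not measured values.
    workloads = {                 # (peak vCPUs, peak RAM in GB)
        "file server":        (2,  8),
        "application server": (4, 16),
        "database server":    (4, 32),
        "mail server":        (2, 16),
    }

    host_cores, host_ram_gb = 16, 96    # each of the two hypothetical hosts

    total_vcpu = sum(cpu for cpu, _ in workloads.values())
    total_ram = sum(ram for _, ram in workloads.values())

    print(f"Peak demand: {total_vcpu} vCPUs, {total_ram} GB RAM")
    # For failover, everything must fit on one host when the other is offline.
    print("Fits on a single host:",
          total_vcpu <= host_cores and total_ram <= host_ram_gb)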

Of course, there are some downsides. In this scenario, the organization will need to either build or buy the skill set necessary to operate the environment. This can be done by training internal staff or by hiring a consultant.

Further, there will need to be careful thought given to licensing, both at the hypervisor level and with regard to each individual virtual machine. I’ll discuss licensing in a future part in this series.

Summary

An understanding of the “why” behind virtualization sets the stage for helping SMBs make their way deeper into the world of virtualization. In the next part of this series, we’ll bust some virtualization myths and dig deeper into some virtualization features.
