Virtualization, the cloud and traditional computing: What’s the right answer?

Introduction

Although virtualization has created massive opportunity, it's not the only model out there. Is it the right one, or should organizations look more closely at legacy physical implementations or embrace cloud alternatives? In this article, we'll look at all three models, weigh the pros and cons of each, and see why one size does not fit all.

Let’s Begin

Let's start this article with a quick poll: How many times today did you see or hear the word "cloud" on the websites you visit, in the technology literature you read or in the sales presentations you heard? I'm willing to bet that the word came up many times, in many contexts, and that attached to it were many promises.

It wasn't all that long ago that we heard the same kind of hype for virtualization. No longer did IT organizations have to buy a separate server for every workload; a single server could now host a few, or even dozens of, applications without breaking a sweat. The traditional x86 computing model could only be pushed so far: servers were spec'd for peak loads, but for much of their operating life they hummed along at single- to low-double-digit utilization. Virtualization turned this idea on its head. Many applications would share the resources of one or just a few servers, and peak loads would be managed by the hypervisor to ensure that all workloads remained available and operational.
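To make that consolidation argument concrete, here's a minimal back-of-the-envelope sketch. The server count and utilization figures are illustrative assumptions, not measurements from any real environment:

```python
import math

# Back-of-the-envelope consolidation math: all figures are illustrative.
servers = 20                  # dedicated physical servers, one workload each
avg_utilization = 0.10        # ~10% average CPU utilization per server
headroom = 0.70               # don't plan hosts beyond 70% sustained load

# Total "real" work being done across the estate, in whole-server units
total_demand = servers * avg_utilization           # 2.0 servers' worth

# Hosts needed once a hypervisor can pool that demand
hosts_needed = math.ceil(total_demand / headroom)  # 3 hosts

print(f"{servers} physical servers -> {hosts_needed} virtualization hosts")
print(f"Consolidation ratio: {servers / hosts_needed:.1f}:1")
```

Even with conservative headroom built in, the pooled demand from twenty lightly loaded servers fits comfortably on three hosts.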

So, was the physical server model "wrong" in any way? Not really. It was simply inefficient; with virtualization, the shared nature of the resources could bring down costs and, when architected properly, improve overall service availability. But one thing remained constant: although some organizations simply outsourced their data centers, most kept their virtual environments running in their own internal data centers. The paradigm shift was internal and localized; it was tactical and operational in nature. There have certainly been massive benefits, and virtualization has transformed the way IT does business, but the business itself has remained basically unaffected. Internal IT operations shifted around and maybe costs came down a bit, but the company kept on doing business as usual.

Virtualization has also brought with it massive demand for new skill sets. While larger organizations have always needed skills such as deep storage management and Fibre Channel, virtualization has pushed the need for these skill sets into smaller organizations that wish to reap the full benefit of the virtualization technology wave. Further, learning the hypervisor itself has become a major need, with companies such as VMware establishing popular, sought-after certifications to meet the demand. The need to carefully monitor and manage these modern environments has also created very successful new companies such as Veeam and VKernel, and has launched many sites dedicated to the craft, including our very own virtualizationadmin.com.

Even with all of this success and the associated behind-the-scenes IT changes necessary to get virtual environments in place, we still have physical servers. Very few organizations are 100% virtualized, and there are still many use cases that call for physical servers. Sure, for most of them a virtual server could do the job, but the task is performed more efficiently with a more traditional approach. More on this later.

Enter the cloud

Now we have the cloud, which means different things to different people. For some, the cloud is another weapon in IT's arsenal for providing services to users; the difference is that these services are provided by third parties, or, in legacy vernacular, outsourced. For others, the cloud is a way to jettison the expensive IT department and, as much as I hate the analogy, treat IT like electricity. I don't believe that's an achievable goal, but it does serve to demonstrate the range of possibilities with the cloud.

While the cloud can be akin to outsourcing a service, it can also provide organizations with metered services that let them pay only for what they use, rather than paying for infrastructure that might otherwise sit idle. When the right providers are selected, a cloud deployment can also provide on-demand scalability, allowing an organization to deploy massive services without having to buy massive infrastructure. Or the cloud can let an organization run at one performance level 98% of the time and, when, for example, a new product launches, ramp up capacity for the other 2%.
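Some rough arithmetic shows why that 98%/2% pattern favors metering. All of the hourly rates below are invented for illustration; real provider pricing varies widely:

```python
# Compare owning peak capacity year-round vs. metering a cloud burst.
# All rates here are illustrative assumptions, not real provider pricing.
hours_per_year = 8760
baseline_units = 10          # capacity units needed ~98% of the time
peak_units     = 50          # capacity units needed during a launch (~2%)
burst_hours    = int(hours_per_year * 0.02)

owned_unit_hour = 0.05       # effective cost of owned, always-on capacity
cloud_unit_hour = 0.12       # metered cloud rate (pricier per unit-hour)

# Option A: buy enough infrastructure to cover the peak all year
own_peak = peak_units * owned_unit_hour * hours_per_year

# Option B: own the baseline, rent the extra units only for the burst
hybrid = (baseline_units * owned_unit_hour * hours_per_year
          + (peak_units - baseline_units) * cloud_unit_hour * burst_hours)

print(f"Own peak capacity: ${own_peak:,.0f}/yr")
print(f"Own base + burst:  ${hybrid:,.0f}/yr")
```

Even though the metered rate per unit-hour is more than double the owned rate, renting the burst only when it's needed wins by a wide margin.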

Panacea

So, what is the right answer? Is it to go back to the days of physical servers? Is the right answer to simply continue down the virtualization path alone? Or should organizations simply throw away their data centers and move absolutely everything to the cloud?

The honest answer is that no single answer is correct. Every organization has different requirements, and there are use cases for all of the options. In fact, to best meet business needs, most organizations will adopt some combination of them. Physical servers will remain in place where there is heavy local demand. Virtual infrastructures certainly aren't going anywhere soon, either; these now well-tuned environments are humming along, providing foundations on which businesses run. "Rip and replace" for a perfectly functional service isn't something many businesses want to consider unless doing so has a major benefit.

Physical infrastructure

There are still times when a solid, reliable physical server just makes sense. At Westminster College, for example, we have just a few physical servers remaining. Here are their roles and the reasons we keep them around:

  • Security camera server. We maintain a server dedicated to managing and storing all of the video feeds from our IP-based security cameras. This system uses significant network and storage resources and doesn't really need the availability features found in our virtual environment. Although we could deploy relatively inexpensive iSCSI-based storage en masse to support this service, we've instead decided to deploy it with a lot of inexpensive, direct-attached SATA disks (see the sizing sketch below).
  • Backup services. At Westminster, we continue to maintain a traditional server and data backup service based on Microsoft Data Protection Manager 2010. Although DPM 2010 can run virtually, we house our backup server in a secure secondary campus location, physically separate from the virtual infrastructure it protects.
  • Database system. At present, our primary SQL Server 2008 R2 instance runs on a physical server, primarily due to resource constraints in the virtual environment. We've recently beefed up that environment with a new SAN and additional RAM, and I now feel the infrastructure will handle anything we can throw at it. As such, this service will migrate to a virtual machine in the near future.

In each of these cases, there was a reason that we maintained these workloads on physical servers rather than moving them to virtual machines.
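To give a feel for why we chose lots of cheap direct-attached disk for the camera server, here's a quick sizing sketch. The camera count, bitrate and retention window are illustrative assumptions, not our actual configuration:

```python
# Rough sizing for a camera server on direct-attached SATA disk.
# Camera count, bitrate and retention are illustrative assumptions.
cameras = 32
mbps_per_camera = 2.0        # average recorded stream, megabits/second
retention_days = 30

seconds = retention_days * 24 * 3600
total_gb = cameras * mbps_per_camera * seconds / 8 / 1000  # Mb -> GB

print(f"Aggregate write load: {cameras * mbps_per_camera:.0f} Mbps")
print(f"Storage for {retention_days} days: {total_gb / 1000:.1f} TB")
```

Tens of terabytes of steady sequential writes is exactly the kind of load that inexpensive SATA handles well, and that expensive shared storage is wasted on.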

When it comes to raw performance and local scalability – that is, inside the confines of the organization – physical servers beat their virtual counterparts hands down. Some of today's beefiest physical servers sport 24 processing cores, RAM densities of 192GB and support for many terabytes of internal storage. If you're seriously pushing a huge workload and you can't break it down into smaller chunks, a physical server remains the best fit. If processing cores matter to you, then VMware vSphere's limit of 8 virtual CPUs might be too strict for your needs.
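That sizing tension can be framed as a simple placement check. The limits below reflect the era discussed here (vSphere's 8-vCPU cap, 24-core physical hosts); the per-VM RAM ceiling is an assumption, so substitute your own platform's maximums:

```python
# Crude placement check: does a workload fit inside hypervisor limits?
# Limits reflect the era discussed in the article; the per-VM RAM
# ceiling is an assumption. Substitute your platform's real maximums.
VM_MAX_VCPUS = 8
VM_MAX_RAM_GB = 255
PHYS_MAX_CORES = 24
PHYS_MAX_RAM_GB = 192

def placement(cores_needed: int, ram_gb_needed: int) -> str:
    if cores_needed <= VM_MAX_VCPUS and ram_gb_needed <= VM_MAX_RAM_GB:
        return "virtual machine"
    if cores_needed <= PHYS_MAX_CORES and ram_gb_needed <= PHYS_MAX_RAM_GB:
        return "physical server"
    return "scale out: split the workload across multiple machines"

print(placement(4, 16))      # -> virtual machine
print(placement(16, 128))    # -> physical server
```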

Obviously, there are downsides to physical servers as well, and these have been well-documented: inefficient use of resources, significant power and cooling requirements, and more.

Virtual infrastructure

I've already mentioned some major benefits of virtualization; it has been the next step in IT's journey, and a great one. Today, I don't have to wait weeks to provision a new service; I simply fire up a new virtual machine, which takes all of five minutes. If a service starts to hit 100% processor utilization, I simply hot-add another processor. If disk space becomes tight, I expand the virtual disk. Virtualization has given IT incredible flexibility and the ability to react to and correct resource issues immediately.
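In script form, that reactive loop looks something like the sketch below. The `hypervisor` client and its methods are hypothetical stand-ins; real platforms expose equivalents through their own SDKs and tooling:

```python
# Reactive capacity management, sketched against a HYPOTHETICAL client.
# get_stats(), hot_add_vcpu() and grow_disk() stand in for real
# hypervisor SDK calls; they are not an actual vendor API.
CPU_THRESHOLD = 0.90         # sustained utilization that triggers hot-add
DISK_THRESHOLD = 0.85        # fraction full that triggers a disk grow

def rebalance(vm, hypervisor):
    stats = hypervisor.get_stats(vm)             # hypothetical call
    if stats.cpu_utilization >= CPU_THRESHOLD:
        hypervisor.hot_add_vcpu(vm, count=1)     # hypothetical call
    if stats.disk_used_fraction >= DISK_THRESHOLD:
        hypervisor.grow_disk(vm, extra_gb=50)    # hypothetical call
```

The point isn't the specific thresholds; it's that corrections which once meant a purchase order now happen while the workload keeps running.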

If I had to pick a downside to virtualization, I'd have to say complexity, even though I see the whole environment as elegantly simple. When I step back, though, I realize just how many moving parts – servers, storage, hypervisor, networking gear, Fibre Channel switches – there are to keep it all running. That said, it's still easier than managing 50 or 60 physical servers!

Cloud services

As I stated earlier, I see cloud services as an outsourcing play; in essence, unless you're building your very own private cloud, you're "renting" someone else's infrastructure and riding your services on top of it. Or you're renting someone else's complete service offering. Either way, you don't own the infrastructure. For some organizations, this is a good thing, but it's not for everyone.

Let's assume for this article that the cloud plays revolve around the use of services such as Amazon's cloud or Microsoft's Azure. With Azure, for example, and assuming that customers are using Hyper-V, there is the potential to move virtual machines back and forth between local Hyper-V instances and the Azure-based cloud. Imagine the potential for disaster recovery scenarios, or scenarios in which a company had to quickly and efficiently scale services beyond what its existing Internet connectivity could handle. As an aside, I see Internet bandwidth as the cloud's Achilles' heel; until we get cheap, fast, reliable connectivity anywhere and everywhere, the full potential of the cloud can't be realized.
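A little arithmetic makes the bandwidth point concrete. The image size and link speeds below are illustrative:

```python
# How long does it take to push a VM image offsite? Illustrative numbers.
vm_image_gb = 100

def transfer_hours(size_gb: float, link_mbps: float) -> float:
    # GB -> megabits, then divide by link rate and convert to hours
    return size_gb * 8 * 1000 / link_mbps / 3600

for link in (10, 100, 1000):                     # link speeds in Mbps
    print(f"{link:>5} Mbps link: "
          f"{transfer_hours(vm_image_gb, link):6.1f} hours")
```

At typical WAN speeds, moving even a single VM image offsite is measured in hours, which is why connectivity, not compute, sets the ceiling.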

With the plethora of physical-to-virtual (P2V) conversion tools available, similar goals can be achieved by moving existing physical machines to virtual machines and, from there, to the cloud.

The answer

I gave it away earlier, but I believe the "right" answer is actually to analyze each and every service and pick the right platform for it. I indicated earlier that Westminster still has a few services running on physical servers, where the use cases make sense. We have much of our infrastructure running virtually as well. And where it makes sense, we've adopted cloud-based services. For example, our learning management system, which for a college is a mission-critical application, is hosted in the cloud. Of course, that service is tightly integrated with our local, on-site services to ensure that we maintain a common user experience across our service portfolio.

So, when that cloud provider calls and asks you why you haven’t simply dumped it all in the cloud, remember that it’s not about where it all runs, but about how it runs and what value is being provided to the business. Pick the combination that makes sense for you.
