Virtualization challenges and mitigation techniques (Part 1)

If you would like to be notified of when Scott Lowe releases the next part in this article series please sign up to our VirtualizationAdmin.com Real Time Article Update newsletter.

Virtual server sprawl

Back in the day, there was a phenomenon known as “server sprawl,” which described the potentially uncontrolled chaos that resulted when organizations purchased a lot of servers without much up-front planning about the best way to deploy the new hardware. This phenomenon has made its way to – and might even be exacerbated by – today’s robust virtual environments. IT no longer has to procure physical hardware to deploy a new server; an administrator simply logs in to vCenter and whips up a new server in a matter of minutes. In many organizations, servers are spun up and then ignored once they’re placed into production. Enabled by this ease of deployment, new virtual servers may not go through the formal, mature deployment checklist that would mandate a firewall, an antivirus program, a static IP address and so on. Further, uncontrolled virtual server sprawl can leave server administrators unable to keep up with ongoing maintenance, especially if servers have not been deployed according to organizational standards. In short, this kind of sprawl can lead to security issues as well.

Virtual server sprawl and its negative repercussions can be mitigated by doing the following:

  • Establish a process by which the deployment of a new virtual server must be justified based on business need. Remember, a virtual server still requires computing resources, so it’s not a “free” server.
  • Ensure that all deployed servers go through the same deployment process you would use for any other machine. Make sure that all required clients and agents – Configuration Manager, antivirus and so on – are installed. Consider using a deployment template to ease this process; a sketch of a simple compliance audit follows this list.
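
To make the second point concrete, here is a minimal sketch of what such a deployment audit might look like. The inventory records, agent names and compliance rules are all placeholder assumptions for the example; in a real environment the data would come from your hypervisor’s inventory API or a configuration management database.

```python
# Minimal sprawl-audit sketch: flag VMs that skipped the standard
# deployment checklist. Inventory records below are hypothetical;
# real ones would come from your hypervisor inventory or CMDB.

REQUIRED_AGENTS = {"ConfigMgr", "Antivirus"}  # assumed organizational standard

inventory = [
    {"name": "web01", "agents": {"ConfigMgr", "Antivirus"}, "static_ip": True},
    {"name": "test07", "agents": {"Antivirus"}, "static_ip": False},
]

def audit(vms):
    """Yield (vm_name, problems) for every non-compliant VM."""
    for vm in vms:
        problems = []
        missing = REQUIRED_AGENTS - vm["agents"]
        if missing:
            problems.append(f"missing agents: {', '.join(sorted(missing))}")
        if not vm["static_ip"]:
            problems.append("no static IP assigned")
        if problems:
            yield vm["name"], problems

for name, problems in audit(inventory):
    print(f"{name}: {'; '.join(problems)}")
```

Running a report like this on a schedule turns the deployment checklist from a document into something that is actually enforced.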

Resource bottlenecks

The beauty of a fully virtualized environment is that an organization can make better use of existing hardware resources while reducing costs and improving service availability. However, any time a service operates on shared infrastructure, there is the possibility that a particular resource will become constrained. Here again, a shared environment has an advantage: this kind of constraint can be targeted and corrected without wasting money on non-essential resources. For example, if you’re running low on RAM in a VMware cluster, just add RAM; you don’t need to add a whole new host just to get more RAM.

But you first have to identify that you may be getting constrained on a particular resource, and that is accomplished through careful monitoring. Whether you use the built-in tools that come with your hypervisor or acquire a comprehensive third-party monitoring suite, this risk can be mitigated by gaining an understanding of the high-level metrics that track the health of each individual resource. In a future article in this series, I will outline some of these key metrics.
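
As a simple illustration of the idea, the sketch below checks a handful of host metrics against alert thresholds. Both the metric names and the threshold values here are assumptions for the example; your monitoring tool, hypervisor and workloads will dictate the real ones.

```python
# Threshold-alert sketch for the kind of high-level health metrics
# discussed above. Sample values and limits are hypothetical; real
# numbers would come from your hypervisor's performance API or a
# third-party monitoring suite.

THRESHOLDS = {
    "cpu_ready_pct": 5.0,         # % of time vCPUs waited for a physical core
    "mem_active_pct": 90.0,       # % of host RAM actively in use
    "datastore_latency_ms": 20.0, # storage round-trip latency
}

samples = {
    "esx-host-01": {"cpu_ready_pct": 7.2, "mem_active_pct": 85.0,
                    "datastore_latency_ms": 12.0},
    "esx-host-02": {"cpu_ready_pct": 2.1, "mem_active_pct": 95.5,
                    "datastore_latency_ms": 25.3},
}

for host, metrics in samples.items():
    for metric, value in metrics.items():
        limit = THRESHOLDS[metric]
        if value > limit:
            print(f"WARNING {host}: {metric} = {value} (limit {limit})")
```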

Lower than anticipated ROI

Let’s face it – many, many virtual environments started life due solely to the ability to save money on server lifecycle replacement. Once organizations began to see some of the other great benefits that come from virtualization, efforts accelerated. However, organizations based those critical initial investments on certain ROI figures. There is always a risk that the real returns will fall short of anticipated ones, particularly for organizations that were aggressive in their estimates.

These kinds of risks are easy to mitigate, although mitigation may change the nature of the project rollout, since reducing ROI expectations is itself one of the options on the list below (a rough cost sketch follows the list):

  • Be a bit less aggressive in calculating initial ROI expectations.  Expect that something unanticipated will arise, such as a lower-than-expected consolidation ratio. Obviously, the economics of the initiative still have to work.
  • Plan for a lower consolidation ratio, but hope for more. Don’t build your plans around the insane consolidation ratios (50 or 100 enterprise-class VMs on one host, for example) sometimes touted by hypervisor vendors. Frankly, you’re not going to get massive ratios running typical workloads on typical hardware. Look for independent benchmarks that can give you a real-world idea on the topic; I’ve seen benchmarks that suggest anywhere from five to fifteen virtual servers per host as a good rule of thumb, though these numbers need to be considered carefully since all kinds of variables are at play. Now, to confuse the issue: you might be able to increase your ratio by using servers with newer processors (such as the new 6-core parts) and more RAM. In my job, we recently moved to servers with twelve processing cores (2 x 6 cores each) and 96 GB of RAM, which allowed us to avoid adding another VMware vSphere server to our cluster. Our current VM-to-host ratio is 15:1 and we still have room to grow. That said, we’re a small shop, so we can tolerate the higher ratio while still maintaining good performance.
  • Consider the non-direct benefits. A virtual environment brings more to the table than just the possibility for lower costs. Figure these benefits into your calculations to make a big-picture justification for the initiative that includes both financial and operational (indirect financial) benefits.
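
To make the arithmetic concrete, here is a rough back-of-the-envelope sketch comparing a physical server refresh against a virtualized one. Every figure in it is a made-up placeholder; substitute your own hardware, licensing and consolidation numbers, and remember that the indirect benefits mentioned above don’t show up in a calculation this simple.

```python
# Back-of-the-envelope ROI sketch comparing a one-for-one physical
# refresh to a consolidated virtual refresh. All dollar figures and
# ratios are invented placeholders.

servers_to_replace = 30
physical_server_cost = 6_000         # per replacement physical server
consolidation_ratio = 10             # conservative VMs-per-host planning figure
host_cost = 15_000                   # bigger box: more cores, more RAM
hypervisor_license_per_host = 3_500  # placeholder licensing cost

hosts_needed = -(-servers_to_replace // consolidation_ratio)  # ceiling division

physical_total = servers_to_replace * physical_server_cost
virtual_total = hosts_needed * (host_cost + hypervisor_license_per_host)
savings = physical_total - virtual_total

print(f"Hosts needed: {hosts_needed}")
print(f"Physical refresh: ${physical_total:,}  Virtual: ${virtual_total:,}")
print(f"Estimated hardware savings: ${savings:,}")
```

Notice how sensitive the result is to the consolidation ratio: halving it roughly doubles the number of hosts you need, which is exactly why aggressive initial estimates put the ROI at risk.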

Initial funding challenges

As budgets are slashed and companies need to save money in the short term, the funds to perform a first-run implementation of a virtual environment may not be available, or there might not be enough money to expand the virtual environment far enough to realize the streamlined vision.

For organizations in which some units have already begun virtualization initiatives, it might be possible to kick-start additional short-term efforts by building on existing platforms, even if those platforms aren’t quite ideal. This approach stretches the virtual vision over a longer period of time, but it may be a way to realize some short-term savings that can be funneled into long-term efforts.

Cultural issues

Relatively recently, I heard a story about an IT director who absolutely refused to consider virtualization in his company. The reason: no one who wants uptime would ever consider virtualization, since a host failure would bring down multiple services. This same organization was a poster child for the need for virtualization, too – underutilized, single-use servers getting close to warranty end, and so on. This is a perfect example of a cultural or personal mindset that thwarted even the mere consideration of a virtual environment.

Although most organizations can at least recognize the benefits of the technology, virtualization can introduce some new challenges for IT staff. In days gone by, IT departments maintained very separate server, networking and storage groups. Virtualization smashes the entire infrastructure into one ball that needs to be managed as a whole. This can create a lot of angst inside the IT department. Strong leadership is required to bridge the divide so that groups can find common ground and work together.

This human risk is multiplied when virtualization initiatives are intended to consolidate previously autonomous domains. Individual units have likely spent years creating robust technology environments to support their operations, and it is human nature to be skeptical of something that might be considered a radical change. While some units will likely embrace the process, others may be skeptical or even combative and resist the effort in myriad ways.

Virtualization initiatives must be well supported from the very top of the organization, as it will be up to the CIO to build relationships that transcend and quell these concerns and to assure units that their needs will be met in the new paradigm. This communication process needs to take place at many levels and must include unit directors, unit IT personnel and the other individual unit employees who have a stake in the success of these projects. After all, a major failure in a deliverable creates issues that run along the entire chain of command.

Software licensing

Not all vendors create their licensing policies the same way. Some vendors have embraced virtualization, multicore processors and other advances and incorporated these new options in their license agreements. Others, unfortunately, either disallow or make it unattractive to run their products in a virtual environment.

To mitigate the potential for massive fines related to improper use of software, careful attention must be paid to all software licenses. Any license that doesn’t fit the new paradigm must be renegotiated and corrected, if possible. Otherwise, unsupported products need to remain in the physical environment or be replaced with competing solutions.

Note that software licensing doesn’t just mean the hypervisor itself; the license for every piece of software that runs inside the virtual environment needs to be reviewed for compliance.
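
As a simple illustration, the sketch below cross-checks deployed software against a hypothetical catalogue of license terms. The product names and terms are invented for the example, and real license agreements are far more nuanced, so treat this as a starting point for a review, not a compliance engine.

```python
# License-review sketch: check software deployed in the virtual
# environment against its license terms. Catalogue entries are
# invented examples; the real source is your license agreements.

licenses = {
    "AppServerX": {"virtualization_allowed": True},
    "LegacyERP":  {"virtualization_allowed": False},
}

deployed = [
    {"software": "AppServerX", "vm": "app01"},
    {"software": "LegacyERP",  "vm": "erp01"},
]

for item in deployed:
    terms = licenses.get(item["software"])
    if terms is None:
        print(f"{item['vm']}: {item['software']} has no license record -- review")
    elif not terms["virtualization_allowed"]:
        print(f"{item['vm']}: {item['software']} is not licensed for "
              f"virtual deployment -- keep physical or renegotiate")
```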

Staff skills

As I mentioned before, virtualization brings with it the need for new ways to look at the infrastructure. Previously disparate groups – systems, networking and storage – must come together for the common cause and, more than likely, each area will need to expand its scope of knowledge a bit to gain at least a basic understanding of the other areas, particularly so that the environment can be effectively monitored. This requires a great deal of knowledge, skill and experience.

This domain overlap is unavoidable and there’s an added wrinkle: the hypervisor itself. Organizations must develop and maintain skills around the use of the selected hypervisor software, which will become mission-critical infrastructure that is just as important as all of the other components.

Mitigation is simple but can be expensive: ensure that staff members have the skills necessary to operate in the virtual environment, and keep them current on the technology so that the company can continue to improve the environment and leverage it for maximum benefit. This is absolutely critical to the successful implementation and operation of virtualization efforts. Without this necessary training step, service levels will falter, users will remain skeptical of the change and the organization risks dissatisfaction with the environment that can lead to poor business results.

Summary

Although virtualization has become commonplace, new challenges arise as the technology begins to encompass even tier 1 workloads, and those challenges must be mitigated. When virtualization supported only lab and development systems, some of these issues could be ignored, but once the technology moves into the mainstream of an organization, they need to be tackled and corrected.
