Tips to Maximize Your Virtualization Effectiveness

Introduction

Virtualization is the de facto standard method by which new applications and workloads are deployed today. However, organizations still leave money on the table when it comes to perfecting their data centers around what is a still-evolving technology. In this article, you'll find seven tips for keeping your virtual environment running in tip-top shape.

Just do it

If you haven’t virtualized anything, get to it. There are still companies out there that haven’t even taken this step, though most obviously have. However, there is still some trepidation when it comes to virtualizing particularly intensive or sensitive workloads. For example, workloads that require very high levels of I/O, or that are extremely sensitive to network or storage latency, have sometimes been left in physical environments so that administrators could better control the operating conditions.

However, there are all kinds of new storage and networking options that can help organizations overcome these challenges and virtualize even the biggest and most latency-sensitive applications out there. For example, with hybrid and all-flash arrays, companies can eliminate IOPS barriers while still retaining plenty of capacity. Hybrid storage arrays often present the next best option beyond simply deploying hard drives, as they allow organizations to take a much more balanced approach to storage.
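To see why flash changes the IOPS math, consider a quick back-of-the-envelope comparison. The short Python sketch below uses rough rule-of-thumb per-device figures (assumptions, not vendor specifications) to show how many spinning disks it takes to match a single SSD for a hypothetical workload.

```python
# Back-of-the-envelope IOPS sizing: how many devices does it take to reach a
# target workload? Per-device figures are rough rules of thumb, not vendor specs.
HDD_IOPS = 150        # ballpark for a 10K RPM SAS hard drive
SSD_IOPS = 50_000     # conservative ballpark for an enterprise SSD

target_iops = 30_000  # hypothetical workload requirement

hdds_needed = -(-target_iops // HDD_IOPS)  # ceiling division
ssds_needed = -(-target_iops // SSD_IOPS)

print(f"{hdds_needed} hard drives vs. {ssds_needed} SSD(s) for {target_iops} IOPS")
```

For a 30,000 IOPS workload, the hard-drive-only answer is hundreds of spindles; a single enterprise SSD covers it, which is why flash removes IOPS as the limiting factor and leaves capacity as the main sizing question.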

Give your data center a makeover

Beyond just looking at storage for improvement opportunities, take a look at some emerging ways to rethink everything you do. For example, as your existing data center infrastructure approaches its replacement cycle, consider replacing it with converged infrastructure options that have the potential to significantly simplify the way the data center is supported. Convergence options are becoming more and more common and fall into two broad categories:

  • Macro. Here, think of things like Vblock and FlexPod (the latter a joint offering from NetApp and Cisco). Vblock is the product of VCE, a partnership between Cisco, VMware, and EMC. Products in this space are generally well-tested, prebuilt combinations of existing products that are sold under a single SKU and supported as a single unit. So, if you have a problem with any part of the environment, you pick up the phone and call VCE. You don’t need to figure out where the problem lies first, and you can avoid the vendor finger-pointing that often takes place.
  • Hyper. This is a relatively new niche, one that is burgeoning thanks to the likes of Nutanix, SimpliVity, Scale Computing, and Pivot3. These companies have built appliances, based mostly on commodity hardware, that take a software-based approach to solving problems in the data center. These appliances take the SAN out of the equation by moving storage into the appliance and very close to the compute. In general, these solutions also leverage custom-built distributed file systems that harness all of the server-based storage and manage it on behalf of the environment. These kinds of solutions have the potential to bring major simplicity to the data center, along with cost savings. When you need more capacity, you just buy another unit of infrastructure and add it. No muss, no fuss.

Maintain good deployment practices

Sometimes, the ease with which new workloads can be deployed in a virtual environment can be a double-edged sword. Because it’s so easy, organizations often find themselves spinning up new virtual machines on a whim without ever going back to find out whether a machine is really needed long term. These kinds of activities have a long-term detrimental effect on the health of the virtual environment as the resulting virtual machines slowly exhaust resources that could be better used for more mission-critical workloads.

To combat this issue, organizations should implement policies and procedures that limit this virtual server sprawl. Require justification for the creation of new virtual machines, and ensure that machines intended to be only temporary carry some kind of end date, after which they are removed or archived. There are products on the market that can help organizations identify and eliminate virtual server sprawl and zombie virtual machines.
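Even a simple script can surface candidates. Here is a minimal sketch using pyVmomi, the open-source vSphere Python SDK; it assumes a hypothetical team convention of recording "Expires: YYYY-MM-DD" in each VM's Notes field, and the vCenter address and credentials are placeholders.

```python
# Sprawl audit sketch: flag VMs with no recorded expiry, or an expiry in the past.
import re
import ssl
from datetime import date, datetime

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab convenience only; verify certs in production
si = SmartConnect(host="vcenter.example.com", user="audit@vsphere.local",
                  pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        # Notes field may be absent on templates or inaccessible VMs.
        notes = (vm.config.annotation or "") if vm.config else ""
        match = re.search(r"Expires:\s*(\d{4}-\d{2}-\d{2})", notes)
        if match is None:
            print(f"No expiry recorded (possible sprawl): {vm.name}")
        elif datetime.strptime(match.group(1), "%Y-%m-%d").date() < date.today():
            print(f"Expired; candidate for removal/archive: {vm.name}")
    view.DestroyView()
finally:
    Disconnect(si)
```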

Automate!

How often do you perform the same action over and over? Repetition wastes time and crowds out higher-value activities. As more functions move to software in the modern data center, there are new opportunities to automate those activities. Software is inherently more flexible than hardware and can be more easily bent to your will.

As a part of this efficiency effort, use the tools that ship with the hypervisor to ease new deployments. For example, make ample use of features such as Host Profiles, which let you capture a reference host’s configuration and apply it consistently to other hosts.
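Template-based deployment is another easy win. The sketch below, again using pyVmomi, clones a new virtual machine from a golden template so that every deployment starts from the same known-good image; the vCenter address, credentials, and the template, cluster, and VM names are placeholder assumptions.

```python
# Clone-from-template sketch: deploy a new VM from a standardized golden image.
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def find_by_name(content, vimtype, name):
    """Return the first inventory object of the given type with a matching name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.DestroyView()

ctx = ssl._create_unverified_context()  # lab convenience only
si = SmartConnect(host="vcenter.example.com", user="deploy@vsphere.local",
                  pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    template = find_by_name(content, vim.VirtualMachine, "golden-template")      # assumed name
    cluster = find_by_name(content, vim.ClusterComputeResource, "Prod-Cluster")  # assumed name

    spec = vim.vm.CloneSpec(
        location=vim.vm.RelocateSpec(pool=cluster.resourcePool),
        powerOn=True)
    # Clone into the same folder as the template; returns a task you can monitor.
    task = template.Clone(folder=template.parent, name="app-server-01", spec=spec)
    print("Clone task submitted:", task.info.key)
finally:
    Disconnect(si)
```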

Implement good monitoring tools

No matter the size of your virtual environment, you need good monitoring tools to maximize its effectiveness. Monitoring tools help you more quickly identify issues that might impact availability or create performance problems for business workloads. Good monitoring tools will also help you plan for capacity issues that may arise in the environment; the right tools can predict when particular resources will be exhausted so that appropriate proactive action can be taken.
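The math behind such a prediction can be as simple as a linear trend. The sketch below fits a straight line to a handful of made-up datastore usage samples and extrapolates the date the datastore fills; real monitoring tools use richer models, but the principle is the same.

```python
# Capacity-forecast sketch: least-squares linear trend on datastore usage.
from datetime import date, timedelta

capacity_gb = 2048
samples = [  # (days_ago, used_gb) exported from your monitoring tool; illustrative data
    (90, 1100), (60, 1250), (30, 1420), (0, 1580),
]

xs = [-d for d, _ in samples]  # days relative to today (negative = past)
ys = [u for _, u in samples]
n = len(samples)
x_mean, y_mean = sum(xs) / n, sum(ys) / n

# Slope of the best-fit line, in GB of growth per day.
slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys)) / \
        sum((x - x_mean) ** 2 for x in xs)

days_left = (capacity_gb - ys[-1]) / slope
print(f"Growing ~{slope:.1f} GB/day; full around {date.today() + timedelta(days=days_left)}")
```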

Stay current

Running older software of any kind can hurt a system from a number of standpoints, including security, availability, and performance. In your virtual environment, make every attempt to stay on the latest version of your hypervisor and its interim updates. On the individual virtual machines, stick with the latest VMware Tools, too. That has become much easier in recent versions of vSphere, since VMware Tools can now update automatically when new versions become available; in the past, updating was a manual process.
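Auditing Tools status across the fleet is easy to script. The hedged pyVmomi sketch below reports virtual machines whose Tools are not current, along with whether each is set to upgrade automatically at power cycle; the connection details are placeholders.

```python
# VMware Tools audit sketch: list VMs whose Tools are out of date.
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab convenience only
si = SmartConnect(host="vcenter.example.com", user="audit@vsphere.local",
                  pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        # toolsVersionStatus2 reports values like "guestToolsCurrent" or
        # "guestToolsNeedUpgrade"; guest info may be absent on powered-off VMs.
        status = vm.guest.toolsVersionStatus2 if vm.guest else None
        policy = (vm.config.tools.toolsUpgradePolicy
                  if vm.config and vm.config.tools else None)
        if status and status != "guestToolsCurrent":
            print(f"{vm.name}: tools={status}, upgradePolicy={policy}")
    view.DestroyView()
finally:
    Disconnect(si)
```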

Hope for the best but plan for the worst

No one wants failures to impact the operational environment, nor does anyone want a natural disaster to wreck the data center. Unfortunately, both happen in real life, and it’s the job of the virtualization administrator to work around such incidents. On the availability front, follow best practices for system design and workload placement so that you don’t create a situation that results in loss of service. For example, build the environment so that it can withstand the loss of a host; after all, hardware will eventually fail. Further, use affinity and anti-affinity rules to make sure that workloads run where they’re supposed to. For example, use an anti-affinity rule to prevent all of your virtualized domain controllers from running on the same physical host.
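As an illustration, here is a hedged pyVmomi sketch that creates such a DRS anti-affinity rule for two virtualized domain controllers; the cluster name, VM names, and credentials are placeholder assumptions.

```python
# Anti-affinity sketch: keep two domain controllers on different hosts via DRS.
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def find_by_name(content, vimtype, name):
    """Return the first inventory object of the given type with a matching name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.DestroyView()

ctx = ssl._create_unverified_context()  # lab convenience only
si = SmartConnect(host="vcenter.example.com", user="admin@vsphere.local",
                  pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    cluster = find_by_name(content, vim.ClusterComputeResource, "Prod-Cluster")  # assumed name
    dcs = [find_by_name(content, vim.VirtualMachine, n) for n in ("DC01", "DC02")]

    # mandatory=True makes this a "must" rule DRS will never violate.
    rule = vim.cluster.AntiAffinityRuleSpec(
        name="separate-domain-controllers", enabled=True, mandatory=True, vm=dcs)
    spec = vim.cluster.ConfigSpecEx(
        rulesSpec=[vim.cluster.RuleSpec(operation="add", info=rule)])
    cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)
    print("Anti-affinity rule submitted.")
finally:
    Disconnect(si)
```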

When it comes to disaster recovery, consider some of the hybrid cloud opportunities that are available on the market. And, when you do so, keep automation in mind and make sure that you have the ability to automatically and seamlessly migrate running workloads from your on-premises environment to the disaster recovery service.

Summary

These are just seven of the many, many tasks that virtualization administrators must consider when it comes to maintaining an effective environment, but they are important ones that are worthy of a closer look.
