The world is changing fast. Just ten years ago, it was considered cutting edge to do any kind of server consolidation using newly released virtualization tools. Today, virtual machines whiz between servers and between data centers as part of routine operations, and virtualization has become accepted as the de facto home for most new workloads.
Virtualization works because of the powerful abstraction it provides to administrators, shielding workloads from the actual underlying hardware. Instead of running on bare metal, virtual machines float on a software-provided hypervisor layer that presents each virtual machine with a common but customizable operating environment.
As the world continues adding more and more workloads to virtual environments, this abstraction becomes ever more important and enables a whole host of capabilities that have proven a boon for many. Before comprehensive virtualization, for example, implementing highly available systems required complex clusters of servers, and ensuring that workloads had the resources they needed meant buying and upgrading physical hardware. Both were slow, expensive undertakings that forced IT to spend increasing amounts of time on a company's technical underpinnings rather than on top-line business initiatives.
Today, of course, if a workload needs more resources, we just fire up the vSphere client, add a bit of RAM or another virtual processor, and go about our day. In many cases, it's not even necessary to bring down the running workload to get the job done: with hot-add technologies, resources can be added on the fly.
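The hot-add rule above can be sketched as a toy model. This is not the vSphere API (in practice you would use the vSphere client, PowerCLI's `Set-VM`, or pyVmomi); it is just an illustration, under assumed names like `cpu_hot_add`, of the constraint that a powered-on VM accepts resource growth only when hot add is enabled.

```python
from dataclasses import dataclass


@dataclass
class VirtualMachine:
    """Toy model of a VM's resource configuration (illustrative only)."""
    name: str
    num_cpu: int
    memory_gb: int
    powered_on: bool = True
    cpu_hot_add: bool = False
    memory_hot_add: bool = False

    def reconfigure(self, num_cpu=None, memory_gb=None):
        """Apply new resource sizes; a running VM only accepts a change
        when the corresponding hot-add flag is enabled."""
        if num_cpu is not None and num_cpu != self.num_cpu:
            if self.powered_on and not self.cpu_hot_add:
                raise RuntimeError(f"{self.name}: CPU hot add disabled, power off first")
            self.num_cpu = num_cpu
        if memory_gb is not None and memory_gb != self.memory_gb:
            if self.powered_on and not self.memory_hot_add:
                raise RuntimeError(f"{self.name}: memory hot add disabled, power off first")
            self.memory_gb = memory_gb
        return self


# With hot add enabled, the running VM grows without downtime.
vm = VirtualMachine("web01", num_cpu=2, memory_gb=4,
                    cpu_hot_add=True, memory_hot_add=True)
vm.reconfigure(num_cpu=4, memory_gb=8)
```

Note that in real vSphere environments the equivalent flags must be set while the VM is powered off, which is why enabling hot add up front is a common build-time practice.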
This ability to simply change workload configuration and shift workloads around to different hosts is the work of the hypervisor layer. Even when a company is running servers from different manufacturers, each configured differently, the hypervisor layer presents the running virtual machines with a common set of features and functionality, making it easy to perform workload migration.
The most widely used hypervisor on the market today is VMware's vSphere. It powers a majority of the world's virtual workloads and enjoys the support of a massive ecosystem that has sprung up around it. However, although VMware is credited with remaking the IT landscape as we know it, the company is not immune to market forces or, frankly, to internal screw-ups. VMware's products have a reputation for being relatively expensive, though they have provided unique and powerful functionality for many years. Over those years, VMware has made some strategic mistakes (vRAM licensing, for example) that likely cost it customers. However, the functionality gap between vSphere and other hypervisors shielded the company from the worst of the fallout from those errors and from its higher pricing.
That feature advantage is waning as other hypervisors – most notably Hyper-V 2012 – work to close the gap and, in the process, gain the attention of those seeking to lower their ongoing IT costs. At the same time, organizations are considering the use of cloud providers for certain services as another way to either reduce capital costs or run workloads in a scalable environment.
Over the last year, a lot has been written about Microsoft's release of Hyper-V 2012 and what it means for VMware in the long run. Personally, over the long term, I see Hyper-V eating away at VMware's market share in these ways:
- People replace their test labs with Hyper-V.
- Organizations deploy “tiers” of hypervisors, much in the same way that’s done with storage today.
- Some (smaller) companies completely jettison vSphere in favor of Hyper-V.
I don’t think that last point will happen en masse any time soon, but over time it will happen, particularly as Hyper-V gets better.
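The "tiers of hypervisors" idea from the list above can be made concrete with a small placement-policy sketch, borrowing the logic of storage tiering. The tier names and rules here are my own assumptions for illustration, not a feature of any vendor's product.

```python
# Hypothetical tiering policy: route each workload to a hypervisor "tier"
# by criticality, the way storage tiers route data by access pattern.
TIERS = {
    "tier-1": "vSphere cluster (HA, DRS, production SLAs)",
    "tier-2": "Hyper-V cluster (dev/test, lower license cost)",
}


def place_workload(criticality: str, needs_live_migration: bool) -> str:
    """Pick a hypervisor tier for a workload using simple, assumed rules:
    production workloads and anything needing live migration land on the
    premium tier; everything else goes to the cheaper tier."""
    if criticality == "production" or needs_live_migration:
        return "tier-1"
    return "tier-2"


print(place_workload("production", False))  # tier-1
print(place_workload("test", False))        # tier-2
```

The point of the sketch is that once a policy like this exists, the expensive hypervisor is reserved for the workloads that actually justify its cost.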
However, this fracturing of a cohesive whole can come at a cost. Organizations have a huge investment in their vSphere environments and are well trained in that platform's management tools. Simply adding another platform, such as Hyper-V, or pushing some virtual machines into the cloud introduces management complexity that must be overcome for these endeavors to pay off.
The image shown in Figure 1 demonstrates the management challenges that are introduced when additional platforms become potential workload targets.
Figure 1: Traditional management challenge
This is where HotLink comes in. HotLink is a leader in enabling organizations to quickly and easily support multiple hypervisors, and even cloud services, all from within already familiar management consoles. Through the product's deep integration with existing management consoles, such as vCenter, organizations gain the ability to manage all of their various platforms using tools and processes they already know.
Figure 2: HotLink management layer
It’s important to note that HotLink is not a company that creates management consoles. Instead, its products integrate into tools that you already have and love (or maybe hate!), so there is little to no learning curve to worry about during deployment. Once fully deployed, you can manage all of your assets from, for example, vCenter. This includes migrating workloads not only between different vSphere hosts but also between vSphere and Hyper-V, as well as managing cloud providers such as Amazon.
Getting back to the idea of abstraction, HotLink effectively enables companies to abstract at the hypervisor level, opening new opportunities for running workloads. HotLink almost becomes a layer in an organization's IT architecture: it works fully behind the scenes and makes the administrative experience of working with many different services seamless. In Figure 3 below, you can see HotLink in action. At first glance, there doesn't seem to be much to see. That is, until you look at some of the data center names in the navigation area. Through the HotLink product, this vSphere Client instance is successfully managing workloads across a number of different hypervisors and cloud providers.
Figure 3: HotLink in action
In addition to its deep vSphere management support, HotLink is adding capabilities to System Center Virtual Machine Manager that let organizations migrate Hyper-V-based virtual machines from local hosts to cloud providers. It should be noted that you do not need SCVMM to manage Hyper-V hosts in your existing vSphere environment: HotLink can replace the native toolsets you may use today, allowing you to manage everything via vCenter.
Figure 4: SCVMM support
HotLink uses a feature it calls the Transformation Engine to abstract virtual infrastructure metadata and decouple VMware vCenter from the underlying VMware vSphere hypervisor. This allows companies to choose the right hypervisor for the job; HotLink handles the transformation behind the scenes. Administrators retain the ability to clone and snapshot existing virtual machines, can deploy from a single template to all hypervisors, and can migrate workloads between hosts running different platforms. This live migration capability works with Hyper-V, XenServer, and KVM virtual machines, all managed from VMware vCenter. The platforms HotLink supports include:
- VMware vSphere
- Microsoft Hyper-V
- Citrix XenServer
- Amazon EC2
HotLink enables the same kind of seamless integration with Amazon EC2 and CloudStack that is enjoyed with on-premises hypervisors.
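Conceptually, the Transformation Engine described above normalizes platform-specific VM metadata into a common representation and re-emits it in the target platform's format. Here is a minimal sketch under that assumption; the field mappings are invented for illustration and are not HotLink's actual (proprietary) transformation logic.

```python
# Sketch of cross-platform metadata transformation: a source descriptor is
# normalized into a common shape, then re-emitted for another platform.


def vsphere_to_common(vmx: dict) -> dict:
    """Normalize a (simplified) vSphere-style descriptor into a
    platform-neutral representation."""
    return {
        "name": vmx["displayName"],
        "cpus": int(vmx["numvcpus"]),
        "memory_mb": int(vmx["memsize"]),
    }


def common_to_hyperv(vm: dict) -> dict:
    """Re-emit the neutral representation using Hyper-V-flavored field
    names (illustrative, not an exact Hyper-V schema)."""
    return {
        "VMName": vm["name"],
        "ProcessorCount": vm["cpus"],
        "MemoryStartupBytes": vm["memory_mb"] * 1024 * 1024,
    }


vmx = {"displayName": "web01", "numvcpus": "2", "memsize": "4096"}
hyperv = common_to_hyperv(vsphere_to_common(vmx))
```

The design point is the intermediate representation: with N platforms, each needs only one pair of to/from adapters rather than a translator for every other platform, which is what makes a single template deployable everywhere.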
Normally, I wouldn’t write a full article about a specific product like this, but I see HotLink as potentially groundbreaking in the way it lets organizations view and manage workloads, and I believe such tools will grow ever more important as IT departments splinter their existing vSphere environments in favor of more cost-effective alternatives.