Evaluating Your Options for Desktop Virtualization (Part 3)

If you would like to read the other parts in this article series please go to:


My goal in writing this article series has been to familiarize you with a representative sampling of some of the various VDI solutions that are available and to talk about how those solutions differ from one another. So far I have focused exclusively on Microsoft solutions, but in this article I want to turn my attention to a Citrix product called VDI-in-a-Box.

VDI-in-a-Box was created by a company called Kaviza, which was recently acquired by Citrix. A few months ago I was given the opportunity to deploy VDI-in-a-Box and was so impressed by the experience that I just had to write about it.

If you think back to the first article in this series, you will recall that when I talked about deploying a VDI infrastructure using Microsoft’s Remote Desktop Services, the deployment was complicated to say the least. Inbound connections (from outside of the organization) came into a TS Gateway. From there the incoming connections were load balanced, and then a connection broker matched end user sessions to virtual desktops running on Hyper-V servers. The infrastructure required to implement VDI in this way was both complex and expensive.
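To make that connection path a bit more concrete, here is a toy sketch of the flow just described: a gateway accepts the inbound connection, a load balancer spreads traffic across brokers, and a broker matches the user to a virtual desktop on a Hyper-V host. This is purely my own illustration of the architecture; the class names, host names, and desktop names are all made up and do not correspond to any actual RDS API.

```python
# Toy model of the RDS VDI connection path described above.
# All names here are illustrative, not actual Remote Desktop Services APIs.

class ConnectionBroker:
    """Matches an incoming user session to a virtual desktop on a host."""
    def __init__(self, assignments):
        self.assignments = assignments  # user -> (host, desktop) mapping

    def resolve(self, user):
        return self.assignments[user]


class LoadBalancer:
    """Distributes gateway traffic across redundant brokers (round robin)."""
    def __init__(self, brokers):
        self.brokers = brokers
        self._next = 0

    def pick_broker(self):
        broker = self.brokers[self._next % len(self.brokers)]
        self._next += 1
        return broker


class TSGateway:
    """Entry point for connections arriving from outside the organization."""
    def __init__(self, balancer):
        self.balancer = balancer

    def connect(self, user):
        broker = self.balancer.pick_broker()
        host, desktop = broker.resolve(user)
        return f"{user} -> {desktop} on {host}"


broker = ConnectionBroker({"alice": ("hyperv-01", "desktop-17")})
gateway = TSGateway(LoadBalancer([broker]))
print(gateway.connect("alice"))  # alice -> desktop-17 on hyperv-01
```

Even in this stripped-down form, you can see how many moving parts a connection has to traverse, and each of those parts maps to real hardware and licenses in a production deployment.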

VDI-in-a-Box takes a completely different approach to VDI. It is designed to be inexpensive and easy to deploy, while still providing the performance and scalability needed to accommodate an organization’s VDI requirements.

One of the things that Citrix does to cut costs while also reducing complexity is to use a shared nothing architecture. Each VDI-in-a-Box server contains everything that it needs to run independently. In contrast, a Microsoft Remote Desktop Services-based VDI infrastructure can be configured in a shared nothing manner, but doing so comes at the cost of sacrificing fault tolerance.

Microsoft and other VDI providers typically achieve fault tolerance through clustering. The actual virtual desktops run on virtualization hosts (in the case of a Microsoft deployment, these are the Hyper-V servers). The virtualization hosts are typically clustered so that if a host fails or needs to be taken offline for maintenance, the virtual desktops that are running on that host can be failed over to another host. That way there is no downtime for the end user.

The type of clustering that I just described tends to be expensive to implement. Although the exact requirements vary from one vendor to another, host clustering requires two or more physical servers and any required software licenses. Although server hardware and server licenses don’t come cheap, the bulk of the cost is almost always tied up in shared storage.

What usually happens is that the files that make up the individual virtual desktops are stored on shared storage. That way, they are accessible to every host server within the cluster. Shared storage can take many different forms. In low-end environments each host within a cluster might be connected to a Network Attached Storage device via an iSCSI connection. However, the end user experience is directly tied to the performance of the storage subsystem. If the cluster nodes are slow to access the shared storage, then the end users’ sessions will also be slow. That being the case, many organizations find it necessary to build a SAN and connect the cluster nodes to it via Fibre Channel.

I say all of this as a way of underscoring the idea that traditional VDI deployments are expensive and complex. Since VDI-in-a-Box is designed to decrease the cost and complexity of VDI deployments, the engineers who developed the product started out by eliminating the requirement for shared storage. This results in significant savings because some analysts estimate that shared storage can account for as much as half of the cost of a VDI deployment.

As you may recall, I mentioned that Microsoft’s VDI solution can also be deployed without shared storage, but in doing so you sacrifice fault tolerance. In contrast, VDI-in-a-Box is designed to remain highly available even without shared storage.

In addition to doing away with the shared storage requirement, VDI-in-a-Box does not require you to deploy load balancers or connection brokers. In fact, about the only thing that VDI-in-a-Box has in common with more traditional VDI deployments is that it does require computers running hypervisors that can host the virtual desktops.

You might have noticed that I said that VDI-in-a-Box requires computers running hypervisors rather than saying that it requires servers running hypervisors. The reason for this is that VDI-in-a-Box is designed to run on commodity hardware. While there is certainly nothing stopping you from running VDI-in-a-Box on server hardware, you can just as easily use PCs.

With that said, I want to take a moment to show you what the VDI-in-a-Box architecture looks like. You can see a typical VDI-in-a-Box deployment in Figure A. For the sake of comparison, Figure B shows a typical Microsoft VDI deployment.

Figure A: This is what a typical VDI-in-a-Box deployment might look like.

Figure B: This is a sample Microsoft VDI deployment.

As you can see, a VDI-in-a-Box deployment is really simple. With that in mind, the million-dollar question becomes: how did Citrix do it?

The VDI-in-a-Box servers consist of commodity hardware running a hypervisor. VMware ESX and ESXi, as well as Citrix XenServer, are all supported. The VDI-in-a-Box software exists in the form of a virtual appliance riding on top of the hypervisor.

The virtual appliance hosts the virtual desktops, but it also has a built-in connection broker and load balancer. Computers running the VDI-in-a-Box appliance collectively form a grid. Virtual desktop images that are imported into the grid are replicated to each participating computer. User requests are automatically load balanced among the computers in the grid, and the grid is also designed to provide high availability.

One of the coolest things about the way that the grid works is that you are free to mix mismatched hardware. The grid will load balance user requests across any available hardware, but it will not attempt to run the same number of virtual desktops on each host unless all of the hosts are equipped with similar hardware.
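One way to picture that behavior is capacity-aware placement: each new desktop goes to whichever host is least loaded relative to its own capacity, so a weak host naturally ends up carrying proportionally fewer desktops than a powerful one. The sketch below is my own simplification to illustrate the idea; it is not Citrix’s actual algorithm, and the host names and capacity numbers are invented.

```python
# Illustrative sketch of capacity-aware desktop placement on a mixed grid.
# This is my own simplification, not the actual VDI-in-a-Box algorithm.

def place_desktops(hosts, num_desktops):
    """Assign each desktop to the host with the lowest load fraction.

    hosts: dict of host name -> capacity (max desktops it can comfortably run)
    Returns a dict of host name -> number of desktops placed on it.
    """
    load = {name: 0 for name in hosts}
    for _ in range(num_desktops):
        # The host least loaded relative to its capacity gets the next
        # desktop, so a weak host carries proportionally fewer desktops.
        name = min(hosts, key=lambda h: load[h] / hosts[h])
        if load[name] >= hosts[name]:
            raise RuntimeError("grid is at full capacity")
        load[name] += 1
    return load

# A powerful server alongside a laptop that can only host a couple of desktops.
grid = {"server-01": 10, "laptop-01": 2}
print(place_desktops(grid, 6))  # {'server-01': 5, 'laptop-01': 1}
```

Because placement is driven by relative headroom rather than a flat per-host quota, adding a small machine to the grid adds a small amount of capacity without dragging down the larger hosts.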

A few months ago, I saw a VDI-in-a-Box demo in which a laptop was added to an existing grid as a host. The laptop, which was only capable of hosting a couple of virtual desktops, ran comfortably in the same grid as a high powered server that hosted dozens of virtual desktops.


I have to admit that I am in awe of the technology that Citrix uses in VDI-in-a-Box. Even so, I feel obligated to point out that my purpose in writing this article series is not to recommend one VDI solution over another. I am only trying to provide my readers with a representative sample of the various VDI solutions that are on the market.

I also need to point out that I cannot in good faith recommend VDI-in-a-Box as a VDI solution. Don’t get me wrong. There aren’t any major problems with it that I am aware of. It’s just that I have only ever deployed VDI-in-a-Box in a lab environment, so I have no firsthand knowledge of how it performs in a production environment.
