Xsigo introduces powerful I/O virtualization

Consider the following diagram. In it, you will see a very minimal (and probably too minimal) cabling diagram for a single vSphere host. At the left-hand side of the diagram is a server with four NICs/HBAs. In reality, you’ll probably have a lot more than four, but for simplicity’s sake, let’s keep it at two 10 GbE NICs and two Fibre Channel HBAs. As you can see in the diagram, you need a cable to each NIC/HBA, and at the other end of that cable is a specific device that corresponds to the purpose of that NIC or HBA.

Now, add a whole bunch of additional connectivity, such as more Ethernet NICs for things like vMotion and iSCSI storage, and you can start to see how this single server ends up with all kinds of cables running all over the place to support that connectivity. Now, extend this cabling morass to five, ten, or even hundreds of hosts. Pretty soon the server room is full of cabling, along with the high-end infrastructure needed to support all of it.

To make things a bit more challenging, consider the fact that most of this connectivity won’t be used at anywhere close to full capacity. Those 10 GbE ports aren’t likely to run anywhere near 100% utilization, and neither are any of the other connections. They’re there simply to satisfy the need to maintain particular types and quantities of connectivity: storage, networking, management, vMotion, and so on.

Does this sound familiar? It wasn’t all that long ago that we attacked the problem of underutilized servers by implementing whole new virtualized infrastructures in an effort to make better use of resources and provide new levels of availability. However, the connectivity – or I/O – portion of the equation has only recently started to be addressed.

Today, I attended a product announcement at VMworld from a company called Xsigo. Xsigo provides a product that virtualizes I/O in a many-to-many way. Now, just one or two cables can replace what was previously five, ten or even a cool dozen cables connecting a single server.

As you can see in the figure above, one cable has replaced the four from the first diagram. The Xsigo director has replaced the stack of Ethernet and Fibre Channel switches. In a production deployment, there would be multiple Xsigo directors to provide high availability, but you can clearly see that there is a major architectural change going on here. There are fewer cables and a need for fewer switch ports in the data center. Drivers installed on the server enable this multiplexing of I/O over a single link, and the Xsigo director handles redirecting traffic to the appropriate location: iSCSI storage, Fibre Channel storage, front-end Ethernet, and so on. Now, all of those various NICs that were used just to separate traffic can share this single link while Xsigo handles the management.
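To make the idea a bit more concrete, here’s a minimal conceptual sketch in Python of the general many-to-one pattern. This is purely illustrative and is not Xsigo’s actual protocol, driver, or API; the channel names and the fabric mapping are hypothetical. The point is simply that several virtual interfaces on the host funnel into one uplink, and the director demultiplexes each frame to the appropriate backend fabric.

```python
# Conceptual sketch only: NOT Xsigo's actual implementation.
# Several virtual I/O channels (vNICs/vHBAs) share one physical link;
# a "director" demultiplexes each frame to the right backend fabric.

from dataclasses import dataclass


@dataclass
class Frame:
    channel: str   # hypothetical virtual interface name, e.g. "vnic-vmotion"
    payload: bytes


class Director:
    """Director side: maps each virtual channel to a backend fabric."""

    def __init__(self):
        # Hypothetical mapping of virtual interfaces to backends.
        self.fabric_map = {
            "vnic-frontend": "Ethernet switch",
            "vnic-vmotion":  "Ethernet switch",
            "vnic-iscsi":    "iSCSI storage network",
            "vhba-fc":       "Fibre Channel fabric",
        }

    def receive(self, frame: Frame) -> None:
        backend = self.fabric_map.get(frame.channel, "unknown")
        print(f"{len(frame.payload)} bytes from {frame.channel} -> {backend}")


class ServerUplink:
    """Host side: every virtual interface shares the same physical cable."""

    def __init__(self, director: Director):
        self.director = director

    def send(self, channel: str, payload: bytes) -> None:
        # The channel tag tells the director where this traffic belongs.
        self.director.receive(Frame(channel, payload))


if __name__ == "__main__":
    director = Director()
    uplink = ServerUplink(director)          # the one cable from the host
    uplink.send("vnic-vmotion", b"vMotion traffic")
    uplink.send("vhba-fc", b"SCSI command")
```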

As needs increase, Xsigo can scale from GbE to 10 GbE all the way up to 64 Gb InfiniBand.

In upper-midrange and high-end deployments, Xsigo is confident that it can deliver major savings over traditional architectures.
