QuickPath Interconnect

I’ve written previously about an AMD-driven technology called HyperTransport designed to increase data transfer rates. A key feature of HyperTransport is that it’s a point-to-point interconnect system, as opposed to a bus system. In this article, I’ll give an overview of a competitor to HyperTransport: Intel’s QuickPath Interconnect.

Front Side Bus

Before I explain what QuickPath Interconnect is, I’ll take a step back and describe the traditional architecture of chip-to-chip communications in computers. For many years, communications between processors and memory used what is commonly referred to as a Front Side Bus. All communications between the CPU and memory have to travel over this one shared bus, so extra information, such as addressing, must be added to each communication to make sure it reaches the right destination.

Also, bus systems by design only allow one communication to happen at a time. This means that if a device needs to communicate with the CPU, it must wait until the current communication has ended before starting its own. Alternatively, interrupts can be used for priority communications, but interrupts, though effective, add their own overhead to the overall Front Side Bus traffic. All of this waiting, combined with the overhead, can be a performance hindrance for high-speed applications.

Over the last few years, as processor performance increased significantly, the speed at which the Front Side Bus could operate became a limiting factor in overall computer performance. Even though the processor could do a lot of work very quickly, it continually had to wait for the Front Side Bus to deliver data, so the processor would often sit idle. The Front Side Bus also capped the useful speed of RAM, since the RAM could operate significantly faster than the bus could carry its data.

With the increased use of multiple processors, including powerful and capable graphics processors, and very fast memory, the limitations of the Front Side Bus have become hard to ignore. That is the impetus for the design of technologies like HyperTransport, a point-to-point interconnect system that eliminates many of the limitations of the Front Side Bus, such as interrupts and addressing (you don’t really need addressing when there are only two points: if you didn’t send it, then you should be receiving it!).

QuickPath Interconnect

But HyperTransport, developed by AMD and now managed by the HyperTransport Consortium, isn’t the only game in town. Not surprisingly, Intel has developed its own point-to-point interconnect system, optimized to work as a communications mechanism between many processors. Though Intel came to this design significantly later, QuickPath Interconnect is a well-executed technology.


Figure 1: QuickPath Architecture courtesy of www.intel.com

Like HyperTransport, QuickPath Interconnect is designed to work with processors that have integrated memory controllers. Also like HyperTransport, QuickPath Interconnect is designed as a double data rate (DDR) technology. Normally when data is transmitted digitally between two points, each bit is read as either high or low, representing a 1 or a 0, whenever the clock signal is high. With DDR, data is read on both the rising and falling edges of the clock signal. This means that in one full clock cycle the data can be read twice, producing twice the data rate.
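To make that arithmetic concrete, here’s a quick back-of-the-envelope sketch in Python. The 3.2 GHz clock is an illustrative figure of my own choosing (one of the speeds QPI shipped at); the point is simply that DDR doubles the transfers per clock cycle.

clock_hz = 3.2e9          # link clock frequency (illustrative example value)
transfers_per_cycle = 2   # DDR: data is latched on the rising AND falling edge

transfer_rate = clock_hz * transfers_per_cycle
print(f"{transfer_rate / 1e9:.1f} GT/s")  # -> 6.4 GT/s, twice the clock rate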

Also like HyperTransport, QuickPath Interconnect reduces the overhead found in Front Side Bus architectures. One way it does this is by eliminating some addressing, since QuickPath Interconnect is a point-to-point technology. In fact, not only is QuickPath Interconnect a point-to-point technology, it is also a full-duplex communication channel, with 20 dedicated communication lanes in each direction. QuickPath Interconnect does have some overhead, though; it actually has more overhead than HyperTransport: to send 64 bits of data, QuickPath Interconnect requires 16 bits of overhead, whereas HyperTransport requires 8 or 12 bits for reads and writes respectively.
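As a rough comparison of payload efficiency, here’s a small Python sketch using the bit counts quoted above (treating overhead as a fixed cost per 64 bits of data; this ignores other protocol details):

def efficiency(data_bits, overhead_bits):
    # Fraction of transmitted bits that actually carry payload.
    return data_bits / (data_bits + overhead_bits)

print(f"QPI:                 {efficiency(64, 16):.1%}")  # 64 + 16 bits -> 80.0%
print(f"HyperTransport (8):  {efficiency(64, 8):.1%}")   # 64 + 8 bits  -> 88.9%
print(f"HyperTransport (12): {efficiency(64, 12):.1%}")  # 64 + 12 bits -> 84.2%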

Protocol Layers

Intel’s QuickPath Interconnect is one part of a larger architecture that Intel calls the QuickPath Architecture. The QuickPath Architecture is divided into five layers, which are roughly equivalent to some of the OSI network layers.

The Physical Layer of the QuickPath Architecture describes the physical wiring of the connections, including the data transmitters and receivers and the 20 data lanes in each direction.
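Putting the lane count together with the transfer rate from the earlier DDR sketch gives a rough per-direction bandwidth figure (again assuming the illustrative 6.4 GT/s rate):

lanes = 20
transfer_rate = 6.4e9  # transfers per second per lane (from the DDR sketch above)

raw_gbytes = lanes * transfer_rate / 8 / 1e9
print(f"{raw_gbytes:.0f} GB/s raw per direction")  # -> 16 GB/s

payload_fraction = 64 / 80  # only 64 of every 80 bits are data (see the link layer below)
print(f"{raw_gbytes * payload_fraction:.1f} GB/s of data per direction")  # -> 12.8 GB/s

Counting both directions of the full-duplex link, that works out to roughly the 25.6 GB/s figure Intel quotes for its fastest QuickPath Interconnect links.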

The Link Layer of the QuickPath Architecture describes the actual sending and receiving of data in 72-bit sections, with an additional 8 bits used for CRC error detection. That makes a total of 80 bits sent across the 20 lanes in each direction!

The Routing Layer is responsible for handing a 72-bit chunk of data to the link layer. Within this 72-bit chunk are 64 bits of data and an 8-bit header, which consists of a destination and a message type. These 64 bits are what Intel uses to calculate the total throughput of QuickPath Interconnect (as opposed to all 80 bits).
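To visualize how those pieces nest, here’s a minimal Python sketch of the 80-bit unit described above. The field names and bit ordering are my own illustration, not Intel’s official layout:

from dataclasses import dataclass

@dataclass
class Flit:
    payload: int  # 64 bits of data supplied by the routing layer
    header: int   # 8-bit header: destination plus message type
    crc: int      # 8-bit CRC appended by the link layer

    def to_bits(self) -> int:
        # Pack into the 80-bit value carried across the 20 lanes.
        assert self.payload < 2**64 and self.header < 2**8 and self.crc < 2**8
        return (self.payload << 16) | (self.header << 8) | self.crc

flit = Flit(payload=0xDEADBEEF, header=0x2A, crc=0x7F)
print(f"{flit.to_bits():020x}")  # 80 bits -> 20 hex digits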

The Transport Layer is responsible for handling errors in the data transmission and will request a retransmission if errors are found.
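As a toy illustration of that behavior, the sketch below detects corruption with a checksum and retries. Everything here (the lossy channel, the retry count, the use of CRC-32) is invented for illustration; the real mechanism lives in hardware:

import random
import zlib

def lossy_channel(payload: bytes) -> tuple[bytes, int]:
    # Deliver the payload along with its CRC; occasionally corrupt a byte in transit.
    crc = zlib.crc32(payload)
    if random.random() < 0.2:  # simulate a transmission error
        payload = b"\x00" + payload[1:]
    return payload, crc

def receive_with_retry(payload: bytes, max_retries: int = 5) -> bytes:
    for attempt in range(1, max_retries + 1):
        received, crc = lossy_channel(payload)
        if zlib.crc32(received) == crc:  # CRC matches: accept the data
            return received
        print(f"attempt {attempt}: CRC mismatch, requesting retransmission")
    raise IOError("link error persisted after retries")

print(receive_with_retry(b"64 bits of data"))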

The Protocol Layer of the QuickPath Architecture handles cache coherency and is also how a higher-level program would access the data transfer mechanisms in QuickPath Interconnect.

QuickPath Interconnect vs. HyperTransport

So now that you’ve learned about QuickPath Interconnect and have reviewed my previous article on HyperTransport, you should have a good idea of how the industry is moving away from the Front Side Bus architecture, to the benefit of us all. But you’re probably wondering which technology is better. As usual, that’s a difficult question to answer. Currently it seems that QuickPath Interconnect has a slight overall performance advantage over HyperTransport, but HyperTransport is designed as a much more flexible technology.

QuickPath Interconnect is mainly designed to connect multiple processors to each other and to the input/output controller, as shown in Figure 1 above. HyperTransport does that but can also be used for add-on cards and as a data transfer mechanism in routers and switches. HyperTransport is also an open technology, which I think gives it a significant advantage over QuickPath Interconnect, a proprietary Intel technology. It is still early in the development of both of these technologies, though, especially for QuickPath Interconnect. Over the next few years you’ll see them integrated into more and more computers, and you’re likely to see innovations in each that increase their performance. For QuickPath Interconnect, I’d also expect to see some diversification in how it is used, so that it can truly compete with HyperTransport as a data transfer mechanism for many uses.
