The storage technology industry has evolved at a steady and consistent pace over the years. One innovation of the past several years is the widespread adoption of solid state storage. Solid state memory technologies appear in everything from your iPod to your computer’s hard drive. Using solid state technology in a storage device can provide significant performance gains; however, those gains are often limited by how the storage device is connected to the computer’s CPU. In this article I will give an overview of solid state drive technology and explain how innovation (specifically Fusion-io’s innovation) is reducing some of these limitations.
Advantages of Solid State Drives
Solid state drives, by definition, have no moving parts, and that one fact accounts for many of their advantages. First, with no moving parts, solid state drives are less fragile than traditional disk drives, which can be disabled by dust and other small particles; disk drives are also fragile simply because more parts mean more opportunities for something to go wrong. Second, solid state drives are silent (unless, of course, a cooling fan is added, in which case it is the fan making the noise, not the drive). Third, and a major reason why solid state drives have quickly risen in popularity in the consumer electronics market, there is no “upside-down”: the drive can sit in any position and can even be moving quite abruptly during read and write operations without ill effect.
Also, because a solid state drive has no motor, there is no motor to power, which reduces the power required to perform read and write operations. The lack of a motor also leads to faster read and write cycles, because there is always a delay between when a motor receives power and when it can move. Traditional hard drives often have a read latency of about 10 milliseconds, while solid state drives reduce that to about 50 microseconds: a significant improvement. Though all of these advantages are desirable in various applications, in an enterprise-level application only three of them are really noteworthy: the enhanced reliability, the reduction in power usage, and the increase in read and write performance.
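To put those latency numbers in perspective, here is a quick back-of-the-envelope calculation using the (typical, approximate) figures quoted above:

```python
# Rough speedup implied by the latencies quoted above. These are
# ballpark figures for typical drives, not measurements of any
# specific product.
hdd_read_latency_s = 10e-3   # ~10 milliseconds for a traditional hard drive
ssd_read_latency_s = 50e-6   # ~50 microseconds for a solid state drive

speedup = hdd_read_latency_s / ssd_read_latency_s
print(f"SSD reads are roughly {speedup:.0f}x faster")  # roughly 200x
```

In other words, on latency alone a solid state drive can service a random read on the order of two hundred times sooner than a spinning disk.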
Connecting to the CPU
Though solid state drives do provide a significant increase in read/write performance, this increase is limited by how the drive is connected to the CPU, which controls the process of reading and writing. Let’s back up here for a bit of history on how hard drives connect to the CPU. A significant time in the evolution of hard drives was the 1980s, when it first became popular to move the hard drive controller off the motherboard and onto the drive itself. Up until that point a motherboard needed a hard drive controller generic enough to work with every hard drive. Once controllers shipped with the drives instead, hard drive manufacturers were free to build their devices however they wanted, as long as they also provided a dependable controller. After this split it was no longer necessary to have the hard drive next to the motherboard; it made more sense to put it in a regular bay and connect it to the motherboard with a cable. The original interface for these connections was developed by Western Digital and called the Integrated Drive Electronics (IDE) interface. This was the beginning of two decades of innovation in how hard drives are connected to the motherboard and the CPU. In fact, the term IDE is still in common use today.
Also in the 1980s came IBM’s second generation of personal computer, the IBM AT, where AT stood for Advanced Technology. Its motherboard used an AT bus (again, Advanced Technology), and this motherboard architecture was quickly adopted as the industry standard by many computer manufacturers. Soon the IDE interface evolved to support the AT bus and was renamed ATA, or Advanced Technology Attachment.
Figure 1: PATA and SATA connectors courtesy of www.tomshardware.com
While the ATA interface did evolve, the next significant point in its evolution did not come until the 2000s, when Serial ATA, or SATA, was introduced (causing the older ATA interface to be retroactively renamed Parallel ATA, or PATA). SATA’s advantages were significant; one of the most important was its Advanced Host Controller Interface, or AHCI, which enables two very important capabilities: hot swapping and native command queuing.
Hot swapping refers to the ability to plug in a SATA hard drive without shutting down the computer, much like a USB drive can be plugged in and unplugged while the machine is running. This may not sound so significant for personal computers, but on an enterprise system the ability to swap drives without shutting down a server matters in many circumstances.
Figure 2: Graphic of native command queuing courtesy of www.wikipedia.com
Native command queuing is a capability which benefits all applications. With native command queuing, the AHCI controller can re-arrange the read and write commands it receives to achieve maximum efficiency in terms of drive head movement. For example, suppose the AHCI receives three read commands where the first requires the drive head in a certain position, the second requires the head to move 180 degrees, and the third requires the head to move back close to where it was for the first read; serviced in order, these commands waste a lot of time moving the head. With native command queuing, the AHCI will perform read commands 1 and 3 before read command 2, which yields a significant increase in performance when an operation can involve several thousand commands.
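The reordering idea above can be sketched with a toy greedy scheduler. This is a deliberately simplified model (positions reduced to angles on the platter, nearest-first selection); real NCQ firmware weighs rotational position and seek distance together, and the function and command names here are invented for illustration:

```python
# Toy sketch of NCQ-style command reordering. Head positions are
# modeled as angles (degrees) on the platter; this is a simplification,
# not how drive firmware actually represents geometry.

def angular_distance(a, b):
    """Shortest rotation between two angles, in degrees."""
    d = abs(a - b) % 360
    return min(d, 360 - d)

def reorder_commands(commands, head_position=0):
    """Greedy nearest-first ordering of queued commands.

    commands: list of (name, target_angle) tuples.
    Returns the names in the order this simple scheduler would issue them.
    """
    pending = list(commands)
    order = []
    pos = head_position
    while pending:
        # Pick whichever pending command needs the least head movement.
        nearest = min(pending, key=lambda c: angular_distance(pos, c[1]))
        pending.remove(nearest)
        order.append(nearest[0])
        pos = nearest[1]
    return order

# The three reads from the example: read 2 needs a 180-degree move,
# read 3 lands close to read 1's position.
queue = [("read1", 0), ("read2", 180), ("read3", 10)]
print(reorder_commands(queue, head_position=0))
# -> ['read1', 'read3', 'read2']
```

As in the example, the scheduler services reads 1 and 3 back to back and saves the long 180-degree sweep for last, instead of crossing the platter twice.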
Fusion-IO and the PCIe Connection
While SATA was a significant milestone in the evolution of IDE drives, one company took a step back, looked at the entire architecture of a computer as it stands today, and asked: why are hard drives still using technology evolved from the 1980s? Even though SATA allows a bandwidth of up to 6 Gb/s, that figure does not account for the bridging required to move data from the PCIe interface to the SATA interface, which adds a significant amount of latency. Almost every other aspect of computing has gone through significant changes except this one, which significantly affects the performance of computers that rely on thousands and thousands of read and write cycles.
Fusion-io did something smart. Really smart. They looked away from the SATA/PATA/ATA/IDE interface, with all its built-in backward compatibility, and thought about how to connect a drive to the motherboard in the most performance-conscious way. They decided to connect their solid state drives directly to the PCIe interface. This eliminates the need to bridge two interfaces and results in a much more straightforward approach. The result is that performance can be doubled in many heavy read/write operations.
At the time of writing, Fusion-io has a great article on their website about www.answers.com’s experience using Fusion-io technology. While this case study is obviously presented to paint Fusion-io in the best light possible, it also contains a lot of great information and really demonstrates the advantages a large organization can see in its data centres when using Fusion-io’s technology. I recommend you read that article to begin your love affair with Fusion-io.