Is flash storage the future of the cloud?

Flash? No, we’re not talking about the much-maligned Adobe Flash Player. Or even the superhero. We’re talking about flash storage, a technology invented by Fujio Masuoka while working at Toshiba back in 1980. And it may just be the solution to our container storage problems. You might hear the term “array” a lot in reference to flash memory; a flash array is simply a storage array in which all the persistent storage media are flash. Though the technology was quite limited in the ’80s, flash has advanced significantly, and apart from being the standard for memory cards, it is also the medium inside the SSDs that have been steadily displacing spinning disks. When you hear companies offering flash fabric arrays and all-flash arrays (AFAs), they’re basically SSDs lined up to work together the same way hard disk arrays are set up. The similarities end there, as flash delivers much lower latency and doesn’t need battery-backed cache controllers the way disk arrays do.

Flash in the storm

The initial success of flash storage has often been attributed to the boot storm phenomenon, which is basically a demand for tremendous IO rates at fixed points in time. The root cause of the problem was the need to boot up fleets of virtual desktops at the start of the day, each one pulling OS files, applications, and personalization files for a specific user. Imagine booting a PC. Now, imagine booting a thousand at the same time. An AFA is a good fix here because it delivers the huge burst of IO needed to absorb a boot storm, but with containers we’re not booting a thousand operating systems, so we don’t have the same problem. That’s not to say containers don’t have storage issues of their own; some would even go as far as to say that containers’ main issues are with storage.
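
To get a feel for the scale of a boot storm, here is a rough back-of-the-envelope sketch. The desktop count, per-boot IOPS, and per-device IOPS ceilings below are illustrative assumptions, not measured figures:

```python
# Rough boot-storm sizing sketch -- all numbers are illustrative assumptions.
desktops = 1000            # virtual desktops all booting at the start of the day
iops_per_boot = 300        # assumed read IOPS a single OS boot generates
peak_iops = desktops * iops_per_boot

hdd_iops = 150             # rough IOPS ceiling of one 10k RPM hard disk
ssd_iops = 50_000          # rough IOPS ceiling of one enterprise SSD

print(f"Peak demand: {peak_iops:,} IOPS")
print(f"Hard disks needed to absorb it: {peak_iops // hdd_iops}")
print(f"SSDs needed to absorb it: {max(1, peak_iops // ssd_iops)}")
```

The point of the arithmetic is simply that the same burst which would take thousands of spindles can be soaked up by a handful of flash devices.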

Container storage

In the earliest days of containerized microservices, storage was still served from traditional NAS appliances, which cost a lot of performance because every storage request had to cross the network. Further problems appear when read IOPS spike as more and more containers are spawned simultaneously and the master image has to serve all the reads generated by each of them. This happens because every time a container is created, it reads the code it needs from the master image to build the runtime environment for that application.
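
A toy model of that read amplification, assuming each newly spawned container pulls the same set of image layers from one master copy. The layer sizes and container count are made up purely for illustration:

```python
# Toy model of read load on a master image as containers are spawned.
# Layer sizes and counts are illustrative assumptions, not real measurements.
image_layers_mb = [120, 45, 300, 15]   # hypothetical image layer sizes in MB
containers = 500                       # containers started at roughly the same time

read_per_container_mb = sum(image_layers_mb)
total_read_mb = read_per_container_mb * containers

print(f"One container reads ~{read_per_container_mb} MB from the master image")
print(f"{containers} containers generate ~{total_read_mb / 1024:.1f} GB of reads "
      "against the same backing store")
```

In practice, node-local layer caches soften this, but a cold start still lands all of those reads on whatever is serving the master image.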

Software-defined storage

Software-defined storage (SDS) is what solved the container storage challenge. SDS is the delivery of enterprise data-storage services and functionality “decoupled” from the underlying hardware: in many SDS use cases, capabilities such as storage tiering or copy data management (CDM) are simply software that enterprises can download and run. SDS solved the problem by taking the intelligence (the software) out of the devices and putting it back in the hands of the users. That eliminated the need for aging NAS appliances and made it possible to run on less-expensive commodity storage hardware deployed throughout a cloud network.
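
One way to picture that “decoupling” is as a thin policy layer that decides where a volume lives, independent of what hardware sits underneath. The pool names, numbers, and provisioning rule below are hypothetical, just a minimal sketch of the idea:

```python
# Minimal sketch of software-defined provisioning: the policy lives in software,
# the pools underneath are interchangeable commodity devices.
# All names and figures here are hypothetical.

POOLS = {
    "flash": {"free_gb": 2_000, "latency_ms": 0.2},
    "disk":  {"free_gb": 50_000, "latency_ms": 8.0},
}

def provision(size_gb: int, max_latency_ms: float) -> str:
    """Pick the cheapest pool (highest latency first) that still meets the requirement."""
    for name, pool in sorted(POOLS.items(), key=lambda p: -p[1]["latency_ms"]):
        if pool["latency_ms"] <= max_latency_ms and pool["free_gb"] >= size_gb:
            pool["free_gb"] -= size_gb
            return name
    raise RuntimeError("no pool satisfies the request")

print(provision(100, max_latency_ms=10))   # -> disk (the cheap tier is good enough)
print(provision(100, max_latency_ms=1))    # -> flash (latency-sensitive workload)
```

The hardware can be swapped out or expanded without touching the policy, which is the whole appeal.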

Running storage management on servers rather than on the devices themselves translates into a lot of flexibility in how the network is managed. It also means data can be distributed across clusters of servers and storage devices throughout the network, putting it much closer to users and adding resiliency. Another advantage of SDS is that if a drive fails, the data it held is still obtainable, because copies live elsewhere in the cluster.
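
That “drive fails but the data is still obtainable” property usually comes from keeping replicas (or erasure-coded fragments) on different servers. Here is a minimal replica-placement sketch; the node names and the placement scheme are illustrative assumptions, not any particular product’s algorithm:

```python
# Minimal replica-placement sketch: every object is written to REPLICAS different
# nodes, so losing any single drive or node leaves the data readable elsewhere.
import hashlib

NODES = ["node-a", "node-b", "node-c", "node-d"]   # hypothetical storage servers
REPLICAS = 3

def place(object_key: str) -> list[str]:
    """Deterministically pick REPLICAS distinct nodes for an object."""
    digest = int(hashlib.md5(object_key.encode()).hexdigest(), 16)
    start = digest % len(NODES)
    return [NODES[(start + i) % len(NODES)] for i in range(REPLICAS)]

locations = place("customer-db/segment-0042")
print("replicas on:", locations)
# If the first node dies, the object is still served from the surviving replicas.
```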

Friends and sidekicks

The enterprise is full of unexpected friends, alliances, and piggy-back rides, and flash and SDS are a great example of two technologies that are both enjoying accelerated adoption while enhancing each other’s abilities. Before SDS, deploying flash storage was somewhat cumbersome, and moving datasets to and from flash had to be done manually, slowly, and sometimes painfully. Add the price tag on top of that, and flash was definitely not your first choice for storage.

Add intelligent, automatic SDS functionality to the mix, however, and moving data around to your specifications becomes a breeze, and you begin to see the benefits of flash. Those benefits are mainly in performance, since price is still a contentious issue. Flash looks expensive from a dollars-per-GB perspective, but the picture changes once you factor in deduplication, data compression, and thin provisioning: built-in data reduction technologies that multiply your usable flash capacity without adding cost.
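
The effective-cost argument is easier to see with a quick calculation. The raw prices and the combined data-reduction ratio below are illustrative assumptions, not vendor figures:

```python
# Illustrative effective-cost calculation -- all prices and ratios are assumptions.
raw_flash_price_per_gb = 6.00     # assumed raw flash $/GB
raw_disk_price_per_gb  = 0.30     # assumed raw disk $/GB
data_reduction_ratio   = 4.0      # assumed combined dedup + compression + thin provisioning

effective_flash = raw_flash_price_per_gb / data_reduction_ratio
print(f"Effective flash cost: ${effective_flash:.2f}/GB "
      f"vs raw disk at ${raw_disk_price_per_gb:.2f}/GB")
```

Data reduction narrows the gap considerably; whether it closes it entirely depends on how well your data deduplicates and compresses.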

Hybrid flash

A hybrid setup that uses both flash and conventional storage like disks and tape is a great way to give your apps the performance they need while giving your budget a break on data that doesn’t need the juice and can sit on tape. Combining the tiering and virtualization capabilities of SDS with traditional storage devices like hard disks does extend both their life and their capability, but even a small investment in flash does wonders for overall speed and really brings out the best in SDS.
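
A hybrid setup is easy to sketch as an age-based tiering policy: recently touched data sits on flash, everything else ages out to disk or tape. The thresholds here are illustrative assumptions, not recommendations:

```python
# Simple age-based tiering sketch for a hybrid flash/disk/tape setup.
# Thresholds are illustrative assumptions.
from datetime import datetime, timedelta

def tier_for(last_access: datetime) -> str:
    """Return the storage tier for a piece of data based on how recently it was touched."""
    age = datetime.now() - last_access
    if age < timedelta(days=7):
        return "flash"   # hot data: needs the IOPS
    if age < timedelta(days=90):
        return "disk"    # warm data: cheap and still online
    return "tape"        # cold data: archival, give the budget a break

print(tier_for(datetime.now() - timedelta(days=2)))    # -> flash
print(tier_for(datetime.now() - timedelta(days=30)))   # -> disk
print(tier_for(datetime.now() - timedelta(days=400)))  # -> tape
```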

Perfect symbiosis


Now, this isn’t a one-way piggy-back ride, as flash solves a lot of problems for SDS, too. One of those problems is the bottleneck that appears when large datasets that lean heavily on file-system metadata are moved around the globe. Moving the metadata stores to flash not only eliminates this problem but takes almost every other speed- and latency-related problem with it. For example, in almost every datacenter the number of copies of individual datasets is staggering, to say the least; copies are continually made and kept for regulatory purposes as well as for application development and testing. Every copy eats storage capacity, to the point where more storage is used for copies than for the production sets themselves. Managing copy data and making it more efficient and less costly is a capability now available as an SDS solution, and when flash is added to the copy data management mix, the entire process from provisioning new copies to tracking down old ones becomes significantly quicker.
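
The copy-sprawl point is easy to quantify. Assuming each production dataset spawns a handful of copies for dev, test, backup, and compliance (the sizes and counts below are made up for illustration):

```python
# Illustrative copy-data arithmetic -- dataset sizes and copy counts are assumptions.
production_tb = 100          # size of the production datasets
copies_per_dataset = 8       # dev, test, backup, and compliance copies

copy_tb = production_tb * copies_per_dataset
total_tb = production_tb + copy_tb
print(f"Production: {production_tb} TB, copies: {copy_tb} TB "
      f"({copy_tb / total_tb:.0%} of total capacity)")
# With copy data management the copies become thin, metadata-backed snapshots,
# and putting that metadata on flash keeps them fast to create and look up.
```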

Flash in 3D

3D-NAND technology takes a different approach to increasing the density of NAND chips by layering cells on top of each other and creating a 3D structure of cells that allows scaling to occur in the third dimension. Just within the past few months, we have seen two announcements from flash array vendors who are moving to support 3D-NAND technology in their products.

Pure Storage announced that 3D TLC NAND would be supported in its FlashArray//m platform starting in 1Q2016. The new drives will increase capacity from around 30TB-40TB to 60TB per rack unit (depending on the model), with pricing around $1.50/GB (effective capacity).

HP Enterprise has also announced that its 3PAR platform will support 3D-NAND drives, although it’s not clear whether these will be TLC. Availability is from December 2015, and again HPE is quoting a $1.50/GB (effective) price point. HPE also recently wrapped up its acquisition of Nimble Storage Inc., a data storage provider that specializes in predictive flash storage.

Customers want storage to be cheap, and the fact that flash is slowly becoming cheaper than disk, at least on an effective-capacity basis, is a selling point. With the rise of microservice architectures and the demand for frequent application updates, more and more applications will be created and deployed in containers. The scale-out, distributed nature of modern applications will demand ever more capable storage moving forward, and SDS with some flash in its heart might just be the answer.

Photo credit: DC Comics
