An introduction to the world of storage (Part 1)

If you would like to read the next part of this article series please go to An introduction to the world of storage (Part 2).


Storage can be pretty intimidating to those who don’t work with it every day or who are just getting started. This article is meant to be a bit of a primer describing the different kinds of storage and what you can do with each. Storage has gotten really exciting in the past few years, even though the basics can be a little bland. These foundational elements are essential for anyone working in a data center, though.

Different types for different needs

There are essentially three different types of storage you’ll find in the data center: DAS, SAN, and NAS. DAS, or Direct Attached Storage, is storage that is directly attached to a physical server with no networking devices between them. Some people refer to local storage, meaning disks inside the server, as DAS as well, but DAS is really some sort of enclosure attached to one specific server. SAN, or Storage Area Network, is a network dedicated to storage; there could be one or multiple arrays, switches, or routers within this network. A SAN provides block level storage, generally over Fibre Channel, Fibre Channel over Ethernet (FCoE), or iSCSI, and we’ll get into that in a minute. NAS, or Network Attached Storage, refers to file level storage such as NFS or SMB. A NAS still uses the network to communicate between a host and the storage device, but again, it serves file level storage, not block level.

Block vs. file

This leads us to a discussion of block level storage and file level storage, and just to confuse things even more, we now have object level storage. Let’s start with block and file since they’re a little better known. Block level storage presents raw blocks that you can format any way you like. For instance, you can format them with NTFS to create a Windows volume or with VMFS to create VMware datastores. As I said above, this generally means using Fibre Channel or iSCSI as the communications protocol. Fibre Channel is a high speed connection that uses light to transfer the data. It can be fragile: a kink or break in the line prevents the light from making it from end to end. The cabling usually looks like the picture shown below, along with a Fibre Channel switch containing multiple SFPs.

Figure 1

There’s also FCoE, which can run over network cables or twinax cables as shown below.

Figure 2

iSCSI uses regular network cables and networking devices to transport data to and from your storage system. Generally you get a little better speed and performance from a block level system. In a file level storage system, files are stored on the NAS and accessed using something like NFS or SMB. A great use case for this would be a file share for users, or a place to store Veeam or PHD Virtual backups.
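To make the block-vs-file distinction concrete, here’s a small Python sketch. It uses an ordinary temp file as a stand-in for a raw block device (a real iSCSI LUN or Fibre Channel disk is addressed the same way, by seeking to block offsets), and a temp directory as a stand-in for a NAS share. The names and block size are just illustrative assumptions.

```python
import os
import tempfile

# --- Block-level access: the "device" is just addressable blocks. ---
# A plain file stands in for a raw block device (e.g. an iSCSI LUN);
# real block devices are read and written the same way via seek/write.
BLOCK_SIZE = 512

fd, dev_path = tempfile.mkstemp()
os.close(fd)

with open(dev_path, "r+b") as dev:
    dev.truncate(BLOCK_SIZE * 8)              # a tiny 8-block "disk"
    dev.seek(BLOCK_SIZE * 3)                  # address block #3 directly
    dev.write(b"raw data".ljust(BLOCK_SIZE, b"\x00"))

with open(dev_path, "rb") as dev:
    dev.seek(BLOCK_SIZE * 3)
    block3 = dev.read(BLOCK_SIZE)

print(block3[:8])                             # b'raw data'

# --- File-level access: the storage system presents named files. ---
# With NAS (NFS/SMB) the client never sees blocks, only paths.
share = tempfile.mkdtemp()
with open(os.path.join(share, "report.txt"), "w") as f:
    f.write("hello from the file share")

with open(os.path.join(share, "report.txt")) as f:
    print(f.read())
```

The point of the sketch: with block storage the client is responsible for structure (a filesystem like NTFS or VMFS imposes meaning on the blocks), while with file storage the NAS owns the filesystem and the client only deals in named files.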

Object storage

Let’s talk a little bit about object storage. Object storage manages data as objects…that clears it up, right? Basically, while file systems use a hierarchy and block systems are broken down into sectors, object storage has a flat organizational architecture. This makes it highly scalable, which is why cloud solutions like Amazon S3 and OpenStack Swift use it as a simpler way to provide storage in the cloud. It also uses less metadata, which makes it a little easier to use and understand, although certain features are not possible yet. It can also be slower than more traditional architectures.
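The flat namespace is easier to see in code than in prose. Below is a toy Python object store, not any vendor’s API; the class and method names are made up, though `put`/`get`/`head` loosely mirror the verbs S3-style services use.

```python
import hashlib

class ObjectStore:
    """A toy object store: one flat namespace mapping keys to
    (data, metadata) pairs -- no directories, no blocks."""

    def __init__(self):
        self._objects = {}   # flat: key -> (bytes, metadata dict)

    def put(self, key, data, **metadata):
        # Objects carry a little metadata; here we add a content hash,
        # similar in spirit to the ETag returned by S3-style APIs.
        metadata["etag"] = hashlib.md5(data).hexdigest()
        self._objects[key] = (data, metadata)
        return metadata["etag"]

    def get(self, key):
        data, _ = self._objects[key]
        return data

    def head(self, key):
        _, metadata = self._objects[key]
        return metadata

store = ObjectStore()
# Keys may *look* hierarchical ("backups/2014/jan.tgz"), but the
# namespace is flat -- the slashes are just characters in the key.
store.put("backups/2014/jan.tgz", b"backup bytes", owner="veeam")
print(store.head("backups/2014/jan.tgz")["owner"])
```

Because every object is just a key in one big namespace, scaling out is mostly a matter of distributing keys across more nodes, which is exactly why cloud providers favor this model.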

Disks and their uses

Now that we’ve discussed storage systems and briefly how we connect to them, let’s talk about what we put in them. There are now a few different kinds of disks we can put in our disk enclosures depending on what we’re looking for: usually speed, performance, and/or capacity. The more traditional disks are SATA and SAS. SATA is used for capacity because it is relatively inexpensive, but that also means it’s not very fast. You’d want to use SATA drives for data that mostly just sits there, such as backups or files that don’t get used or changed very often. SAS disks are faster, usually spinning at 10K or 15K RPM. Until fairly recently these were the only disks you’d use for high-IOPS workloads, such as databases or server disks backing demanding applications. More recently we have NearLine SAS (NL-SAS) disks, which are basically SATA-class disks with a SAS interface, making them easier to use in a modern storage array. In fact, what’s generally sold in an array now are SAS and NL-SAS disks.
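Sizing exercises often start from rough per-spindle IOPS figures for each disk type. The numbers below are ballpark values commonly quoted for spinning disks, not vendor specifications, and the little helper ignores RAID write penalties and caching, so treat it strictly as a back-of-the-envelope sketch.

```python
# Ballpark per-spindle IOPS figures often quoted in sizing exercises;
# these are rough assumptions, not vendor specifications.
IOPS_PER_DISK = {
    "NL-SAS/SATA 7.2K": 80,
    "SAS 10K": 140,
    "SAS 15K": 180,
}

def raw_pool_iops(disk_type, count):
    """Rough raw IOPS a pool of identical disks might deliver,
    ignoring RAID write penalties and array caching."""
    return IOPS_PER_DISK[disk_type] * count

# Compare a 24-disk shelf populated with each disk type.
for dtype in IOPS_PER_DISK:
    print(f"{dtype:>16}: 24 disks ~ {raw_pool_iops(dtype, 24)} IOPS")
```

Even this crude arithmetic shows why spindle count, not just capacity, drove array design before flash: tripling random-IO performance meant buying faster (or simply more) disks.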

SAS disks are no longer the top of the performance tier, though. We can now include flash or SSD within our modern storage arrays. Flash and SSD are far more expensive, although they have come down in price over the last few years. There are even many companies making all-flash storage arrays for really demanding applications; a common use case for an all-flash array is virtual desktop infrastructure (VDI). There are also several companies putting out hybrid arrays that usually contain some flash disks and some SAS disks. SSDs/flash have no moving parts, meaning there are no spinning platters, so they can be considered more reliable; however, they do wear out over time, at a rate that depends on the kind you get.

Tiering to differentiate based on performance needs

Some storage arrays do what is called multi-tiering. The EMC VNX2, for example, is capable of containing flash, SAS, and NL-SAS. Depending on how you set up your storage pools, data can start on your flash disks, and the VNX then tracks how often it’s used. If it’s used often it may stay in flash, or it may get moved down to SAS. If the data is used very infrequently it will move down to NL-SAS. The cool thing about this is that if you have data that is only used seasonally, for instance, it will sit on the NL-SAS disks until it becomes active again, at which point it will get bumped back up to SAS or flash. The original VNX would also do this, but it has been improved in the newer version as flash has become more common.
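The promote/demote logic described above can be sketched as a simple policy over access counts. This is a toy model with made-up names and arbitrary thresholds; real arrays like the VNX2 use their own slice sizes, statistics, and relocation schedules.

```python
from collections import Counter

TIERS = ["flash", "SAS", "NL-SAS"]   # fastest to slowest

def retier(placement, access_counts, hot=10, cold=2):
    """Toy auto-tiering pass: promote frequently accessed slices to
    flash, demote rarely accessed ones to NL-SAS, and park everything
    else on SAS. Thresholds here are arbitrary illustrative choices."""
    new_placement = {}
    for slice_id in placement:
        hits = access_counts[slice_id]
        if hits >= hot:
            new_placement[slice_id] = "flash"
        elif hits <= cold:
            new_placement[slice_id] = "NL-SAS"
        else:
            new_placement[slice_id] = "SAS"
    return new_placement

# One relocation cycle: a busy database slice, a dormant archive
# slice, and a moderately used log slice.
placement = {"db-slice": "SAS", "archive-slice": "SAS", "logs-slice": "flash"}
hits = Counter({"db-slice": 25, "archive-slice": 1, "logs-slice": 5})

placement = retier(placement, hits)
print(placement)   # db promoted, archive demoted, logs settle on SAS
```

The seasonal-data behavior falls out of the same loop: once the archive slice heats up again, its hit count crosses the threshold on a later pass and it gets promoted right back.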

SDS and the future

One last note on Software Defined Storage (SDS). I’m sure everyone has been hearing about the idea of a Software Defined Data Center or Software Defined Storage lately. Without going too deep, SDS basically commoditizes the storage you may have or need to purchase. Using software from various vendors, we can take local storage, DAS, SANs, white box servers, and so on, and abstract their management and usage, thereby making the actual hardware perhaps less important. Examples of this would be EMC ViPR or VMware VSAN. There are many arguments for and against this approach, as well as over whether it will actually work as well as our traditional storage arrays.

This article gives an idea of what these things are without deep-diving into each storage term. It is by no means a substitute for reading the guides for the particular array you own or are planning to purchase. In the next storage basics blog we’ll discuss some basic architecture using the above terminology.
