Some people like to tinker, others like to shop. When you’re looking to add some local network storage to your business network, which do you choose? Would you prefer to purchase an off-the-shelf, ready-made network attached storage (NAS) device, hook it up to your network, and turn it on? Or would you rather roll your own do-it-yourself solution by buying a server chassis, a RAID card, and a bunch of hard disk drives or solid-state drives, then mixing everything together and installing Windows Server as the icing on the cake?
Wait a minute — why would we even want to use local network storage anymore? Why not just go for cloud storage?
Why local network storage still matters
Everyone seems to be pushing us hard nowadays to move everything into the cloud. We use the cloud for storing everything from our personal photos to that novel we’ve been trying to write for years. But do I really want to keep my company’s sensitive business documents in the cloud? In the end it’s a business decision whether you trust the privacy, reliability, and availability of cloud-storage services for storing the lifeblood of your business. And lots of smaller businesses still prefer to keep their sensitive data stored locally to ensure its privacy and availability to the company at all times. Even larger enterprises can struggle with what can or can’t go into the cloud, mostly because of various regulatory requirements they must adhere to.
So let’s assume for now that local network storage is the way to go for your business. What’s the best way to approach implementing such storage: DIY or commercial products? I polled some IT pro colleagues concerning this and their stories contain lots of insight gained from hard-earned experience.
When performance matters most
Nat Garrison has been a self-employed computer systems consultant for as long as he can remember. “I’ve tried a lot of NAS solutions over the years: Snap Servers, the Intel SS4000-E and SS4200-E, and various LaCie, Synology, and QNAP devices. I’ve been happy with all of them except for their speed. They have all come with low-powered CPUs and not a whole lot of RAM, and for very small groups they have worked OK. If you want high performance, you need to build something yourself.
“For example, I have a Windows Server 2008 R2 machine that has four 600 GB Western Digital 10,000 rpm VelociRaptor mechanical drives in a RAID 10 array. It is fast, but it has a very expensive OS. I have about 240 GB of data that I do a full backup of each night. For the past year or so I was backing up through my gigabit network to an inexpensive single-drive 2 TB LaCie NAS. The full backup was taking about 7 1/2 hours each night. I now have a custom-built NAS using an InWin IW-MS04-01 mini 4-bay server chassis, a Gigabyte GA-H97N motherboard, an Intel i5-4690S CPU, 16 GB of RAM, an Intel 120 GB DC S3510 SSD OS drive, an 8 TB Seagate NAS HDD data drive, and iX Systems’ FreeNAS. The nightly backup to this NAS only takes 2 1/2 hours.”
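A quick back-of-the-envelope calculation, using only the figures Nat quotes (240 GB, 7 1/2 hours before, 2 1/2 hours after), shows what those backup windows mean in terms of effective throughput:

```python
# Effective backup throughput implied by Nat's numbers.
# 240 GB is taken as decimal gigabytes (1 GB = 1000 MB).
data_gb = 240

def throughput_mb_s(hours):
    """Average MB/s for a full 240 GB backup that takes `hours`."""
    return data_gb * 1000 / (hours * 3600)

old = throughput_mb_s(7.5)   # single-drive LaCie NAS
new = throughput_mb_s(2.5)   # custom FreeNAS build

print(f"old: {old:.1f} MB/s, new: {new:.1f} MB/s")  # ~8.9 vs ~26.7 MB/s
```

Both figures are well under the roughly 110 MB/s a gigabit link can carry, which supports Nat’s point that the low-powered NAS hardware, not the network, was the bottleneck.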
While Nat clearly prefers the DIY approach for his own personal storage needs and has built similar solutions for some of his customers, he also says, “I really like the FreeNAS and the OS that comes with Synology and QNAP systems.” So he’s clearly not opposed to commercial NAS products out of any sort of philosophical bias.
By way of contrast, Michael Martell, who is a systems manager for a publishing house, takes a somewhat different view concerning the performance differences between commercial NAS and DIY solutions. “We have used a DIY strategy here for some lower-demand purposes, such as backing up noncritical systems and archiving unlikely-to-be-needed data like the mailing details for a catalog mailing, or its page and image data. It has generally worked to our satisfaction for the purposes we needed. I recommend StarWind’s software or a Linux system approach to manage the storage.
“The one caveat is that you will find it hard to match the speed and reliability of the commercial systems which have been optimized for responsiveness, and may not have the easy tiering and SSD caching or pass-through of the commercial entries. We found that even when pushing to optimize systems, off-the-shelf hardware doesn’t match the speed of the more thoroughly optimized systems. If you are looking for fault-tolerant, large volume and low-cost storage, you can do well. If you are just looking for speed, remember the old Rule of Thirds: good, fast, cheap — pick any two and pay for it with the other!”
It just depends, really
Tony Gore is an independent consultant based in the UK and is a founder of Risk Reasoning, which provides collaborative risk assessment and risk management tools. For Tony, the question of DIY or not is one he has dealt with several times in the past.
“I looked at this a few years ago and revisit it from time to time, and it really depends on what you want to do and the setup you have. Originally when Microsoft’s Small Business Server was coming to an end, this was a problem for small businesses that I supported at the time. To replace the central server with a NAS required some means of authentication. It can be done running LDAP on the NAS, but you then need pGina on all your Windows machines as logon clients. Once you can get it to work, it is powerful, but my conclusion was that it was less than ideal and nothing like the ease of use of Active Directory.
“So I opted for a cheap, simple server running Windows Server Essentials. This supports up to 25 device (PC, phone, tablet) logins. I got a small HP MicroServer for around $250, put in an enterprise SSD for the OS, and mirrored large disks for the data storage and client backups. Extra storage (and space for server backups) is provided by a Synology NAS that can be integrated into Active Directory. Synology’s NASes also work well with Macs and have a series of user and management apps for Android and iOS, plus secure remote management. This might not be quite the cheapest solution, but it has proved robust, flexible, and easy to manage. I have also integrated the Essentials server with Office 365 to take email management offsite; some customers use the cloud storage and others do not. I also do backups of some critical data from the server/NAS to the cloud.
“I find this setup provides, for a small business, maximum flexibility and the ability to organize how I want according to needs. For example, one customer has large amounts of archive stuff from previous years. That lives on the NAS, configured read-only for everyone except the admin, with two hard copies on DVD.”
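The paths and file names below are hypothetical stand-ins, but on a Linux-based NAS like a Synology (where DSM normally manages shares through its own GUI and ACLs), the underlying POSIX idea behind the archive share Tony describes (writable by the admin, read-only for everyone else) might be sketched like this:

```shell
# Hypothetical stand-in for an archive share such as /volume1/archive;
# a temp directory is used here so the sketch is self-contained.
ARCHIVE=$(mktemp -d)
touch "$ARCHIVE/catalog-2019.pdf"   # hypothetical archived file

# Owner (the admin account) keeps full access; group and others are read-only.
# u=rwX,go=rX sets the execute bit only where it is needed (directories),
# and removes all group/other write permission.
chmod -R u=rwX,go=rX "$ARCHIVE"
```

On a real Synology box you would normally set this through DSM’s shared-folder permissions rather than a shell, but the effect is the same: everyone can read the archive, and only the admin can change it.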
Tony also highlighted a caveat one should be aware of when considering commercial NAS products like those from Synology. “One thing I should add about the Synology NAS is that if the OS gets corrupted and you have to reinstall, your data is always on a separate partition. And if you have backed up the settings, there is not much reconfiguration to do to get everything back as it was. In the early days, Synology rarely updated the NAS software; now it is about once per month because of security issues. Synology’s DSM is based on Linux, and these days all software needs to be kept up to date. If you do a home-brew system, you effectively take on that responsibility yourself. Even for a two-user setup, I find this uses less of my time than if I built my own, and I have a much more robust system for when things go wrong. So it really comes down to whether you cost your own time for learning and maintaining.”
Doesn’t it always boil down to just that: time and money?
DIY is NFM
Let’s end by hearing from Bill Bach, who is president of Goldstar Software and specializes in support and training for the PSQL database community. As we’ll see, Bill basically takes the position that “Do it Yourself” is just “Not for Me.”
“We moved away from Windows File Servers many years ago, finding that the pre-built NAS solutions are vastly superior. We’ve played with several vendors’ solutions in that time period, and all have worked well, but all have had at least some issues. For example, our first NAS was a Buffalo LinkStation, with just a single 1 TB drive in it, which is still running well, albeit at only 100 Mbps. Because it lacks redundancy, we use it as a tertiary backup of other critical data from our website and file servers, but it still works very well for that purpose.
“We replaced the LinkStation in production with a four-drive unit holding four 1 TB drives, and had very good results with that, too, especially as it ran on GbE. However, we (literally) blew through three motherboards because the USB3 ports on this unit could not handle the power requirements of the external USB3 disk drives that we were using for backups, and while they would be fine for the first backup or two, the USB3 ports would die very soon thereafter. This machine is also still functioning, but has also been relegated to serving as a backup target.
“For our third and fourth NAS units, we switched to NetGear, with a ReadyNAS 500 Series unit (with six 4 TB drives in a RAID6 config) and a ReadyNAS 312 (with two 4 TB drives in a RAID1 configuration). These machines are both GbE-connected and have performed extremely well for our needs. Of course, nothing is perfect. These two devices integrate with AD, but my AD environment has some strange anomaly to it — probably because it was dragged forward from Win2000 all those years ago — which means that the administrator account in my domain is unable to create or write to any file or folder on these NAS units whose name consists of a single character and no extension (like C). NetGear’s support tried for a while, but was never able to figure out what was wrong. So, I just went into my development folders and renamed the C folders to CPP, and the problem is easily avoided. I also had a series of drive failures on the RN500 last year, where four of the six WD drives suddenly needed to be replaced. Sadly, during all of this, even though I never had more than one drive fail at a time, there was one point where the entire RAID6 array completely tanked itself, and the RN500 couldn’t recover at all. I had to manually delete each of the six drives from the array, rebuild the array from scratch, and then restore all of my data. Ugh. Since then, though, I have had no ongoing issues.
“With all of these problems, why use a NAS? First off, NAS data storage is cheap, and using devices dedicated to the task is certainly better than forcing it onto a Windows server already serving other tasks. Second, with 90 percent of the users accessing only the NAS, the number of users who need to get to our true Windows application servers is limited, thus saving on license costs. Third, the backup capabilities afforded by the NAS devices (including snapshotting, replication, and more) are superior to those offered by a general-purpose OS. Fourth, if I ever do decide to push data into the cloud, even for a backup, these solutions are ready for me. Finally, the power draw of these systems is far lower than that of a full-blown Windows server, and this helps keep the heat down in our server room, too.
“Would I ever build it myself? Probably not. If you want to save a bit, buy a solid unit from a good vendor and populate it with your own drives. I found that the warranty I got on standalone drives was better than what the NAS vendor offered. However, your data is your data — this is not a time to go cheap with a DIY project. You want constant OS updates to deal with security threats, and you may want hardware support as well.”