Although I intend to expand on this in a full-length article at some point, I wanted to share with you a recent experience that we had at Westminster College as we transitioned from our old SAN, which was iSCSI-based, to our new SAN, which is Fibre Channel. I’ll start by saying that I didn’t go into the SAN decision requiring a Fibre Channel solution, but the ultimate option we chose moved us down that path. Although I’ve had relatively little experience with Fibre Channel, getting it up and running was really pretty easy.
Let me start with a look at our old baseline:
- iSCSI SAN – two management modules with redundant paths.
- Dell M1000e blade chassis with 6 x M3220 Ethernet blades in the back.
- Four VMware ESX hosts, each with 6 Ethernet ports spread across three NICs. Each port maps to an Ethernet switch module in the back of the chassis.
Further, as a part of this process, I also decided to replace an existing ESX server that had 32 GB of RAM with a new unit with 96 GB of RAM. Believe it or not, it was much less expensive to buy a whole new server than to simply upgrade the RAM in the old one. To add icing to the cake, the new server has dual six-core processors, while the old unit used dual quad-core processors. So, we got 64 GB more RAM and four more processing cores for less than the cost of a RAM upgrade.
We faced some challenges, though. First of all, in order to make the move to Fibre Channel, we needed to replace one of the Ethernet cards in each of the existing servers with a Fibre Channel card and also replace two of the Ethernet switches in the blade chassis with Fibre Channel blades. The M1000e chassis does allow hot swapping of the chassis blades, but you can't add a Fibre Channel blade while any of the individual servers still has an Ethernet card on that particular port. So, the first order of business was running through the servers and removing the third Ethernet adapter from each.
Obviously, we wanted to do this in a way that kept services up. So, we started vMotioning… a lot. First, I put the new 96 GB server into place and added it to the ESX cluster after applying an appropriate host profile. We then moved the first server into maintenance mode and allowed vCenter to evacuate it for us. Once that process was complete, we pulled the server from the chassis and removed the third Ethernet adapter.
With the first server, we ran into a problem. The original intent was to pull the Ethernet card and simply replace it with the new Fibre Channel adapter. When we tried this, the server wouldn't boot. Upon further investigation, we determined that if ANY server in the blade chassis still had an Ethernet card in the slot we intended for the Fibre Channel adapters, a server with the new card in that slot would not boot. According to Dell, this is by design. So, we revised our plan: we pulled the newly installed Fibre Channel card, brought the server back up, and exited maintenance mode, putting that server back into production. We then moved on to the other three servers and, one by one, placed them in maintenance mode and removed the third Ethernet adapter.
Once that was complete and all of the Ethernet adapters were removed, we installed the Fibre Channel blades in the chassis. Then we ran back through the four ESX servers, moving each one into maintenance mode, bringing it down, installing the Fibre Channel adapter, bringing it back up, and exiting maintenance mode.
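The rolling procedure above — evacuate one host, service it, return it to the cluster, move on — can be sketched as a small simulation. To be clear, everything here (host names, VM names, the round-robin placement) is hypothetical; in our case vCenter's maintenance mode did the evacuation, not a script.

```python
# Toy simulation of a rolling hardware change across an ESX cluster:
# evacuate one host at a time, "service" it, then move to the next.
# Host names and VM placement are hypothetical.

def evacuate(cluster, host):
    """Move all VMs off `host` onto the remaining hosts, round-robin."""
    others = [h for h in cluster if h != host]
    for i, vm in enumerate(list(cluster[host])):
        cluster[others[i % len(others)]].append(vm)
    cluster[host] = []

def rolling_service(cluster, service):
    """Service each host in turn while its VMs run elsewhere."""
    for host in list(cluster):
        evacuate(cluster, host)   # maintenance mode: vCenter vMotions VMs away
        service(host)             # pull the blade, swap the mezzanine card
        # host exits maintenance mode and rejoins the pool

cluster = {
    "esx1": ["vm1", "vm2"],
    "esx2": ["vm3"],
    "esx3": ["vm4", "vm5"],
    "esx4": [],                   # the new 96 GB host, added to the cluster first
}
serviced = []
rolling_service(cluster, serviced.append)
print(serviced)                                    # ['esx1', 'esx2', 'esx3', 'esx4']
print(sum(len(vms) for vms in cluster.values()))   # 5 — no VM lost along the way
```

The point of adding the 96 GB host first is visible in the model: there is always spare capacity to absorb the evacuated VMs while one blade is out of the chassis.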
Once all of the servers had their Fibre Channel adapters, I connected the new SAN to one of the external ports on the new Fibre Channel switches. Next, I created zones on each of the Fibre Channel switches. Finally, I configured the vSphere hosts to be able to see volumes on the new SAN.
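A common way to build those zones is single-initiator zoning: one zone per host HBA port, each containing that initiator's WWPN plus the SAN's target WWPNs. The sketch below just generates that pairing — all WWPNs and the zone-naming convention are made up, and the actual commands to load the zones depend on your switch vendor's CLI.

```python
# Sketch of single-initiator zoning: one zone per host HBA port,
# each holding that initiator plus the SAN's target ports.
# All WWPNs and the "z_<host>" naming convention are hypothetical.

def build_zones(initiators, targets):
    """Return {zone_name: [member WWPNs]} — one zone per initiator."""
    return {f"z_{host}": [wwpn] + targets
            for host, wwpn in initiators.items()}

initiators = {                       # host HBA ports (hypothetical WWPNs)
    "esx1": "10:00:00:00:c9:aa:00:01",
    "esx2": "10:00:00:00:c9:aa:00:02",
}
targets = [                          # SAN controller ports (hypothetical)
    "50:00:00:00:c9:bb:00:01",
    "50:00:00:00:c9:bb:00:02",
]

zones = build_zones(initiators, targets)
for name, members in sorted(zones.items()):
    print(name, "->", ", ".join(members))
```

Keeping one initiator per zone means a misbehaving HBA can only disrupt its own zone, which is why it's the usual recommendation over one big zone containing every port.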
To complete the migration process, I created new VMFS volumes on the new SAN and then made liberal use of storage vMotion to migrate each virtual machine – about 50 in all – to the new SAN.
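The bookkeeping of that migration — every VM's disks move from the old VMFS volume to the new one while the VM stays powered on — can be modeled in a few lines. The VM and datastore names below are invented for illustration; in our case the moves were driven through vCenter's storage vMotion, not a script.

```python
# Toy model of the storage migration: relocate every VM from the old
# iSCSI datastore to the new Fibre Channel datastore, one at a time.
# VM and datastore names are hypothetical.

OLD_DS, NEW_DS = "iscsi_vmfs01", "fc_vmfs01"

vms = {f"vm{i:02d}": OLD_DS for i in range(1, 51)}   # ~50 VMs on the old SAN

def storage_migrate(vms, src, dst):
    """Relocate each VM on `src` to `dst`; return the migration order."""
    order = []
    for name, ds in vms.items():
        if ds == src:
            vms[name] = dst          # storage vMotion: disks move, VM stays up
            order.append(name)
    return order

moved = storage_migrate(vms, OLD_DS, NEW_DS)
print(len(moved))                    # 50
print(set(vms.values()))             # {'fc_vmfs01'}
```

Doing the moves serially like this mirrors what we did in practice: one VM in flight at a time keeps the load on both arrays predictable during business hours.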
In all, this process took about two days to complete. I ran into minor problems migrating one virtual machine, which I resolved by bringing the machine down and moving it during a longer maintenance window.
My intent here was to provide you with a quick view of the process that I undertook to migrate the College to a new SAN and to replace an ESX server. In a full article in the future, I will provide much more detail.