Considerations for Multi Site Clusters in Windows Server 2012 (Part 3)


Throughout this article series, I have been discussing the prospect of building a multi-site cluster without the use of shared storage, which is something that can be done using Windows Server 2012. The benefit of avoiding shared storage is that you eliminate the possibility of the cluster shared volume (or the connectivity to it) becoming a single point of failure. Each cluster node has its own local storage that it can use in the event of a failover.

As I explained in the previous article in this series, the trick to making the cluster work is to make sure that each cluster node is configured to use the correct disk. The process of doing so actually begins when you run the Create Cluster Wizard. This wizard’s confirmation screen contains a check box that is used to add all eligible storage to the cluster. The check box is selected by default, so any storage that is visible to the cluster nodes and that meets the clustering prerequisites will be added to the cluster.
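If you prefer to build the cluster from PowerShell rather than from the wizard, the FailoverClusters module offers a rough equivalent. The sketch below uses assumed names (MultiSiteCluster, Node1, Node2, and the cluster IP address); by default New-Cluster adds eligible storage much like the wizard’s check box, while the -NoStorage switch lets you add disks explicitly afterward.

# Minimal sketch with assumed names; adjust to your environment
Import-Module FailoverClusters
New-Cluster -Name "MultiSiteCluster" -Node "Node1","Node2" -StaticAddress "10.0.0.50"

# Alternatively, create the cluster without storage and add disks yourself:
# New-Cluster -Name "MultiSiteCluster" -Node "Node1","Node2" -StaticAddress "10.0.0.50" -NoStorage
# Get-ClusterAvailableDisk | Add-ClusterDisk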

After the cluster has been created, you can open the Failover Cluster Manager and navigate through the console tree to Failover Cluster Manager | <your cluster> | Storage | Disks. Upon doing so, you should see a disk for each cluster node. Only the local disk will appear to be online, because the local cluster node cannot access storage that is directly attached to the other cluster nodes. In fact, when you use the High Availability Wizard to add a clustered resource (such as a clustered file server), you will only be able to add the cluster disk that the server sees as being online.
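You can perform the same check from PowerShell. This is only a quick, hedged sketch; newly added disks normally land in the Available Storage group, and in this scenario the disk attached to the other node will be reported as Offline.

# List the cluster disks along with their state and owner
Get-ClusterGroup -Name "Available Storage" | Get-ClusterResource | Format-Table Name, State, OwnerNode -AutoSize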

To make the replicated storage work with the cluster, you must make the other node’s storage available to the cluster; each node needs access to its own disk in the event of a failover. The first step is to add the additional disk to the clustered resource. To do so, open the Failover Cluster Manager and navigate through the console tree to <your cluster> | Storage | Disks. Next, select the disk that is currently displayed as Offline and then click on the More Actions command, which appears in the Actions pane. When you do, a sub menu will display several choices for what you can do with the disk. Select the Assign to Another Role command.

At this point, Windows will display the Add Resource to a Role dialog box. This dialog box lists all of the clustered resources that exist within your failover cluster. Select the clustered resource with which you wish to associate the disk and then click OK. Upon doing so, the disk will be assigned to the role that you have selected. However, the Failover Cluster Manager will still report the disk as being in an offline state because only the cluster node to which the disk is directly attached can access the disk.
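The PowerShell equivalent of the Assign to Another Role command is to move the disk resource into the role’s resource group. A minimal sketch, assuming the clustered role is named FileServerRole and the second disk is named Cluster Disk 2 (substitute your own names):

# Move the offline disk resource into the clustered role's group
Move-ClusterResource -Name "Cluster Disk 2" -Group "FileServerRole"

# Verify that both disks now belong to the role
Get-ClusterGroup -Name "FileServerRole" | Get-ClusterResource | Format-Table Name, ResourceType, State -AutoSize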

As it stands right now, each cluster node has its own direct attached storage, and each node’s disk has been associated with the clustered resource. The problem is that this configuration leads to a clustered resource status of “Partially Running”. This happens because only one disk is accessible to the current cluster node, while the clustered resource’s configuration currently requires every disk in the cluster to be accessible in order for the cluster to be considered healthy. Of course this will never happen, because each disk is only accessible to a single cluster node. There will never be a time when a single node can access all of the cluster’s disks.

The key to fixing this problem is to reconfigure the clustered resource in a way that allows the cluster to be considered healthy even if only one of the cluster’s disks is accessible. Remember, in this type of configuration it is normal for only one disk to be accessible to each cluster node.

To change the clustered resource’s configuration, open the Failover Cluster Manager and navigate through the console tree to <your cluster> | Roles. Upon doing so, the console’s center pane will be split into an upper and a lower section. The lower portion of the console lists the servers that are being used by the clustered resource, the clustered resource’s name, and the clustered resource’s storage. Right click on the listing for the clustered resource (in the lower pane) and then select the Properties command from the resulting shortcut menu. You should now see a properties dialog box for the clustered resource.

Select the dialog box’s Dependencies tab and then click on the Insert button. When you do, a drop down list will appear just above the first cluster disk. Select the second cluster disk from this drop down list. Now, repeat the procedure for any additional disks that the clustered resource might be using.
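The Insert button corresponds to the Add-ClusterResourceDependency cmdlet. A sketch under the same assumed names, adding the second disk as a dependency of the file server resource (check Get-ClusterResource for the actual resource names in your cluster):

# Add the second cluster disk as a dependency of the file server resource
Add-ClusterResourceDependency -Resource "FileServer" -Provider "Cluster Disk 2"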

As you populate the Dependencies tab with the various disks that your clustered resource is using, you will notice that the word “And” appears in front of all but the first disk in the list. This happens because there is an “and” relationship between the disks that are being used by the clustered resource. In other words, for the clustered resource to be considered healthy, the first disk AND the second disk AND any additional disks must all be accessible.

Since our goal is to reconfigure the cluster so that each cluster node only needs access to its own direct attached storage in order to be considered healthy, we need to get rid of the AND dependency. The best way to do this is to select the first instance of AND (the one in front of the second disk) and then use the drop down arrow to change AND to OR. By doing so, you are effectively telling the clustered resource that it is in a healthy state so long as either the first disk or the second disk is accessible. You must of course change AND to OR for any additional disks that have been assigned to the clustered resource. Click OK to complete the process. You should now see the clustered resource’s status change from Partially Running to Running.
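The same change can be made from PowerShell by rewriting the dependency expression. This is a hedged sketch under the assumed names used above; Set-ClusterResourceDependency replaces the entire expression, so the OR relationship is spelled out explicitly:

# Inspect the current dependency expression; with the default AND relationship
# it will read something like "([Cluster Disk 1]) and ([Cluster Disk 2])"
Get-ClusterResourceDependency -Resource "FileServer"

# Replace the expression so that either disk satisfies the dependency
Set-ClusterResourceDependency -Resource "FileServer" -Dependency "[Cluster Disk 1] or [Cluster Disk 2]"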

Conclusion

The procedures that I have outlined in this article series so far will allow you to use Windows Server 2012 to create a multi-site cluster that does not require any sort of shared storage. Keep in mind however, that the procedure that I have discussed assumes that data replication is occurring at the hardware level. The cluster cannot function properly unless you have some sort of mechanism in place to replicate data among the disks within the cluster, thereby ensuring that each cluster node has an identical copy of the data.

Even though you can theoretically use the cluster as it is right now, there is some cleanup work that should be done in order to ensure that the cluster functions as smoothly as possible. I will discuss these cleanup tasks in the next article in this series.

