Just say NO to A/A

Introduction

Is e-mail a critical application? The answer is a firm YES!

In today’s web-dependent world, messaging solutions in general are considered vital for business, as e-mail has been adopted as the primary means of communication with partners and customers. Once e-mail is classified as a mission-critical application, downtime becomes unacceptable, because it translates directly into lost money.

Server reliability has improved greatly over the last few years. Servers are more robust and reliable: technologies from the mainframe world have been incorporated and new ones developed. Out of the box, a modern server will probably give you two nines (99%) of uptime, or more. But then you want that additional nine. That’s when you start thinking about clustering.
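To put those nines in perspective, here is a quick Python sketch that converts an availability percentage into annual downtime. It is straightforward arithmetic, assuming nothing beyond a 365-day year:

# Annual downtime implied by a given availability percentage.
HOURS_PER_YEAR = 24 * 365

for availability in (99.0, 99.9, 99.99):
    downtime_hours = HOURS_PER_YEAR * (1 - availability / 100)
    print(f"{availability}% uptime -> {downtime_hours:.1f} hours of downtime per year")

Two nines means roughly 87.6 hours (more than three and a half days) of downtime a year; three nines cuts that to under nine hours. That gap is what the clustering budget buys.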

Disadvantages of Active/Active clustering

One of the most controversial issues is whether you should use an Active/Active (A/A) configuration on your Exchange cluster.

Clustering is an expensive technique. If you are using it, or at least considering implementing it, it’s because you can’t afford downtime. To convince your boss to approve the budget for that state-of-the-art cluster, you just have to be objective: over the expected lifetime of the equipment, the value of the extra uptime you gain from clustering must exceed the losses you would incur from downtime.

That brings us to the main subject of this article: whether or not you should use an A/A configuration with your Exchange Server cluster. Of course, this discussion only applies to 2-node clusters, since bigger clusters must always have a passive node.


Figure 1:  A/A Exchange cluster

Microsoft recommends an Active/Passive (A/P) configuration, mainly due to scalability, virtual memory fragmentation, and performance issues. In the following paragraphs I’ll explain in more detail why you should accept this recommendation.

1. The main goal of having a cluster is not performance, it’s high availability

Stop for a minute and think: why are you using a cluster? Because you want high availability and fault tolerance, right? Well, if a node fails, you’ll have two Exchange instances on one node. If you don’t have the proper hardware, your server will not respond well to client requests. Unless, of course, you bought twice the power you really need (and even then there are other limitations, as we’ll see ahead). But then, why not just buy cheaper servers and build a 3-node cluster?

2. Virtual memory fragmentation

In highly scaled clusters, the virtual memory requirements of bringing a second Exchange Virtual Server (EVS) online on a node that already has another Exchange instance up and running can lead to larger-than-normal amounts of virtual memory fragmentation.

Think of virtual memory fragmentation as something like hard disk drive fragmentation. It occurs when there is enough virtual memory for a process in total, but none of the available blocks is of significant size. The store process is chiefly responsible for this behavior, since store.exe, as you probably know, will grab as much memory as it can possibly get (this is actually normal and expected operation).

Obviously, in an A/A configuration the worst-case scenario is when two independent Exchange Virtual Server (EVS) instances reside on the same node (during failover, upgrades, maintenance, etc.). Since there can be only one store.exe process running per node in a cluster, each Exchange Virtual Server is going to have its very own instance of the Extensible Storage Engine (ESE) inside the same store process.

Eventually, the continuous allocation and release of memory blocks of various sizes within a process will cause the virtual address space to become so fragmented that the Exchange Virtual Server fails completely. At least a 10 MB block of contiguous virtual memory is needed for an EVS to successfully come online on the other node. If the other node can’t provide that contiguous chunk (it may well be experiencing virtual memory fragmentation too), the EVS will not come online and will stay in a failed state.
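To see why fragmentation is fatal even when plenty of memory is free in total, consider this simplified Python sketch. It is purely illustrative: the free-block sizes are invented, and only the 10 MB contiguous requirement comes from Exchange itself.

# Simplified model of a fragmented virtual address space:
# plenty of free memory in total, but no single block large enough.
REQUIRED_CONTIGUOUS_MB = 10  # minimum contiguous block an EVS needs to come online

# Hypothetical free-block sizes (in MB) after long-running allocation/release
# churn by store.exe -- lots of free memory, all of it in small pieces.
free_blocks_mb = [4, 6, 2, 8, 3, 5, 7, 1]

total_free = sum(free_blocks_mb)
largest_block = max(free_blocks_mb)

print(f"Total free virtual memory: {total_free} MB")
print(f"Largest contiguous block:  {largest_block} MB")

if largest_block >= REQUIRED_CONTIGUOUS_MB:
    print("EVS can come online on this node.")
else:
    print("EVS fails to come online: no contiguous 10 MB block available.")

In this toy example 36 MB are free, yet the EVS still fails, because the largest single block is only 8 MB.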

Multiple EVSs on a single node is a scenario that cannot happen in a two-node A/P cluster, which therefore eliminates one more potential source of failures and performance problems.

3. Fastest failover time

Because the passive node in the cluster sits idle until the active node fails or needs maintenance, this configuration has the fastest failover time.

You want high availability, remember? Can you really afford those extra minutes?

If you have previous experience with large Exchange cluster systems, you know that a failover can take a significant amount of time. That is true even when failing over to a passive node, so it’s not hard to imagine how much longer it will take with an active one.

4. Maximum number of simultaneous connections

In an A/A configuration there is a limit of 1,900 simultaneous MAPI connections for each physical node, assuming the two Exchange virtual servers (EVSs) are not running on the same node. Remember that each user may have more than one connection, so the number of mailboxes may not be an accurate measure against this limit.

If one node of the cluster is unavailable and both EVSs are online on a single node, the scaling limit is 3,800 connections for the two EVSs combined. According to Microsoft (Q815180), this scaling limit exists to make sure that a failover onto a node that is already running an instance of Exchange will succeed. The constraint does not apply when there is no other node available for failover, which is the case in an A/P configuration.
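To make the arithmetic concrete, here is a minimal Python sketch of a capacity check against these limits. The mailbox count and connections-per-user figure are hypothetical planning assumptions, not values from this article; measure your own user profile before trusting any such estimate.

# Capacity check against the A/A MAPI connection limits cited in Q815180.
PER_NODE_LIMIT = 1900   # per EVS while each runs on its own physical node

mailboxes = 1500            # hypothetical figure, for illustration only
connections_per_user = 1.5  # assumption: measure this in your own environment

estimated = mailboxes * connections_per_user
print(f"Estimated MAPI connections per EVS: {estimated:.0f}")
print(f"Combined load after a failover:     {2 * estimated:.0f} (limit 3,800)")

if estimated > PER_NODE_LIMIT:
    print("Over the 1,900-connection limit: this A/A design is oversubscribed.")
else:
    print("Within the published A/A scaling limits.")

Note how 1,500 mailboxes already blow past the limit once you assume 1.5 connections per user: this is exactly why counting mailboxes alone is misleading.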

5. CPU Load must be under 40 percent

The average CPU utilization on each node should not exceed 40 percent. The threshold for high processor utilization is around 80 percent, so when the two Exchange instances end up running on the same node, their combined load must stay below that value or you will have a CPU bottleneck.
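The rule is easy to automate. Below is a minimal Python sketch of that headroom check; the per-node utilization figures are hypothetical and would come from your own monitoring.

# The 40% rule: in A/A each node must stay below 40% average CPU, so that
# after a failover the surviving node carries both loads below the ~80%
# bottleneck threshold. Utilization figures here are invented for illustration.
BOTTLENECK_THRESHOLD = 80  # percent; high-processor-utilization threshold
AA_NODE_CEILING = 40       # percent; per-node average ceiling for A/A

node_cpu = {"node1": 35, "node2": 48}  # measured average CPU per node

for node, load in node_cpu.items():
    status = "OK" if load <= AA_NODE_CEILING else "over the 40% A/A ceiling"
    print(f"{node}: {load}% average CPU -> {status}")

combined = sum(node_cpu.values())
if combined >= BOTTLENECK_THRESHOLD:
    print(f"Failover would put ~{combined}% load on one node: CPU bottleneck.")

Here node2 already breaks the 40 percent ceiling, and a failover would push the surviving node to roughly 83 percent, past the bottleneck threshold.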

6. Exchange 2003 is limited to four storage groups per server

Although this limitation is not specific to A/A configurations, it is in this particular configuration that it has the greatest impact. The limitation is a physical one and applies to each node of a cluster as well, mainly because there can be only one store process running per node. Even a stand-alone server cannot have more than four storage groups mounted and active at a time. No matter how many Exchange virtual servers fail over to a single node, the store.exe process can mount no more than four storage groups.

Node     Exchange Virtual Server   State    Storage group names
Node 1   EVS1                      Active   storage group 1, storage group 2
Node 2   EVS2                      Active   storage group 1, storage group 2, storage group 3, storage group 4

Table 1: A two-node active/active Exchange 2003 cluster with too many storage groups

In Table 1, the Exchange cluster includes six storage groups. If EVS2 on node 2 fails over to node 1, there will be an excess of two storage groups over the four-storage-group limit for a single cluster node. As a result, EVS2 does not come online on node 1 and will try to fail back to node 2, if that node is still available.

In active/passive you still have the four-storage-group limit, but the problems only arise in active/active configurations. Because of this, if you do decide to go for an A/A scenario, don’t forget to monitor the number of storage groups in your cluster.
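A simple pre-failover check catches this condition before it bites. The following is a minimal Python sketch mirroring the layout of Table 1; the per-EVS storage group counts are the ones from the table.

# Pre-failover sanity check against Exchange 2003's four-storage-group
# limit per node (one store.exe per node). Layout mirrors Table 1.
MAX_STORAGE_GROUPS_PER_NODE = 4

storage_groups = {"EVS1": 2, "EVS2": 4}  # storage groups per virtual server

total = sum(storage_groups.values())
print(f"Storage groups across the cluster: {total}")

if total > MAX_STORAGE_GROUPS_PER_NODE:
    print("A failover that lands both EVSs on one node will fail: "
          f"{total} storage groups exceed the per-node limit of "
          f"{MAX_STORAGE_GROUPS_PER_NODE}.")
else:
    print("Both EVSs can safely coexist on a single node.")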

7. Administration costs

Your administration costs will, of course, rise. This is pretty obvious: two Exchange Virtual Servers require roughly twice the time to manage as one. Consolidation has become a buzzword and the trend is to reduce the number of production servers, so just follow the rest of the IT world.

Deconstructing the myths

Next, I’ll try to deconstruct some myths and fallacies around the concept of active/active clustering with Exchange:

Better performance: some people claim that distributing the load between the 2 nodes will result in better performance. The performance gains would derive from increased disk I/O, since you’d be using 2 HBAs, and from more memory, because you’d have 2 store processes running on separate nodes. At the CPU level there would be no gain, since A/A requires that processor utilization not exceed 40 percent. Well, as I wrote before, your goal is high availability, not performance. If you size your servers correctly, I really doubt your users would notice any improvement. But I bet they would notice the longer failover time!

More expensive (licenses, idle hardware): the supporters of this position aren’t comfortable having an expensive machine sitting idle. Besides, you must pay for extra licenses. Well, clustering is an expensive technique, remember? High availability has a price. As for licensing, we can only hope that Microsoft will apply the SQL Server model to Exchange: a license for each node that is effectively running.

Conclusion

Although active/active clusters are supported, the preferred and recommended configuration for an Exchange Server cluster is an active/passive configuration. Period!

If after reading this article you are still not convinced, at least consider using A/A/P instead of A/A. Or rethink your high-availability needs against your available budget. As I said, server reliability is pretty good these days, so maybe you can achieve your goals with just one server and a set of best practices.


Figure 2: A/A/P Exchange cluster

There are some articles from Microsoft that cover the issues discussed here, so I’ll leave you with the links:

“Using Clustering with Exchange 2003: An Example”,
http://go.microsoft.com/fwlink/?LinkId=23460

“Planning an Exchange Server 2003 Messaging System”, http://www.microsoft.com/technet/prodtechnol/exchange/2003/library/messsyst.mspx

“Considerations when deploying Exchange on an Active/Active cluster”,
http://support.microsoft.com/?kbid=815180

“Microsoft Exchange Server 2003 Technical Reference Guide”, http://www.microsoft.com/technet/prodtechnol/exchange/2003/library/techrefgde.mspx

“Support WebCast: Clustering Microsoft Exchange Server 2003”,
http://support.microsoft.com/?kbid=823894
