High availability has always been critical for e-mail, as it remains one of the most critical services within the enterprise organization. In today’s world, where end users connect to their mailbox from at least two devices to send and receive large volumes of e-mail, expectations of the messaging service have never been higher. Put another way, the system must be available 24/7/365 and usually has a Service Level Agreement (SLA) that defines at least three nines (99.9%) of monthly service availability.
Over the years, the Exchange team has done tremendous work, and Exchange Server now includes some pretty amazing logic that helps you keep the service highly available using native features such as database availability groups (DAGs), Safety Net (formerly known as the transport dumpster), managed availability and modern public folders.
However, as most of us know, Exchange Server expects other services and components to be available for it to do its job. That is, the network, Active Directory, Windows Server, the hardware and/or the virtualization layer all need to work and respond properly. When it comes to client access, we also need to ensure that each Exchange protocol and service responds accordingly. This is where load balancing comes into the picture. What good does it do if we have multiple Exchange servers deployed, but no solution to:
- Distribute incoming traffic among the available Exchange servers in an even fashion
- Ensure that when one or more servers are down, the incoming traffic goes to the available servers
Yep, that would be bad, right? We need a solution in front of Exchange that helps us with this.
This is by no means a technical article that explains load balancing in-depth or goes through how you configure load balancing for Exchange Server using a specific set of step-by-step instructions. If you are searching for that kind of article, please see this article by one of my co-authors (Steve Goodman). His article covers how to load balance Exchange Server 2016 using a LoadMaster device from Kemp Technologies. I also wrote a pretty extensive one for Exchange 2010 back in 2010, but as you can guess, a lot has happened since then, both with Exchange Server and with the load balancing solutions themselves.
A Quick Trip Down the Exchange High Availability Lane
Back in the Exchange 2003 days and earlier, most enterprise organizations either did not load balance incoming traffic at all or they used the native Windows Network Load Balancing features included with Windows Server. Heck, many enterprises didn’t even protect the databases using the Windows clustering component by creating Exchange clusters.
With Exchange Server 2007, we began to see a heavier focus on high availability. Exchange Server 2007 introduced cluster continuous replication (CCR), single copy clustering (SCC) and later standby continuous replication (SCR), and designs were now written with a focus on local availability within a datacenter and site resilience for those enterprises that have multiple datacenters.
This was also the era where enterprises began to take load balancing more seriously. Many simply deployed dedicated client access servers (CAS servers) and configured these with Windows network load balancing (WNLB) though.
I remember the complaints the Exchange team received from small and medium-sized customers that suddenly needed more Exchange servers in their solution, as they could not combine WNLB and the clustering components on the same set of servers. Oh, those were funny times (as a consultant)!
Other customers started to invest in a hardware-based load balancing solution from a third-party vendor, or enabled a load balancing module on their existing network devices or application proxy servers. For external access to Exchange protocols and services, many customers used ISA Server (later rebranded to Forefront TMG) to publish Exchange to the Internet. But as you may recall, back then TMG was not suited as a load balancer for internal clients accessing Exchange; TMG was a perimeter firewall. The biggest issue was that TMG wasn’t capable of load balancing the RPC traffic we used for internal Outlook desktop clients at the time.
Exchange Load Balancing Nowadays
Exchange Server 2010 required session affinity (a relationship between the client and a specific CAS server) at the load balancing layer for most Exchange services, which meant that the load balancing solution needed to be configured using layer 7 based load balancing (aka application-level load balancing).
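To illustrate what layer 7 session affinity means in practice, here is a minimal Python sketch of cookie-based persistence: the same client session always lands on the same backend server. The server names are made up for illustration, and a real load balancer implements this inside its proxy engine rather than in application code.

```python
import hashlib

# Hypothetical backend pool -- these server names are illustrative only.
BACKENDS = ["cas01.contoso.local", "cas02.contoso.local", "cas03.contoso.local"]

def pick_backend(session_cookie: str, backends=BACKENDS) -> str:
    """Cookie-based persistence: the same session cookie always maps to
    the same backend, as long as the pool membership is unchanged."""
    digest = hashlib.sha256(session_cookie.encode("utf-8")).digest()
    return backends[digest[0] % len(backends)]
```

The point is the invariant, not the hashing scheme: with affinity, repeated requests carrying the same session identifier must reach the same server, which is exactly what Exchange 2010 required for most of its services.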
With Exchange 2013 and onwards, session affinity is no longer required, and direct RPC access from an Outlook desktop client to Exchange was removed; all clients now connect using RPC over HTTP(S). Historically, the Outlook desktop client on the internal network always connected to Exchange using MAPI over RPC. With Exchange 2013, both external clients (Outlook Anywhere) and internal clients had to connect to Exchange using RPC over HTTP(S). With Exchange 2013 SP1, the new MAPI over HTTP(S) protocol was released, and Exchange 2013 customers could enable it to improve the reliability and stability of Outlook-to-Exchange connections by moving the transport layer to the industry-standard HTTP model. With Exchange Server 2016, MAPI over HTTP(S) is enabled by default. The same goes for Outlook clients connecting to mailboxes in Exchange Online.
By moving away from RPC connections to Exchange and removing the session affinity requirement, the load balancing aspects of deploying an Exchange Server solution on-premises have become much simpler, seen from the Exchange Server product perspective. However, if you choose to load balance Exchange using pure DNS round-robin (which, by the way, is supported), you will most likely get a big operational headache, as there are several drawbacks to this approach (the biggest being the lack of health checks/probes for any of the protocols).
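To make that drawback concrete, here is a minimal Python sketch of what a load balancer adds over plain DNS round-robin: each candidate server is probed before traffic is sent to it, and unhealthy nodes are skipped. The server names and the probe are illustrative placeholders; a real load balancer would typically probe a URL such as the per-protocol healthcheck.htm page that Exchange 2013 and later expose.

```python
class HealthCheckedPool:
    """Round-robin distribution that skips servers failing a health probe --
    the piece plain DNS round-robin cannot provide."""

    def __init__(self, servers, is_healthy):
        self.servers = list(servers)
        self.is_healthy = is_healthy  # callable: server name -> bool
        self._next = 0

    def pick(self):
        """Return the next healthy server, or None if all are down."""
        for _ in range(len(self.servers)):
            server = self.servers[self._next]
            self._next = (self._next + 1) % len(self.servers)
            if self.is_healthy(server):
                return server
        return None
```

With DNS round-robin, a client can be handed the address of a dead server and must time out before retrying; with the probe in place, a failed server simply stops receiving new connections.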
As an Exchange consultant, I have dealt with many different load balancing solutions over the years — everything from load balancing modules enabled on Cisco network devices to Citrix NetScaler, KEMP LoadMaster and BIG-IP from F5 Networks. I have dealt with both hardware-based load balancers and virtual appliances running on VMware, Hyper-V and even Azure IaaS. I have always tried to steer customers away from load balancing solutions that, as a minimum, aren’t listed on this Microsoft site, which describes which server load balancer partners have completed solution testing with Exchange Server 2010 (yes, Exchange 2010, since for some reason it hasn’t been updated since then) and contains pointers to accompanying deployment white papers hosted by the partners.
Personally, I have found that both KEMP LoadMaster and BIG-IP from F5 Networks have good Exchange 2016 deployment guides. In addition, I like the fact that both also provide Exchange 2016 templates that make the virtual service configuration a breeze. KEMP LoadMaster has a major advantage when it comes to the price tag, though.
This concludes this article.