If you would like to read the other parts in this article series please go to:
Uncovering the new RPC Client Access Service in Exchange 2010 (Part 2)
Uncovering the new RPC Client Access Service in Exchange 2010 (Part 3)
Uncovering the new RPC Client Access Service in Exchange 2010 (Part 4)
Among the architectural changes made in Exchange Server 2010 is the introduction of the new RPC Client Access service, which changes the client access business logic as we know it. This new service moves Outlook MAPI mailbox connections away from the back-end Mailbox servers, and directory access away from the domain controllers/global catalog servers in the data tier, to the Client Access servers in the middle tier.
In this article we will begin with a nostalgic look at how the business logic worked back in the Exchange 2000/2003 days, when we had the concept of front-end and back-end servers. We will then talk about the improvements that were delivered with the introduction of the Client Access server role in Exchange 2007. From there we will concentrate on the new RPC Client Access service included with Exchange 2010, taking a look at how this new service works and how you can set static ports for MAPI connections.
Let’s get started.
In Exchange 2000 and 2003, we had a basic front-end and back-end architecture, where the front-end servers accepted requests from clients and proxied them to the back-end servers for processing. An Exchange 2000/2003 front-end server could proxy RPC over HTTP (now known as Outlook Anywhere), HTTPS (OWA, Entourage etc.), POP, and IMAP clients to the relevant back-end servers. The front-end servers also supported multiple referrals to public folder data on back-end servers.
In Exchange 2000/2003, internal Outlook MAPI clients did not use the front-end server at all; they connected directly to the back-end servers via MAPI over RPC. In fact, because the DSProxy component did not run on the front-end servers, you could not point Outlook MAPI clients to the NetBIOS name or FQDN of a front-end server.
With Exchange 2000/2003, directory access worked like this: the DSProxy component on the back-end servers either proxied directory (NSPI) requests to the domain controllers/global catalog servers via Remote Procedure Calls (RPCs), or handed the client a referral so it could connect directly to the DCs/GCs. Outlook 2000 and earlier went through DSProxy, while newer Outlook clients used the referral and talked to the DCs/GCs directly.
Figure 1: Exchange 2000/2003 Front-end and Back-end architecture
One of the main benefits of Exchange 2000/2003 front-end servers was that they allowed you to configure a single namespace (such as mail.domain.com). With a single namespace, users didn’t have to know the name of the server on which their mailbox was stored. Another benefit was that SSL encryption and decryption were offloaded to the front-end servers, freeing up what was, at the time, expensive processing power on the back-end servers. But in the end, a front-end server was really just a proxy server that did not process or render anything on its own. Instead, it authenticated and forwarded logon requests to the back-end servers, which severely suffered from a 32-bit architecture that, among other things, limited Exchange 2000/2003 servers to a maximum of 4GB of memory.
When Exchange 2007 was released, things improved significantly. The intention with the Exchange 2007 Client Access Server (CAS) role was to optimize the performance of the Mailbox server role. Unlike Exchange 2000/2003 front-end servers, the CAS role is not just a proxy server. For instance, the CAS server holds the business logic for Exchange ActiveSync policies and OWA segmentation. In addition, OWA UI rendering also happens on the CAS server and not on the Mailbox server. In fact, all client connections, except Outlook (MAPI), use the Client Access Servers as the connection endpoint in an Exchange 2007 infrastructure. This offloads a significant amount of the load that hit the back-end Mailbox servers in Exchange 2000/2003.
Figure 2: Exchange 2007 Client Access architecture
Exchange 2010 takes things one step further. With Exchange 2010, MAPI and directory access connections have also moved to the Client Access Server role. This has been done by introducing a new service on the Client Access Server known as the RPC Client Access service.
Figure 3: RPC Client Access Service in the Services MMC snap-in on a Client Access server
That means that MAPI clients no longer connect directly to a Mailbox server when opening a mailbox. Instead, they connect to the RPC Client Access service, which then talks to Active Directory and the Mailbox server. For directory information, Outlook connects to an NSPI endpoint on the Client Access Server, and NSPI in turn talks to Active Directory via the Active Directory driver. The NSPI endpoint replaces the DSProxy component as we know it from Exchange 2007.
Figure 4: Exchange 2010 Client Access architecture
How is this different from Outlook Anywhere (RPC over HTTP) clients connecting to a mailbox in Exchange 2007? Well, although Outlook Anywhere clients connected to the RPC proxy component on the Client Access Server, behind that proxy they still talked MAPI over RPC directly with the Mailbox server and with the NSPI endpoint in Active Directory.
Some of you might wonder what the benefits of the RPC Client Access service are. There are several, actually. First, with MAPI and directory connections moved to the Client Access Server role in the middle tier, Exchange now has a single common path through which all data access occurs. This not only improves consistency when applying business logic to clients, but also provides a much better client experience during switchovers and failovers when you have deployed a highly available solution using the new Database Availability Group (DAG) HA feature, which I will cover in depth in a future article. If the Outlook user notices a disconnection at all, it will last no more than approximately 30 seconds, compared to Exchange 2007, where a disconnection could take several minutes, heck, even up to 30 minutes in a complex AD topology with many AD sites and domain controllers throughout which DNS changes had to replicate.
Lastly, having a single common path for all data access allows for more concurrent connections and mailboxes per Mailbox server. In Exchange 2007 a Mailbox server could handle 64,000 connections; Exchange 2010 increases that number to a 250,000 RPC context handle limit.
So now that we rely even more on the Client Access Servers within an Exchange 2010 infrastructure, clients need to be able to quickly reconnect to another CAS server in case the one they are connected to goes down for planned or unplanned reasons. Say hi to the new Client Access array feature in Exchange 2010. A Client Access array is, as the name implies, an array of CAS servers. More specifically, it is an array consisting of all the CAS servers in the Active Directory site where the array is created. So instead of connecting to the FQDN of an individual CAS server, an Outlook client connects to the FQDN of the CAS array (such as outlook.domain.com). This ensures Outlook clients connecting via MAPI stay connected even during mailbox database failovers and switchovers (aka *-overs).
Here is how things work in regards to CAS arrays. An Exchange 2010 mailbox database has an attribute called RpcClientAccessServer. When creating a new mailbox database in an Active Directory site where a CAS array has not been created, this attribute will be set to the first CAS server installed in the AD site. You can see what this attribute is set to by running the following command:
Get-MailboxDatabase <DB name> | fl RpcClientAccessServer
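If you want to check the value for all databases in the organization at once, a one-liner along these lines will do (a quick sketch; the output formatting is simply my preference):

```powershell
# List the RPC endpoint each mailbox database hands out to Outlook clients
Get-MailboxDatabase | Format-Table Name,RpcClientAccessServer -AutoSize
```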
Figure 5: RPC Client Access Server FQDN specified on a Mailbox database
If a CAS array exists in the AD site when you create a new Mailbox database, this attribute will automatically be set to the FQDN of the CAS array. This is how Outlook clients know to connect to the CAS array, rather than to an individual server, when opening a mailbox stored in that database.
A CAS array is configured the following way. First you create the new CAS array using the following command:
New-ClientAccessArray -Name "name of CAS array" -Fqdn <fqdn of CAS array> -Site <name of AD site>
Figure 6: Creating a new Client Access array
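To make the syntax concrete, here is an example using the sample namespace from this article (the array name and AD site name are placeholders, so substitute your own):

```powershell
# Create a CAS array for the AD site, then verify it was created
New-ClientAccessArray -Name "CASArray01" -Fqdn "outlook.domain.com" -Site "Default-First-Site-Name"
Get-ClientAccessArray | Format-List Name,Fqdn,Site
```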
When the CAS array has been created, you should create an “A record” in your internal DNS matching the FQDN of the array (outlook.domain.com in our example), pointing to the virtual IP address of your internal load balancing solution.
Figure 7: Creating an “A record” for the CAS array in DNS
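If you prefer the command line over the DNS Manager console, the same record can be created with the built-in dnscmd tool (the DNS server name and VIP below are placeholders):

```powershell
# Add an A record "outlook" in the domain.com zone pointing at the load balancer VIP
dnscmd DC01 /RecordAdd domain.com outlook A 192.168.1.50
```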
Note that Windows NLB can't be used on Exchange servers where mailbox DAGs are also being used because WNLB is incompatible with Windows failover clustering. If you're using an Exchange 2010 DAG and you want to use WNLB, you need to have the Client Access server role and the Mailbox server role running on separate servers. In this scenario you should instead use an external hardware-based or virtual load balancer.
If you use WNLB, it is just a matter of creating the WNLB cluster, pointing the DNS record at the WNLB VIP, and making sure that TCP port 135 (the RPC Endpoint Mapper) and the dynamic RPC port range (TCP 1024-65535) are added to the port rules list.
Later in this article series I will show you how to set static ports for MAPI and directory access.
If you use a load balancing solution from a third-party vendor, you must create rules in the LB device that distribute traffic for the respective ports across the CAS servers.
Lastly, if you created mailbox databases on Mailbox servers in the AD site before you created the CAS array, you must change the FQDN specified in the RpcClientAccessServer attribute on these databases. You do this using the following command:
Set-MailboxDatabase <name of DB> -RpcClientAccessServer "outlook.domain.com"
Figure 8: Changing the value of RpcClientAccessServer attribute on any existing Mailbox databases
We should now see outlook.domain.com as the FQDN.
Figure 9: RpcClientAccessServer attribute set to FQDN of CAS array
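If you have many pre-existing databases, you can update them all in one pass instead of one at a time (a sketch; in a multi-site deployment you would first filter down to the databases in the relevant AD site):

```powershell
# Point every existing mailbox database at the CAS array FQDN
Get-MailboxDatabase | Set-MailboxDatabase -RpcClientAccessServer "outlook.domain.com"
```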
If you protect the mailbox databases using a Database Availability Group, and a copy of a database in another AD site becomes the active one, remember that the CAS servers will talk directly to the Mailbox server on which that database is now mounted; this communication happens via RPC, as Client Access servers and Mailbox servers talk RPC to each other. This is an important detail: in the event of a complete site failure, clients will not automatically reconnect to CAS servers in another site; that requires manual intervention. The topic deserves an article of its own and is outside the scope of this one.
That was all I had to share with you in part 1, but you can look forward to part 2 of this multi-part article, which will be published in the near future.