Planning, Deploying, and Testing an Exchange 2010 Site-Resilient Solution sized for a Medium Organization (Part 13)

If you would like to read the other parts of this article series please go to:


In part 12 of this multi-part article, we simulated a datacenter failure and went through the steps necessary to activate the Exchange services in the failover datacenter. We also looked at how a datacenter switchover affects our Exchange clients.

In part 13, the last article in this multi-part series (yes, you read that right), I'll take you through the steps necessary to switch Exchange services back to the primary datacenter.

Restoring Exchange Services in the Primary Datacenter

Currently, Exchange services have been restored in the failover datacenter, and both the CIO and the end users are happy. However, unless the primary datacenter has been damaged to such a degree that it cannot be restored, most enterprises want to switch Exchange services back to the primary datacenter relatively soon after it has been fixed.

In the following, I'll take you through the steps necessary to restore Exchange services in the primary datacenter. Let's begin by starting the virtual machines.

Figure 1: Starting the virtual machines

Back in part 12 of this multi-part article, we stopped the Mailbox servers using the Stop-DatabaseAvailabilityGroup cmdlet and then ran the Restore-DatabaseAvailabilityGroup cmdlet, which evicted servers “EX01” and “EX03” from the DAG. To bring the Mailbox servers back into a started state and incorporate them into the DAG, we'll use the Start-DatabaseAvailabilityGroup cmdlet. More specifically, the following command:

Start-DatabaseAvailabilityGroup DAG01 -ActiveDirectorySite Datacenter-1

Figure 2: Running the Start-DatabaseAvailabilityGroup cmdlet in order to put the Mailbox servers in a started state

If DAC mode isn't enabled for the DAG, you must instead use the Add-DatabaseAvailabilityGroupServer cmdlet to add the Mailbox servers back to the DAG.
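If that were the case here, it would look something along these lines for the two evicted servers:

```powershell
# Only needed when DAC mode is disabled for the DAG
Add-DatabaseAvailabilityGroupServer -Identity DAG01 -MailboxServer EX01
Add-DatabaseAvailabilityGroupServer -Identity DAG01 -MailboxServer EX03
```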

After the Start-DatabaseAvailabilityGroup cmdlet has been run, you can verify whether the Mailbox servers (DAG member servers) have been put into a started state using the following command:

Get-DatabaseAvailabilityGroup | fl Name,StartedMailboxServers,StoppedMailboxServers

Figure 3: Listing which Mailbox servers are in a started state

To make sure the proper quorum model is used for the DAG (because we have an equal number of DAG member servers, it should be Node and File Share Majority), we will run the following command:

Set-DatabaseAvailabilityGroup DAG01

Figure 4: Updating the witness share settings and quorum model

The DAG still points to FS02 (which is the alternate witness server) as the witness server. To change this, we will use the following command:

Set-DatabaseAvailabilityGroup DAG01 -WitnessServer FS01 -WitnessDirectory "C:\DAG01"

Figure 5: Setting FS01 as the Witness Server

To verify the changes, use this command:

Get-DatabaseAvailabilityGroup DAG01 | fl Name,WitnessServer,AlternateWitnessServer

Figure 6: Verifying the witness server property points to the witness server in the primary datacenter

Now let’s move on and have the cluster core resources moved back to the primary datacenter. We can do this using the following command:

Cluster group "Cluster Group" /MoveTo:EX01

Figure 7: Moving the Cluster Core Resources to EX01 in the Primary Datacenter

Although it isn’t a required step to move the cluster core resources, personally I like to have them online on a server in the primary datacenter since the server that owns the cluster core resources from the DAG perspective is also the primary active manager (PAM).

Figure 8: Cluster Core Resources online in the primary datacenter

To verify that EX01 is now the PAM, you can use the following command:

Get-DatabaseAvailabilityGroup -Identity DAG01 -Status | fl Name,PrimaryActiveManager

Figure 9: DAG Member server holding the PAM role

You can also check this by looking at which server is the current host server in the Failover Cluster console.

Figure 10: Current Host Server in the Failover Cluster console

Or with Cluster.exe:

Cluster /cluster:DAG01 /quorum

Figure 11: Verifying the witness server in use with Cluster.exe

Now “EX02” and “EX04” in the failover datacenter will start to ship log files to “EX01” and “EX03”. Depending on factors such as the length of the outage in the primary datacenter, Exchange may fail to bring “EX01” and “EX03” back in sync. If this is the case, you need to perform a manual reseed of the mailbox databases.
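A manual reseed is done with the Suspend-MailboxDatabaseCopy and Update-MailboxDatabaseCopy cmdlets. A sketch, assuming a database named "MDB01" (the name is just an example, not from this series):

```powershell
# Suspend the outdated copy on EX01, then reseed it from the active copy
Suspend-MailboxDatabaseCopy -Identity "MDB01\EX01" -Confirm:$false
# -DeleteExistingFiles removes the old database and log files before seeding
Update-MailboxDatabaseCopy -Identity "MDB01\EX01" -DeleteExistingFiles
```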

We have now reached the stage where the database copies on the primary datacenter should be in a healthy state (Figure 12).

Figure 12: Database Copies in the Primary Datacenter in a healthy state

If one or more of the database copies on the servers in the primary datacenter are not in a healthy state, these must be updated before you can activate database copies on these servers.
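Before activating anything, you can quickly check the state of all database copies on a given server with the Get-MailboxDatabaseCopyStatus cmdlet, for example:

```powershell
# Healthy copies report a Status of Healthy and low queue lengths
Get-MailboxDatabaseCopyStatus -Server EX01 | ft Name,Status,CopyQueueLength,ReplayQueueLength
```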

Failing Exchange Services Back to the Primary Datacenter

We have now prepared for the failback to the primary datacenter. The next steps will result in an outage so they should be performed during a scheduled service window.

The first step is to dismount all the databases, so that you can control when the end users are able to access their mailboxes again. To do so, use the following command:

Get-MailboxDatabase | Dismount-Database

Figure 13: Dismounting all Mailbox Databases

Now verify all databases are in a dismounted state. You can of course do this using the Exchange Management Console.

Figure 14: Databases Dismounted in EMC

Or if you prefer using PowerShell, use the following command:

Get-MailboxDatabase -Status | fl Name,Mounted

Figure 15: Databases Dismounted in EMS

Now let’s get the load balancer in the primary site up and running again (remember we simulated a failure of this load balancer by disabling all real servers) (Figure 16).

Figure 16: Enabling the real servers on the load balancer

With the load balancer up and running, we can update internal as well as external DNS so that the Exchange-specific FQDNs once again point to the load balancer in the primary datacenter.

As you probably recall, we changed the following internal records:

  • (endpoint used by Exchange clients and services)
  • (used for inbound SMTP)
  • (FQDN configured on the CAS array object in primary datacenter)

In this example, we must set them to point to the load balancer in the primary datacenter instead of the one in the failover datacenter. To update the internal DNS records in Active Directory, launch the DNS Manager console and update each of the above listed records accordingly.

Figure 17: Updating internal DNS records
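If you prefer scripting the change over clicking through the DNS Manager console, the built-in Dnscmd.exe tool can update the records. A sketch, in which the DNS server name (DC01), zone (contoso.com), record name (mail), and IP address are all hypothetical placeholders:

```powershell
# Remove the old A record (pointing at the failover datacenter), then re-add it
Dnscmd DC01 /RecordDelete contoso.com mail A /f
Dnscmd DC01 /RecordAdd contoso.com mail A 10.10.1.20
```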

The following external DNS records are currently pointing to the firewall in front of the failover datacenter:

  • (endpoint used by Exchange clients and services)
  • (used for automatic Outlook 2007+ and Exchange ActiveSync device profile creation plus for Outlook 2007+ features that rely on the availability service)
  • (used for inbound SMTP)

Since enterprises use different external DNS providers, I won't go through the steps on how this is accomplished.

As I mentioned in part 12 of this multi-part article, when it comes to the external DNS records there will be a delay before other DNS providers pick up the change. The same is usually true for internal DNS. How long the delay is depends on the Active Directory topology used within the organization. For instance, if end user machines are located in a different Active Directory site than the one in which the Exchange 2010 servers are located, it can take up to 180 minutes, as this is the default replication interval between Active Directory sites.

As I also mentioned in part 12, you should factor in the DNS client cache delays.
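On a client machine you can clear the local resolver cache manually instead of waiting for the cached records to expire:

```powershell
# Flush the DNS client cache, then inspect what is currently cached
ipconfig /flushdns
ipconfig /displaydns
```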

While waiting for the DNS updates to propagate, we can use some of the time to activate all mailbox databases in the primary datacenter.

Remember that because we dismounted the databases, they will not be mounted automatically after activation.

To activate the databases in the primary datacenter, let's use the RedistributeActiveDatabases.ps1 script I showed you back in parts 7 and 9 of this multi-part article. This will make sure the active databases are redistributed across servers “EX01” and “EX03”.
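As a reminder, the script ships in the Exchange Scripts folder; in my lab I invoke it along these lines (parameter availability may vary with the Exchange 2010 service pack level):

```powershell
# $exscripts points to the Exchange Scripts folder in the Exchange Management Shell
cd $exscripts
.\RedistributeActiveDatabases.ps1 -DagName DAG01 -BalanceDbsByActivationPreference -Confirm:$false
```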

Figure 18: Redistributing Active Mailbox Databases across EX01 and EX03

When the DNS updates have propagated, we can mount the mailbox databases using the following command:

Get-MailboxDatabase | Mount-Database

Figure 19: Mounting Mailbox Databases

And with this, we have performed all the steps required for a failback to the primary datacenter, and you can now verify that clients connect as expected.

With this, the multi-part article series ends. I hope you learned something along the way.
