A Practical Look at Migrating From Exchange 2003 to Exchange 2007 (Part 3)

Introduction

 

This is the third part of an article series covering a project to transition from an existing Exchange 2003 environment to a new Exchange 2007 environment. If you have read parts one and two already, you will know that so far I have covered the installation of the first Exchange 2007 servers, combined Hub Transport and Client Access Servers, into the Exchange 2003 environment. I have also covered the initial preparation steps of the Clustered Continuous Replication (CCR) environment. Here in part three, I will continue with the installation of this CCR environment.

 

Cluster Configuration

 

Now that the cluster had been created I was able to configure it for production use. The first element to configure was the cluster network usage. To do this, I drilled down the Cluster Administrator hierarchy to the Networks object found under the Cluster Configuration object. By bringing up the properties of the Private network, I ensured that this network was set to Internal cluster communications only (private network) as you can see from Figure 6. The public network was configured as All communications (mixed network).

 


Figure 6: Configuring the Private Network

 

Next it was important to ensure that the networks were in the correct order within Cluster Administrator. To do this I right-clicked the cluster name, CLUSTER1, right at the top of the hierarchy in Cluster Administrator and chose Properties from the context menu. This presented me with the cluster properties window and from there I navigated to the Network Priority tab where I ensured that the private network was at the top of the list as you can see in Figure 7.

 


Figure 7: Configuring the Cluster Network Priority

 

Microsoft also recommends that you configure various settings that control the tolerance towards missed cluster heartbeats. To do this I used the cluster.exe command-line interface by running the following two commands:

 

cluster.exe CLUSTER1 /priv HeartBeatLostInterfaceTicks=10:DWORD

 

cluster.exe CLUSTER1 /priv HeartBeatLostNodeTicks=10:DWORD

 

After changing these settings, the cluster service was stopped and restarted on each node to ensure the changes took effect. Of course, it was important to move the cluster resources between nodes before stopping the service to ensure that the node being stopped and restarted was the passive node at the time.
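Putting that restart sequence together, it looked roughly like the sketch below. The group name "Cluster Group" and the direction of the move are assumptions based on the node names used in this series, so adjust for your own environment:

```powershell
# Move everything off the node about to be restarted (NODE1 here), so that
# it is the passive node when its cluster service is cycled
cluster.exe CLUSTER1 group "Cluster Group" /moveto:NODE2

# On the now-passive node, restart the cluster service so the new
# heartbeat values are read
net stop clussvc
net start clussvc

# Optionally list the cluster's private properties to confirm the changes
cluster.exe CLUSTER1 /priv
```

Repeat the move and restart for the other node so that both nodes pick up the new settings.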

 

File Share Witness Configuration

 

At this stage I was not finished with the configuration of the Hub Transport servers, as these servers were to be the location of the File Share Witness. The decision to use the Hub Transport servers for this purpose was in line with Microsoft's recommendations. You are, of course, free to use any server capable of hosting a file share, but since the Hub Transport servers fall under the control of the Exchange administrators in most organizations, they make the best choice.

 

In normal circumstances there is only a single file share witness required and therefore I chose the server HUBCAS1 for this role. However, HUBCAS2 was also pre-provisioned with a file share witness to cater for the potential loss of HUBCAS1. Here’s the process used to create the file share witness on HUBCAS1:

 

 

  1. On the root of drive D: I created a folder called MNS_FSW_DIR_EX2007. This folder name followed the Microsoft recommendation of using MNS_FSW_DIR_ followed by the CMS name. In this case MNS stands for Majority Node Set, FSW stands for File Share Witness and the DIR shows you that this is a directory or folder name. You can create this folder anywhere you like but as I’ve indicated I chose the root of D: for this particular installation. In the future, I think that I will be creating these folders somewhere under the main Exchange installation folder instead, so that they are part of the Exchange installation structure.
  2. Next the folder that was created in step 1 was shared using a shared name of MNS_FSW_EX2007. This share name format is the Microsoft recommended format for the share name. Also, the cluster service account was given full access to this newly created share. I did all this from a single command:
    net share mns_fsw_ex2007=d:\mns_fsw_dir_ex2007 /grant:neilhobson\excluster,full
  3. Additional share permissions were then given to the built-in Administrators and cluster service account via the following command:
    cacls d:\mns_fsw_dir_ex2007 /g builtin\administrators:f neilhobson\excluster:f
  4. Finally the cluster Majority Node Set resource was configured by running the cluster.exe command-line utility as follows:
    cluster.exe CLUSTER1 res "Majority Node Set" /priv MNSFileShare=\\HUBCAS1\MNS_FSW_EX2007

 

In step 4, notice that the UNC path includes the HUBCAS1 server name. Some time ago Microsoft changed its recommendations on recovery around loss of the server containing the file share witness. The old method involved the use of DNS CNAME records whilst the newer method uses the cluster ‘forcequorum’ method. The reasoning behind this is detailed on the Exchange team blog here and I recommend that you read this article.
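As a sketch of that newer recovery method: if the server hosting the file share witness is lost and the cluster consequently loses quorum, the cluster service on the surviving node can be forced to form quorum on its own. The exact syntax below reflects the Windows Server 2003 Majority Node Set behaviour, where the node list follows the switch; verify it against the Exchange team blog article referenced above before relying on it in production:

```powershell
# On the surviving cluster node (NODE2 in this example), force the cluster
# service to start and form quorum even though the witness is unreachable
net start clussvc /forcequorum:NODE2
```

Once the failed witness server is recovered or replaced, the cluster service should be restarted normally so that the forcequorum state is cleared.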

 

CMS Installation

 

Now that the cluster was installed and configured correctly, along with the file share witness feature, the CMS itself was then created by installing the Active Clustered Mailbox Role of Exchange 2007 SP1 onto the cluster node called NODE1. This was achieved by running the Exchange 2007 setup.exe program as usual and following the various installation wizard screens. It was important to ensure that a custom installation of Exchange 2007 was performed as the typical installation does not allow for the installation of a CMS. At the Server Role screen of the installation wizard, the Active Clustered Mailbox Role option was selected as you can see from Figure 8.

 


Figure 8: Installing The Active Clustered Mailbox Role

 

At the next screen, the Cluster Settings screen, the Cluster type option was set to Cluster Continuous Replication. The CMS name was entered as EX2007 which, if you remember, is the name of the Exchange server that Outlook clients will connect to. A suitable IP address was chosen for the CMS, not forgetting that this must be a different IP address from the cluster IP address that was chosen earlier.
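For reference, the same installation can be scripted rather than driven through the GUI wizard. The sketch below follows the documented unattended CCR install pattern for Exchange 2007 SP1 on Windows Server 2003; the IP address is a placeholder and should be replaced with the CMS address chosen for your environment:

```powershell
# Step 1: on the active node, install the clustered mailbox role binaries
Setup.com /mode:Install /roles:Mailbox

# Step 2: create the clustered mailbox server itself, using the CMS name
# and a CMS IP address distinct from the cluster IP address
Setup.com /NewCms /CmsName:EX2007 /CmsIpAddress:192.168.0.50
```

Running it this way makes the active/passive distinction explicit: /NewCms is only run once, on the node that will initially be active.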

 

CCR and Public Folders

 

I would like to point out here that during the installation of the CMS I did elect to create a public folder database on the CCR environment. Although a CCR environment can host public folders, there are some caveats that you need to understand. These caveats are documented in the article Planning for Cluster Continuous Replication under the section titled Cluster Continuous Replication and Public Folder Databases and I recommend that you read this section carefully. The reason for the approach taken within this design for public folder databases was simply because the requirement was for the public folder data to be highly available in the same way as the mailbox data.

 

However, public folder databases have their own data replication mechanism and in some ways this replication model and the replication model within CCR are incompatible. Since I had installed the CCR environment into an existing Exchange 2003 environment, there now existed two public folder databases which meant that public folder replication was enabled in addition to the CCR replication model. Microsoft clearly states that in such a situation, if there is an unscheduled outage in the CCR environment, the public folder database will not mount on the new active node until it can contact the original active node. Note the reference to an unscheduled outage. In other words, during normal operations there are no problems.

 

With this in mind, deciding to implement a public folder database within a CCR environment that is coexisting with other servers that contain a public folder database becomes a balancing act between the risk and convenience of such a configuration. If you replicate and re-home your public folder data onto Exchange 2007 and remove the public folder databases from Exchange 2003, the problematic configuration disappears anyway. It is an interesting design issue which requires due thought. If the risk proves too great for you, implement a dedicated public folder server running Exchange 2007.
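If you do later decide to replicate the data across and remove the Exchange 2003 public folder databases, Exchange 2007 ships helper scripts for exactly this in its Scripts folder. A sketch, assuming the default installation path and an Exchange 2003 server name of EX2003 (a hypothetical name, not one from this series):

```powershell
# From the Exchange Management Shell, change to the Scripts folder that
# ships with Exchange 2007
cd "C:\Program Files\Microsoft\Exchange Server\Scripts"

# Replace every public folder replica on the Exchange 2003 server with a
# replica on the CCR clustered mailbox server EX2007
.\MoveAllReplicas.ps1 -Server EX2003 -NewServer EX2007
```

Public folder replication is asynchronous, so allow time for the replicas to fully move before removing the Exchange 2003 public folder databases.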

 

CMS Installation Completion

 

Once the CMS had been installed, the node was rebooted in accordance with the directive issued by the Exchange 2007 setup program. Once the active node had been rebooted and was fully started up, the installation of the passive node, in this case NODE2, was commenced. This process is much easier than the installation of the active node since the only real decision to be made is the choice of the Passive Clustered Mailbox Role which is the other option check box that you can see immediately below the Active Clustered Mailbox Role in Figure 8. Once again, the setup program advised that this server should be restarted before placing it into production so that’s what I did.

 

After NODE2 had fully restarted, I set about applying the same Update Rollup to both cluster nodes, mainly because I had neglected to install it at the time of the actual server installation! That is not necessarily a bad thing, as new update rollups will be released in the future, so understanding how to apply them to a production environment is going to be a requirement. The process is fairly simple and here is what I did. First, I made sure that all resources were moved to the cluster node that I was not updating, and then applied the update to the passive node. Once this had completed, I moved the resources onto the node I had just updated and applied the update to the node that was now passive. Do not forget to use the Move-ClusteredMailboxServer cmdlet to move the CMS between cluster nodes. In my case, a typical command was:

 

Move-ClusteredMailboxServer EX2007 -TargetMachine NODE2 -MoveReason "Apply Update Rollup 3"
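The whole rollup pass can be summarised as the sequence below. Checking the storage group copy status before each move is my own addition rather than a step described above; it is the standard Exchange 2007 way to confirm the CCR copy is healthy before failing over:

```powershell
# Confirm the passive copy is healthy before moving the CMS
Get-StorageGroupCopyStatus -Server EX2007

# Move the CMS onto NODE1 so NODE2 becomes passive, then install the
# rollup on NODE2 and reboot if prompted
Move-ClusteredMailboxServer EX2007 -TargetMachine NODE1 -MoveReason "Apply Update Rollup 3"

# Swap back and repeat for the other node: NODE1 is now passive, so
# install the rollup there
Move-ClusteredMailboxServer EX2007 -TargetMachine NODE2 -MoveReason "Apply Update Rollup 3"
```

Always moving the CMS first means the update is only ever applied to the passive node, which keeps the outage to a single scheduled move per node.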

 

Summary

 

That’s it for part three of this article, in which we now have a working CCR environment alongside our combined Hub Transport and Client Access Servers that are coexisting with Exchange 2003. It is important to spend time configuring the cluster correctly before installation of the Exchange 2007 mailbox role onto the cluster nodes. In part four of this article we are going to focus on the installation of the Edge Transport server role.

 
