Exchange 2000 Webstore Strategies

First, let me say that you should embrace Exchange 2000 applications <grin>. Translation: if you provide a basic service level agreement and think of yourself as an application hosting service, you will be able to build a better system and infrastructure for Exchange 2000 applications.

Wise Exchange professionals have always encouraged companies to dedicate servers to Exchange applications. Sure, you can run an Exchange application and mailboxes on the same server, but what happens when the Event Service hangs (that never really happens anyway <grin again>), or you need to stop the Exchange services in order to install a new custom agent you have written? The point is: you need to be able to take an application server offline without affecting anyone’s ability to check mail. Also, many larger shops do not run the Public store on all servers.

Exchange 2000 applications are much more reliable than those running on Exchange 5.5. For one thing, there is no single Event Service that processes scripts. Not only that, we no longer have to break the rules in order to get our collaborative applications to work (anybody use the Exchange 5.5 routing objects?). Because of this added stability, a great many companies are now looking to centralize their Exchange 2000 application servers. It makes sense, really; the application data does not always need to be pushed out to the fringes like a mailbox. Also, centralization allows you to formally support the equipment and the data in the folder trees.


Figure 1: Data Center Distribution Scenario

In the scenario shown in Figure 1, we could place a single dedicated application server in each data center. This server could take advantage of the Storage Area Networks (SANs), Uninterruptible Power Supplies (UPSs), and spare equipment already available in the data centers, as well as localized Redundant Array of Independent Disks (RAID) arrays. The Service Level Agreement (SLA) and replication requirements dictate where the application actually resides, but we can’t decide on that yet. Our main goal is to provide a base-level application server near each large user population.

Many companies are comfortable with Windows/Exchange 2000 Active/Active server clustering. This and other technologies could be tested in order to move to the next step of redundancy and growth. A natural progression would be to add a second server to each location and configure the new application server as an Active/Active cluster with the first. So what really determines the number of servers or databases you need?

SLA Breakdown

Three factors determine how an application is configured on the server and whether a new server is required: security, risk and availability. The worksheet in Figure 2 provides a good start in identifying each variable.

Figure 2: Application worksheet

Connectivity Requirements (This determines if a MAPI store is required)

  • Does the proposed application require:
    • Outlook 97 or Outlook 98 Access?
    • Offline Access?

WAN Impact (This will determine whether replication traffic and remote traffic must be analyzed)

  • Are the users (mostly) centralized in a single data center? Specify which one.
  • Are the users (somewhat) equally divided among the data centers (global access)?
  • What is the projected daily traffic for the application?
  • What is the expected population for the first six months?
    • Less than 500 users
    • 500-1,000 users
    • 1,000-10,000 users
    • More than 10,000 users

Application Requirements

  • Does this application require:
    • Custom sinks or agents?
    • Integration with Chat?
    • Integration with Conferencing? (Exchange Conferencing Server)
    • Integration with other applications (databases or other systems)?
    • Additional security above and beyond Secure ID and Kerberos?
      • Extreme physical security
        • Cannot be located in a data center
        • Cannot be on the same machine as other databases
        • Administered by the department
        • Cannot be accessed by internal systems
      • Specialized departmental management tools or access
      • Additional encryption requirements
      • Access to an extranet or other business networks
    • FrontPage 2000 extensions?
    • Indexing?
    • Persistent searches?

Schema Requirements

  • Does this application require:
    • Active Directory Schema update or expansion?
    • Web store Schema sharing?
    • Extreme protection from schema inheritance?

Recoverability

  • What is the required uptime for the application? (This affects the price of the application.)
    • A. 99.999 percent uptime
    • B. High availability with a 4-hour recovery window
    • C. High availability with an 8-hour recovery window
    • D. High availability with no guarantees

Management and Monitoring (NetIQ and Provisioning server may offer some relief)

  • Does this application have any specific monitoring requirements?

Result Classification

OK, so we have quite a few questions and some answers. I expect you may already know what to do with these answers, but just to be sure, let’s take a few minutes to discuss some of the items.

  • MAPI. Obviously, MAPI access requires the default Public Folder tree. The default Top Level Hierarchy (TLH) in Exchange 2000 allows MAPI access; any additional TLH cannot be accessed through MAPI. Now we move on to offline use, which is a particularly sore subject for many. Until further notice, MAPI will continue to provide the offline mechanism for Outlook; the Local Web Storage System has been postponed. If you require offline access to items in a Public Folder, you will either need to write a custom tool or use the MAPI store.
  • WAN Impact. This question tries to determine what kind of replication may be required in order to locate data close to the users. It also sets the stage for a more thorough WAN analysis using a protocol analyzer, such as the SMS Network Monitor tools or other third-party applications. Although I did not mention it in the questionnaire, you might also want to track server load for the application in order to monitor the web services and the Exchange store.
  • Custom Sinks. This question will probably land me in some hot water, but I must follow my conscience: custom event sinks can potentially bring down a server. With Exchange 2000, you are no longer tied to the Event Service for your processing. This is a good thing because it allows you to create new event “services” tailored to a specific need, as is the case for the Workflow sink. It is a bad thing because a poorly written sink, or one that executes constantly or uncontrollably (protocol sinks, for example), can “steal” all of the processor time and cripple your server. I am not discouraging application development on Exchange 2000, but you need to account for these risks in your testing environments and in the way you support your SLAs (see the throttling sketch following this list).
  • Connections to other Databases or Legacy Systems. Consider these processes the same as sinks. If you write a custom application to keep your data in sync with an AS/400 database or a phone system, your code could potentially take down the system, especially if it executes at high frequency or fires automatically on changes. Even well-written code, run too often, can overload a system. Again, consider this when determining the hardware required to support the applications. If you have two separate applications that require high availability and each runs very frequent, questionable sinks, you should consider separate servers; you don’t want one application to take down the other.
  • FrontPage Extensions. FrontPage 2000 extensions are required if you want to use Visual InterDev to build Exchange 2000 applications. A new version of the extensions is in the works that actually acknowledges the Exchange Web Store, but the current version requires quite a bit of administrative overhead in that it is page (application)-specific and not TLH-specific. I can offer two ideas on this subject:
    • Create a specific TLH for Interdev projects and build your applications within this page (application).
    • Enable FrontPage extensions on an application-by-application basis and ignore the additional directories and files that are duplicated on the server.
  • Persistent Searches. By design, persistent searches can cut across security boundaries. A search created under one security context could provide unauthorized access to another person using a different security ID. Moreover, a search down the wrong folder tree could provide unauthorized access to, or incorrect results from, another application. In instances where persistent searches are required, consider a dedicated TLH for the application.
  • Schema. Here is where things get very interesting. One benefit of XML and Web Storage System programming is the ability to share schema definitions, methods, and other code. First, schemas are TLH-specific: if you create a schema folder in a TLH, applications within that folder structure can leverage the schema, but you cannot jump folder hierarchies for schema references. One fine day I will write an entire article on schema strategies, but for now: the good thing about sharing is the time saved by reusing existing code; the bad thing about sharing a schema among several applications is that a change to the schema could adversely affect the others. In supporting SLAs, remember that sharing a schema increases the risk to every application that uses it.
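
To make the runaway-sink and over-eager-synchronization risk concrete, here is a minimal sketch of the kind of throttling you would want wrapped around any high-frequency sink or sync process. It is written in Python purely for illustration (real Exchange 2000 sinks are COM components registered against store events), and every class name, method name, and limit in it is my own assumption rather than part of any Exchange API.

```python
import time

class ThrottledSink:
    """Illustrative only: wraps a per-item handler so a burst of store events
    cannot consume unbounded processor time. The names and limits here are
    hypothetical and are not part of any Exchange 2000 API."""

    def __init__(self, handler, max_items_per_minute=300, pause_seconds=5):
        self.handler = handler
        self.max_items_per_minute = max_items_per_minute
        self.pause_seconds = pause_seconds
        self._window_start = time.monotonic()
        self._processed = 0

    def on_item_saved(self, item):
        # Reset the counting window once a minute.
        now = time.monotonic()
        if now - self._window_start >= 60:
            self._window_start = now
            self._processed = 0

        # If the sink is firing faster than its agreed budget, back off
        # instead of stealing every available processor cycle.
        if self._processed >= self.max_items_per_minute:
            time.sleep(self.pause_seconds)
            self._window_start = time.monotonic()
            self._processed = 0

        self._processed += 1
        self.handler(item)


# Hypothetical usage: keep an AS/400 phone list in sync without letting a
# bulk import of 50,000 items flatten the application server.
def push_to_as400(item):
    pass  # placeholder for the real synchronization work

sink = ThrottledSink(push_to_as400, max_items_per_minute=300)
```

The exact mechanism matters less than the principle: the SLA, not the code, should decide how much of the server an application is allowed to consume.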

Application Categorization

Your results from the SLA worksheet determine how you will categorize the application. Figure 3 shows a sample categorization.


Figure 3: Sample Application Categorization
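
To show how the worksheet answers might roll up into one of those classes, here is a rough sketch. Every field name, threshold, and rule in it is my own illustrative assumption; your worksheet and your SLAs will drive the real mapping.

```python
# A minimal sketch of rolling SLA worksheet answers up into a class.
# Every field name and rule below is an illustrative assumption, not a
# figure from the worksheet or from Microsoft.

def classify_application(answers):
    """answers: a dict of worksheet responses, for example
    {"uptime": "4h", "custom_sinks": True, "extreme_security": False}"""

    # Extreme security needs or global/timer/protocol sinks call for a
    # dedicated, fully isolated server (Class A).
    if answers.get("extreme_security") or answers.get("protocol_or_timer_sinks"):
        return "A"

    # A 4-hour recovery window, custom sinks, or content indexing pushes
    # the application into its own web store on a shared server (Class B).
    if (answers.get("uptime") == "4h"
            or answers.get("custom_sinks")
            or answers.get("content_indexing")):
        return "B"

    # Applications that simply need a supported, quota-managed store
    # share databases (Class C).
    if answers.get("uptime") == "8h" or answers.get("users", 0) > 500:
        return "C"

    # Everything else stays on the commodity public folder tree (Class D).
    return "D"

print(classify_application({"uptime": "4h", "custom_sinks": True}))  # -> B
```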

The diagram in Figure 4 allows us to categorize the application from the standpoint of availability or level of service.

You may not realize this, but your current Exchange environment is already offering application hosting options to the user community. By default, access to the existing Public Folder structure is already available; if these folders are on Exchange 2000, you are an ASP! So what service level are you currently providing to your user community on these folders? Several times in our sample application categorization (Figure 3), we used classification D, no guarantees. If this is true in your case, ask yourself whether your user community knows it. If not, you might want to let them know.

It is common for departments to take this commodity folder tree and build a dependency on the data and access to the information. Based on a basic ASP model, you should be prepared to offer them an upgrade to a higher access level. Consider the information in Figure 4.


Figure 4: Categorizing by availability or level of service

Commodity access: basic Public Folder access, 50 MB, SLA Class D

Upgrade to Exchange 2000: additional WebStore access, 250 MB, SLA Class C

Upgrade to Exchange 2000: advanced WebStore access, 7 GB, SLA Class B

Upgrade to Exchange 2000: dedicated WebStore, 7 GB+, SLA Class A

Class C applications share databases on the web store. This is the first level at which you charge the department. Why? Because additional drive space and an SLA cost money to support. The financial aspect simply reflects the difference between supported and unsupported service, as well as the burden of managing additional data.

The Class B application allows more advanced web store utilities and is an isolated web store in a shared storage group. Class B applications that use event sinks or content indexing introduce more risk to the other applications and should be better isolated. Moreover, Class B applications can take advantage of deep traversal because they run on a separate Top Level Hierarchy (TLH).

The Class A application is a dedicated server that can be configured as the application requires. There are no preset storage group or database templates. Consider this a totally isolated application, whether because of extreme risk, extremely high processing demands, or the need for global, timer, or protocol sinks. Extreme security requirements may also dictate the need for a separate server.

Database Placement

In our centralized database example, the initial server in each data center will support both Class C and Class B applications. By restricting the database and store sizes as depicted in Figure 5, SLA recovery specifications can be guaranteed, as long as you practice due diligence with spare equipment and with server backup and restore procedures. If you want to allow 100 GB databases, just make sure you can meet your SLAs.

Clearly, the most elegant recovery solution is a “snapshot” utility within a SAN box. Several manufacturers have products that “stream” data within the SAN to provide nearly instant recovery. Unless you have one of these solutions, I doubt you could recover 100 GB in 4 hours. You need to plan for server failure and database corruption; neither happens that often, but both do happen.


Figure 5: Sample Database Placement

The scenario in Figure 5 plans for a couple of fast DLT drives in library systems to handle the backup and recovery of the servers in the data center. We know we can back up and restore a 7 GB database within eight hours, as well as replay any transaction logs we pick up along the way. This becomes our agreed-upon “comfort zone.”
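
If you want to sanity-check your own comfort zone, the arithmetic is straightforward. The sketch below assumes a sustained restore rate of 5 MB/s and a one-hour allowance for replaying transaction logs; both figures are illustrative, so substitute the numbers from your own restore tests.

```python
# Back-of-the-envelope check of a restore window. The throughput and the
# log-replay allowance are assumptions for illustration; measure your own
# library's sustained restore rate before committing to an SLA.

def restore_hours(database_gb, throughput_mb_per_sec=5.0, log_replay_hours=1.0):
    """Hours needed to restore a database of the given size, plus a fixed
    allowance for replaying transaction logs."""
    seconds = (database_gb * 1024) / throughput_mb_per_sec
    return seconds / 3600 + log_replay_hours

print(round(restore_hours(7), 1))    # ~1.4 hours: comfortably inside an 8-hour window
print(round(restore_hours(100), 1))  # ~6.7 hours: blows a 4-hour window without a SAN snapshot
```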

We have also decided that four databases per storage group strike a better balance than the five-database maximum. Once you fill four storage groups, or the need for a Class A application arises, more server equipment must be allocated.

Another important thing to remember is the number of disk arrays you will need to support your applications. Each storage group should have a dedicated drive (or array) for its databases and a separate one for its transaction logs.
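
As a quick illustration of how fast the spindle count grows under that rule, here is a small sketch; the layout and names are mine, not a prescribed configuration.

```python
# Illustrative only: enumerate the dedicated arrays implied by keeping each
# storage group's databases and transaction logs on separate spindles.

def plan_arrays(storage_groups):
    """Return one (storage group, purpose) entry per dedicated array."""
    layout = []
    for sg in range(1, storage_groups + 1):
        layout.append((f"Storage Group {sg}", "database array"))
        layout.append((f"Storage Group {sg}", "transaction log array"))
    return layout

for group, purpose in plan_arrays(4):
    print(f"{group}: dedicated {purpose}")
# Four storage groups already imply eight dedicated arrays, before you
# count the operating system volume.
```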

Summary

Ultimately, it is up to you to determine where the application fits into the overall scheme of things. The two main points I want to emphasize are:

  • Embracing and supporting applications will help ensure their success.
  • Choose your database and storage group configurations based on SLA requirements, not on department, geography, or a magic eight-ball.
