The Art and Science of Sizing Exchange 2003 (Part 2)



If you missed the other parts in this article series please read:


The Disk Subsystem

 

Sizing the disk subsystem is the most complex and critical task, since this component is the most common cause of bottlenecks. In the next few lines I’ll try to show you how to estimate your storage requirements and how to correctly size your back-end server to handle that load.

 

But before starting the calculations, let’s take a look at some of the most common storage technologies available:

 

 

 

 

| Technology | Description |
|---|---|
| DAS – Direct Attached Storage | The most basic level of storage and the most common technology for small/medium servers. Storage devices are local to the host computer, either as internal drives or directly connected to a single server, as with RAID arrays or tape libraries. The disks can be SCSI, Fibre Channel or ATA/SATA. |
| SAN – Storage Area Network | The technology of choice for enterprise solutions. Data is kept separate from the servers, connected through a high-performance, dedicated network. Fibre Channel, a gigabit network technology, is primarily used, but SCSI or iSCSI can be used as well for connecting the devices. |
| NAS – Network Attached Storage | Although not supported initially, NAS can now be used in Exchange implementations after a policy review by Microsoft (Q839687). A NAS is a special-purpose device, comprised of both hard disks and management software, dedicated to serving files over a network. |

Table 1: Storage technologies

 

The Exchange Information Store is the core repository for private and public data. The Information Store service uses the Extensible Storage Engine (ESE), a transactional database engine. The information is kept in .EDB and .STM files, while the transactions are recorded in log files with the .LOG extension. Although there are some complementary files, these are the ones that should be taken into account when planning the storage subsystem. The I/O pattern of the read and write operations is different for each file type, as shown in the next table.

 

 

 

 

| Component | I/O pattern |
|---|---|
| Jet database (.edb) | Random reads and writes; 4 KB page size |
| Streaming database (.stm) | Sequential reads and writes; variable page size, but usually averaging 8 KB in a production environment |
| Transaction logs (.log) | 100% sequential writes during normal operations; 100% sequential reads during recovery operations; writes vary in size from 512 bytes to the log buffer size |

Table 2: I/O patterns of the Exchange store components

 

Once the I/O patterns of the main Exchange components are known, we should follow some best practices when placing these files, in order to improve performance and availability. The following table illustrates these best practices:

 

 

 

 

| Source of Exchange I/O | RAID Level | Best Practices |
|---|---|---|
| OS binaries | RAID 1 (DAS) | Operating system binaries should be located on a RAID 1 DAS volume for fault tolerance. |
| Database files | RAID 0+1 (5) | All database files (.edb and .stm) within a storage group should be placed on a single volume dedicated to those databases. Disks that hold database files should have fast random-access speeds. RAID 0+1 should be used in order to provide proper performance and fault tolerance. For small Exchange implementations (<500 users) you can consider RAID 5, since it is cheaper. |
| Transaction logs | RAID 1 | Since transactions are first written to the transaction logs, these should be on a storage device with the lowest possible write latency (<20 ms). For optimal recoverability, the transaction logs for each storage group should be separated onto a dedicated RAID 1 or RAID 0+1 array. Set the write-back cache ratio of your hardware RAID controller to 100 percent write (assuming it has the necessary backup battery). |
| Content indexing files | RAID 0+1 | Content indexing files should never be placed on the same disk as the page file (although that is the default location). If you can't afford a dedicated volume for these files, they can be placed on the same volume as the databases, provided the disk subsystem can handle the load, since content indexing files are random-access. |
| SMTP queue | RAID 0+1 | Mostly important for servers with an expected high volume of SMTP messages, e.g. bridgeheads. For this scenario it is recommended to put the SMTP queue files on a dedicated RAID 0+1 volume with multiple disk spindles. Avoid using volumes that perform other functions, such as databases, transaction logs or the page file. |
| MTA queue | RAID 0+1 | The considerations made for the SMTP queue also apply to the MTA queue, so use a dedicated RAID 0+1 volume whenever possible if your server handles a significant amount of SMTP and/or MTA traffic. |
| Page file | RAID 1 (DAS) | This is not a new recommendation: by now you should know that you should place the page file on a separate RAID 1 volume, for optimal performance and fault tolerance. |

Table 3: Best practices for optimizing disk I/O

 

 

Note
You should be aware that these best-practice recommendations are somewhat related to the size and complexity of the solution being implemented. For instance, in a small business scenario with 5 users, it is perfectly possible to run an Exchange server with all its components on the C: drive for many years without a single problem.

 

Still, this can be considered a risk. The point here is that we must apply some common sense if the available budget doesn't allow us to accomplish all the best practices stated in the previous table. At the very least, you should separate the database files from the transaction log files onto different spindles (not just different partitions).

 

How to Calculate I/O Requirements

 

After this quite long introduction, let’s go to the main objective of this article: how to calculate the I/O requirements for an Exchange Server.

 

The pitfall here is that, most of the time, people size the storage only for capacity and forget the performance factor. The keyword here is *IOPS*, as we'll see a little further ahead.

 

The recommended methodology is the following:

 

 

  1. Performance sizing: the first step is to calculate the storage configuration to accommodate the total IOPS required by the system. Remember, size for performance before sizing for capacity.
  2. Capacity sizing: don't be misled by just taking into account the space obtained by multiplying the number of users by the maximum mailbox size. We must consider additional space for the deleted item retention policy, offline database operations and planned future growth.
  3. Component placement: the most important rule is, of course, to separate the transaction logs from the database files, placing them on separate spindles. Follow the best practices stated in Table 3.
  4. Additional tuning: the final step is to tune the storage environment to fit Exchange's needs. At the storage array level you can, for example, maximize the write-back cache and configure the page size to be 4 KB. Another thing you could do is align the disk partitions with the storage unit boundaries, but here I strongly advise consulting your hardware vendor first.

 

Capacity sizing is a relatively easy task: in order to calculate the necessary space to accommodate the databases, we multiply the total number of users by the maximum mailbox size, then add the necessary overhead for offline database operations, the deleted item retention policy and predicted growth.

 

The challenge here, which has somehow been neglected over the years, is calculating performance needs. If you remember part 1 of this article, I mentioned two important metrics for determining the user profile: megacycles/mailbox and IOPS/mailbox. The latter is particularly important for determining the I/O requirements.

 

Since not all Exchange implementations have previous usage data from which to extract usage patterns, there are two approaches for calculating I/O requirements:

 

Theoretical estimation of user needs

 

If there aren’t any previous Exchange implementations in order to measure usage profiles, we must estimate user needs. As we have seen in part 1 of this article, we can use some well known patterns that group users into 3 main categories: light, average and heavy.

 

 

 

 

| Mailbox Profile | IOPS | Megacycles | Message Volume | Mailbox Size |
|---|---|---|---|---|
| Light | 0.18 | 0.75 | 10 sent / 50 received | < 50 MB |
| Average | 0.4 | 1.9 | 20 sent / 100 received | 50 MB |
| Heavy | 0.75 | 2.5 | 30 sent / 100 received | 100 MB |

Table 4: Mailbox profiles and corresponding usage patterns

 

The IOPS values in this table refer to the storage volume that holds the mail databases, which is the most critical one, since it accounts for about 90 percent of all I/O operations.
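As a quick sketch (not from the original article), the per-profile IOPS values of Table 4 can be aggregated into a total IOPS demand for the database volume; the user mix below is hypothetical:

```python
# Per-profile IOPS values taken from Table 4.
IOPS_PER_MAILBOX = {"light": 0.18, "average": 0.4, "heavy": 0.75}

def total_db_iops(user_counts):
    """user_counts maps a profile name to the number of mailboxes with that profile."""
    return sum(IOPS_PER_MAILBOX[profile] * count
               for profile, count in user_counts.items())

# Example mix: 800 average users plus 200 heavy users.
demand = total_db_iops({"average": 800, "heavy": 200})
print(demand)  # 800 x 0.4 + 200 x 0.75 = 470.0
```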

 

Calculation using live environment data

 

In order to obtain IOPS/mailbox from a live production environment, we must take some measurements using Performance Monitor. The measurements should be taken during peak hours and for a minimum period of 2 hours. The relevant counters are:

 

 

  • LogicalDisk\Disk Transfers/sec, with the instance set to the drive letter that houses the Exchange store databases (add all drive letters that contain Exchange database files)

     

  • MSExchangeIS\Active User Count

 

The IOPS/mailbox value is obtained by dividing the average of the first counter by the average of the second.
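This division can be sketched as follows; the counter samples below are hypothetical values standing in for a real 2-hour Performance Monitor capture:

```python
def iops_per_mailbox(disk_transfers_per_sec, active_user_counts):
    """Average Disk Transfers/sec on the database volumes divided by the
    average Active User Count, both sampled over the same peak period."""
    avg_transfers = sum(disk_transfers_per_sec) / len(disk_transfers_per_sec)
    avg_users = sum(active_user_counts) / len(active_user_counts)
    return avg_transfers / avg_users

samples_transfers = [380.0, 420.0, 400.0]   # LogicalDisk\Disk Transfers/sec
samples_users = [950, 1050, 1000]           # MSExchangeIS\Active User Count
print(iops_per_mailbox(samples_transfers, samples_users))  # 400 / 1000 = 0.4
```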

 

Performance Sizing

 

After determining I/O requirements it’s time to start thinking of how to obtain them. The factors that influence the IOPS of a storage system are:

 

 

  • Disk rotation speed (10,000 rpm, 15,000 rpm): usually, 15,000 rpm disks can provide about 180 IOPS, while 10,000 rpm disks provide around 130 IOPS. The exact values can be obtained from the hardware vendor.

  • RAID level: RAID fault tolerance has a direct cost on performance, because the information may be written more than once.

  • Number of disks (spindles): many small disks will provide better performance than a few bigger disks.

 

To calculate RAID performance penalty there are some assumptions we must make. One of those assumptions is that the read/write ratio on the database volume is 3:1, i.e., 3 reads for every write operation.

 

The following table presents the penalty factor for the different RAID levels:

 

 

 

 

| RAID Level | I/O penalty | Penalty factor for 3 reads + 1 write (3:1 read/write ratio) |
|---|---|---|
| RAID 0 | No additional penalty. Each read and each write corresponds to a single I/O operation. | 4 ÷ 4 = 1 |
| RAID 1, RAID 0+1 | Each write requires 2 disk I/O operations. A read is a single operation. | 4 ÷ (3+2) = 0.80 |
| RAID 5 | Each write requires 4 disk I/O operations. A read is a single operation. | 4 ÷ (3+4) = 0.57 |

Table 5: RAID penalty factors
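The penalty factors in Table 5 all follow from the same small computation, which can be sketched like this for the assumed 3:1 read/write ratio:

```python
def raid_penalty_factor(write_io_cost, reads=3, writes=1):
    """Penalty factor = logical I/Os divided by physical disk I/Os,
    where each write costs `write_io_cost` physical operations."""
    return (reads + writes) / (reads + writes * write_io_cost)

print(raid_penalty_factor(1))            # RAID 0        -> 1.0
print(raid_penalty_factor(2))            # RAID 1 / 0+1  -> 0.8
print(round(raid_penalty_factor(4), 2))  # RAID 5        -> 0.57
```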

 

Now that we know the variables that influence I/O performance (rotation speed, RAID level and number of disks), all we need is a formula that correlates all of them:

 

 

(IOPS/mailbox) × (number of mailboxes) = (IOPS/disk) × (RAID penalty factor) × (number of disks)

 

Example:
Imagine the following scenario: we need to calculate the necessary storage for a new Exchange infrastructure that will serve 1000 users with an average profile (0.4 IOPS/user, 50 MB mailbox size).

 

Let’s assume that 15000rpm disks will be used with a RAID 0+1 configuration. So, in order to determine the number of disks:

 

(number of disks) = (IOPS/mailbox × number of mailboxes) ÷ (IOPS/disk × RAID penalty factor) = (0.4 × 1000) ÷ (180 × 0.80) = 2.78

 

Since we are using RAID 0+1, we must round up to the nearest even number, which is 4.
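The worked example above can be sketched by solving the sizing formula for the number of disks and rounding up to an even spindle count, since RAID 0+1 needs mirrored pairs:

```python
import math

def disks_needed(iops_per_mailbox, mailboxes, iops_per_disk, penalty_factor):
    # Solve the sizing formula for the number of disks.
    raw = (iops_per_mailbox * mailboxes) / (iops_per_disk * penalty_factor)
    disks = math.ceil(raw)
    return disks + (disks % 2)  # round odd counts up to the next even number

# 1000 average users, 15,000 rpm disks (180 IOPS), RAID 0+1 penalty 0.80:
print(disks_needed(0.4, 1000, 180, 0.80))  # raw = 2.78 -> 4 disks
```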

 

Remember that the previous calculations are valid for back-end servers. If you have a large bridgehead and want to size it correctly, the rule of thumb is that a single spindle can handle about 30 small messages per second.

 

Capacity Sizing

 

Once performance sizing is finished, the next step is capacity sizing. I won't cover every Exchange component, since there are many variables at stake. For instance, the SMTP queue requirement will depend largely on the volume of mail traffic.

 

For a back-end server, the most critical volume is the one that holds the mail databases, since it will be responsible for about 90 percent of all I/O operations. There are some rules that should be followed:

 

 

  • Keep different storage groups in different volumes
  • Group databases per storage group
  • Estimate the overhead, based on the deleted item retention policy, predicted growth and offline database operations.

 

So, for each storage group, the space needed is equal to the sum of all databases. Assuming just private stores, the size needed for each database equals the number of users for that database times the maximum mailbox size:

 

 

DB size = number of users × mailbox size

 

In order to accommodate the overhead, I usually double the size previously calculated, but you should consider whether this rule fits your needs.

 

For the transactional logs volume, the space needed will depend mainly on 2 factors:

 

 

  • Message volume
  • Backup policy

 

Keep in mind that if you’re not using circular logging (and you shouldn’t!), the number of logs will keep growing until a backup is done.

 

My personal rule is to configure a volume that is about 10 percent of the database volume.

 

 

Example:
For the same scenario described previously, in order to determine capacity, we multiply the number of users and the mailbox size:

 

Capacity = 1000 × 50MB = 50 GB

 

I usually recommend doubling this value, in order to accommodate future growth, deleted item retention and database operations, so let’s assume 100GB of needed space. For a RAID 0+1 array, that space can be achieved with 4 SCSI disks of 72GB each.
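Putting the capacity rules together (raw space = users × mailbox size, doubled for overhead, with a log volume of roughly 10 percent of the database volume), the example can be sketched as follows; the doubling and the 10 percent figure are the rules of thumb stated above, not fixed requirements:

```python
def capacity_gb(users, mailbox_mb, overhead_factor=2.0, log_ratio=0.10):
    """Database volume size (with overhead) and log volume size, in GB."""
    db_gb = users * mailbox_mb / 1000 * overhead_factor  # article rounds 1 GB = 1000 MB
    return db_gb, db_gb * log_ratio

db_volume, log_volume = capacity_gb(1000, 50)
print(db_volume, log_volume)  # ~100 GB database volume, ~10 GB log volume
```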

 

Summary

 

No matter the size of your organization and no matter the technology you use, storage is the most critical element of a successful Exchange implementation. This is particularly true for back-end servers and bridgeheads. In this second part we saw how to correctly size the storage of a back-end server, the core of your messaging infrastructure. In the next part I'll cover the validation process and discuss some tools that can assist you in the sizing process.

 

