You can’t call yourself a true Exchange administrator until you have run the Jetstress tool at least once. This nice piece of software addresses one of the most critical areas of an Exchange server: the storage subsystem. You can use it to verify the performance and stability of the disk subsystem before putting an Exchange server into production, or as a troubleshooting tool when you suspect a bottleneck.
Jetstress helps verify the performance and stability of the disk subsystem by simulating heavy disk load, as if the server were being used by a large number of users. You then use System Monitor, Event Viewer and the Eseutil tool together with Jetstress to verify that your disk subsystem meets the established performance criteria.
Jetstress enables you to do three types of tests:
The best way to verify the integrity and performance of your disk subsystem is to run the first two tests mentioned above.
In the latest Jetstress download package, which I used (version 6.5.7795.0), there are two separate applications:
Both versions can be used to test the performance of an Exchange disk subsystem, but the command-line version requires expertise in specifying the parameters and analyzing the performance results. The graphical user interface of Jetstress 2004 reduces the complexity of configuring the test, and it facilitates the analysis of results by producing a performance analysis report. There are also some unique features available only in the graphical tool; for more information, read the manual that comes with the tool.
Although it is recommended that you run the tests in a non-production environment, in the real world it’s not always possible to have a lab that mirrors the actual systems. My advice is to use the production server just before going live (even before installing Exchange), but remember to reformat the storage volumes you used when the tests are finished.
Preferably, Jetstress testing should be performed before you install Exchange on the server. There are some well-known risks associated with running Jetstress on a machine with Exchange installed. The first is that Jetstress could potentially delete existing log files if it is configured to use the same log drives that Exchange is using. The second is that if you use a version of Jet (ESE.DLL and ESEPERF.*) different from the version installed with Exchange, the registration of the Jet database counters in the Jetstress install directory will break the database counters for Exchange after Jetstress is removed.
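To make the second risk concrete, here is a minimal sketch of the kind of check you could do before registering counters: compare the file version of the ESE.DLL shipped with Jetstress against the one installed with Exchange. The version strings below are examples, and on a real server you would read them from the file properties of the two ESE.DLL copies; the helper names are my own, not part of Jetstress.

```python
def parse_version(version: str) -> tuple:
    """Turn a dotted version string such as '6.5.7795.0' into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))

def versions_match(jetstress_ver: str, exchange_ver: str) -> bool:
    """Jetstress and Exchange should be using the same Jet (ESE) build."""
    return parse_version(jetstress_ver) == parse_version(exchange_ver)

if __name__ == "__main__":
    # Example version strings only; read the real ones from the DLL properties.
    print(versions_match("6.5.7795.0", "6.5.7795.0"))  # True: safe to register counters
    print(versions_match("6.5.7795.0", "6.5.7638.0"))  # False: counters may break
```

If the versions differ, either upgrade Exchange to a matching build or use the ESE binaries from the Exchange installation with Jetstress, rather than registering mismatched counters.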
To install Jetstress follow these steps:
If you forget to copy these files, you’ll get the following warning when you try to run the tool:
Figure 1: Validating required files
Figure 2: Validating performance counters
Before running the test, there are some settings you must configure first, and these will impact your system’s performance:
How each of these factors influences performance is beyond the scope of this article, but if you want to know more about this subject, you’ll find some links at the end of this article to additional reading with detailed information about storage performance on an Exchange system.
So, let’s start configuring a Jetstress performance test:
Figure 3: Storage Info
Figure 4: Test Run Info
Figure 5: Database Info
Figure 6: Database creation process
After the test is completed, the performance data is analyzed and reported in a summary report, saved to a Performance_(DateTime).html or Stress_(DateTime).html file. The status pane of Jetstress 2004 provides a link to the summary report. All the performance counters collected are gathered in a counter log file named Performance_(DateTime).blg, which you can use for more advanced analysis.
Consider the following guidelines when examining the data collected.
| Performance Counter Instance | Guidelines for Performance Test | Guidelines for Stress Test |
|---|---|---|
| Database Avg. Disk sec/Read | The average value should be less than 20 ms (0.020), and the maximum value should be less than 50 ms. | The maximum value should be less than 100 ms. |
| Database Avg. Disk sec/Write | This counter is not evaluated to determine whether a test passed or failed, but in an environment where storage replication is not being used, the average value should be less than 20 ms (0.020). | |
| Log Avg. Disk sec/Read | The average value should be less than 20 ms, and the maximum value should be less than 50 ms. | The maximum value should be less than 50 ms. |
| Log Avg. Disk sec/Write | Log disk writes are sequential, so average write latencies should be less than 10 ms, with a maximum of no more than 50 ms. | The maximum value should be no more than 100 ms. |
| Database Disk Reads/sec, Database Disk Writes/sec and Log Disk Writes/sec | The sum of the averages for these values gives you the total disk transfer I/O. The ratio between reads and writes should be approximately 3:2. | |
| Log Avg. Disk Bytes/Write | This value should be between 6 K and 8 K. | |
| % Processor Time | The average should be less than 80 percent, and the maximum should be less than 90 percent. | |
| Available MBytes | The minimum should be more than 50 MB. | |
| Free System Page Table Entries | The minimum should be more than 5000. | |
| | The average should be less than 100, and the maximum should be less than 1000. | |
| Pool Nonpaged Bytes | The maximum should be less than 75 MB. | |
| Pool Paged Bytes | The maximum should be less than 180 MB. | |
| Database Page Fault Stalls/sec | Should never go above 0. | |
Table 1: Guidelines for examining Jetstress 2004 analysis reports
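As a minimal sketch of how you might check the counter log against these guidelines programmatically, assume you have already exported averages and maxima from the Performance_(DateTime).blg file (for example with the Windows relog utility) into a simple dictionary. That input layout, and the helper names, are illustrative assumptions, not part of Jetstress itself; only a few of the Table 1 rows are encoded here.

```python
# Performance-test limits from Table 1: (average limit, maximum limit), in seconds.
PERF_LIMITS = {
    "Database Avg. Disk sec/Read": (0.020, 0.050),
    "Log Avg. Disk sec/Read": (0.020, 0.050),
    "Log Avg. Disk sec/Write": (0.010, 0.050),
}

def evaluate(stats):
    """Map each counter to 'Pass' or 'Fail' against the Table 1 guidelines.

    stats: {counter name: (average, maximum)} with latencies in seconds.
    """
    verdicts = {}
    for counter, (avg_limit, max_limit) in PERF_LIMITS.items():
        avg, peak = stats[counter]
        verdicts[counter] = "Pass" if avg < avg_limit and peak < max_limit else "Fail"
    return verdicts

def read_write_ratio(db_reads_per_sec, db_writes_per_sec, log_writes_per_sec):
    """Reads-to-writes ratio; the guideline suggests roughly 3:2 (1.5)."""
    return db_reads_per_sec / (db_writes_per_sec + log_writes_per_sec)
```

For example, `evaluate({"Database Avg. Disk sec/Read": (0.010, 0.040), "Log Avg. Disk sec/Read": (0.015, 0.045), "Log Avg. Disk sec/Write": (0.008, 0.030)})` would report a pass on all three counters.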
If I had to choose just one performance counter to determine the performance of an Exchange system, it would be Database Avg. Disk sec/Write. But, as you may have noticed in the previous table, from version 6.5.7720 of JetstressUI onwards the database disk write latency criterion has been removed. Why? Because Jetstress now supports complex disaster recovery scenarios where data replication technologies are used (sometimes based on geographically dispersed clusters). Although Jetstress no longer takes this counter into account, if you’re not testing an infrastructure with data replication, my advice is to keep using it and make sure it stays below 20 ms for 95% of the time.
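A quick sketch of that 95% rule, using made-up latency samples: at least 95% of the Database Avg. Disk sec/Write samples pulled from the counter log should fall under 20 ms (0.020 s). The function name and inputs are my own illustration.

```python
def writes_mostly_under(samples, threshold=0.020, fraction=0.95):
    """True if at least `fraction` of the latency samples are below `threshold`.

    samples: write latencies in seconds, e.g. collected from the .blg log.
    """
    under = sum(1 for s in samples if s < threshold)
    return under / len(samples) >= fraction
```

So a run with 99 samples at 5 ms and a single 100 ms spike still passes, while a run where half the samples sit at 30 ms does not.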
The tool produces a nice HTML report, the Jetstress Test Result, in a file named Performance_(DateTime).html. If you open it, you’ll see some tables with the performance analysis, like the following:
| Total test database size | |
| Production data size | 48.72 GB (based on the attached database) |
| Total number of databases | 6 (1 storage(s) * 6 database(s)) |
| Target transactional I/O per second | 250.00 (500 mailboxes of 0.50 IOPS) |
Table 2: Planned disk subsystem profile
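The target I/O figure in Table 2 is simple arithmetic: the number of mailboxes times the IOPS profile per mailbox (the 0.50 value below is simply taken from the report above). A one-line sketch:

```python
def target_iops(mailboxes, iops_per_mailbox):
    """Planned transactional I/O per second for the disk subsystem."""
    return mailboxes * iops_per_mailbox

print(target_iops(500, 0.50))  # 250.0, matching the planned profile
```

Size the disk subsystem so that it can sustain at least this figure at acceptable latencies, with headroom for growth.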
| Avg. Disk sec/Read | Avg. Disk sec/Write | Avg. Disk Bytes/Write |
Table 3: Disk subsystem performance summary
Alternatively, you can build your own performance graphs using the counter log file. See Figure 7 for an example of some disk performance counters.
Figure 7: Disk performance counters
Jetstress is really a neat tool, and making sure your storage is correctly sized will avoid future headaches caused by performance bottlenecks. In this article I gave some basic guidelines on how to operate the graphical user interface of Jetstress. If you are one of those command-line addicts, give the graphical version a try instead. And don’t forget to carefully read the manual (JetStress.doc) that comes with the tool, where you’ll find much more in-depth technical detail.
JetStress Download Package
Optimizing Storage for Exchange Server 2003
Exchange Server 2003 Performance and Scalability
Planning an Exchange Server 2003 Messaging System
Exchange Server 2003 Deployment Guide
Exchange Server 2003 Administration Guide
Troubleshooting Exchange Server 2003 Performance
Blog about Exchange Pre-Deployment Testing and Sizing