Product Review: QUADROtech ArchiveShuttle
Product: QUADROtech ArchiveShuttle
QUADROtech ArchiveShuttle is a versatile tool aimed at environments that need to move content out of an existing archive product to somewhere else, such as Exchange Server or Office 365, or between archive repositories.
In this review we will take a look at ArchiveShuttle and see how well it works in a typical requirement for Exchange administrators – migrating content from an archive system into Exchange 2013 or Exchange Online, replacing stubs/shortcuts as it does so.
The exploding requirement for Archive Migration
Modern Exchange, whether in the cloud or on-premises, is now seen by many organizations as capable of serving very large mailboxes and meeting complex archiving and compliance requirements. This removes the need for traditional archiving and journaling solutions, often resulting in a requirement to move content from legacy archive systems into Exchange.
In recent years the market has expanded, and QUADROtech sit alongside a number of competing vendors vying for the growing pool of customers moving to Office 365, many of whom have held back, waiting for the early adopters to migrate their vault data into the cloud.
Like many archive migration vendors, QUADROtech focus on a subset of the archive products on the market, based on customer requirements. These include:
- Symantec Enterprise Vault (Source and Target)
- HP Autonomy Zantaz (Source)
- Sherpa Archive Attender (Source)
- Exchange Server (Target)
- Office 365 and Exchange Online (Target)
- Mimecast (Target)
- Proofpoint (Target)
- Global Relay (Target)
ArchiveShuttle is a modular system, with components installed on a number of servers that communicate with a central web service for configuration data.
In a typical deployment, either QUADROtech's cloud service or, if required, an on-premises web service is used for administration and for the modules to communicate with.
This cloud or on-premises web service acts as the command and control centre for the installation, storing configuration data and coordinating workflow.
Modules are then installed on each source archive system to ensure that the extraction is performed at-source.
Data is extracted to a staging area, which could be a file share on-premises or in Azure. In a scenario where data is being pushed into Exchange or Office 365, modules for "ingestion" of data into the target system are installed onto one or more bridgehead servers that pull data from the staging area and push it into the target system.
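The staging-area architecture described above is essentially a producer/consumer pattern: the export module fills the staging area while the import module drains it independently. A minimal sketch of that pattern (using an in-memory queue to stand in for the file share; module names and message IDs are illustrative assumptions, not ArchiveShuttle internals):

```python
import queue
import threading

staging_area = queue.Queue()   # stands in for the on-premises or Azure file share
ingested = []

def extract(items):
    # Export module: pulls items out of the source archive at its own pace
    for item in items:
        staging_area.put(item)
    staging_area.put(None)     # sentinel: extraction complete

def ingest():
    # Import module on the bridgehead server: pushes items into the target
    while True:
        item = staging_area.get()
        if item is None:
            break
        ingested.append(item)

source_items = [f"msg-{i}" for i in range(5)]
t1 = threading.Thread(target=extract, args=(source_items,))
t2 = threading.Thread(target=ingest)
t1.start(); t2.start()
t1.join(); t2.join()
print(ingested)
```

Because the two sides only meet at the staging area, a slow ingestion endpoint never stalls extraction, and vice versa.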
To access source systems ArchiveShuttle uses optimized modules, typically working through a fully licensed API to gain the best access to data and vendor support should issues arise.
When migrating to Exchange and Office 365, ArchiveShuttle is able to use multiple protocols. As is typical of other solutions, both Exchange Web Services (EWS) and MAPI can be used.
A third option, which QUADROtech call the Advanced Ingestion Protocol (AIP), is the recommended default for all migrations to Exchange 2010 Service Pack 1 and above. QUADROtech don't publish the details of how AIP works, but cite examples where it provides a massive increase in speed over EWS – for example, 150 items per second versus 24 items per second.
This in particular was a surprise, as experience with some other products has shown that to achieve adequate performance when migrating to Office 365 they recommend a vast number of virtual machines pushing mail via EWS. QUADROtech claim that even their largest migrations require only two machines pushing data. That's quite some claim, and QUADROtech were happy for me to confirm it with a customer.
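Some quick arithmetic puts the quoted throughput figures in perspective. The 10-million-item archive size below is my own illustrative assumption; the per-second rates are the ones cited above:

```python
# Compare migration duration at the quoted AIP and EWS rates.
items = 10_000_000     # illustrative archive size (assumption)
aip_rate = 150         # items/second via Advanced Ingestion Protocol (quoted)
ews_rate = 24          # items/second via Exchange Web Services (quoted)

aip_hours = items / aip_rate / 3600
ews_hours = items / ews_rate / 3600
print(f"AIP: {aip_hours:.1f} h, EWS: {ews_hours:.1f} h")
# At these rates the same archive takes roughly 18.5 hours over AIP
# versus roughly 115.7 hours over EWS per ingestion stream.
```

That six-fold-plus difference per stream is what would allow two machines to do the work that otherwise needs a fleet of EWS-pushing virtual machines.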
Setup and configuration
Most products of this nature require some thought and experience to set up, and ArchiveShuttle is no different; QUADROtech provide assistance with setup and initial testing.
To understand how this works, QUADROtech took me through the process remotely on a lab environment comprising Exchange 2013 and Symantec Enterprise Vault. I performed the installation and configuration myself, with QUADROtech advising on which options to select and why. This was quite effective: by the end of the process I felt confident in using the product going forward, and QUADROtech had a chance to verify that the product was set up and functioning correctly in the environment.
After the core web service is in place, each module is set up individually using an MSI installer. This allows selection of the modules to install:
Figure 1: Installation of a module
In addition to installing the required services, the module installer collects required information such as the service account to use and the cloud service or web server to connect to. This will allow us to perform configuration using the central admin console later on:
Figure 2: Connecting the module to the web service
After installation of the required modules, logging into the web management console allows us to view the AS (ArchiveShuttle) modules installed. In the example below we have installed the AS Admin, AD Collector and EV modules for collection, export, provisioning and post-provisioning on our Enterprise Vault server. On our Exchange ingestion server, which is the VM that acts as the "bridge" for pushing data into Exchange, we've installed the Admin and Exchange Import modules:
Figure 3: Listing connected modules in the web interface
The process from here on is to configure connections to directory services, configure our staging area for data and finally configure our source and target data stores.
For each directory service it is a case of selecting the environment and either adding the service by name or enabling it from the list. In the example below we select Active Directory and then, after selecting the local AD environment from the list, we choose Enable. We'll perform a similar task to connect to Enterprise Vault.
Figure 4: Connecting to AD
After configuring directory services we must configure the databases and the file share. Potential source and target databases are detected automatically, so we simply need to map the correct installed modules to the EV and Exchange databases.
We’ll also configure the File Share that will be used as the staging area. Service accounts on Enterprise Vault and the Exchange Ingestion servers will read and write data from this share:
Figure 5: Configuring the staging area
Finally, we check and configure the overall settings for each environment. In the example below we select the ingestion provider priority, ensuring the new Advanced Ingestion Protocol is tried first before falling back to a less advanced protocol, and select the version of Microsoft Exchange:
Figure 6: Configuring options for Exchange access
After completing the initial configuration we're able to start defining and performing data migrations.
There are a number of approaches that can be used. The two main scenarios I see from customers moving to the cloud are:
- Re-hydrate all shortcuts and data back into the Exchange Mailbox, and
- Re-hydrate all shortcuts back into the Exchange Mailbox and migrate all expired or deleted shortcuts into the Exchange online archive.
Both scenarios (as well as others) are supported by ArchiveShuttle.
Before we begin we’ll have a look at the main “dashboard”. This provides us with an overview of how the migration is going.
The first thing we'll notice is that we have a view of two different "Stages". The first is Synced and the second is Switched. This is fairly in line with Exchange mailbox moves, where an online move can be suspended when it is ready to complete, with a final sync and switchover performed afterwards. We can pre-stage the data before making the final change; an approach that certainly helps when moving large numbers of mailboxes at a time in Exchange.
We've also got speedometers that show the extraction speed and ingestion speed. The separation of these two is a clear advantage of the staging area, meaning the extraction process can run faster than the ingestion process without either blocking the other:
Figure 7: Viewing the ArchiveShuttle dashboard
We’ll now create our first migration batch. We can do this a number of ways. You’ll see in the example below we can choose by size, name, department, country – most attributes from the source environments. We can also create an ad-hoc group inside ArchiveShuttle for the batch. We’ll do that by selecting some mailboxes to re-hydrate and then choosing Add to Group:
Figure 8: Viewing users and adding them to a group
After creating an ad-hoc group we can then use the Add Mappings option to link mailboxes together. After selecting Add Mappings we select the target, in this case Exchange Server:
Figure 9: Starting migration of a batch of users
Throughout the Add Mappings wizard we are able to select a number of options, such as:
- Whether to migrate data into the Primary Mailbox or Archive.
- For cross-forest migrations, attributes to use for matching such as LegacyExchangeDN, SID History, User Name or Email address.
- Whether to delete the source archive or not after migration.
- Data filters to use, such as:
  - Filtering out over-sized messages.
  - Filtering out messages with or without associated shortcuts, allowing two migration batches to be created: one to the primary mailbox, and one to move messages with expired or missing shortcuts into the archive.
- Whether to start the data collection and migration process immediately.
- The priority of the migration job, where 1 is the highest. This prioritises which batches get "slots" in the parallelised queue.
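The priority option above behaves like a standard priority queue: lower numbers are popped first when a migration "slot" frees up. A minimal sketch of that behaviour (the batch names and slot logic are my own illustration, not ArchiveShuttle's scheduler):

```python
import heapq

# Batches as (priority, name) tuples; priority 1 is the highest.
batches = [(3, "pilot users"), (1, "executives"), (2, "sales")]
heapq.heapify(batches)        # smallest priority value comes out first

order = []
while batches:
    priority, name = heapq.heappop(batches)
    order.append(name)        # this batch takes the next free slot
print(order)                  # → ['executives', 'sales', 'pilot users']
```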
On the summary page we are shown the options selected. In our case we’ll migrate data to the same user in Exchange and begin the migration immediately for all users within the batch:
Figure 10: Viewing selected options
Migration monitoring at a high level works well using the Dashboard, and for detailed information ArchiveShuttle is bundled with a real-time log viewer. I'd imagine this won't be needed on a day-to-day basis, but it was very handy for understanding in real time how the migration is progressing on a per-user basis:
Figure 11: Monitoring log files
The functionality of the product is excellent. The core common scenarios for migrations between archive products are covered, and QUADROtech have a good reputation in this area, particularly for Enterprise Vault consolidations and EV to EV migrations.
Key scenarios for EV/EAS archive migrations to Exchange or Office 365 are well covered and the software is more than flexible enough to cover specific corner-case scenarios too.
For example, if you just want a simple EV-to-Exchange re-ingestion then this can do it; if you want to re-ingest all current shortcuts first, then back-fill Exchange Online archives with deleted/expired shortcuts later on (or first!), you can do so.
In addition to the migration functionality itself, automation is available via a PowerShell module, although many customers may not require it. A workflow engine is also included; a great example of its use is beginning archive migrations automatically as soon as an Exchange mailbox is migrated to Office 365. The automated workflow functionality in scenarios like that is absolutely compelling.
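The workflow pattern just described (trigger the archive migration as soon as the mailbox move completes) can be sketched generically. The functions and status values below are hypothetical stand-ins, not ArchiveShuttle's actual PowerShell or workflow API:

```python
# Generic poll-and-trigger sketch of the workflow described above.
# check_move_complete() and start_archive_migration() are hypothetical.
moves = {"alice": "Completed", "bob": "InProgress"}  # mailbox move status
started = []

def check_move_complete(user):
    return moves[user] == "Completed"

def start_archive_migration(user):
    started.append(user)   # in reality: queue the user's archive mapping

for user in moves:
    if check_move_complete(user):
        start_archive_migration(user)
print(started)             # → ['alice']
```

In practice this kind of trigger runs on a schedule, so each user's archive migration begins within minutes of their mailbox landing in Office 365.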
The final component of enterprise archive migrations is something called “chain of custody”. If the data may at some point need to be presented as evidence, then proof that the data was not altered during the migration is critical. In addition to the log files generated, ArchiveShuttle uses the hash of the message envelope and contents to compare the source and target data. This is checked three times during the migration and can be checked post-move if additional proof is required.
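The hash-comparison idea behind chain of custody can be illustrated simply. ArchiveShuttle's actual hash algorithm and envelope format are not published, so SHA-256 over the raw message bytes is purely an assumption for this sketch:

```python
import hashlib

def fingerprint(message_bytes: bytes) -> str:
    # Assumption: SHA-256 over raw bytes; the product's real algorithm
    # and envelope handling are not documented publicly.
    return hashlib.sha256(message_bytes).hexdigest()

source_item = b"From: alice@example.com\r\nSubject: Q3 report\r\n\r\nBody"
source_hash = fingerprint(source_item)

# ... item travels: source archive -> staging area -> Exchange ...
ingested_item = source_item            # unchanged in this happy-path run

assert fingerprint(ingested_item) == source_hash
print("chain of custody intact")
```

Any alteration of even a single byte in transit would produce a different digest and flag the item for investigation.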
Any customer purchasing ArchiveShuttle will use the included support services to set up and learn the product. Although quite intuitive, all products like this need to be set up correctly, and QUADROtech feel that by using support to guide the setup they can verify that the installation is implemented correctly and will migrate data at an optimal speed.
My experience with support was good – though I did have access to senior employees within QUADROtech to guide me through the process. It was apparent that QUADROtech are very passionate about their product and archive migration in general, which is always positive.
How it compares to the competition
There are a range of options available depending on the use case, and the main focus of this review is on an Exchange related migration.
This presents us with a number of competing options. The most rudimentary option is exporting as PST and importing using something like the PST Capture Tool, complemented by scripts to clean up shortcuts. You’ll notice that I’ve not said that it’s the simplest option – because it isn’t. For anything but the smallest organizations a manual PST export and import is a nightmare.
Other vendors on the market use a variety of approaches, some similar and some very different to ArchiveShuttle.
At one end of the spectrum, some vendors reverse-engineer access to the source database and pull data out, then use clusters of ingestion servers to push data into Exchange or Office 365.
ArchiveShuttle takes the opposite approach and uses strong partnerships, such as Symantec STEP and Microsoft’s Gold Application Development partner program to leverage API access.
In the middle ground there are competitors that use both API access and direct access to the underlying database when the API doesn’t work. Each of these approaches has its own set of advantages and disadvantages.
QUADROtech claim that by using the correct APIs in the most efficient way they achieve the best performance.
This isn't something we could benchmark in this review, but based on what we have seen, ArchiveShuttle compares favourably with competing products.
In a growing market those looking to move off legacy archive platforms into Exchange or Office 365 need to do their homework and allocate sufficient time and budget for a quality archive migration product. When looking at available options, ArchiveShuttle should be on your list.
MSExchange.org Rating 4.8/5
Learn more about QUADROtech ArchiveShuttle.