Bug Alert: Microsoft Exchange Server 2013, 2016, and Exchange Online

Dominick Ciacciarelli

Microsoft has issued KB3161916, detailing a bug that affects migrations from legacy public folders to modern public folders on Exchange 2013, Exchange 2016, and Exchange Online. The issue is serious enough that, if your environment matches the affected scenarios below, you should stop any public folder migrations until a fix is released.

The normal procedure to migrate public folders involves connecting the target environment to a single public folder database (PFDB01 for this example) in the source environment. The public folder hierarchy on that database provides a “map” of all public folder content, so if data exists on other public folder databases (PFDB02 for this example), the hierarchy records where that data should be pulled from. After all data is initially synced, a cutover is scheduled, during which all users are locked out of public folders. After this lockout, all changes made since the end of the initial sync (added content, deleted content, modified content, permission changes, etc.) are synced to the target environment. After this final sync, access is unlocked and all users work with content in the new, modern public folders in the target environment.
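For reference, that workflow maps roughly onto the standard batch migration cmdlets. The following is a minimal sketch, assuming an Exchange 2010 source and the Exchange Management Shell; the server name, CSV path, and batch name are placeholders, and parameter support varies by version and target platform.

    # Sketch of the migration workflow described above. "EX2010-PF01",
    # the CSV path, and the batch name "PFMigration" are placeholders.

    # On the target: create and start a batch that syncs from the single
    # source public folder database (PFDB01 in our example).
    $source = Get-PublicFolderDatabase -Server "EX2010-PF01"
    $mapCsv = Get-Content "C:\PFMigration\FolderToMailboxMap.csv" -Encoding Byte
    New-MigrationBatch -Name "PFMigration" -SourcePublicFolderDatabase $source -CSVData $mapCsv
    Start-MigrationBatch -Identity "PFMigration"

    # At cutover: lock users out of the legacy public folders (run against
    # the legacy environment), then finalize the batch to run the final sync.
    Set-OrganizationConfig -PublicFoldersLockedForMigration:$true
    Complete-MigrationBatch -Identity "PFMigration"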

The bug comes into play when the source environment has multiple public folder databases, and it is exacerbated when there are replicas that do not exist on all databases. For our example, say that PFDB02 hosts a public folder called “Human Resources” that is not replicated to PFDB01, meaning that PFDB01 holds none of the data in the Human Resources folder. The hierarchy on PFDB01 still maps where that data lives, so when a user (or, during migration, the target Exchange servers) queries PFDB01, they are given a referral to the database that holds the data. The issue is that while referrals are followed during the initial sync for data that lives on a public folder database other than the one used to sync with the target (in our example, PFDB01), they are not followed during the incremental and final syncs, so that data is not copied to the target. This means that upon cutover, the Human Resources folder from our example will contain only the content that was copied during the initial sync; any changes made to the folder after that point will NOT be copied to the target environment. Additionally, for any folders that are replicated to PFDB01, any data not yet replicated to that database will also be lost.
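A quick way to see which folders are exposed is to inspect each folder's replica list in the legacy environment. A hedged sketch, assuming an Exchange 2010 source where the Replicas property lists database identities; “PFDB01” is the placeholder name of the sync source database:

    # Flag folders whose replica list does not include the database the
    # target environment syncs from ("PFDB01" is a placeholder).
    Get-PublicFolder -Identity "\" -Recurse -ResultSize Unlimited |
        Where-Object { ($_.Replicas | ForEach-Object { $_.Name }) -notcontains "PFDB01" } |
        Format-Table Name, Replicas -AutoSize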

This is a major issue for any organization that has multiple public folder databases and does not have a “full mesh” topology in which every public folder database contains replicas of all public folders.

To understand whether your environment is affected, please see the scenarios below:

My organization has a single public folder database.

In this scenario you are unaffected by the bug, since there are no referrals to data on other databases.
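To confirm you are in this scenario, count the public folder databases in the legacy Exchange Management Shell. A hedged sketch; output properties can vary by version:

    # List every public folder database in the org; a single row means
    # this scenario applies.
    Get-PublicFolderDatabase | Format-Table Name, Server -AutoSize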

My organization has multiple public folder databases, and all public folders are replicated to all databases (Full mesh topology).

In this scenario, all data is pulled from the designated database that the target environment has been set to sync with. While all data that exists on that database is copied to the target during the incremental and final syncs, any data that has not yet reached it (due to delayed replication on the legacy servers) will not be synced. Here it is advisable to wait until Microsoft issues a resolution (reportedly in the next CU for each platform); your exposure to data loss depends on the health of your public folder replication. If other factors are forcing the move to modern public folders and waiting for the update is not feasible, the only way to guarantee against data loss is to remove all replicas from all public folder databases except the one that is the source for the migration.
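One rough way to gauge replication health before finalizing is to compare a folder's item count across replica servers; matching counts suggest replication has caught up. A hedged sketch with placeholder server and folder names, and parameter support that varies by version:

    # Compare item counts for the same folder on two replica servers.
    Get-PublicFolderStatistics -Identity "\Human Resources" -Server "EX2010-PF01" |
        Format-Table Name, ItemCount, TotalItemSize
    Get-PublicFolderStatistics -Identity "\Human Resources" -Server "EX2010-PF02" |
        Format-Table Name, ItemCount, TotalItemSize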

My organization has multiple public folder databases and replicas may or may not exist on any given database.

If you are in this scenario, you stand to lose the most data. Data on public folder databases that is not replicated to the database being used to sync with modern public folders will not sync after the initial synchronization job is complete. As in the scenario above, organizations should either wait for the next CU for their specific target platform or, if that is impossible, restructure public folders such that all replicas exist only on the database that is the source for the migration, as sketched below.
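A hedged sketch of that restructuring, assuming an Exchange 2010 source: point every folder's replica list at the single source database, then let replication move the content there before starting (or restarting) the migration. “PFDB01” is a placeholder; Microsoft's stock scripts in the Exchange scripts folder (e.g. MoveAllReplicas.ps1) do much the same at larger scale.

    # Rehome every folder's replica list onto the migration source
    # database ("PFDB01" is a placeholder). Content must finish
    # replicating to it before the migration's initial sync begins.
    Get-PublicFolder -Identity "\" -Recurse -ResultSize Unlimited |
        Set-PublicFolder -Replicas "PFDB01"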

I have already cut over to modern public folders… How much data did I lose, and how do I get it back?

It would be difficult to quantify the amount of data that an organization has lost. The amount grows with the time between the completion of the initial sync job and the finalization of the migration: if you waited two weeks after the initial sync to finalize, you would essentially be missing two weeks of changes in every folder not replicated to the database that was the source of the migration. The actual amount will obviously vary with the level of activity in those folders.
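If the legacy databases still exist, a rough way to scope the loss is to export per-folder item counts from both sides and compare them. A hedged sketch with placeholder server names and paths; run the first command in the legacy shell and the second in the new environment, then diff the two CSVs:

    # Export per-folder item counts from a legacy server...
    Get-PublicFolderStatistics -Server "EX2010-PF02" |
        Select-Object FolderPath, ItemCount |
        Export-Csv "C:\PFMigration\legacy-counts.csv" -NoTypeInformation

    # ...and from the modern public folders, then compare the files.
    Get-PublicFolderStatistics |
        Select-Object FolderPath, ItemCount |
        Export-Csv "C:\PFMigration\modern-counts.csv" -NoTypeInformation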

The worst news of all is that there is no easy way to get back the data you have lost. The only options are to restore the data in a completely isolated lab environment, either from a backed-up database or from a public folder database that has not yet been decommissioned, or to leverage a third-party tool that can mine data from a database at the brick level.

It is unknown whether the fix expected in the next CU for each respective platform will require restarting the initial sync job, but it is likely.

The timing of this bug report is particularly odd. It is hard to believe that this issue has gone unnoticed across thousands of migrations during the 3.5+ year life span of Exchange 2013 and its 12 cumulative updates, not to mention the thousands of migrations to Exchange Online.

This article will be updated if more relevant details are released, or when a fix has been issued.