Debugging SQL Express Client Sync Provider

How do you finally get the SQL Express Client Sync Provider to work correctly? It’s been almost a year since it was released, and it still has documented bugs. One was detected by Microsoft more than a month after release and documented on the forum, but the fix was never included in the released version. We could analyze this kind of shameless negligence in the context of Microsoft’s overall quality policies, but that is a broad (and also well-documented) topic, so we’ll leave it at that. It wouldn’t be such a problem if nobody were interested in using the provider, but people are, very much so. So what else is there to do than to try to fix what we can ourselves…

You can find the source for the class here. To use it, you may also want to download (if you don’t already have it) the original SQL Express provider source, which contains the solution and project files I didn’t include. (UPDATE: the original source seems to have been removed from the MSDN site, and my code was updated - see the comments on this post to download the latest version.)

The first (and solved, albeit only on the forum) problem was that the provider was reversing the sync direction. This happens because the client provider basically simulates client behavior by internally using a server provider. In hub-and-spoke replication, the distinction between client and server is important, since only the client databases keep track of synchronization anchors (that is, remember what was replicated and when). I also incorporated support for the datetime anchors I proposed in the mentioned forum post, which wasn’t present in the original source.

But that is not all that’s wrong with the provider: it also swaps client and server anchors, and that is a very serious blunder because it’s very hard to detect. It effectively uses client times/timestamps to detect changes on the server, and vice versa.
I tested it using datetime anchors, and this is the most dangerous situation, because if the server clocks aren’t perfectly synchronized, data can be lost. (It might behave differently with timestamps, but I doubt it.) The obvious solution for the anchors is to also swap them before and after running synchronization. This can be done by modifying the ApplyChanges method like this:

// swap the sent and received anchors before applying the changes
foreach (SyncTableMetadata metaTable in groupMetadata.TablesMetadata)
{
    SyncAnchor temp = metaTable.LastReceivedAnchor;
    metaTable.LastReceivedAnchor = metaTable.LastSentAnchor;
    metaTable.LastSentAnchor = temp;
}

// this is the original line
SyncContext syncContext = _dbSyncProvider.ApplyChanges(groupMetadata, dataSet, syncSession);

// swap the anchors back once the changes have been applied
foreach (SyncTableMetadata metaTable in groupMetadata.TablesMetadata)
{
    SyncAnchor temp = metaTable.LastReceivedAnchor;
    metaTable.LastReceivedAnchor = metaTable.LastSentAnchor;
    metaTable.LastSentAnchor = temp;
}

This seems to correct the anchor confusion, but for some reason the @sync_new_received_anchor parameter still receives an invalid value in the update/insert/delete stage, so it shouldn’t be used there. The reason could be that both the client and the server use the same sync metadata, and that the server sync provider posing as a client doesn’t consider itself required to leave valid anchor values behind once it’s finished. I promise to post in the future some more information I gathered while poking around the sync framework innards.

Note that this version is by no means fully tested nor free of issues, but its basic functionality seems correct. You have to be careful to use @sync_new_received_anchor only in the queries that select changes (either that, or modify the provider further to correct this behaviour: I think this can be done by storing and restoring the anchors in the metadata during ApplyChanges, but I’m not sure whether that is compatible with the provider’s internal logic). Another minor issue I found is that the trace log reports both the client and the server provider as servers.

If you find and/or fix another issue with the provider, please post a comment here so that we can one day have a fully functional provider.
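To make the distinction concrete, here is what a change-selecting command typically looks like with SQL Server 2008 change tracking (the table and column names are made up for illustration; @sync_last_received_anchor and @sync_new_received_anchor are the standard Sync Services parameter names). Commands of this shape are the only place where the new received anchor should be relied on:

```sql
-- Hypothetical SelectIncrementalInserts command for a table named Orders.
-- The two anchor parameters bound the change enumeration; per the note
-- above, the new received anchor should not be trusted in the apply
-- (insert/update/delete) stage.
SELECT o.*
FROM Orders o
JOIN CHANGETABLE(CHANGES Orders, @sync_last_received_anchor) ct
  ON o.OrderId = ct.OrderId
WHERE ct.SYS_CHANGE_OPERATION = 'I'
  AND ct.SYS_CHANGE_VERSION <= @sync_new_received_anchor
```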


  • Rodney

Hi guys. I’m really happy with the provided code for SQL Server - SQL Server client/server syncing - thanks for going to the trouble of publishing it.

But I've reached a point now where I want to enable batching of the updates... Documentation seems to be a bit thin on this, so I was wondering if/how it can be implemented in this scenario...? It seems to be related to a class called DbSyncContext - where is that? I can't seem to find where it fits in...?

    Any help would be really appreciated. Thanks.

posted by Rodney, Wednesday, 04 August 2010 09:45
  • damon

I think I've solved 18 by building on your solution to use a dual anchor system (see 23). It's survived quite a bit of testing so far, but unfortunately I've been temporarily pulled off this project.

posted by damon, Friday, 26 March 2010 03:09
  • William M

18) This is proving to be a much more complex beast than it would appear. The logical solution is to lock the table throughout the sync, but that could be rather costly during proxy trips to a remote server, rendering the db useless during that time (and we're talking about multi-user client support, so...). The other solution I am playing with is persisting the possible "rogue" changes between the upload|download phases to xml, and during the next sync, pushing those changes before applying the rest of the changes. I do not "love" the solution, but it does seem to be solid. (If anchor state is lost you have to re-init, which is the way the entire system works anyway; this just stores more data in the event that a table is updated mid-sync.) So alas, no complete solution today. Tomorrow I will continue.

20) I was correct; this will work fine as long as you use brackets around your table name. NOTE: you have to use the brackets in your anchor table as well, since the lookup is a string comparison.
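For example (all names hypothetical), the bracketed form has to appear identically in the sync commands and in the anchor table, because the anchor lookup compares the raw strings:

```sql
-- The sync commands reference the table in bracketed form...
SELECT p.*
FROM [Product Information] p
JOIN CHANGETABLE(CHANGES [Product Information], @sync_last_received_anchor) ct
  ON p.ProductId = ct.ProductId;

-- ...so the anchor table row must store the exact same string,
-- brackets included:
INSERT INTO Anchor (TableName, ReceivedAnchor)
VALUES ('[Product Information]', 0);
```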

    21) This functions just fine.

posted by William M, Friday, 26 March 2010 02:34
  • damon

I've updated the provider with a version that seems to solve my original problem without losing any changes. To summarize...

The Problem:
The last sent anchor in the original provider is correct in most cases except for the "client update bug"... An insert synced from the server is applied at the client _after_ the last sent anchor, so the next sync would pick it up and try to apply it back on the server. Normally it doesn't, because the change context contains the server id, which prevents this (via a filter in the incremental inserts/updates/deletes commands). However, if the inserted record is updated on the client before the next sync, the context info on the original insert will have been deleted by SQL Server change tracking, so the sync will try to apply the insert back on the server, leading to a duplicate primary key violation. In addition, the update will not be applied, because the sync won't see it.

To solve this we keep 2 anchors in the Anchor table. The first is the anchor set in Transaction 2, which is the one we normally use; the second is the change version of any server inserts (if there are no inserts, anchor 2 = anchor 1). The second anchor is used in the incremental inserts command as a filter, which prevents the inserted record (from the client update bug) from being applied back to the server. It is also used in the incremental updates command, where we take a union of the updates from both anchors; this ensures the update (from the client update bug) is applied.
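A rough sketch of what these two commands might look like with change tracking (all table, column, and parameter names are hypothetical; the exact shape depends on how the provider builds its commands):

```sql
-- Incremental inserts: enumerate from the normal anchor (@anchor1), but
-- filter with @anchor2 so the server-originated insert is not echoed back
-- once its change context has been overwritten.
SELECT o.*
FROM Orders o
JOIN CHANGETABLE(CHANGES Orders, @anchor1) ct ON o.OrderId = ct.OrderId
WHERE ct.SYS_CHANGE_OPERATION = 'I'
  AND ct.SYS_CHANGE_VERSION > @anchor2;

-- Incremental updates: union the updates visible from both anchors, so the
-- client-side update of the inserted row is still picked up.
SELECT o.*
FROM Orders o
JOIN CHANGETABLE(CHANGES Orders, @anchor1) ct ON o.OrderId = ct.OrderId
WHERE ct.SYS_CHANGE_OPERATION = 'U'
UNION
SELECT o.*
FROM Orders o
JOIN CHANGETABLE(CHANGES Orders, @anchor2) ct ON o.OrderId = ct.OrderId
WHERE ct.SYS_CHANGE_OPERATION = 'U';
```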

posted by damon, Friday, 26 March 2010 00:33
  • William M

    18) I will have this solution implemented sometime tomorrow.
20) I will test support for tables with spaces in them tomorrow and update you. It may simply be a matter of putting the table name in brackets ([table name]), but I'm not sure. [Also: the SqlExpressClient does not "rewrite" the error messages, so use caution when immediately reacting to any errors. It may say "Server database has..." when really it's "Client database has...". I do not plan to overwrite all of those internal messages.]

21) I have not specifically tested SQL08Express w/CT to SQL08Express w/CT, but there should not be any problems, as they both use the same Change Tracking technology. I do not maintain the demo, so I can't give you that, but if you download the SQLExpress library zip you can make your own demo around it.

posted by William M, Thursday, 25 March 2010 02:01
  • Dave

The sample I just downloaded does not work.
Also, can we sync between a remote SQL 2008 Express and a local SQL 2008 Express DB?

posted by Dave, Wednesday, 24 March 2010 20:32
  • Cornel

Okay, it would appear I missed a step: adding the tables to the anchor table in the database. Which brings me to the following: how would I be able to support tables with spaces in the table name, e.g. 'Product Information'? I know this is a serious design flaw in the database, yet it works correctly with the SqlCe sync client. I receive the following exception: System.ArgumentException: Table name 'Product Information' is not valid. This might be caused by one or more of the following issues: unable to find a SyncAdapter for the specified SyncTable; the table name is null or empty; or the table name contains the keywords SET, FMTONLY, and OFF.

posted by Cornel, Tuesday, 23 March 2010 10:50
  • Cornel

I have only recently come across this provider; I never needed it until we received a request from a client. I have looked through all the examples, and I now seem to be stuck with the following: on the client, 'select change_tracking_current_version()' returns 0, which to me indicates an initial sync. Yet I keep getting the following exception: Unable to complete sync. SetTableReceivedAnchor() had no effect. I have followed the examples closely and I seem to be missing something vital here. Any help would be appreciated.

posted by Cornel, Tuesday, 23 March 2010 08:25
  • William M

    Indeed, change tracking was designed with "server mode" in mind, ignoring "client mode".

Paragraph 1) I agree, it is possible (with any current solution) that client operations can be lost if they occur during the sync session of a Change Tracking database. In my personal environment this is acceptable, because there is but a single operator per client, so no action can take place during sync. However, the need for this to function is compelling, so I am pondering this solution: retrieve the last "server change anchor range", and add a column to the anchor table that stores it for the next sync. During the next sync, we generate the change list as we normally would, but exclude any server operations based on the last "server change anchor range". Basically: get all changes since the last sync except those that were downloaded from the server during the last session. I will play with the best way to implement this and work on it again later this week.
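A sketch of the exclusion idea, under the assumption that the "server change anchor range" from the previous session is stored as two extra values (all names are hypothetical):

```sql
-- Enumerate all client changes since the last sync, but skip the change
-- versions that were produced by applying server changes during the last
-- session (@server_applied_from .. @server_applied_to is the stored range).
SELECT o.*
FROM Orders o
JOIN CHANGETABLE(CHANGES Orders, @sync_last_received_anchor) ct
  ON o.OrderId = ct.OrderId
WHERE ct.SYS_CHANGE_VERSION
      NOT BETWEEN @server_applied_from AND @server_applied_to;
```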

Paragraph 2) I am unable to reproduce this flaw when using the mechanism above. If you execute the "SelectIncrementalInserts" and "SelectIncrementalUpdates" commands using the proper mechanisms, you will get the appropriate updates and inserts respectively. Note that the results from CHANGETABLE change based on the input baseline; they are "merged/optimized" based on your change request. That is, if you ask for inserts/updates since change 76, you may get one insert and one update. Then, if you run the same query immediately after asking for inserts/updates since change 77, you may get zero inserts and two updates. This is likely what you are experiencing: the wrong anchor is being used, so it returns 1 insert and 0 updates instead of 0 inserts and 1 update.
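For illustration, assuming a hypothetical Orders table where a row was inserted at change version 77 and then updated at 78, the same row reports differently depending on the baseline passed to CHANGETABLE:

```sql
-- Asked from baseline 76 (before the insert), change tracking merges the
-- insert and the subsequent update into a single insert:
SELECT SYS_CHANGE_OPERATION
FROM CHANGETABLE(CHANGES Orders, 76) ct;  -- reports 'I'

-- Asked from baseline 77 (the insert is already "seen"), the same row now
-- reports as an update:
SELECT SYS_CHANGE_OPERATION
FROM CHANGETABLE(CHANGES Orders, 77) ct;  -- reports 'U'
```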

posted by William M, Monday, 22 March 2010 19:24
  • damon

There's a problem with the new provider, which can be reproduced as follows:
Inserts are made on the server, then a sync session begins. If you insert some records at the client during the sync process, those changes may be lost.
This will happen if the inserts at the client are made after the sync provider gets its changes to apply, but before it applies them. When this happens, the inserts at the client have a lower change version than the inserts from the sync session, so they are ignored once the sent anchors are updated.

One solution would be to lock the tables being updated during the sync process. But really, we shouldn't have to do any of this in the first place, were it not for a possible bug in the change tracking system...
When the sync provider inserts a record on the client, change tracking stores the insert with its SYS_CHANGE_CONTEXT set to the server ID, as it should. Then, if you manually update the newly inserted record on the client, it stores the update with SYS_CHANGE_CONTEXT set to null, again as it should. But you will notice that at the same time it sets SYS_CHANGE_CONTEXT to null for the original insert as well. This is why the insert gets echoed back to the server.
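What damon describes can be reproduced directly with the WITH CHANGE_TRACKING_CONTEXT hint (the table and variable names are hypothetical; @serverId stands for the varbinary originator id the provider tags its changes with):

```sql
-- 1) The sync provider applies a server insert, tagging it with the
--    server id so it will not be echoed back:
WITH CHANGE_TRACKING_CONTEXT (@serverId)
INSERT INTO Orders (OrderId, Total) VALUES (42, 10.0);

-- 2) A local update is then made without a context:
UPDATE Orders SET Total = 12.0 WHERE OrderId = 42;

-- 3) Inspect the tracked change: the operation for the row is still an
--    insert, but SYS_CHANGE_CONTEXT now reads NULL, so the originator
--    filter no longer recognizes it and the insert goes back to the server.
SELECT SYS_CHANGE_OPERATION, SYS_CHANGE_CONTEXT
FROM CHANGETABLE(CHANGES Orders, @last_anchor) ct;
```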

posted by damon, Monday, 22 March 2010 05:24
