Sync Services FAQ

Q: Will Sync Services or SQL Server Compact Edition be in the .NET Framework, or .NET Compact Framework?

A: No, we are not shipping within either framework; we're shipping as an add-on component. Why? Because we wanted more flexibility with our ship schedule. SQLce will ship 2-3 times between .NET 2.0 and .NET 3.5. While it would be nice to ride the distribution of the frameworks, given the embedded/private deployment options of SQLce and Sync Services, we felt it was better to keep that scheduling flexibility.

Q: Will Sync Services ship on both the desktop framework and the .NET Compact Framework?

A: Yes, but at different times. As of March 16th ’07, we are scheduled to ship for the full framework, but we are not planning on shipping Sync Services for the device platform in the Orcas product. We do plan to ship the client components for Sync Services soon after Orcas, but are still working out the schedule. The problem is that the various .NET Compact Framework teams, including the Visual Studio for Devices and Sync teams, have a lot of work to manage across many different device platforms on a very short schedule, and we haven't been able to complete all the appropriate ship-level test coverage. We have designed, and done preliminary testing of, the client components working and syncing over web services. We are still hopeful we can pull it in, but at this point we're just not ready to commit to Orcas, but rather shortly thereafter.

Q: When will Sync Services ship?

A: Sync Services for ADO.NET will ship at the same time Visual Studio Orcas ships. This is currently scheduled for Q4 2007. Note: this is not meant to be the official place to get the timeframe for Orcas; rather, our current plan is to ship Sync Services within SQL Server Compact Edition 3.5, which will ship with Orcas.

Q: How are deletes purged?

A: On the server, deletes are either kept in a tombstone table or simply tracked by some sort of active/status flag in the primary table. Since this version of Sync Services isn't tightly coupled to SQL Server, we actually don't do anything. In general, we'd expect the DBA to write a scheduled task to purge tombstone records at whatever interval they determine. You can expect us to do more in the “future”, tease, tease, tease…
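
By way of illustration, a purge job can be as simple as the sketch below (shown in C#, though a SQL Agent job works just as well). The Sales_Tombstone table, DeleteTimestamp column, connection string, and 30-day retention window are all hypothetical; you'll generally want the retention window to be longer than the longest time you expect a client to stay offline, or that client will miss deletes.

```csharp
// Sketch of a scheduled tombstone purge (e.g. run from a small console app or job).
// "Sales_Tombstone" and "DeleteTimestamp" are hypothetical names; substitute your own
// tombstone schema and a retention window that fits your offline clients.
using System.Data.SqlClient;

class TombstonePurge
{
    static void Main()
    {
        using (var conn = new SqlConnection("Data Source=.;Initial Catalog=SalesDB;Integrated Security=True"))
        using (var cmd = new SqlCommand(
            "DELETE FROM Sales_Tombstone WHERE DeleteTimestamp < DATEADD(day, -30, GETUTCDATE())", conn))
        {
            conn.Open();
            int purged = cmd.ExecuteNonQuery();  // removes tombstones older than the retention window
        }
    }
}
```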

On the client, SQLce purges deleted records once it confirms data has been sent to the server.

Q: Can I purge old data on the client without triggering a delete on the server?

A: Yes. While we don't have a simple API to do this today, you can delete a bunch of rows on the client based on whatever criteria you decide, then simply “AcceptChanges” on the client prior to those changes being sent to the server. Of course, you could also intercept these on the server and toss the deletes to protect your server data.
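
Here's a rough sketch of that client-side purge, assuming the Orders table and columns shown and an AcceptChanges call on the client provider along the lines mentioned above; check the exact member name and overload against the shipped API before relying on it.

```csharp
// Sketch: archive old rows on the client without those deletes flowing to the server.
// Table/column names are hypothetical; the AcceptChanges member on the client provider
// is assumed to behave as described in the answer above -- verify against the shipped API.
using System.Data.SqlServerCe;
using Microsoft.Synchronization.Data.SqlServerCe;

class ClientPurge
{
    static void PurgeClosedOrders(string connectionString)
    {
        using (var conn = new SqlCeConnection(connectionString))
        using (var cmd = new SqlCeCommand(
            "DELETE FROM Orders WHERE Status = 'Closed' AND OrderDate < DATEADD(month, -6, GETDATE())", conn))
        {
            conn.Open();
            cmd.ExecuteNonQuery();
        }

        // Mark the local deletes as already synchronized so the next sync doesn't upload them.
        var clientProvider = new SqlCeClientSyncProvider(connectionString);
        clientProvider.AcceptChanges("Orders");
    }
}
```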

Q: Does Sync Services support low bandwidth type sync scenarios?

A: By low bandwidth, I mean: can I synchronize only the important things now and catch up later? Yes. You can upload only, download only, or synchronize just a particular SyncGroup based on your own logic at the time.
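
A minimal sketch of what that logic might look like; the table names, the "CriticalData" group, and the onFastLink flag are illustrative.

```csharp
// Sketch: sync only what matters right now, and let heavier data wait for bandwidth.
using Microsoft.Synchronization.Data;

class LowBandwidthSync
{
    static void Sync(SyncAgent agent, bool onFastLink)
    {
        // agent.LocalProvider and agent.RemoteProvider are assumed to be configured elsewhere.

        // New orders always go up, but nothing is pulled down for them on this pass.
        SyncTable orders = new SyncTable("Orders");
        orders.SyncDirection = SyncDirection.UploadOnly;
        orders.SyncGroup = new SyncGroup("CriticalData");
        agent.Configuration.SyncTables.Add(orders);

        // Big, download-only reference data can wait until there is bandwidth to spare.
        if (onFastLink)
        {
            SyncTable products = new SyncTable("Products");
            products.SyncDirection = SyncDirection.DownloadOnly;
            agent.Configuration.SyncTables.Add(products);
        }

        SyncStatistics stats = agent.Synchronize();
    }
}
```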

Q: Does Sync Services support batching for large data sets?

A: Yes, but not quite yet. We initially scoped this out of the first release, but we believe we'll be able to get it in, so look for it sometime around March ’07.

Q: How does Sync Services track changes?

A: Sync Services uses an anchor-based model. Each time a sync operation occurs, it gets a reference mark from the server. It could be the server's DateTime or a TimeStamp (RowVersion). The client saves that value for the next sync operation. Each time the client synchronizes a particular SyncGroup, it first requests the server anchor. It then executes the queries on the server using the last anchor as the low range and the new anchor as the high range. This gets a consistent set of changes across several queries. In future releases of the Microsoft Synchronization Platform we'll be supporting a knowledge-based sync model as well as the anchor-based model discussed here.
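
To make the anchor flow concrete, here is a sketch of how a server provider might wire up the new-anchor command and an incremental-insert query. The Orders columns and the CreateTimestamp rowversion column are hypothetical, and a real provider also needs update/delete enumeration and apply commands for each table.

```csharp
// Sketch of anchor-based change enumeration on the server provider, using a rowversion anchor.
using System.Data;
using System.Data.SqlClient;
using Microsoft.Synchronization.Data;
using Microsoft.Synchronization.Data.Server;

class AnchorSetup
{
    static DbServerSyncProvider BuildServerProvider(SqlConnection conn)
    {
        DbServerSyncProvider serverProvider = new DbServerSyncProvider();
        serverProvider.Connection = conn;

        // 1. New (high-water) anchor: the server's current rowversion at the start of each sync.
        SqlCommand newAnchor = new SqlCommand(
            "SELECT @" + SyncSession.SyncNewReceivedAnchor + " = @@DBTS", conn);
        SqlParameter anchorParam = newAnchor.Parameters.Add(
            "@" + SyncSession.SyncNewReceivedAnchor, SqlDbType.Timestamp, 8);
        anchorParam.Direction = ParameterDirection.Output;
        serverProvider.SelectNewAnchorCommand = newAnchor;

        // 2. Incremental inserts between the anchor saved from the last sync and the new one.
        //    "CreateTimestamp" (a rowversion column) and the Orders columns are hypothetical.
        SyncAdapter ordersAdapter = new SyncAdapter("Orders");
        SqlCommand incrInserts = new SqlCommand(
            "SELECT OrderId, OrderDate, CustomerId FROM Orders " +
            "WHERE CreateTimestamp > @" + SyncSession.SyncLastReceivedAnchor +
            " AND CreateTimestamp <= @" + SyncSession.SyncNewReceivedAnchor, conn);
        incrInserts.Parameters.Add("@" + SyncSession.SyncLastReceivedAnchor, SqlDbType.Timestamp);
        incrInserts.Parameters.Add("@" + SyncSession.SyncNewReceivedAnchor, SqlDbType.Timestamp);
        ordersAdapter.SelectIncrementalInsertsCommand = incrInserts;
        serverProvider.SyncAdapters.Add(ordersAdapter);

        return serverProvider;
    }
}
```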

Q: Can I update everything in a single operation, or can I control things more granularly?

A: Within the SyncAgent, you can use the SyncGroup to determine the grouping of updates. For example, you may choose to put all the lookup tables in their own individual groups. If the connection drops while you're synchronizing your lookups, it can pick up where it left off the next time it syncs. However, when synchronizing Orders, you probably don't want Orders to ever go up/down without OrderDetails. Simply put the Orders and OrderDetails tables in the same SyncGroup, and you're all set.
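
A sketch of that configuration; the table and group names are illustrative.

```csharp
// Sketch: lookups in their own groups, Orders/OrderDetails kept atomic in one group.
using Microsoft.Synchronization.Data;

class GroupSetup
{
    static void ConfigureGroups(SyncAgent agent)
    {
        // Lookup tables can each succeed or fail independently.
        SyncTable products = new SyncTable("Products");
        products.SyncGroup = new SyncGroup("ProductsGroup");

        // Orders and OrderDetails share a group, so they are applied together
        // and never arrive without each other.
        SyncGroup ordersGroup = new SyncGroup("OrdersGroup");
        SyncTable orders = new SyncTable("Orders");
        orders.SyncGroup = ordersGroup;
        SyncTable orderDetails = new SyncTable("OrderDetails");
        orderDetails.SyncGroup = ordersGroup;

        agent.Configuration.SyncTables.Add(products);
        agent.Configuration.SyncTables.Add(orders);
        agent.Configuration.SyncTables.Add(orderDetails);
    }
}
```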

Q: How are constraints, keys, and other db objects brought down to the client?

A: This again falls into the category of Sync Services being about synchronizing data, not replicating a database. Sync Services does do some schema and even database creation with SQL Server Compact Edition. If you're starting from scratch and you synchronize for the first time, the SQLce database will be created based on the connection string properties: name, encryption, password, etc. It will then create all the tables the client has said it's interested in. Remember, just because the server exposes 20 tables doesn't mean the client must use all of them. The client determines which tables it wants to consume with the SyncTable collection. When the tables are created, primary keys are created, datatypes are mapped to the client's datatypes, and nullability is applied. No additional indexes, constraints, defaults, etc. are applied. There are SchemaCreating/SchemaCreated events fired where you can either initially create the schema to be used or alter the schema after the tables are created.
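
For example, here's a sketch of using the schema events to add an index that the automatic creation skips. The event-args members used below (and the exact handler signature) are my assumptions about the API surface described above, so verify them against the shipped assemblies; the index and table names are illustrative.

```csharp
// Sketch: tweak the client schema after Sync Services creates the tables.
using System.Data.SqlServerCe;
using Microsoft.Synchronization.Data.SqlServerCe;

class ClientSchemaTweaks
{
    static SqlCeClientSyncProvider BuildClientProvider(string connectionString)
    {
        SqlCeClientSyncProvider clientProvider = new SqlCeClientSyncProvider(connectionString);

        // Fires after a client table (PK, datatypes, nullability) has been created.
        // e.Connection and the cast below are assumptions about the event args; real code
        // would also check which table the event is for before adding the index.
        clientProvider.SchemaCreated += delegate(object sender, SchemaCreatedEventArgs e)
        {
            SqlCeCommand addIndex = new SqlCeCommand(
                "CREATE INDEX IX_Orders_CustomerId ON Orders (CustomerId)",
                (SqlCeConnection)e.Connection);
            addIndex.ExecuteNonQuery();
        };

        return clientProvider;
    }
}
```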

Q: Does sync services handle parent/child/grandchild relationships?

A: Yes. Unlike RDA, where you can only sync one table at a time, Sync Services handles the hierarchical nesting of inserts, updates, and deletes. In fact, you can even control it on the server separately from the client. On the server, tables are placed in the SyncAdapter collection. The order of the SyncAdapters defines the order in which updates will be applied. Inserts and updates are done from the top down, while deletes are done from the bottom up. The same is done on the client, in the SyncTables collection. This allows the server to control its order of updates while allowing the client to control its own.
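
A sketch of the server-side ordering; each adapter still needs its select/apply commands configured, and the table names are illustrative.

```csharp
// Sketch: adapter order controls how hierarchical changes are applied --
// inserts/updates run top-down in this order, deletes run bottom-up.
using Microsoft.Synchronization.Data.Server;

class AdapterOrdering
{
    static void ConfigureAdapters(DbServerSyncProvider serverProvider)
    {
        // Parent, child, grandchild -- in that order.
        serverProvider.SyncAdapters.Add(new SyncAdapter("Customers"));
        serverProvider.SyncAdapters.Add(new SyncAdapter("Orders"));
        serverProvider.SyncAdapters.Add(new SyncAdapter("OrderDetails"));
    }
}
```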

Q: How are schema changes handled?

A: Unlike merge, which is geared around replicating a database, Sync Services is geared around synchronizing data. I'm not a big believer in the idea that, generally speaking, the DBA simply adds a column to the server, the UI automatically updates on the client, and life is good. While it can be done, most of the time I'd bet you want some control over where and how the new element is displayed, add some interaction logic to the client, tab order, etc. We really treat schema updates as an app update. It's a holistic update of the app overall. The model we've gone with for Sync Services is the following:

  • A new requirement is defined, say AddressLine3. The DBA would add the column to the server. All the normal rules apply. If the column is non-nullable, then a default should be provided.
  • The developer involved with the sync layer would most likely create a new version of the sync service, say v2. This means that apps that were using v1 can be slowly migrated, or at least migrated with some level of control. If the user is in the middle of an important deal, the last thing they need is a forced software upgrade. Ever been in the middle of something important and IT forces an update that reboots your computer or app? Software is an enabler; it should help me achieve my goals, not fight me because IT thinks it's important now.
  • The app developer updates their service proxy to point to v2 of the sync service, exposing the extra column.
  • In the version-check code, the app author can either choose to reset the table, or they can execute an ALTER TABLE script locally adding the additional column (see the sketch after this list). They may even bring down a single data call to retrieve the values for the new column on all the existing rows.
  • The developer then decides what they want to do with the new element, updating their UI, logic, etc.
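
Here's a rough sketch of that version-check step. The AppInfo version table, the AddressLine3 column, and the version numbers are all illustrative bookkeeping; the point is simply that the local schema is altered in place rather than the table being dropped and recreated.

```csharp
// Sketch: upgrade the local SQLce schema in place when the app moves to v2 of the sync service.
using System.Data.SqlServerCe;

class SchemaUpgrade
{
    static void UpgradeToV2(string connectionString)
    {
        using (SqlCeConnection conn = new SqlCeConnection(connectionString))
        {
            conn.Open();

            // "AppInfo.SchemaVersion" is a hypothetical local bookkeeping table.
            SqlCeCommand check = new SqlCeCommand("SELECT SchemaVersion FROM AppInfo", conn);
            int version = (int)check.ExecuteScalar();
            if (version >= 2)
                return; // already upgraded

            new SqlCeCommand("ALTER TABLE Customers ADD AddressLine3 nvarchar(60) NULL", conn).ExecuteNonQuery();
            new SqlCeCommand("UPDATE AppInfo SET SchemaVersion = 2", conn).ExecuteNonQuery();

            // Optionally make one data call against the v2 service here to backfill
            // AddressLine3 for existing rows before the next full sync.
        }
    }
}
```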

So, while we didn't implement something as simple as point-and-click, we think it tends to fit the SOA model where apps may consume services from other apps, and they should have control over how and when they consume new schema.

Q: Does Sync Services support multiple publications?

A: Yes and no. Sync Services doesn't utilize the pub/sub model per se. You can configure the server provider to offer 20 tables you want to synchronize. One client simply says it cares about 3. Another client cares about a different 3. Another client cares about 4, which overlap the first two clients. In fact, we also support a client dynamically adding tables. A salesperson may cover for another salesperson for a week and need to bring in an additional product line. Within the app, the developer can change the filtering query, and off they go.
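
A sketch of what that might look like in the app. The SpecialtyProducts table and the @SalesPersonId filter parameter are illustrative and assume the server-side SyncAdapter commands are written to honor that parameter.

```csharp
// Sketch: a client picking up an extra table and a different filter value at run time.
using Microsoft.Synchronization.Data;

class DynamicTables
{
    static void CoverForColleague(SyncAgent agent, int colleagueId)
    {
        // Add a table this client normally doesn't carry.
        SyncTable extraProducts = new SyncTable("SpecialtyProducts");
        extraProducts.SyncDirection = SyncDirection.DownloadOnly;
        agent.Configuration.SyncTables.Add(extraProducts);

        // Widen the filter so the colleague's rows come down too; the server-side
        // incremental queries are assumed to reference @SalesPersonId.
        agent.Configuration.SyncParameters.Add(new SyncParameter("@SalesPersonId", colleagueId));

        agent.Synchronize();
    }
}
```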
