Case for merge replication?

  • Greetings,

    Trying to figure out the best way to do something and as usual I'm confused.

    We have a database, and we wish to use a tablet PC to do 'field work'.

    From our application, the user 'checks out' the records they want to work on and transfers them to the tablet; the user goes and does their job, comes back, and 'checks in' the records.

    OK, so from the merge replication point of view, a snapshot of the entire database is created (the server is the publisher/distributor), the tablet comes along, subscribes to the publication, gets the initial copy, and then any changes made on either side (publisher or subscriber) are synced whenever it's set up to do so.  Cool.

    The problem is that I don't want to copy the entire database to the tablet.  The database could be anywhere from 30 MB to 300 GB.  The tablet needs a set of core feeder tables (which don't change), and then I only want it to work on the selected records that were 'checked out' and, once the work is finished, put just those records back.

    The tables I want to check in/out would all have a common primary/foreign key between them.  I know I can simply choose the few tables that I want to replicate, but can I filter the replication process based on a key reference?

    As always, any thoughts would be greatly appreciated

    regards,

    Chris


  • I am not sure if I totally understand your question. If the question is can you filter rows of data from a specific table to a specific subscriber, then the answer is yes. You mention a set of core tables that do not change. Do you want to filter data from them, too?

  • Actually, it's more: can I DYNAMICALLY, from the subscriber side, PULL the records I want from a publication?

    The core tables are just that, static information that is not editable in the field.  They get pulled over to the subscriber once and then they merge replicate as needed.

    When the user selects the records that they want to work on in the field, I need a pull subscription that is dynamic to what they selected, from just 12 tables.  I'm trying to avoid having a HUGE amount of data on the subscriber side.

    I've seen that when you initially create a publication you can specify which rows to include (via a SQL SELECT statement), but the initial snapshot would have to include all rows since I don't know what the subscriber is going to ask for.

    Chris

  • Yes, set up a filter statement to reflect what you want to replicate where. The merge pull agent will then retrieve only data reflected in the filter.
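
    One way to make such a filter dynamic per tablet is to drive it from a small "check out" table, something like the sketch below (the table and column names are assumptions used for illustration, not from this thread):

        -- Hypothetical driver table: one row per checked-out key per tablet.
        -- The application inserts rows here at check-out and deletes them at check-in.
        CREATE TABLE dbo.CheckOut
        (
            JobID      int     NOT NULL,   -- the common primary/foreign key mentioned above
            TabletName sysname NOT NULL,   -- must match what HOST_NAME() returns on the tablet
            CONSTRAINT PK_CheckOut PRIMARY KEY (JobID, TabletName)
        );

        -- The article on this table would then carry a row filter such as:
        --     TabletName = HOST_NAME()
        -- HOST_NAME() is evaluated for each subscriber when its merge agent runs,
        -- so one publication can deliver different rows to each tablet.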

  • Hi, you don't mention what you are using to develop the front end.  If it is .NET, you could extract the required data into a DataSet on the tablet, disconnect, do the work, and then reconnect and send back the changes.

  • The front end is an ASP application.  I need to be able to push the data back to the main server whenever the user checks the items back in, and I don't think I want to write my own conflict manager, which is why I was looking at replication.

    I'm looking at setting up two publications on the server: one for the static tables and one for the tables that the tablet user selects from.  I cannot see, however, where the pull subscription has any filtering capabilities; I basically seem to have to pull the entire publication...  The publication can be filtered, but I do not know which rows to filter until the user selects the items.


  • There are two ways to set up a horizontal filter. If you set up the publication via scripts, use the stored proc sp_addmergearticle with the @subset_filterclause parameter, like this: @subset_filterclause = N'Upper(Terminal.TerminalID) = Upper(Host_Name())'

    Or, if you use Enterprise Manager (EM), go to the publication, choose the Filter Rows tab, select the table you want to filter, and add the filter clause.
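
    Putting the pieces together for the check-in/check-out scenario above, a script-based setup might look roughly like this. It is only a sketch under stated assumptions: a publication named 'FieldWork' is assumed to already exist, the CheckOut driver table from the earlier sketch is used, and 'Jobs'/'JobID' stand in for one of the 12 work tables and the shared key. The join filter is added with sp_addmergefilter so the related table follows the driver table's row filter.

        -- Driver article: each tablet only receives its own CheckOut rows.
        EXEC sp_addmergearticle
            @publication         = N'FieldWork',      -- assumed existing merge publication
            @article             = N'CheckOut',
            @source_object       = N'CheckOut',
            @subset_filterclause = N'TabletName = HOST_NAME()';

        -- One of the 12 work tables, published without a row filter of its own...
        EXEC sp_addmergearticle
            @publication   = N'FieldWork',
            @article       = N'Jobs',
            @source_object = N'Jobs';

        -- ...but joined to the filtered driver article, so only checked-out jobs replicate.
        EXEC sp_addmergefilter
            @publication       = N'FieldWork',
            @article           = N'Jobs',
            @filtername        = N'Jobs_CheckOut',
            @join_articlename  = N'CheckOut',
            @join_filterclause = N'Jobs.JobID = CheckOut.JobID';

    With this kind of setup the application only has to insert and delete rows in the CheckOut table at check-out/check-in time; the merge agent then pulls and pushes just the matching rows on the next synchronization.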
