inserts and updates only

  • Hi,

    This is my first post, so I'm sorry if this is a stupid question.

    I have a set of tables on separate servers that need to receive immediate updates and inserts, but not deletes. In other words, I only need to append new data or data that has changed, never delete. Is this possible with transactional replication, or is there a better way of doing it (i.e. setting up manual triggers, eeek)?

    Thanks so much for your uber admin insight.
     
    Chad
  • I'm not the replication expert, but I had a similar situation years ago in v6.5. I used a trigger to move the inserts and updates to a separate table. Then I replicated that table to the other server.
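    A minimal sketch of that kind of trigger (table, column, and staging-table names here are hypothetical): inserts and updates get copied into a staging table, and only the staging table is published.

    CREATE TRIGGER dbo.trg_table1_capture
    ON dbo.table1
    FOR INSERT, UPDATE
    AS
    BEGIN
        SET NOCOUNT ON

        -- Refresh staging rows that were captured earlier...
        UPDATE s
        SET s.col1 = i.col1, s.col2 = i.col2
        FROM dbo.table1_staging AS s
        JOIN inserted AS i ON i.id = s.id

        -- ...and capture rows seen for the first time.
        INSERT INTO dbo.table1_staging (id, col1, col2)
        SELECT i.id, i.col1, i.col2
        FROM inserted AS i
        WHERE NOT EXISTS (SELECT 1 FROM dbo.table1_staging AS s2 WHERE s2.id = i.id)
    END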

  • My approach would be different: I would continue to use the replication stored procedures on the subscriber server, but edit the delete stored procs to take no action. I've never done this before, but I believe it is entirely possible. This would make the most sense to me, as replication would continue to work, except that the deletes would not be processed on the subscriber.

    The only thing you'd have to remember is to re-edit the delete stored procs in the event that you have to regenerate the replication stored procs on the subscriber.
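    A minimal sketch of what an edited delete proc might look like (assuming a single-column integer key; transactional replication generates these procs with names like sp_MSdel_<table>):

    ALTER PROCEDURE dbo.sp_MSdel_table1
        @pkc1 int
    AS
    BEGIN
        -- Original DELETE removed: deletes arriving from the publisher
        -- are silently discarded on the subscriber.
        RETURN 0
    END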

    Hope this helps.


    "See how deep the rabbit hole goes..."

  • That is a creative idea, Morpheuz. I had the same idea before this post, and now that another DBA has brought it up I decided to test it. Here's what I did:

    - Created a Transactional Publication w/ immediate updating.

    - Publication Properties > Articles > Table Article Properties > Commands > verified the 3 boxes are checked to replace INSERT, UPDATE, DELETE commands with stored procs.

    - Commented out the code in the UPDATE and DELETE procs on the subscriber DB and added a simple statement (SELECT @@VERSION) - see the sketch after this list.

    - Inserted some data into the published table and monitored the Log Reader Agent to ensure no errors occurred.

    - Verified the newly inserted data made it to the subscriber table.

    - Deleted some data from the published table and monitored the Log Reader Agent to ensure no errors occurred.

    - Checked whether the deleted data still exists in the subscriber table. It does!

    - Now, the real test: when I insert data into the published tables, will the IDs conflict in the subscriber table?

    - No, new inserts generate new IDs, since they are treated as transactions and the IDs auto-increment.
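    For reference, roughly what one of the gutted subscriber procs looks like (hypothetical two-column table; the generated procs are named like sp_MSupd_<table> and take one parameter per column plus the key):

    ALTER PROCEDURE dbo.sp_MSupd_table1
        @c1 int, @c2 varchar(50), @pkc1 int
    AS
    BEGIN
        -- Original UPDATE commented out; a harmless statement is left in
        -- so the proc still succeeds when the distribution agent calls it.
        SELECT @@VERSION
    END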
     
    Hooray! Does anyone else see potential problems with this? For now I will go with Steve's idea until further testing is done, just to be safe.
     
    Thanks a lot !
    Chad
  • Hi Chad, Guess who I am

    Before I go over the pros and cons of both methods, I have to say I don't have a good solution either. I am just trying to tell you what you might be up against when you choose either way.

    Steve's way certainly works; the only potential problem might be locking, but if you have the secondary table on the same server it most likely won't happen. Another downside I can think of is that you now have three sets of the same data: two sets on the publisher and one set on the subscriber. So you're wasting space.

    Morpheuz's way is actually really slick, but like Mr. M mentioned, you have to remember to re-edit the delete stored procs, and it has another potential problem as well: if something happens to the subscriber database and you have to resnapshot, you will lose all the accumulated (not-deleted) data and just get the publisher's data again. So even though Morpheuz's way is better performance-wise, to keep the data you want you need Steve's suggestion.

    Since a trigger does take locks while it inserts into the other table, you might want to change your app's insert, update and delete code instead. If you are using stored procs, you can just insert into two different tables for that purpose and then manually move the data over to the subscriber every few minutes, if you can afford some time gap; if not, set up continuous replication on the 2nd table, etc. (see the sketch below).
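    A sketch of that dual-insert idea (proc and table names are hypothetical): the app's insert proc writes to the live table and to a feed table that gets shipped to the subscriber on a schedule.

    CREATE PROCEDURE dbo.usp_table1_insert
        @id int, @col1 varchar(50)
    AS
    BEGIN
        SET NOCOUNT ON
        -- Write the row to the live table...
        INSERT INTO dbo.table1 (id, col1) VALUES (@id, @col1)
        -- ...and to the feed table that is moved to the subscriber.
        INSERT INTO dbo.table1_feed (id, col1) VALUES (@id, @col1)
    END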

    The good thing about moving the data to the subscription database manually is that you can remove the data from the second table once it has been applied, so you don't have to keep three sets of data.

    Sorry for not giving a better answer, but if you don't have to worry about space, I'd recommend Steve's way.

     

     

  • Agree that either way will work, though I like modifying the proc more than using the trigger - cheaper to not do the work. You could also use an INSTEAD OF trigger on the subscriber table and just throw away the commands there.

    If you do the subscriber trigger or the proc change, it will break the next time you snapshot unless you add a post-snapshot script to put the changes back.

    Also, if you snapshot, you'll "lose" the records on the subscriber that were deleted from the publisher before. You might need some type of pre-snapshot script to move those out and put them back. Convoluted, but you can make it work.

    As I think about it, the INSTEAD OF trigger might wind up being cleaner: you could move the deleted rows to a separate table on the subscriber and just use a view to combine them with the real table. Then you have no snapshot headaches (other than needing to reapply the trigger each time).
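    A sketch of that arrangement (names hypothetical): the trigger archives the rows being deleted, still applies the delete to the base table so it keeps matching the publisher, and a view stitches the two sets back together.

    CREATE TRIGGER dbo.trg_table1_keep_deletes
    ON dbo.table1
    INSTEAD OF DELETE
    AS
    BEGIN
        SET NOCOUNT ON
        -- Archive the rows being deleted...
        INSERT INTO dbo.table1_deleted (id, col1)
        SELECT id, col1 FROM deleted
        -- ...then let the delete proceed against the base table.
        DELETE t
        FROM dbo.table1 AS t
        JOIN deleted AS d ON d.id = t.id
    END
    GO

    CREATE VIEW dbo.table1_full
    AS
    SELECT id, col1 FROM dbo.table1
    UNION ALL
    SELECT id, col1 FROM dbo.table1_deleted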

     

  • I am trying to implement the concept of only pushing inserts and updates but not deletes.

    I have written a "pre-snapshot" script.

    Basically, it executes an sp_rename for each table in the snapshot:

    exec sp_rename 'table1', 'table1_backup'

    Then execute a new snapshot.

    The question is: what is the fastest way to push the data back into the replicated table?

    In my situation, the volume of data in the publisher will be a small fraction of the volume of the data in the subscriber (think daily data vs. lifetime data).

    So I imagine that I would be better off moving the data from the newly created snapshot into the renamed table rather than the other way around.

    First -Turn off Log Reader

    Second - DELETE FROM table1_backup WHERE id IN (SELECT id FROM table1)

    Third - INSERT INTO table1_backup SELECT * FROM table1

    Fourth - sp_rename 'table1_backup', 'table1'

    Fifth - Turn on Log Reader

    This isn't working due to the following error:

    Server: Msg 15051, Level 11, State 1, Procedure sp_rename, Line 234

    Cannot rename the table because it is published for replication.

    Anyone know any way around this?

    Thanks,

    -jmr

     

     

     

  • There is no need to implement a custom solution. Have a look at the article properties, Commands tab, and type NONE for the 'Replace delete commands with...' section.
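    The script equivalent, if the article is being set up in T-SQL (publication and article names here are hypothetical; NONE means the distribution agent takes no action for deletes on this article):

    EXEC sp_addarticle
        @publication   = N'MyPublication',
        @article       = N'table1',
        @source_object = N'table1',
        @del_cmd       = N'NONE'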

    Rgds,

    Paul Ibison, SQL Server MVP


    Paul Ibison
    Paul.Ibison@replicationanswers.com

  • I am using the option to ignore deletes given on that tab. I'm more concerned with generating a new snapshot if, for any reason, the entire process needs to be restarted.

    In the article properties, Snapshot tab, there are a number of options:

    Keep existing table unchanged.

    Drop and recreate existing table.

    Delete data in the existing table that matches the row filter statement.

    Delete all data in the existing table.

    I'm guessing that these correspond to the sp_addarticle @pre_creation_cmd options ('none', 'delete', 'drop', 'truncate').
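    If that's right, setting it from a script instead of the UI should look something like this (publication and article names hypothetical; 'none' would correspond to "Keep existing table unchanged"):

    EXEC sp_changearticle
        @publication = N'MyPublication',
        @article     = N'table1',
        @property    = N'pre_creation_cmd',
        @value       = N'none'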

    I will run some tests to see if the 'NONE' option does what I'm looking for.

    Thanks,

    -jmr

     

     

     

  • OK - as you are not replicating the deletes, you don't want to remove the accumulated data at the subscriber? In that case, the 'Keep existing table unchanged' option would be the desired one (much like multiple publishers/one subscriber).

    Rgds,

    Paul Ibison, SQL Server MVP


    Paul Ibison
    Paul.Ibison@replicationanswers.com

  • I altered my scripts to reflect that option.

    Now, I can run the snapshot, but when the distributor starts, I get the following error: "The process could not bulk copy into table '<TableName>' "

    I found the following kb article: http://www.kbalertz.com/kb_885460.aspx#toc

    This says that if I upgrade to MDAC 2.8 it will fix the problem.

    MDAC 2.8 SP1 comes with MSSQL SP4, and I have SP4 installed, so I am a little skeptical.

    Regardless, I downloaded and installed MDAC 2.8 SP1. I'll see what happens next...

     

  • Hi, still no luck.

    I looked into the distribution db to determine the actual line that is causing the error:

    sync -t"test_requests_floors"
         -d"c:\Program Files\Microsoft SQL Server\MSSQL\ReplData\unc\SQLDEV2_tbis_dev_requests\20050614203617\test_requests_floors_12.bcp"
         -hORDER( [request_id],[floor_id] ASC)

     

    I even tried to run the original script that I had with

    @pre_creation_cmd = 'drop'

    and then updated that option through Enterprise Manager. That resulted in the same error.

    What's going on here?

    -jmr

     

     
