Replication of large database

  • I have a large database which produces a large number of transactions. I have a secondary SQL Server that I want to handle all reporting requests, so I want the secondary server to mirror the first. It only needs to be one-way replication.

    The problem is that the database which needs to be replicated does not have primary keys on all of the tables, which is required for transactional replication. 

    I've tried log shipping, but the size of the log files it creates is very large (>1 GB), and they take a while to copy over to the secondary server, causing a large latency.

    Does anyone have any other ideas for how I can accomplish this?

    Thanks

    -Dave

  • If transactional replication is not possible and log shipping is too slow, you will have to either:

    1. try a third-party solution like NSI Double-Take (the software way)

    or

    2. use SAN replication (the hardware way)

    btw: how often are you taking the t-log backups for log shipping? Taking them more frequently keeps each individual file small (see the sketch at the end of this post).

     


    * Noel
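
    PS - a minimal sketch of what I mean by more frequent t-log backups; the database name and UNC path are just placeholders, and you would schedule something like this from a SQL Agent job every 5-10 minutes so each .trn file stays small and copies to the reporting server quickly:

        -- Placeholder names: [MyBigDB] and \\ReportSrv\LogShip are examples only.
        -- Smaller, more frequent log backups mean smaller files to copy
        -- and less latency on the secondary.
        DECLARE @file NVARCHAR(260);
        SET @file = N'\\ReportSrv\LogShip\MyBigDB_'
                  + CONVERT(NVARCHAR(8), GETDATE(), 112)                    -- yyyymmdd
                  + N'_'
                  + REPLACE(CONVERT(NVARCHAR(8), GETDATE(), 108), ':', '')  -- hhmmss
                  + N'.trn';

        BACKUP LOG [MyBigDB]
        TO DISK = @file
        WITH INIT, NAME = N'MyBigDB log backup for log shipping';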

  • You say that the existing database does not have primary keys on all tables - are you able to add PKs to the tables that are missing them? (A query for finding those tables is sketched at the end of this post.) Obviously you will need to test the application to make sure it doesn't break, but assuming the developers have not used "SELECT * FROM" type syntax you will probably be okay - although if PKs are missing, then maybe not!?

    You also say that it is a large database. How large is that? We are currently looking at applying transactional replication (again for reporting purposes) on a 28 GB database with around 200,000 new transactions per day. Has anyone replicated a similar database, and if so, any comments/suggestions on performance issues (no filtering, no transformations, pull subscription with Distributor & Publisher on the same machine)?

    Regards

    David Saville

    http://www.aldex.co.uk
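
    PS - the query mentioned above for finding tables without a primary key; it only uses standard INFORMATION_SCHEMA views, so treat it as a rough sketch to adapt to your own database:

        -- Lists base tables that have no PRIMARY KEY constraint defined.
        SELECT t.TABLE_SCHEMA, t.TABLE_NAME
        FROM INFORMATION_SCHEMA.TABLES AS t
        WHERE t.TABLE_TYPE = 'BASE TABLE'
          AND NOT EXISTS (SELECT 1
                          FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS AS c
                          WHERE c.CONSTRAINT_TYPE = 'PRIMARY KEY'
                            AND c.TABLE_SCHEMA = t.TABLE_SCHEMA
                            AND c.TABLE_NAME = t.TABLE_NAME)
        ORDER BY t.TABLE_SCHEMA, t.TABLE_NAME;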

  • "I've tried log shipping, but the size of the logfiles it creates is very large (>1gb), and takes a while to copy over to the secondary server, causing a large latency."

    We use log shipping at my company for a reporting server, with logs ranging from 300 MB to 8 GB. The 8 GB logs do indeed take a long time, but with a proper SAN/controllers it shouldn't be a problem.

  • Litespeed now has a log shipping wizard.

    Doing log shipping with Litespeed is super fast!

  • EDIT: I just realized I posted this to a 2 1/2 year old thread!! Sunny, why did you bother dredging this up??

    "...The problem is that the database which needs to be replicated does not have primary keys on all of the tables, which is required for transactional replication.  ..."

    If you add primary keys, they can just be identity fields tacked onto the ends of the tables so they don't affect the live data (a sketch is at the end of this post).

    +++++++++++++++++++++

    "... You also say that it is a large database. How large is that? We are currently looking at applying Transactional Replication (again for reporting purposes) on a 28Gb database - with around 200,000 new transactions per day. Anyone replicated a similar database & is so any comments/suggestions on performance issues (no filtering, no transformations, pull subscription with Dist & Pub on same machine)?..."

    We have a 300 GB database and replicate about 100 GB of that, mostly via transactional replication. We do some small lookup tables with snapshot replication. Performance is fine for us. I used to have issues creating the initial snapshot when refreshing, because it locks tables, so I had to do it during off hours. We have faster servers now, so the initial snapshot is much faster (it still needs to be done during off hours).
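
    Here's a rough sketch of the "identity field tacked on the end" idea from above - the table and constraint names are made up, and you would want to test it on a copy first since adding the column takes a schema lock and can take a while on a big table:

        -- Hypothetical table name. A nonclustered PK on a new identity column gives
        -- transactional replication the key it needs without reordering existing rows.
        ALTER TABLE dbo.SomeLegacyTable
            ADD RowID INT IDENTITY(1, 1) NOT NULL
                CONSTRAINT PK_SomeLegacyTable_RowID PRIMARY KEY NONCLUSTERED;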

