Log Shipping for Many Large Databases

We have a large environment supporting a third-party application for 70 clients with 70 databases.

    There is now a requirement for a new reporting solution using read-only copies of the live DBs.

    Our current platform is SQL Server 2008 R2 CU5. I have looked into the options available:

    1. DB mirroring - mirrored databases are not available for read-only access.

    2. Replication - we have issues with primary keys and GUIDs, so we cannot implement replication.

    3. Log shipping - the overhead of maintaining 70 DBs, network bandwidth, and latency.

    Does anyone have any thoughts or suggestions for using some sort of log shipping in this scenario?

    cheers

    Murali

    Large databases themselves are not the problem; how much data would be moving is the question. The more data, the harder it is and likely the more latency.

    I think you missed that while you can get some read-only access to log-shipped databases, the process that applies the transaction logs will cause queries to fail while it is running.

    CEWII

How up to date do you need the reporting systems to be?

    A few notes. Replication is great for this, but it can require schema changes. If you don't think GUIDs will work for you, why not?

    Mirroring: if you have Enterprise Edition, you can snapshot the mirror, which works, but when you need to update the snapshot (it's a point in time), you have to disconnect users, drop the snapshot, and create a new one. Not terribly time-consuming, but there is downtime.
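    The drop-and-recreate cycle above can be sketched in T-SQL roughly as follows. All names and paths here are hypothetical placeholders, not anything from the original poster's environment:

    ```sql
    -- Hypothetical names: MirrorDB is the mirrored database, MirrorDB_Snap its snapshot.
    -- Dropping the snapshot disconnects any readers currently using it.
    IF DB_ID(N'MirrorDB_Snap') IS NOT NULL
        DROP DATABASE MirrorDB_Snap;

    -- Re-create the snapshot against the mirror; logical file name and path are assumptions.
    CREATE DATABASE MirrorDB_Snap
        ON (NAME = MirrorDB_Data, FILENAME = N'S:\Snapshots\MirrorDB_Snap.ss')
        AS SNAPSHOT OF MirrorDB;
    ```

    A job running this on a schedule gives you the "downtime" window Steve describes: readers lose their connection at the DROP and reconnect to the fresh point-in-time snapshot.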

    LS: you can read the standby databases, but when the next log needs to be applied, users must be disconnected while the restore (and recovery) runs. This can take minutes at times, depending on the size of your logs. That might be an issue, or it might not. If you can update the reporting system 4 times a day, maybe this isn't a huge problem.
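    The restore step looks roughly like this. This is a sketch, not the built-in log shipping restore job; the database name, backup file, and undo file path are all hypothetical, and it assumes users have already been disconnected:

    ```sql
    -- Apply the next shipped log to the standby copy.
    -- STANDBY (rather than NORECOVERY) rolls back uncommitted transactions into the
    -- undo file, leaving the database readable between restores.
    RESTORE LOG ReportDB
        FROM DISK = N'S:\LogShip\ReportDB_1200.trn'
        WITH STANDBY = N'S:\LogShip\ReportDB_undo.dat';
    ```

    The restore fails if any sessions are still connected to the standby database, which is why the disconnect step matters.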

    SSIS: custom work, potentially lots of development, but it can send data over on any schedule you want.

    As Elliot mentioned, the size of the db isn't an issue, except perhaps when you initialize (or re-initialize) your reporting system. It's the amount of changes or data that has to move on a regular schedule.

On one of my servers, the reports don't need data from the last 24 hours, so I have scheduled the restore job to run in the morning between 4 AM and 6 AM, and the users know that they cannot run reports during that time. The log backups get copied every 15 minutes, waiting to be restored the next day.

    You could also look at snapshot replication running once every 24 hours, but you have mentioned that the total size of all the DBs could be significant, so it would use a lot of bandwidth, unless the number of tables to be replicated is really small compared to the actual number of tables.

    Blog
    http://saveadba.blogspot.com/

  • Thanks a lot for your time and suggestions.

    1. We need the data to be nearly up to date with the live DB.

    2. With regard to replication, a lot of the tables do not have any keys (there are still some heaps). This has been flagged with the third party.

    3. I have recommended replicating a subset of tables. The problem with this approach is that different clients have different requirements, and it would be too cumbersome to maintain so many versions.

    4. I would like to take Steve's approach. I will try to restore the logs 4 times a day.

    Cheers

    Murali

Be sure you publish the times of the restores. I assume you'll automate this, which will mean killing connections, so I'd test the restore times of the logs, then allow a few minutes more for the restores and publish those times for people.
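    The "killing connections" step might be sketched like this, using the `sys.sysprocesses` compatibility view (available on 2008 R2). The database name is a placeholder; run this immediately before the log restore:

    ```sql
    -- Kill every session connected to the standby database so the log restore can start.
    -- 'ReportDB' is a hypothetical name; KILL requires a literal, hence the dynamic EXEC.
    DECLARE @spid SMALLINT, @cmd NVARCHAR(20);

    DECLARE kill_cur CURSOR FOR
        SELECT spid FROM sys.sysprocesses
        WHERE dbid = DB_ID(N'ReportDB');

    OPEN kill_cur;
    FETCH NEXT FROM kill_cur INTO @spid;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        SET @cmd = N'KILL ' + CAST(@spid AS NVARCHAR(10));
        EXEC (@cmd);
        FETCH NEXT FROM kill_cur INTO @spid;
    END;
    CLOSE kill_cur;
    DEALLOCATE kill_cur;
    ```

    Note that the built-in log shipping restore job already has a "disconnect users" option, so a hand-rolled version like this is only needed for custom scheduling.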

Our BI team takes 6-hourly diffs. There's some disruption during restores, but we use LiteSpeed (which is quick) and have fast disk subsystems and lots of CPUs and memory!

    [font="Comic Sans MS"]The GrumpyOldDBA[/font]
    www.grumpyolddba.co.uk
    http://sqlblogcasts.com/blogs/grumpyolddba/

  • Muralijv 51742 (2/14/2012)


    I will try to restore the logs 4 times a day.

    Cheers

    Murali

    In that case, log shipping would be perfect.

    -----------------------------------------------------------------------------------------------------------

    "Ya can't make an omelette without breaking just a few eggs" 😉

  • I built a system to handle log shipping/restore for a reporting/DR need for a client that had 7400+ databases on a single server, so your number of databases will be no problem to handle. Easy-peasy-lemon-squeezy! 🙂

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service

  • Hi Kevin

    Thanks for the update. Do you have suggestions to automate this process.

    Thanks in advance.

    Cheers

    Murali

  • Muralijv 51742 (2/16/2012)


    Hi Kevin

    Thanks for the update. Do you have suggestions to automate this process.

    Thanks in advance.

    Cheers

    Murali

    Not sure what you mean; log shipping is an automated process.

    -----------------------------------------------------------------------------------------------------------

    "Ya can't make an omelette without breaking just a few eggs" 😉

  • Muralijv 51742 (2/16/2012)


    Hi Kevin

    Thanks for the update. Do you have suggestions to automate this process.

    Thanks in advance.

    Cheers

    Murali

    As Perry says, for just 70 databases you can certainly use the built-in log shipping constructs in SQL Server. I had to build a custom system because the metadata handling was onerous for SQL Server with the number of databases I had to deal with at that client. Actually, EVERYTHING barfed at the metadata - except SSMS, believe it or not!

    Oh, BTW, disaster recovery and high availability are VERY CRITICAL TOPICS!! PLEASE do your company a favor and encourage them to get some professional help to a) define what they actually NEED for RPO/RTO, b) implement the best (and affordable) solution, and c) train your staff on how to monitor and respond to problems that may arise. History is littered with companies that did not take care of this type of thing properly!!!

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service
