Log Shipping vs. Replication

  • I think log shipping does make sense for you. It's simple and effective, and given the allowable latency, a great fit. From what I've seen, clustering (and, to a degree, replication) both represent more complexity.

    Rootman - the reason you can lose data with log shipping is the delay between backups. With transactional replication, the average latency to ship a transaction from publisher to subscriber is probably 5 seconds; unplug the publisher and you lose the last 5 seconds of updates. Run a log backup, pull the plug 14 minutes later, and you lose 14 minutes of data.

    Andy

    http://qa.sqlservercentral.com/columnists/awarren/

  • You need Enterprise Edition if you want to do log shipping via Enterprise Manager. You can do simple log shipping with Standard Edition. I am using Standard Edition and have implemented stored procedures to handle the tran log backups and apply them on another server. If someone is in the database when you try to apply the log, the restore will fail unless you kill their connection or trap the error. How does Enterprise Edition handle this?

    If you are looking for a near real time reporting environment and not a warm standby, replication will be better in my opinion.
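    The do-it-yourself approach described above can be sketched roughly as follows. This is a minimal sketch, not the poster's actual procedures: the database name, the UNC path, and the cursor-based KILL loop are all my assumptions for illustration.

```sql
-- Standby-side "apply" step for hand-rolled log shipping (SQL Server 2000 era).
-- Database name and file path are illustrative assumptions.

-- 1. Kick out any users so the restore can take the database exclusively.
DECLARE @spid smallint, @cmd varchar(20)
DECLARE victims CURSOR FOR
    SELECT spid FROM master..sysprocesses
    WHERE dbid = DB_ID('SalesDB') AND spid <> @@SPID
OPEN victims
FETCH NEXT FROM victims INTO @spid
WHILE @@FETCH_STATUS = 0
BEGIN
    -- KILL will not accept a variable directly, so build and EXEC a string.
    SET @cmd = 'KILL ' + CONVERT(varchar(10), @spid)
    EXEC(@cmd)
    FETCH NEXT FROM victims INTO @spid
END
CLOSE victims
DEALLOCATE victims

-- 2. Apply the shipped log, leaving the database ready for the next one.
RESTORE LOG SalesDB
    FROM DISK = '\\standby\logship\SalesDB_tlog.trn'
    WITH NORECOVERY
```

    Trapping the restore error instead of killing connections is the gentler alternative the poster mentions; the KILL loop is the blunt one.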

  • Another factor: for databases where you are not allowed to modify the table structure for application reasons, such as Microsoft Business Solutions (Great Plains), log shipping is the option for a standby reporting copy as well as a failover location.

    Also, running the more CPU-intensive reports against the reporting server keeps the production server running smoothly. I have seen a big benefit from this.

    Jie Ma

  • An item that SkiMan mentioned is why I was told I shouldn't have users run reports off our BCP server, which is managed through log shipping. I was told that when the logs are applied, any connections are killed. I've only heard this mentioned once in this thread. Is this accurate, or is there a way around it? I can't really have users trying to run reports only to have their session killed halfway through one.

  • lambje,

    The great thing about log shipping is that you can set a delay. What I mean is that you can have the primary server back up every 5 minutes, then copy to the standby every 5 minutes, but you don't need to restore every 5 minutes. You can set it up so that a file must be a certain age before it is restored. The data is protected on the standby; it's just not restored up to the minute. By doing this you create a window for users to run reports, say 9:00 a.m. to 5:00 p.m., and then run the restores all night.
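    One common way to implement the age check is to embed the backup time in the file name and have the restore job skip files that are too young. The naming convention (yyyymmddhhmm in the file name), the database name, and the 8-hour delay below are all assumptions for illustration, not from the post.

```sql
-- Sketch of a "restore delay": assume log backups are named like
-- SalesDB_200303140905.trn, where the suffix is the backup time (yyyymmddhhmm).
DECLARE @fname varchar(128), @stamp varchar(12), @backup_time datetime
DECLARE @path  varchar(256)
SET @fname = 'SalesDB_200303140905.trn'   -- next file in the copy folder
SET @stamp = SUBSTRING(@fname, CHARINDEX('_', @fname) + 1, 12)
SET @backup_time = CONVERT(datetime,
        SUBSTRING(@stamp, 1, 4) + '-' + SUBSTRING(@stamp, 5, 2) + '-' +
        SUBSTRING(@stamp, 7, 2) + ' ' + SUBSTRING(@stamp, 9, 2) + ':' +
        SUBSTRING(@stamp, 11, 2))
SET @path = '\\standby\logship\' + @fname

-- Only restore once the backup is at least 8 hours old; younger files sit
-- safely copied on the standby but unapplied, so reports aren't interrupted.
IF DATEDIFF(hour, @backup_time, GETDATE()) >= 8
    RESTORE LOG SalesDB FROM DISK = @path WITH NORECOVERY
```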

  • You really need to analyze and define your needs. If you have any thought of using the failover server for just that - failover - then I would not consider it as a reporting server. For the occasional quick ad hoc query or report, having the standby database server there and available is great stuff. However, the minute your user community gets into the habit of using that data on a daily basis for reporting will be the day you'll need to start shopping for a new standby server.

    The only real downside of using a log-shipped standby as a report server is that you have no control over the database. You might find that you need the capability of adding new indexes that wouldn't be practical on the source OLTP system.

    From a cost and management perspective I prefer log shipping over clustering. There are fewer moving parts, less hardware, it's easily scripted, and in my case it's a lot cheaper. Now, for a 10 TB database I might revisit that, but with the prices on storage these days, who knows - I'd probably log ship that as well.

    In theory I could have my production server on the East coast and my standby server in Japan. Try clustering that!

    anyway that's my take on it,

    Hey, anyone looking for a decent SQL DBA out there on the East Coast? I hear the axe is swinging here and contractors are first on the list.

    regards

    John Zacharkan

  • This is an extremely interesting thread! Hopefully it is still being monitored. I have two questions for the group.

    1.)

    Let's say I have two production servers (SQLA, SQLB) and want to have both Log Ship to a single failover server (SQLFail).

    If SQLA needs to fail over to SQLFail, can SQLB still use SQLFail as its failover?

    Also, once SQLA comes back online, is it advisable to use it as the new failover for the portion of the business that is now running on SQLFail?

    In that scenario you would have SQLB log shipping its databases to SQLFail and SQLFail log shipping to SQLA. Is that feasible, or am I off my rocker?

    2.)

    In the same scenario above, both SQLA and SQLB are publishers in transactional replication using a separate distributor. If they failover to SQLFail, do those databases on SQLFail now become the publisher? Do replication scripts need to be run again or is it relatively automatic?

    Thanks for any assistance and I apologize if it doesn't make much sense. If I could attach a Visio drawing or something I would. I appreciate all of your expertise and willingness to help!

    Adrian Porter

  • Just a comment on using Transactional replication...you can set up latency just as in Log Shipping in one of two ways:

    1) Use continuous transactional replication with a "-PollingInterval" of 300 seconds for example. (5 min.)

    2) Set the subscription to update on a periodic schedule, e.g. every night at midnight. All transactions will be queued in the "distribution" database until delivered to the subscriber.

    -Dan

  • We do a *different* kind of replication.

    We have three sites, all connected via T1 lines. We have had the worst time getting MS replication to work across the WAN to the two offsite locations, so we came up with our own solution. After we hired some consultants, and paid MS to troubleshoot it, they all came back with "it ain't gonna work, stop trying".

    We are lucky to have a linux guru on staff at our company. He has set up an RSYNC server on a Red Hat 7.3 box.

    For those of you who are not familiar with RSYNC, it works like this:

    The Linux box mounts drives at all locations.

    RSYNC takes a hash of the .mdf file and compares it to a hash of the same file on the destination computer.

    It reads from the active database and writes to a read-only, unlocked file at the other two sites.

    Every 5 minutes or so, it takes a new binary hash of the files and compares them.

    If the source .mdf differs from the destination .mdf, only the bits of the file that are different are transferred.

    In other words, we have a 5 gig database. Initially that 5 gig database is rsynced over to the other servers. Once the initial huge file transfer has taken place, the read-only file gets updated without having to copy the whole file over again. RSYNC does a byte-by-byte comparison and just sends the updated bytes over the wire. This has been a life saver for us.

    As an added bonus, with the RSYNC server you can tell it how much bandwidth to use. For example, during business hours we say "RSYNC, do your job, but don't use more than 60k per second of our T1".

    Now that is smart computing. We get the benefits of using MS SQL, and the benefits of open source software to meet a challenge MS said could not be addressed.

    Edited by - kapcreations on 03/14/2003 11:05:06 AM

  • You can do log shipping and leave the DB in read only mode so it can be queried.

    All in all, replication is very good for real-time synchronisation, but you have to consider the recoverability overhead when replication falls down. Log shipping is much easier to manage when you need to recover from failures.
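    The read-only mode mentioned here corresponds to restoring WITH STANDBY instead of WITH NORECOVERY. A minimal sketch follows; the database name, logical file names, and paths are illustrative assumptions.

```sql
-- WITH STANDBY keeps the database readable between log restores: uncommitted
-- transactions are rolled back into the undo file, then rolled forward again
-- when the next log is applied. Names and paths are illustrative.
RESTORE DATABASE SalesDB
    FROM DISK = '\\standby\logship\SalesDB_full.bak'
    WITH STANDBY = 'E:\MSSQL\Backup\undo_SalesDB.dat',
         MOVE 'SalesDB_Data' TO 'E:\MSSQL\Data\SalesDB.mdf',
         MOVE 'SalesDB_Log'  TO 'E:\MSSQL\Data\SalesDB.ldf'

-- Each subsequent shipped log is applied the same way; users can run
-- read-only queries between restores (but connections block the restore).
RESTORE LOG SalesDB
    FROM DISK = '\\standby\logship\SalesDB_tlog.trn'
    WITH STANDBY = 'E:\MSSQL\Backup\undo_SalesDB.dat'
```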

  • I was very excited about log shipping until I read that it is available in Enterprise Edition only.

    I only have Standard Edition, and was wondering if replication is my only choice?

  • The SQL 2000 Resource Kit has a simple log shipping version that can be installed on Standard Edition.

    Greg

  • Yes, you can run a form of log shipping on Standard Edition: either from the old BackOffice Resource Kit (BORK) scripts, from the SQL 2000 Resource Kit, or write your own! Don't forget that log shipping was developed by DBAs independently of Microsoft and was only incorporated by them when they realised it was a much simpler and more robust way of creating a warm standby.

    In contrast, MS's own "swiss-army knife" style technology has a bit of the look of a committee-designed package that tries to do too many things at once. As a result it is complex to set up (N.B. it _is_ possible to set up replication so that it doesn't add columns and triggers all over the shop, but you have to know how to do it, and disabling the default triggers is a real pain). Worst of all, unless you are using the "immediately updating subscribers" option, there is nothing stopping DML on the subscriber changing the replicated data willy-nilly, which is unacceptable in a financial services environment where there are regulations about data security. Finally, replication has so many ways to go wrong, some of them "silent" - i.e. no alerts are raised - and it's a devil to detect. Plus, from experience, replication falls over a lot more often than log shipping. Frankly it's a bag of spanners.

    For a low cost (if you use the Standard Edition options), low maintenance and reliable DR solution, log shipping beats replication hands down. Sadly, the inability to load transaction log backups while there are query connections to the database has not been overcome - unlike Oracle 9i, where they have sorted this so that you can now run DR and MI on the same database. Perhaps the Yukon team will have a look at the competition and aim to include this vital feature in the next release, but, given how little they improved basic log shipping for SQL 2K Enterprise (they still haven't sorted the master..syslogins issue properly IMO), I wouldn't hold your breath...

    BTW you can crank log shipping down to 5 mins data loss with a bit of care. Also, despite all the negatives I just pointed out, replication remains the technology for replicating subsets of data, the ETL process for data warehousing and having updateable copies of a database across multiple servers (although this is messy).

  • I found the Simple log shipping version and was wondering if those that have used it have any lessons learned or suggestions from their experience.

  • paulbowman:

    I'm curious where you got the information that it was developed independently of Microsoft.

    Cheers,

    Ken
