Transaction log backup file exponentially larger

  • I am having a problem with the transaction log backups for a 23 gig database. I run DBCC DBREINDEX on a weekly basis (Saturday evening at 10:00 p.m.). The database does not have the most optimized indexing schema. I run a t-log backup every two hours, and the backup file is normally around 2 gigs. After the reindex runs, the backup file seems to get into a loop and grows until it fills up the disk it is being backed up to (usually about 22 gigs free) and fails. This in turn causes a repeating problem, because the log then grows since it can't be backed up. We are also doing log shipping with this database. This has happened for the past 2 weekends in a row, and I can't find any articles that describe this particular problem. Has anyone ever encountered this before? Why would the optimization job cause my transaction log backups to explode?

  • When you run DBCC DBREINDEX, are you also executing the log backup?

  • No. The t-log backups happen every 2 hours. The reindex happens between the t-log backups.

  • If you are on SQL 2000, which recovery model are you using? If it is Full, keep in mind that the log also records the index rebuild.

    Try changing the recovery model; a quick check-and-switch sketch is below.
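
    A minimal way to check and switch it; the database name below is just a placeholder, not your actual database:

    -- Show the current recovery model (SQL 2000)
    SELECT DATABASEPROPERTYEX('MyDatabase', 'Recovery')

    -- Change it if needed (FULL, BULK_LOGGED, or SIMPLE)
    ALTER DATABASE MyDatabase SET RECOVERY SIMPLE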

  • It's set to Bulk Logged.

  • Also, instead of DBCC DBREINDEX, try DBCC INDEXDEFRAG and run it more often; it may fit your needs better. Otherwise you will need to swap to the Simple recovery model and stop t-log backups during DBREINDEX, then change back to the Full recovery model when done and perform a full backup immediately. This can all be scripted; a rough sketch is at the bottom of this post.

    From BOL

    quote:


    Unlike DBCC DBREINDEX (or the index building operation in general), DBCC INDEXDEFRAG is an online operation. It does not hold locks long term and thus will not block running queries or updates. A relatively unfragmented index can be defragmented faster than a new index can be built because the time to defragment is related to the amount of fragmentation. A very fragmented index might take considerably longer to defragment than to rebuild. In addition, the defragmentation is always fully logged, regardless of the database recovery model setting (see ALTER DATABASE). The defragmentation of a very fragmented index can generate more log than even a fully logged index creation. The defragmentation, however, is performed as a series of short transactions and thus does not require a large log if log backups are taken frequently or if the recovery model setting is SIMPLE.


    "Don't roll your eyes at me. I will tape them in place." (Teacher on Boston Public)

    Edited by - antares686 on 10/18/2002 04:28:51 AM
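
    A rough sketch of scripting that window; the database name, table name, fill factor, and backup path are placeholders, not anything from the original post. One note on why the backup balloons: under the Bulk-Logged model a log backup also copies the data extents changed by minimally logged operations such as DBREINDEX, so the backup file can be far larger than the log itself.

    -- Sketch only: pause the scheduled t-log backup job for this window first.
    ALTER DATABASE MyDatabase SET RECOVERY SIMPLE

    -- Rebuild all indexes on a table (repeat per table as needed); 90 is an example fill factor.
    DBCC DBREINDEX ('MyTable', '', 90)

    -- Or, as the lighter-weight alternative quoted above, defragment in place:
    -- DBCC INDEXDEFRAG (MyDatabase, MyTable, 1)

    ALTER DATABASE MyDatabase SET RECOVERY FULL

    -- A full backup restarts the log backup chain (important since the database is log shipped).
    BACKUP DATABASE MyDatabase TO DISK = 'E:\Backups\MyDatabase_full.bak'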

  • That is a good suggestion. However, t-log backups do not run while optimizations are running. After the optimization finishes, the next scheduled t-log backup is the one that has problems. The physical t-log may only have 2 gigs of data in it, but the backup file is huge and fills up the disk, which makes the backup job fail. I've seen the backup file go to 20 gigs on a 2 gig t-log and then fail. This has nothing to do with t-logs backing up during the optimization.
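
    For reference, a one-line way to confirm how full the physical log actually is right before the scheduled backup runs (SQL 2000, no object names needed):

    -- Reports log size and percent used for every database on the server
    DBCC SQLPERF (LOGSPACE)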
