Transaction log file too large

  • Deana,

    Your large Monday tran log backup may be due to a maintenance plan optimization job (rebuilding all the indexes); by default it runs at 1 a.m. Sunday if someone turned it on. You could try rebuilding just the indexes that need it most, but that requires more work and may not be worth the hassle.

    Even if you can shrink the log, it doesn't mean you should. If the log is going to expand again to 28GB each Monday, then shrinking it in between is just going to cause disk fragmentation.

    Sometimes you just have to ask for/demand more disk space. You should also consider special backup software, such as SQL Backup by RedGate (advertised on this site), which will let you store about four times as many backups in the same space and perform the backups in about 25% of the time (maybe more and faster, depending on the database).

    Bill

  • Thanks for that info. I have been pursuing a two-pronged strategy of more disk space and reducing the transaction log size. I should get the extra drive next week.

    I can resign myself to these large transaction log backups, especially with the extra drives, but I'd still like to know why the log is growing. Any thoughts?

    In general, I do the following in the maintenance plan:

    1. DBCC check

    2. rebuild indexes

    3. statistics

    4. backup

    5. clean up history
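
    Roughly, those steps correspond to the following T-SQL (the database name, table name, backup path, and 30-day retention are placeholders, not values from this thread):

        -- 1. DBCC check
        DBCC CHECKDB ('MyDatabase') WITH NO_INFOMSGS;

        -- 2. Rebuild indexes (one table shown; the maintenance plan does this for every table)
        ALTER INDEX ALL ON dbo.SomeTable REBUILD;

        -- 3. Update statistics
        EXEC sp_updatestats;

        -- 4. Back up the database
        BACKUP DATABASE MyDatabase
            TO DISK = 'D:\Backups\MyDatabase.bak'
            WITH INIT;

        -- 5. Clean up old backup history (30-day cutoff as an example)
        DECLARE @cutoff DATETIME;
        SET @cutoff = DATEADD(DAY, -30, GETDATE());
        EXEC msdb.dbo.sp_delete_backuphistory @oldest_date = @cutoff;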

    Thanks in advance for your help,

    Deana

  • It might be the rebuild. That usually requires 1.5 times disk space in the data file and records a lot of log changes. Can you snapshot the log file size just before and just after the index rebuild?
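
    A minimal way to take that snapshot is to capture DBCC SQLPERF(LOGSPACE) into a table just before and just after the rebuild step (a sketch; the table name is a placeholder):

        -- Holding table for log-space snapshots (create once)
        CREATE TABLE dbo.LogSpaceHistory (
            capture_time   DATETIME      DEFAULT GETDATE(),
            database_name  SYSNAME       NULL,
            log_size_mb    DECIMAL(18,2) NULL,
            log_space_used DECIMAL(18,2) NULL,
            status         INT           NULL
        );
        GO

        -- Run this immediately before and immediately after the index rebuild
        INSERT INTO dbo.LogSpaceHistory (database_name, log_size_mb, log_space_used, status)
        EXEC ('DBCC SQLPERF(LOGSPACE)');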

  • Deana Sanders (10/2/2007)


    ... I should get the extra drive next week. ...

    Wonderful. Putting the ldf on a different physical drive from the mdf will give you a performance boost too. If you can put it on a different disk controller as well, then, sweet. 🙂

    ATB,
    Charles Kincaid

  • Steve Jones - Editor (10/2/2007)


    It might be the rebuild. That usually requires 1.5 times disk space in the data file and records a lot of log changes. Can you snapshot the log file size just before and just after the index rebuild?

    Also (I think this has been mentioned several times in this forum), if you have the database set to Full recovery model, is it possible for you to switch to Simple recovery model just for the maintenance, and then switch back to Full? You need to find out whether that is something that would be allowed in your company, but if it is possible, it will keep the log from growing a lot during those maintenance plans.
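
    A minimal sketch of that approach (the database name and backup path are placeholders):

        -- Before maintenance: in SIMPLE, log space can be reused at each checkpoint,
        -- so the log does not have to hold the entire rebuild
        ALTER DATABASE MyDatabase SET RECOVERY SIMPLE;

        -- ... run the DBCC checks, index rebuilds, and statistics updates here ...

        -- After maintenance: switch back to FULL
        ALTER DATABASE MyDatabase SET RECOVERY FULL;

        -- A full (or differential) backup is required to restart the log backup chain;
        -- log backups taken before this point cannot be applied past the switch
        BACKUP DATABASE MyDatabase
            TO DISK = 'D:\Backups\MyDatabase_postmaint.bak'
            WITH INIT;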

    Best,

    webrunner

    -------------------
    A SQL query walks into a bar and sees two tables. He walks up to them and asks, "Can I join you?"
    Ref.: http://tkyte.blogspot.com/2009/02/sql-joke.html

  • Good suggestion there. I have done this with SQL Server 2000 maintenance plans, but it tested poorly with SQL Server 2005 on the development server. The maintenance plan on one instance worked, but the second did not. This may be due to a dbcc table space id error on one of the data layers that will be resolved soon.

    Since this is GIS data, my other option is to have SDE do its own stats and rebuilds and see what kind of log size that produces.

    Shrinking the ldf files worked last night, so I got rid of the excess VLFs, but the database still grew afterwards.
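
    For reference, that kind of shrink looks something like this (the logical log file name and the target size are placeholders; as noted earlier in the thread, repeatedly shrinking a log that will just grow back isn't recommended):

        USE MyDatabase;
        GO
        -- Find the logical name and current size of the log file
        SELECT name, size/128 AS size_mb
        FROM sys.database_files
        WHERE type_desc = 'LOG';

        -- Shrink the log file to a 1024 MB target (placeholder value)
        DBCC SHRINKFILE (MyDatabase_log, 1024);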

    We have a db for which the t-log grows by tens of gigabytes overnight (wonderful PeopleSoft processing). I don't want to repeatedly shrink the file, so I'm experimenting with the SQL Server Alert capability on the t-log. I set the alert's maximum fill percentage at 60%. I created a test process that loops, repeatedly inserting an 8K row plus an identity column into a test table. Within each loop it inserts 100K rows in one transaction, then deletes the same rows in another transaction. Looping through this about 10 times generates large t-log volume very quickly. (Use DBCC SQLPERF(LOGSPACE) to watch it fill up!) The SQL alert calls my regular t-log backup job, which is also scheduled every 4 hours as a fallback. I'm able to automatically trigger a log dump as frequently as every three minutes, as growth requires, and prevent the huge t-log auto-growth. I'm not entirely sure yet what happens with potentially overlapping log dumps, but so far I haven't been able to make it fail.
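
    A sketch of wiring up that kind of alert with sp_add_alert (the alert name, database name, and log-backup job name are placeholders):

        USE msdb;
        GO
        -- Fire when the log of MyDatabase is more than 60% full;
        -- on a named instance the counter object is 'MSSQL$InstanceName:Databases'
        EXEC dbo.sp_add_alert
            @name = N'MyDatabase log 60 percent full',
            @performance_condition = N'SQLServer:Databases|Percent Log Used|MyDatabase|>|60',
            @delay_between_responses = 180,          -- respond at most once every 3 minutes
            @job_name = N'LogBackup - MyDatabase';   -- existing t-log backup job to run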

  • Deana Sanders (10/2/2007)


    Good suggestion there. ...

    Thanks, although I can't take credit for it. Someone else recommended it to me earlier. I hope there is another way to work around the problem if you can't use that method.

    Sincerely,

    webrunner

    -------------------
    A SQL query walks into a bar and sees two tables. He walks up to them and asks, "Can I join you?"
    Ref.: http://tkyte.blogspot.com/2009/02/sql-joke.html

    We also found that reindexing a db with 40,000 tables eats log space. We optimized the process by querying sysindexes for indexes with more than xxxxx rows, then ran DBCC SHOWCONTIG on only those, followed by a DBCC DBREINDEX of only those indexes with more than xx% fragmentation. This saves hours of time and gigabytes of log space.
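
    On SQL Server 2005 the same filter-then-rebuild idea can be sketched with sys.dm_db_index_physical_stats and ALTER INDEX; the 1,000,000-row and 30% thresholds below are placeholders standing in for the xxxxx and xx% values above:

        -- Placeholder thresholds standing in for the "xxxxx rows" and "xx%" values above
        DECLARE @min_rows BIGINT, @min_frag FLOAT;
        SET @min_rows = 1000000;
        SET @min_frag = 30.0;

        DECLARE @sql NVARCHAR(MAX);
        DECLARE frag_cursor CURSOR FOR
            SELECT 'ALTER INDEX ' + QUOTENAME(i.name) + ' ON '
                 + QUOTENAME(SCHEMA_NAME(o.schema_id)) + '.' + QUOTENAME(o.name) + ' REBUILD;'
            FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ps
            JOIN sys.indexes    AS i ON i.object_id = ps.object_id AND i.index_id = ps.index_id
            JOIN sys.objects    AS o ON o.object_id = ps.object_id
            JOIN sys.partitions AS p ON p.object_id = ps.object_id AND p.index_id = ps.index_id
            WHERE ps.index_id > 0                                    -- skip heaps
              AND ps.avg_fragmentation_in_percent > @min_frag
              AND p.rows > @min_rows;

        OPEN frag_cursor;
        FETCH NEXT FROM frag_cursor INTO @sql;
        WHILE @@FETCH_STATUS = 0
        BEGIN
            EXEC (@sql);        -- rebuild one qualifying index at a time
            FETCH NEXT FROM frag_cursor INTO @sql;
        END
        CLOSE frag_cursor;
        DEALLOCATE frag_cursor;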

  • Be careful with switching modes. Don't want to mess up your backup chain.

    The plan to move between simple recovery for the dbcc, stats, rebuilds, etc. and full recovery is testing poorly again. I did find out that you can do this as long as you perform a full backup after switching back to full recovery. And there are no new disk drive slots on the servers.

    I may have to go to plan C and write some scripts to do index rebuilds and stats via SDE.

    Thanks to everyone for your suggestions and comments.

    Deana
