Log file management in simple recovery model

  • Hello Master,

    One of our databases is in the simple recovery model, and its log file (.ldf) usually grows to more than 220 GB every week. We are shrinking the log file many times to release the space.

    But as that is not advisable, I am looking for other options. I suggested changing the recovery model to Full and starting T-log backups, but the client doesn't want to change the recovery model.

    Is there any way to manage the log file under the simple recovery model so as to conserve disk space?

    Will a full backup truncate the log file?

    There is nothing you can do to the log in the simple recovery model.

    A full backup will not truncate the log for you. Only log backups will manage the log.

    My advice would be to find out what's causing the growth and tune accordingly.

    I would also recommend the following book.

    I think the only reason for this is a long-running transaction. It's a heck of a transaction though ...

    I would put something in place that records the size of the TLog file to a table, and schedule it to run frequently - maybe a 10-minute interval would do, perhaps it would need to be a 1-minute interval - and then, hopefully, see when the TLog file grew and associate that with the task that was running at the time.

    Maybe it is some massive UPDATE which is twiddling a Hierarchy-Child-Row-Counter, or some such, and then that one process could be "improved".

    You could probably get information about SQL log file growth from PerfMon, but personally I'd schedule a SQL statement to log it to a table - something along the lines of the sketch below.
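
    Just a minimal sketch of what I mean - the table and column names are purely illustrative, and it assumes you schedule the INSERT (e.g. as an Agent job step) in the database concerned:

        -- One-off: a table to hold the size history
        CREATE TABLE dbo.LogSizeHistory
        (
            CapturedAt  datetime      NOT NULL DEFAULT (GETDATE()),
            FileName    sysname       NOT NULL,
            SizeMB      decimal(18,2) NOT NULL,
            UsedMB      decimal(18,2) NULL
        );

        -- Scheduled step: run every 1-10 minutes in the database in question
        INSERT INTO dbo.LogSizeHistory (FileName, SizeMB, UsedMB)
        SELECT  name,
                size / 128.0,                                          -- size is stored in 8 KB pages
                CAST(FILEPROPERTY(name, 'SpaceUsed') AS int) / 128.0   -- pages actually in use
        FROM    sys.database_files
        WHERE   type_desc = 'LOG';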

  • Is the database being replicated? The log can't be cleared until pending transactions have been replicated.
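
    One quick way to check what is preventing log truncation (REPLICATION would confirm it; ACTIVE_TRANSACTION would point to a long-running transaction instead) - the database name below is just a placeholder:

        -- Why can't the log be truncated? (NOTHING, ACTIVE_TRANSACTION, REPLICATION, ...)
        SELECT name, recovery_model_desc, log_reuse_wait_desc
        FROM   sys.databases
        WHERE  name = N'YourDatabase';   -- replace with the database in question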

  • Kristen-173977 (9/15/2015)


    I think the only reason for this is a long-running transaction. It's a heck of a transaction though ...

    Possibly an index rebuild, though they're minimally logged in simple recovery, so maybe not. A large delete perhaps.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • jitendra.padhiyar (9/15/2015)


    But as that is not advisable, I am looking for other options. I suggested changing the recovery model to Full and starting T-log backups, but the client doesn't want to change the recovery model.

    So you're advising to change to a recovery model that will likely cause the log to grow more than it currently does?

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Hello Masters,

    Thanks for your views.

    1. No, this database is not being replicated.

    2. We are not concerned with which transactions are hitting more records.

    3. We need a solution to overcome the disk space crunch rather than extending the disk space.

    4. Each time, I shrink the log file after the full backup to get rid of the huge log file.

  • Then you need to identify the large transactions that are causing the log to grow (about the only thing that will in simple recovery) and fix that; a starting-point query is sketched below. If you can't do that, you need to accept that the log needs to be that size (it's not uncommon) and give it enough disk space.

    And stop shrinking the log. It's not fixing anything. It's like complaining that a 5 litre bucket gets filled with rain water every time it rains and deciding to solve the problem by replacing the 5 litre bucket with a 2 litre bucket.

    Please see Anthony's recommendation.
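
    For finding those transactions, something like this (run while the log is growing) shows which open transactions have generated the most log in the current database - just a sketch using the transaction DMVs, adjust to taste:

        -- Active transactions in the current database, ordered by log space generated
        SELECT    s.session_id,
                  tr.name                                             AS transaction_name,
                  tr.transaction_begin_time,
                  dt.database_transaction_log_bytes_used / 1048576.0  AS log_mb_used,
                  dt.database_transaction_log_record_count
        FROM      sys.dm_tran_database_transactions AS dt
        JOIN      sys.dm_tran_active_transactions   AS tr ON tr.transaction_id = dt.transaction_id
        LEFT JOIN sys.dm_tran_session_transactions  AS st ON st.transaction_id = dt.transaction_id
        LEFT JOIN sys.dm_exec_sessions              AS s  ON s.session_id      = st.session_id
        WHERE     dt.database_id = DB_ID()
        ORDER BY  dt.database_transaction_log_bytes_used DESC;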

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • How big is the data file? If it is smaller than the 220 GB log file, I'd go looking for the transaction that is causing the log file growth. I would regard that situation as "unusual".

    If the data file is massive compared to the 220 GB log file, I'd just leave the log file at that size (I'd regard that as "normal") - you are unlikely to be able to change anything that will reduce it (you might, but the effort is probably not worthwhile, and "something else" will probably cause a log growth to a similar size "soon").

  • Look for a maintenance plan that is rebuilding indexes. I believe it would rebuild all indexes on the selected database every time, without regard to size or how fragmented they were (I'm not 100% sure as I don't use maintenance plans, but I believe that is how they work). If you find that, get rid of the rebuild in the maintenance plan and use a better method for that - see the sketch below for the general idea.

    Since you're seeing this happen consistently, I'm guessing it's something like that, a very big task that is occurring regularly.
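
    As a rough illustration of a "better method" (the 5%/30% thresholds are only the commonly quoted rules of thumb, and the page-count cut-off is arbitrary), something like this reports only the indexes that actually need attention:

        -- Only rebuild/reorganize indexes that are actually fragmented
        SELECT   OBJECT_NAME(ips.object_id)      AS table_name,
                 i.name                          AS index_name,
                 ips.avg_fragmentation_in_percent,
                 ips.page_count,
                 CASE WHEN ips.avg_fragmentation_in_percent >= 30 THEN 'REBUILD'
                      WHEN ips.avg_fragmentation_in_percent >= 5  THEN 'REORGANIZE'
                      ELSE 'leave alone'
                 END                             AS suggested_action
        FROM     sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
        JOIN     sys.indexes AS i ON i.object_id = ips.object_id AND i.index_id = ips.index_id
        WHERE    ips.page_count > 1000            -- ignore tiny indexes
                 AND ips.index_id > 0             -- skip heaps
        ORDER BY ips.avg_fragmentation_in_percent DESC;

    In practice many people simply use an established maintenance script (Ola Hallengren's IndexOptimize, for example) rather than rolling their own.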

    SQL DBA,SQL Server MVP(07, 08, 09) "Money can't buy you happiness." Maybe so, but it can make your unhappiness a LOT more comfortable!

  • ScottPletcher (9/15/2015)


    Look for a maintenance plan that is rebuilding indexes

    Won't an index rebuild issue a CHECKPOINT after each index? - and thus would I be right in thinking that the 220 GB would represent the rebuild of just a single (largest) index? Seems "Quite Large" to me :hehe:

  • Kristen-173977 (9/15/2015)


    ScottPletcher (9/15/2015)


    Look for a maintenance plan that is rebuilding indexes

    Won't an index rebuild issue a CHECKPOINT after each index? - and thus would I be right in thinking that the 220 GB would represent the rebuild of just a single (largest) index? Seems "Quite Large" to me :hehe:

    Can't imagine why SQL would automatically issue a checkpoint just because an index is rebuilt. I have no idea whether a maint plan does or not, although, again, I can't imagine why it would.

    SQL DBA,SQL Server MVP(07, 08, 09) "Money can't buy you happiness." Maybe so, but it can make your unhappiness a LOT more comfortable!

  • ScottPletcher (9/15/2015)


    Can't imagine why SQL would automatically issue a checkpoint just because an index is rebuilt. I have no idea whether a maint plan does or not, although, again, I can't imagine why it would.

    So a maintenance plan is going to perform ALL its index rebuilds (which for the standard maintenance plan may well be a rebuild of every index on every table in the DB) as a SINGLE transaction?

    Seems improbable to me that they would do that ... but might well be the case!

  • Kristen-173977 (9/15/2015)


    ScottPletcher (9/15/2015)


    Can't imagine why SQL would automatically issue a checkpoint just because an index is rebuilt. I have no idea whether a maint plan does or not, although, again, I can't imagine why it would.

    So a maintenance plan is going to perform ALL its index rebuilds (which for the standard maintenance plan may well be a rebuild of every index on every table in the DB) as a SINGLE transaction?

    Seems improbable to me that they would do that ... but might well be the case!

    I'm pretty sure beginning a new transaction doesn't require that a CHECKPOINT be issued. Presumably the heavy logging activity could/would cause a checkpoint, but a new transaction wouldn't per se.

    I would guess that a MP would do each rebuild as a separate tran, but I don't really know either.

    SQL DBA,SQL Server MVP(07, 08, 09) "Money can't buy you happiness." Maybe so, but it can make your unhappiness a LOT more comfortable!

  • ScottPletcher (9/15/2015)


    I'm pretty sure beginning a new transaction doesn't require that a CHECKPOINT be issued. Presumably the heavy logging activity could/would cause a checkpoint, but a new transaction wouldn't per se.

    I would guess that a MP would do each rebuild as a separate tran, but I don't really know either.

    Thanks, that was my thinking too. I'm still wrestling in my mind with the log being 220 GB. That seems a lot for, say, just one index rebuild (or even "several") - unless the database is absolutely humongous (in which case I would expect that the log needs to be that big).

    My money is on a transaction not committing for a long time (or even being KILLed, or the client eventually disconnecting on a timeout), or a RBAR loop / cursor doing some massive updates / deletes. A quick check for that is sketched below.
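
    Something like this would show whether there is an old open transaction hanging about (DBCC OPENTRAN is the quick-and-dirty version; the DMV query adds who owns it) - a sketch only:

        -- Oldest active transaction in the current database
        DBCC OPENTRAN;

        -- Or, with a bit more detail about who owns the open transactions
        SELECT   s.session_id, s.login_name, s.host_name, s.program_name,
                 tr.transaction_begin_time,
                 DATEDIFF(MINUTE, tr.transaction_begin_time, GETDATE()) AS open_minutes
        FROM     sys.dm_tran_session_transactions AS st
        JOIN     sys.dm_tran_active_transactions  AS tr ON tr.transaction_id = st.transaction_id
        JOIN     sys.dm_exec_sessions             AS s  ON s.session_id      = st.session_id
        ORDER BY tr.transaction_begin_time;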
