Delete Query and 'Transaction Log is full' Error

  • We have this simple delete query:

    DELETE
    FROM Log_Hist
    WHERE LogTime <= DATEADD(day, -90, GETDATE())

    The query is bombing out with this message:

    Msg 9002, Level 17, State 4, Line 2

    The transaction log for database 'Logs' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases

    So I ran that query and in the 'log_reuse_wait_desc' column, I saw the value 'NOTHING'.

    The database is set to simple recovery mode. For autogrowth, file growth is set to 200MB. Restricted file growth is set to 14,000MB.

    Please advise on how best to address this. This query was working before. Thanks in advance to all who reply!
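
    For reference, the check that the error message points to looks roughly like this (a sketch only; the database name 'Logs' comes from the error text above):

    SELECT name, log_reuse_wait_desc
    FROM sys.databases
    WHERE name = 'Logs';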

  • One thing you can do is to shrink your log file. Since your recovery mode is Simple, you should be able to shrink the size of your log file to the initial size. Another thing you can do is to limit the number of records deleted at a time by specifying the TOP clause with your DELETE statement.

    Regards,

    SQL Server Helper
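
    A rough sketch of those two suggestions, for anyone following along (the logical log file name 'Logs_log' is a guess here - check sys.database_files for the real name - and note that the next reply argues against the shrink):

    -- Batched delete using TOP; table and filter taken from the original post
    DELETE TOP (10000)
    FROM Log_Hist
    WHERE LogTime <= DATEADD(day, -90, GETDATE());

    -- Shrink the log back towards its initial size (logical file name is an assumption)
    DBCC SHRINKFILE (N'Logs_log');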

  • rfrancisco (7/6/2011)


    One thing you can do is to shrink your log file. Since your recovery mode is Simple, you should be able to shrink the size of your log file to the initial size.

    The log is not big enough so you want to make it smaller? Why?

    Shrinking logs is not something you should do regularly and for this error it will make things worse. If the log is smaller, it will fill up faster and the error will come sooner.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • dso808 (7/6/2011)


    Msg 9002, Level 17, State 4, Line 2

    The transaction log for database 'Logs' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases

    So I ran that query and in the 'log_reuse_wait_desc' column, I saw the value 'NOTHING'.

    The database is set to simple recovery mode. For autogrowth, file growth is set to 200MB. Restricted file growth is set to 14,000MB.

    What happened there is that the delete ran, filled the log and rolled back. By the time you checked sys.databases, the delete had rolled back and a checkpoint had run so there was nothing at that point that was holding the log active, hence the NOTHING reason

    Deletes take a lot of log space. Can be more than twice the size of the data being deleted. Could be you need to remove that restriction on the file size (can be a dangerous thing to have)

    Other option is to do the delete in batches. DELETE TOP (10000) ... in a loop until all rows are gone.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
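
    As a side note, checking how full the log actually gets, and lifting the 14,000MB cap if that route is taken, would look something like this (the logical file name 'Logs_log' is an assumption; the follow-up below points out the disk itself is nearly full, so lifting the cap may not help in this particular case):

    -- Show log size and percent used for every database on the instance
    DBCC SQLPERF(LOGSPACE);

    -- Remove the growth restriction on the log file
    ALTER DATABASE Logs
    MODIFY FILE (NAME = Logs_log, MAXSIZE = UNLIMITED);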
  • What happened there is that the delete ran, filled the log and rolled back. By the time you checked sys.databases, the delete had rolled back and a checkpoint had run so there was nothing at that point that was holding the log active, hence the NOTHING reason

    Deletes take a lot of log space. Can be more than twice the size of the data being deleted. Could be you need to remove that restriction on the file size (can be a dangerous thing to have)

    Other option is to do the delete in batches. DELETE TOP (10000) ... in a loop until all rows are gone.

    --------------------------------------------------------------------------------

    Hi Gail and thanks so much for your response. OK, the disk where our transaction log file is stored only has about 13GB of space available. Last time I ran the query, after it failed, I checked the disk space and it was down to 9.9MB available. So I guess that removing the restriction on the log size won't help in this case, correct?

    Your other option would probably be better for us due to our lack of disk space. So using the loop would not fill up the transaction log? Is there a way to script a delete that does not write to the transaction log at all? That would be preferable for us.

    One more thing - could you please provide me sample script of a delete loop? Never done that before. Thanks again! Much appreciated...

  • dso808 (7/7/2011)


    Hi Gail and thanks so much for your response. OK, the disk where our transaction log file is stored only has about 13GB of space available. Last time I ran the query, after it failed, I checked the disk space and it was down to 9.9MB available. So I guess that removing the restriction on the log size won't help in this case, correct?

    Err, no... 🙂

    Your other option would probably be better for us due to our lack of disk space. So using the loop would not fill up the transaction log?

    Since you're in simple recovery, you can just run a checkpoint after each mini-delete and it'll mark the log as reusable (usually)

    Is there a way to script a delete that does not write to the transaction log at all? That would be preferable for us.

    Nope. No such thing as an unlogged delete (or any other operation for that matter)

    If you could delete without logging it, and the delete then failed for any reason (server shutdown, foreign key violation, etc.), your database would have to be marked suspect immediately: there would be no way to roll back the bits that had been done, no way to tell what had been done, and no way to tell what state the DB was in. I'm sure you'll agree that's not something you want.

    One more thing - could you please provide me sample script of a delete loop? Never done that before. Thanks again! Much appreciated...

    Err... Very late. No promises at all that this will even work, but it should give you the right idea. Test In Dev First!

    DECLARE @Done bit

    SET @Done = 0

    WHILE (@Done = 0)
    BEGIN
        DELETE TOP (10000) FROM .... -- that number can be tweaked.

        IF @@ROWCOUNT = 0
            SET @Done = 1

        CHECKPOINT
    END

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
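
    Filling in the script above with the table and filter from the original question gives something like the following. It is still a sketch to be tested in dev first; the 90-day cutoff and the Log_Hist table come from the first post.

    DECLARE @Done bit
    SET @Done = 0

    WHILE (@Done = 0)
    BEGIN
        -- Delete rows older than 90 days, 10,000 at a time (batch size can be tuned)
        DELETE TOP (10000)
        FROM Log_Hist
        WHERE LogTime <= DATEADD(day, -90, GETDATE());

        -- Stop once a pass deletes nothing
        IF @@ROWCOUNT = 0
            SET @Done = 1;

        -- In simple recovery, a checkpoint lets the log space be reused between batches
        CHECKPOINT;
    END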
  • GilaMonster (7/7/2011)


    dso808 (7/7/2011)


    Hi Gail and thanks so much for your response. OK, the disk where our transaction log file is stored only has about 13GB of space available. Last time I ran the query, after it failed, I checked the disk space and it was down to 9.9MB available. So I guess that removing the restriction on the log size won't help in this case, correct?

    Err, no... 🙂

    UGH!!! Sorry Gail - messed up my question (in a most embarrassing way). I meant, should I CHANGE the restriction on the log size? In other words, make the restriction on the log size something like 5000KB?

    Thanks for the script! I'll give it a try soon.

    One final question. Do you have any recommendations for an alternative approach to this? We just need to remove records older than 3 months from a MASSIVE table to keep the database (the data file) from growing so large that it shuts down the whole instance (which it recently did - the data file got so large it ate up all the disk space). Something simple with the least impact. We're not concerned with backup issues and such for this database. Thanks!

  • dso808 (7/8/2011)


    UGH!!! Sorry Gail - messed up my question (in a most embarrassing way). I meant, should I CHANGE the restriction on the log size? In other words, make the restriction on the log size something like 5000KB?

    No. That's likely to cause you more problems. We already know that you need more than 14GB of log in some cases, so restricting growth to a much smaller number is likely to cause you more problems, not less.

    Thanks for the script! I'll give it a try soon.

    One final question. Do you have any recommendations for an alternative approach to this? We just need to remove records older than 3 months from a MASSIVE table to keep the database (the data file) from growing so large that it shuts down the whole instance (which it recently did - the data file got so large it ate up all the disk space). Something simple with the least impact. We're not concerned with backup issues and such for this database. Thanks!

    The script I posted is how it's generally done.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • I'm definitely not going to argue with you Gail, but wouldn't horizontal partitioning with a Sliding Window be an option to consider?

    To be more specific:

    If the business case allowed horizontal partitioning (a monthly partitioned table), it would be possible to use the Sliding Window scenario. In order to delete data older than 3 months, the related partition could be switched into a staging table, followed by a TRUNCATE of that staging table.

    I know horizontal partitioning adds a new level of complexity and might not be an option for the given scenario. But it might be used to "simulate a minimally logged partial DELETE". (I guess the term is technically wrong, but that's what the effect looks like if you're looking from a distance.) A rough sketch follows after this post.



    Lutz
    A pessimist is an optimist with experience.

    How to get fast answers to your question
    How to post performance related questions
    Links for Tally Table, Cross Tabs and Dynamic Cross Tabs, Delimited Split Function
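
    A rough sketch of the sliding window idea, assuming Log_Hist is partitioned monthly on LogTime; the partition function name, the staging table, and the boundary date are all hypothetical, and (as the next reply notes) this requires Enterprise edition plus up-front design work:

    -- Switch the oldest monthly partition into a staging table with an identical
    -- structure on the same filegroup; this is a metadata-only operation.
    ALTER TABLE Log_Hist
    SWITCH PARTITION 1 TO Log_Hist_Staging;

    -- Truncating the staging table is minimally logged
    TRUNCATE TABLE Log_Hist_Staging;

    -- Remove the now-empty boundary so the partition function "slides" forward
    ALTER PARTITION FUNCTION pf_LogMonthly()
    MERGE RANGE ('2011-04-01');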

  • LutzM (7/9/2011)


    I'm definitely not going to argue with you Gail, but wouldn't horizontal partitioning with a Sliding Window be an option to consider?

    Yes, the thing is it's Enterprise-only and requires a fair bit of work to set up.

    It's an option, provided the instance in question is Enterprise edition, the DB will never need to be restored to Standard edition or lower, and the OP is willing to put in the up-front design work.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Gail - your script works like a charm. Thanks! Really appreciate all the help and I definitely learned a lot more about transaction logs. Have a great week!
