SQL Server 2005 transaction log grows to 95 GB

  • We are inserting about 250 million rows into a 4-column table, and during the load the transaction log grows to 95 GB. Does anybody know how to avoid this behaviour?

  • If those inserts are wrapped in one transaction, it is quite possible for the log to grow to 95 GB. If this is only a one-time process, you can back up the transaction log and then shrink the log file. You can also change the recovery model to simple if you don't need point-in-time restore.
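    For example, a rough sketch of both options (the database, file, and path names here are placeholders, not your actual names):

    -- Back up the log so the inactive portion can be reused, then shrink the file.
    BACKUP LOG MyDatabase TO DISK = N'D:\Backup\MyDatabase_log.trn';

    USE MyDatabase;
    -- The logical log file name comes from sys.database_files; target size is in MB.
    DBCC SHRINKFILE (MyDatabase_log, 1024);

    -- Or, if point-in-time restore is not needed:
    ALTER DATABASE MyDatabase SET RECOVERY SIMPLE;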

  • dexter, it is indeed one transaction. Backing up and shrinking doesn't help much, though, because we want to keep the transaction log from growing to 95 GB in the first place.

    But what happens if we change the recovery model to simple?

  • If you change to simple recovery, the log only keeps information for the open transactions.

    So if your load runs as a single transaction, it will still grow to 95 GB.

    Can you chop your load into pieces?

    If yes, change to simple recovery and load in pieces! Your transaction log will only grow to the size needed by the largest single piece (see the sketch below).

    When loading, load your data in clustered index order!
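    Something along these lines; this is only a sketch, and the table, column names, and batch size are made up for illustration:

    -- Assumes the target's clustered index is on Id and the source exposes the same key.
    ALTER DATABASE MyDatabase SET RECOVERY SIMPLE;

    SET NOCOUNT ON;
    DECLARE @last BIGINT, @rows INT;
    SET @last = 0;
    SET @rows = 1;

    WHILE @rows > 0
    BEGIN
        -- Each batch commits on its own, so the log only has to hold one batch at a time.
        INSERT INTO dbo.TargetTable (Id, Col2, Col3, Col4)
        SELECT TOP (500000) Id, Col2, Col3, Col4
        FROM dbo.SourceTable
        WHERE Id > @last
        ORDER BY Id;                -- clustered index order, as advised above

        SET @rows = @@ROWCOUNT;

        IF @rows > 0
            SELECT @last = MAX(Id) FROM dbo.TargetTable;
    END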

    Johan


  • Thanks for the answer, Alzdba, but one question: what do you mean by "load your data in clustered index order", and how do I do it?

  • If your table has a clustered index defined, sort the data according to that index when you prepare it for load!

    This way you avoid page splits!

    If you load using insert into ... select ... from ..., add an order by clause which matches the clustered index of the target table.

    If you use BULK INSERT, you can also add WITH (ORDER (col1 ASC/DESC, ...)). Both are shown below.
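    For example (the table, column, and file names are hypothetical; the point is the ordering):

    -- INSERT ... SELECT: sort the source to match the target's clustered index.
    INSERT INTO dbo.TargetTable (Id, Col2, Col3, Col4)
    SELECT Id, Col2, Col3, Col4
    FROM dbo.SourceTable
    ORDER BY Id;    -- same column(s) and direction as the target's clustered index

    -- BULK INSERT: the ORDER hint tells the engine the file is already sorted.
    BULK INSERT dbo.TargetTable
    FROM 'D:\load\data.txt'
    WITH (ORDER (Id ASC), TABLOCK);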

    Johan


  • I got it!

    Thanks.

  • If this is a full load/refresh, or your applications are offline during the load, you might want to disable your nonclustered indexes (if your data is trustworthy) and enable them again after the load operation.

    From BOL:

    -- Note: this BOL example disables a primary key index. Disabling a clustered
    -- index makes the whole table inaccessible, so for this technique disable
    -- only the nonclustered indexes on your target table.
    ALTER INDEX PK_Department_DepartmentID ON HumanResources.Department
    DISABLE;
    -- Watch out for foreign keys! (check BOL)

    -- ... perform the load here ...

    -- Rebuilding the index re-enables it.
    ALTER INDEX PK_Department_DepartmentID ON HumanResources.Department
    REBUILD;

    Johan


  • If your app is offline and you really do need ALL of the inserts to fail or succeed together, why not:

    a) Back up the DB before the data insert

    b) Perform the insert in small chunks, possibly in simple recovery mode to avoid the log growth as mentioned earlier

    c) Check for errors after each insert using the @@ERROR function

    d) If an error is found, flag it and then just do a DB restore

    e) Or, if no error is found, presto, all complete. Switch the DB back to full recovery if required.

    That will still give you your "all or nothing" approach but perhaps avoid the large log file growth. A sketch of the flow follows.
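    This is only a sketch; the database, table, and backup names are placeholders, and the chunking scheme is illustrative:

    -- a) Safety net before the load.
    BACKUP DATABASE MyDatabase TO DISK = N'D:\Backup\MyDatabase_preload.bak';
    ALTER DATABASE MyDatabase SET RECOVERY SIMPLE;

    -- b) + c) One chunk; repeat (or loop) over the remaining key ranges.
    DECLARE @err INT;

    INSERT INTO dbo.TargetTable (Id, Col2, Col3, Col4)
    SELECT Id, Col2, Col3, Col4
    FROM dbo.SourceTable
    WHERE Id BETWEEN 1 AND 500000
    ORDER BY Id;

    SET @err = @@ERROR;

    -- d) On failure, flag it; then restore the pre-load backup.
    IF @err <> 0
        RAISERROR ('Chunk failed with error %d - restore MyDatabase_preload.bak.', 16, 1, @err);
    ELSE
        -- e) All complete. Note that switching back to full recovery requires a
        -- fresh full backup afterwards to restart the log backup chain.
        ALTER DATABASE MyDatabase SET RECOVERY FULL;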
