Temp DB Transaction log

  • Hi,

    I have a stored procedure that processes approximately 8 lakh (800,000) input records and is scheduled to run every day at 8 PM. A lot of temp tables are created during the execution of the procedure. During this period the tempdb log file grows to about 42 GB, and once the procedure completes, the tempdb log file does not shrink back. We need to shrink the log file manually every day, so we have scheduled a job to shrink the file at around 9 PM. Is there a way to shrink the tempdb log file once the procedure completes, or is creating a scheduled job the only option for dealing with such a scenario?

    Further, can you explain why the size of tempdb increases? My understanding of tempdb is that it is used for temporary processing of data. Shouldn't the tempdb size be constant again after the execution of the SP?

    For instance, I have only one procedure running on the machine, and it processes a large amount of data. Suppose the data file size is 1 MB before the execution of the SP. During execution, the procedure creates a large number of temp tables to process the data. Once the procedure completes, all of the temp tables are dropped. Shouldn't the data file then be the same size as before the execution of the procedure, i.e. 1 MB? Kindly correct me if my understanding of temp tables is wrong.
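
    For reference, the shrink job we scheduled is essentially just a DBCC SHRINKFILE call against the tempdb log, roughly like the sketch below (templog is the default logical name of the tempdb log file, and the 1024 MB target is only an example):

        USE tempdb;
        -- shrink the tempdb log file back down to about 1 GB (target size is in MB)
        DBCC SHRINKFILE (templog, 1024);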

    Regards,

    If you are creating temp tables and loading large amounts of data into them, that activity in tempdb has to be logged, hence the growth in the tempdb transaction log. Even though tempdb uses the simple recovery model, transaction activity is still logged, and the log has to be big enough to support the largest transaction.

    Truncating a log will not cause the file to shrink, which is why you are having to do it manually via DBCC SHRINKFILE.

    Two things. First, see if you can batch up the tempdb activity so the tempdb log can be truncated more often and won't have to grow so much.

    Secondly, and more importantly, stop shrinking the tempdb log file. You are using resources to do that, and then more resources to grow the tempdb log file again when the stored procedure next runs. This will be slowing your process down and fragmenting the log on disk, which also hurts performance.

    Next time the log is small or SQL Server is bounced, manually grow the log (and data) file in one chunk to the size it needs to be to support this processing, and leave it there. Also amend the growth increments to a sensible value (not 1 MB or 10%!).
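
    A rough sketch of that last point (templog and tempdev are the SQL Server default logical file names for tempdb, and the sizes here are examples only - pick values that match what the process actually needs):

        USE master;
        -- pre-size the tempdb log once, instead of letting it autogrow in small steps
        ALTER DATABASE tempdb
            MODIFY FILE (NAME = templog, SIZE = 45GB, FILEGROWTH = 512MB);

        -- likewise for the data file if it also has to grow during the run
        ALTER DATABASE tempdb
            MODIFY FILE (NAME = tempdev, SIZE = 10GB, FILEGROWTH = 512MB);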


  • We need to shrink the log file manually every day

    I agree with George. Stop doing that. The log file will just grow again and you'll end up in nothing more than a perpetual tug of war.

    The real problem is in the process that handles the 8 lakh rows per day... which is actually quite small as daily loads go. The process is grossly inefficient and may even contain an accidental cross join, which is why tempdb is growing to a whopping 42 GB. The only answer is to correctly identify the problem in the code (the actual execution plan will help a lot in this area) and fix it. I'll also state that, once you find the section of code that causes the explosive growth, it may be more cost effective to figure out what it's supposed to do and rewrite it rather than trying to salvage the existing code.
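
    If you'd rather not click "Include Actual Execution Plan" in SSMS, one way to capture the actual plan for a single run is sketched below (the procedure name is just a placeholder - substitute the real one):

        SET STATISTICS XML ON;
        EXEC dbo.YourNightlyProc;   -- hypothetical name; use your actual procedure
        SET STATISTICS XML OFF;

    Look for join operators whose actual row counts are wildly larger than the row counts of the tables feeding them - that's the signature of an accidental cross join.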

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.
    "Change is inevitable... change for the better is not".

    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
    Intro to Tally Tables and Functions

  • A third vote for George here.

    If it grows every day, and you shrink it every day only to have it grow again, does that not seem silly? What's the point?

    Thanks for your suggestions. I had the same concern about shrinking the files. If possible, can you let us know in which scenarios the log file can grow to such an extent?

    I am in the process of analysing the code and will share my findings with all... Thanks in advance for your suggestions.

    Anything that makes use of tempdb - especially large transactions that involve sorts, index rebuilds set to sort in tempdb, and so on - will cause transactions in tempdb. It is the same as what causes a transaction in any other database.

    I would suggest running a trace that captures the activity while you see the growth, and then looking at that code.
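
    Another option besides a trace is to query the tempdb space-usage DMVs (SQL Server 2005 and later), which show which sessions are allocating the pages while the growth is happening. A rough sketch:

        -- which sessions are consuming tempdb right now
        SELECT  su.session_id,
                su.user_objects_alloc_page_count,      -- #temp tables and table variables
                su.internal_objects_alloc_page_count   -- work tables, sorts, hash spills
        FROM    sys.dm_db_session_space_usage AS su
        ORDER BY su.internal_objects_alloc_page_count DESC;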

  • naveenreddy.84 (12/13/2009)


    Thanks for your suggestions. I had the same concern about shrinking the files. If possible, can you let us know in which scenarios the log file can grow to such an extent?

    I am in the process of analysing the code and will share my findings with all... Thanks in advance for your suggestions.

    I already did, actually... an accidental cross join is usually the culprit. Another name for an accidental cross join is a "many-to-many join". They're usually caused by a simple misunderstanding of what a table contains and whether or not the data being joined on is unique.

    A typical example comes from my last job... they had a 7 lakh row table and a 70 lakh row table, and they joined on columns that did not have unique values in either table. I stopped the run after tempdb hit 55 GB. The estimated execution plan showed the number of internal rows being spawned as over 490,000 lakh (49 billion). Now take a guess where such rows get spawned as work tables even if you have no temp tables...
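
    A minimal sketch of the effect, using throwaway temp tables (the names are made up): three matching rows on each side of a join on a non-unique column produce 3 x 3 = 9 rows, not 3, and that multiplication is exactly what blows up the work tables in tempdb.

        CREATE TABLE #a (CustID INT);
        CREATE TABLE #b (CustID INT);
        INSERT INTO #a SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1;
        INSERT INTO #b SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1;

        SELECT COUNT(*) AS RowsReturned      -- returns 9: a many-to-many multiplication
        FROM   #a a
        JOIN   #b b ON b.CustID = a.CustID;

        DROP TABLE #a;
        DROP TABLE #b;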

    --Jeff Moden



  • an accidental cross join is usually the culprit

    That's right... I faced a similar issue in which a large accidental cross join was the troublemaker.
