Transaction log gets full while inserting records

  • Hi,

    I am using SQL Server 2005.

    I have a table that contains 3 million records. I am trying to insert those records into another table using a statement like:

    INSERT INTO PLANTMATERIAL

    SELECT PLANT, MATERIAL FROM TB_MATERIAL_PLANTDEMO

    While I am executing the above statement, SQL Server throws an error like:

    The transaction log for database 'ABC' is full.

    I cannot increase the log file size beyond 500 MB as per company policy. Also, the database recovery model is Simple.

    Can anyone help me solve this problem?

    Is there anything that instructs an INSERT statement to commit data after a fixed number of rows?

    Or is there any other solution to this problem?

  • You need to break it down into batches and run t-log backups between inserts. Is there a common primary key or unique key between the two tables?

  • Hi Lynn,

    Thanks for your reply

    The combination of Plant + Material is the primary key of the table, but the table does not have any field that represents a row number. How can I break down the insert statement without a row number?

    Could you please tell me, what are "t-log backups"?

    Here I have only one insert statement. Can I use 't-log backups'?

    I don't have admin rights on the database, so if the 't-log backup' activity requires admin rights then I cannot do that.

  • At this point, then, I suggest you talk to your DBAs, as it looks like you are going to need their assistance to accomplish this task since the transaction log can't be expanded beyond 500 MB (IIRC).

    A t-log backup is just short for a transaction log backup. Also, with what you have provided, I really can't help much. We would need to see the DDL (CREATE TABLE statement) for the source table and some sample data (in a readily consumable format to load the table) so we can see what we have to work with.
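
    For illustration, a transaction log backup is issued with the BACKUP LOG statement. A minimal sketch, with a made-up backup path, is below; note that log backups only apply when a database is in the FULL or BULK_LOGGED recovery model, so they would not even run against a Simple-recovery database like yours.

    -- Sketch only: the backup path is hypothetical.
    -- BACKUP LOG requires the FULL or BULK_LOGGED recovery model; under SIMPLE
    -- recovery the log is truncated automatically at checkpoints instead.
    BACKUP LOG ABC
    TO DISK = N'D:\Backups\ABC_log.trn';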

    For more on my request above, please read the first article referenced below in my signature block.

  • Here is a way to batch insert records. It is definitely slower to insert records in this manner, but it can give your transaction log a break. Make sure you are backing up your transaction log continually, otherwise the log file will continue to grow even while inserting iteratively.

    Here's the code:

    SET NOCOUNT ON

    DECLARE @LoopCounter INT
    DECLARE @BatchSize INT
    DECLARE @NumberofBatches INT

    SET @BatchSize = 10000

    -- Build a work table that maps every source primary key to a batch number
    IF OBJECT_ID('tempdb..#PrimaryKeyTable') IS NOT NULL DROP TABLE #PrimaryKeyTable

    CREATE TABLE #PrimaryKeyTable (ID INT IDENTITY(1,1), PrimaryKeyID INT, BatchID INT)

    INSERT INTO #PrimaryKeyTable (PrimaryKeyID)
    SELECT MyTablesPrimaryKey FROM MyFromTable

    -- Integer division assigns roughly @BatchSize rows to each BatchID
    UPDATE #PrimaryKeyTable SET BatchID = (ID / @BatchSize)

    SET @LoopCounter = 0
    SET @NumberofBatches = (SELECT MAX(BatchID) FROM #PrimaryKeyTable)

    -- Insert one batch per loop iteration; each INSERT commits on its own
    WHILE @LoopCounter <= @NumberofBatches
    BEGIN
        INSERT INTO MyInsertIntoTable
        SELECT src.MyRecords
        FROM MyFromTable AS src
        INNER JOIN #PrimaryKeyTable AS pk
            ON src.MyTablesPrimaryKey = pk.PrimaryKeyID
            AND pk.BatchID = @LoopCounter

        SET @LoopCounter = @LoopCounter + 1
    END

    Side note: I found better performance with the temp table above than with a table variable.

  • Forgot to mention, you will need to add the Plant and Material columns to the #PrimaryKeyTable (see the sketch at the end of this post).

    Also, you may want to use a table variable instead of a temp table, as table variables involve less logging. Play around with it.
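
    Since your key is the combination of Plant + Material rather than a single integer, a rough sketch of the adapted work table might look like the following. PLANTMATERIAL and TB_MATERIAL_PLANTDEMO are the names from your first post; the column data types here are only assumptions.

    -- Sketch only: batching work table adapted to a composite Plant + Material key.
    -- Data types are assumptions; substitute the real ones from your tables.
    IF OBJECT_ID('tempdb..#PrimaryKeyTable') IS NOT NULL DROP TABLE #PrimaryKeyTable

    CREATE TABLE #PrimaryKeyTable
    (
        ID       INT IDENTITY(1,1),
        Plant    VARCHAR(20),   -- assumed type
        Material VARCHAR(40),   -- assumed type
        BatchID  INT
    )

    INSERT INTO #PrimaryKeyTable (Plant, Material)
    SELECT PLANT, MATERIAL FROM TB_MATERIAL_PLANTDEMO

    UPDATE #PrimaryKeyTable SET BatchID = (ID / 10000)

    -- Inside the loop, the batched insert then joins on both key columns:
    -- INSERT INTO PLANTMATERIAL (PLANT, MATERIAL)
    -- SELECT src.PLANT, src.MATERIAL
    -- FROM TB_MATERIAL_PLANTDEMO AS src
    -- INNER JOIN #PrimaryKeyTable AS pk
    --     ON pk.Plant = src.PLANT
    --     AND pk.Material = src.MATERIAL
    --     AND pk.BatchID = @LoopCounter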

  • {soapbox engaged} Is that really company policy, or policy put in place by the DBAs themselves? Most companies don't have a policy that sets the largest size of a database log or the database as a whole. Now, I can see some valid reasons for this because it makes you think harder about processes that might generate a lot of transaction space, but as a whole I think your DBAs need some more schooling. That might be an unpleasant statement, but an awful lot of "policies" in companies are put forth by people who should know better. {soapbox disengaged}

    CEWII

  • Elliott W (7/20/2009)


    {soapbox engaged} Is that really company policy, or policy put in place by the DBAs themselves? Most companies don't have a policy that sets the largest size of a database log or the database as a whole. Now, I can see some valid reasons for this because it makes you think harder about processes that might generate a lot of transaction space, but as a whole I think your DBAs need some more schooling. That might be an unpleasant statement, but an awful lot of "policies" in companies are put forth by people who should know better. {soapbox disengaged}

    CEWII

    Without more information, it is possible that log space is restricted due to space limitations and to ensure that if one database has excessive growth, it doesn't impact other databases that may reside on the same server and disks.

  • Still, 500 MB is pretty measly. If you are generally running that tight on space, you are seconds away from something more dire than low disk space. Find something a new home...

    CEWII

  • We still don't have enough information to make such statements. It could also be possible that they are trying to force users to communicate with the DBAs when extraordinary data modifications are occurring, etc. We really should not make snap judgments about things we know little or nothing about.

    If the OP provides more information, then we can possibly provide more in return.

  • I appreciate all the replies from everyone (Lynn, Elliott, and Carleton).

    I have resolved the issue with a batch insert, because the database team was not ready to extend the size of the log file.

  • Hi Carleton

    Thanks for your reply.

    You wrote “Make sure you are backing up your transaction log continually, otherwise the log file will continue to grow even while inserting iteratively.”

    What does “backing up your transaction log” mean?

    Does it mean using “BEGIN TRAN” and “COMMIT”?

  • mayur.patel, there is a highly relevant point in your original post

    mayur.patel (7/19/2009)


    Also, the database recovery model is Simple.

    If the recovery model is Simple, you do not have to schedule transaction log backups; SQL Server will regularly clear the inactive portion of the transaction log on its own.

    As long as you either insert in batches to break down the amount of work being done in one go (as Lynn suggests) or use a bulk insert, you should be OK.
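
    As a rough sketch (using the table and column names from your first post; the batch size and the NOT EXISTS restart check are only assumptions, and one possible approach among several), the batched version could look something like this:

    -- Sketch only: copy rows in small batches so each transaction stays small and,
    -- under SIMPLE recovery, the log can be cleared between batches.
    -- Assumes PLANTMATERIAL takes just the PLANT and MATERIAL columns.
    DECLARE @BatchSize INT
    SET @BatchSize = 10000

    WHILE 1 = 1
    BEGIN
        INSERT INTO PLANTMATERIAL (PLANT, MATERIAL)
        SELECT TOP (@BatchSize) src.PLANT, src.MATERIAL
        FROM TB_MATERIAL_PLANTDEMO AS src
        WHERE NOT EXISTS (SELECT 1
                          FROM PLANTMATERIAL AS dest
                          WHERE dest.PLANT = src.PLANT
                            AND dest.MATERIAL = src.MATERIAL)

        -- Stop once a batch copies no rows
        IF @@ROWCOUNT = 0
            BREAK
    END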

    Tim


  • Hi Tim,

    Thanks for the explanation.

    Now I am inserting data in batches.

  • Tim is correct that if your recovery model is Simple, you don't need to worry about clearing the log between transactions. If you used the code I suggested above as-is (with no explicit BEGIN TRANSACTION), each iteration of the INSERT statement is its own transaction, so you should be set.
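
    To illustrate what that means (the sample values below are made up, and the column list assumes PLANTMATERIAL has just PLANT and MATERIAL): in the default autocommit mode each statement commits on its own, so under Simple recovery the log can be cleared between batches; only an explicit transaction around the whole loop would keep the log from truncating until the end.

    -- Illustration only: sample values are made up.
    INSERT INTO PLANTMATERIAL (PLANT, MATERIAL) VALUES ('P1', 'M1')   -- autocommits here
    INSERT INTO PLANTMATERIAL (PLANT, MATERIAL) VALUES ('P1', 'M2')   -- autocommits here

    -- By contrast, this would hold every batch in a single transaction and stop the
    -- log from being cleared until the final COMMIT:
    -- BEGIN TRANSACTION
    --     ... the entire WHILE loop ...
    -- COMMIT TRANSACTION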
