Transaction Log grows too much / Recovery Model Simple

  • tobe_ha (5/5/2009)


    There is a clustered index as well.

    Eh? There's a clustered index? I thought you were doing SELECT...INTO - which would be bulk logged.

    INSERT INTO...SELECT into a table with a clustered index can still be minimally logged, unless it contained rows before the INSERT, in which case it will be fully logged regardless of the recovery setting.
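
    For reference, the two statement shapes being discussed look like this (table and column names here are purely illustrative):

        -- SELECT...INTO: the statement creates dbo.Target itself, so the
        -- destination cannot exist beforehand; eligible for bulk logging.
        SELECT col1, col2
        INTO   dbo.Target
        FROM   dbo.Source;

        -- INSERT INTO...SELECT: dbo.Target already exists (and, in this
        -- thread, carries the clustered index).
        INSERT INTO dbo.Target (col1, col2)
        SELECT col1, col2
        FROM   dbo.Source;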

  • Hang on. Are you doing SELECT ... INTO (in which case the destination table doesn't exist prior to execution) or INSERT INTO ... SELECT (where the destination table already does)?

    Where's the clustered index that you're talking about? Source or destination table?

    Excuse me. It's an INSERT INTO ... SELECT.

    The clustered index is on the target table.

  • tobe_ha (5/5/2009)


    Excuse me. It's an INSERT INTO ... SELECT.

    The clustered index is on the target table.

    As I said, unless the table is empty before you start, INSERT...SELECT will be fully logged.

    Consider dropping the clustered index before you INSERT, and re-creating it afterwards.

    If that is not acceptable, consider inserting the new rows in batches, checkpointing, backing up the log (twice may be required), and checkpointing again - roughly as sketched below.

    Alternatively, you may simply increase the size of the log!

    Paul
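
    A minimal sketch of the batching approach, assuming SIMPLE recovery (where CHECKPOINT, rather than a log backup, frees log space for reuse); the table names, the id column, and the 10000-row batch size are placeholders:

        DECLARE @rows int;
        SET @rows = 1;

        WHILE @rows > 0
        BEGIN
            INSERT INTO dbo.Target (id, col1)
            SELECT TOP (10000) s.id, s.col1
            FROM   dbo.Source AS s
            WHERE  NOT EXISTS (SELECT * FROM dbo.Target AS t
                               WHERE t.id = s.id);  -- skip rows already copied

            SET @rows = @@ROWCOUNT;

            CHECKPOINT;  -- under SIMPLE recovery this lets log space be reused
        END;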

  • INSERT INTO...SELECT into a table with a clustered index can still be minimally logged, unless it contained rows before the INSERT, in which case it will be fully logged regardless of the recovery setting.

    Actually there will be rows in the target. For my current test the target is empty, but I don't know how to check whether the actual logging is minimal or full.

    One idea, if minimal logging does not work for targets that already contain rows, is to fill a copy of the target and afterwards do a partition switch to the real target. But first I have to learn how to determine whether minimal logging works.
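
    One way to see how much actually gets logged is the undocumented sys.fn_dblog function; a sketch, assuming illustrative table names and a quiet test database:

        CHECKPOINT;  -- under SIMPLE recovery, clears the inactive log first

        INSERT INTO dbo.Target (col1, col2)  -- the statement under test
        SELECT col1, col2
        FROM   dbo.Source;

        -- A fully logged insert writes roughly one LOP_INSERT_ROWS record per
        -- row; a minimally logged one writes far fewer, much smaller records.
        SELECT COUNT(*)                 AS log_records,
               SUM([Log Record Length]) AS log_bytes
        FROM   sys.fn_dblog(NULL, NULL);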

  • tobe_ha (5/5/2009)


    INSERT INTO...SELECT into a table with a clustered index can still be minimally logged, unless it contained rows before the INSERT, in which case it will be fully logged regardless of the recovery setting.

    Actually there will be rows in the target. For my current test the target is empty, but I don't know how to check whether the actual logging is minimal or full.

    One idea, if minimal logging does not work for targets that already contain rows, is to fill a copy of the target and afterwards do a partition switch to the real target. But first I have to learn how to determine whether minimal logging works.

    If the real target is already partitioned, that is a good plan. If you are prepared to partition it, it is also a good plan.

    What do you mean "if minimal logging works"?

    See http://msdn.microsoft.com/en-us/library/ms190422.aspx for the prerequisites.

    Essentially, so long as the database is in BULK_LOGGED (preferred) or SIMPLE mode, you take a table lock on the INSERT, and (in the simplest case) you are inserting into an empty heap (i.e. no clustered index), the operation will be minimally logged - provided you also tick the Fast Load box in the SSIS task. A normal INSERT...SELECT can't be bulk logged in this way unless you use SQL Server 2008, IIRC.
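
    In plain T-SQL on SQL Server 2008 the table-lock prerequisite is taken with a TABLOCK hint; a sketch with illustrative names, the target assumed to be an empty heap:

        ALTER DATABASE MyDb SET RECOVERY BULK_LOGGED;  -- or SIMPLE

        INSERT INTO dbo.Target WITH (TABLOCK)  -- table lock enables minimal logging
        SELECT col1, col2
        FROM   dbo.Source;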

  • That link is for SQL 2008, this is the 2005 one:

    http://msdn.microsoft.com/en-us/library/ms190422(SQL.90).aspx

    INSERT...SELECT can be minimally logged in 2008 only - and only if the table is a heap (no clustered index).

    Paul

  • Hello,

    I would like to confirm that the method worked:

    Prepare the data in a "stage" table, not in one transaction but by splitting the data into batches and iterating.

    If no error occurred, do a partition switch to the target table. This way the transaction log can be held at a constant level.

    It's a pity that minimal logging did not work for me.

    Thanks to all.

    Tobias
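
    For anyone following the same route, a rough sketch of the final switch step (names are illustrative; the stage table must match the target's columns, indexes and constraints, live on the same filegroup, and the receiving partition must be empty):

        -- Fill dbo.Stage_Target in batches to keep the log small, then:
        ALTER TABLE dbo.Stage_Target
        SWITCH TO dbo.Target PARTITION 2;  -- metadata-only, so nearly instant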
