Transaction Log grows too much / Recovery Model Simple

  • Hi,

I'm running several SELECT ... INTO statements, one after another, without an explicit transaction, all called from a single SSIS package.

The transaction log grows and grows but does not shrink automatically. This is a problem because of hardware limits.

First I thought about setting the recovery model to BULK_LOGGED, but then I found out that it is already SIMPLE.

    1. How can I reduce the demand for logspace?

2. If there is no better way, I would consider placing a DBCC SHRINKFILE('mylogfile', 1) between the statements. What do you think?

SELECT * INTO b FROM a;

DBCC SHRINKFILE('mylogfile', 1);

SELECT * INTO c FROM b;

    Thanks,

    Tobias

Probably SSIS is running all of them in one transaction, which means that the log cannot be truncated until the commit occurs. If that's the case, you won't be able to shrink; there'll be no free space. (Not that you should be shrinking a transaction log anyway.)

    Check the transaction settings in SSIS.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Tobias,

It might be that the SSIS package is running the whole thing in a transaction; however the SELECT...INTO will be bulk-logged if the database is set to SIMPLE, so the transaction log should not be growing very quickly. The transaction log will also be truncated (as far as possible) every time CHECKPOINT runs in the database, or at worst, after two CHECKPOINTs.

    For general information on what may delay log truncation see http://msdn.microsoft.com/en-us/library/ms345414(SQL.90).aspx

    You can also run the following to see what is holding things up:

SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'your_db_name';

    Cheers,

    Paul

    edit: added link and code

    2nd edit: corrected info re: full backup being required

    3rd edit: removed duff info re: full backup being required!

No, there is no transaction. Meanwhile I have learned to advance the MinLSN in the transaction log using a CHECKPOINT. That helped.

But there is still one statement which causes a large log. Is there a way to pre-calculate the log size for one SELECT * INTO statement if I know the row count?

And can it be that the log usage is also affected by the query plan?
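Rather than pre-calculating the log demand, one practical approach is to measure it step by step: check the log usage, run one statement, checkpoint, and check again. A minimal sketch, assuming the table names from the earlier post (`a`, `b`) as placeholders:

```sql
-- Report log size and percent used for every database, before the step.
DBCC SQLPERF(LOGSPACE);

SELECT * INTO b FROM a;

-- In SIMPLE recovery a manual CHECKPOINT allows the inactive portion of
-- the log to be truncated (made reusable) before the next statement runs.
CHECKPOINT;

-- Compare the "Log Space Used (%)" figures to see what the step consumed.
DBCC SQLPERF(LOGSPACE);
```

The difference between the two readings gives a per-statement log footprint you can extrapolate from the row count.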

Thanks Paul. I will try to make a full backup, but currently I'm waiting for a rollback (the transaction log could not grow... :w00t:)

Is there a history where I can see what recovery models were set in the past?

Just to get a feeling:

I'm talking about 10,000,000 rows:

a few int, float and datetime columns

The transaction log grows up to 10 GB, and then there is no more hard disk space.

Linchi Shea has a recent blog post titled "A Race Condition in the Simple Recovery Mode: Why is my log autogrowing?"

    http://sqlblog.com/blogs/linchi_shea/archive/2009/04/12/a-race-condition-in-the-simple-recovery-mode-why-is-my-log-autogrowing.aspx

In summary, Linchi's conclusion is that the transaction log file can grow, and you can run out of disk space, when your process is writing to the log faster than the checkpoint process can free log space.

    SQL = Scarcely Qualifies as a Language

  • Paul White (5/5/2009)


    Tobias,

It might be that the SSIS package is running the whole thing in a transaction; however the SELECT...INTO will be bulk-logged if the database is set to SIMPLE, so the transaction log should not be growing very quickly. The transaction log will also be truncated (as far as possible) every time CHECKPOINT runs in the database, or at worst, after two CHECKPOINTs.

Yeah, but the log can't be truncated past the beginning of the oldest open transaction, so if there's an open transaction the log will keep growing. It could be that other things are running and filling the log, while an open transaction from SSIS prevents log truncation.

    The only case I can think of where this would not happen, would be if the database's recovery model was set to FULL at some point in the past, was changed to SIMPLE, but a full database backup has not occurred since that change.

    Once the database has been fully backed up once, things will go back to normal. This applies every time you change from FULL to SIMPLE.

    A backup is not needed to enforce the change from full/bulk-logged to simple. As soon as the DB is set to simple, the log goes immediately into auto-truncate mode and will be truncated on the next checkpoint.

    A backup is only needed to start the log chain when you go from simple to bulk-logged or simple to full.
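Gail's two points can be sketched as follows. The database name and backup path are placeholders, not from the thread:

```sql
-- Full/bulk-logged -> simple: no backup needed. The log immediately goes
-- into auto-truncate mode and is truncated at the next checkpoint.
ALTER DATABASE MyDb SET RECOVERY SIMPLE;

-- Simple -> full (or bulk-logged): a full backup is needed to start the
-- log chain; until it runs, the log still behaves as in auto-truncate mode.
ALTER DATABASE MyDb SET RECOVERY FULL;
BACKUP DATABASE MyDb TO DISK = N'C:\Backups\MyDb.bak';
```

Only after that full backup completes can log backups (and hence FULL-recovery log retention) begin.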

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • tobe_ha (5/5/2009)


Is there a history where I can see what recovery models were set in the past?

    It doesn't appear to be stored in the default trace (I just checked), so, unless you're keeping a history yourself, no.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • This was wrong so it was deleted to avoid confusing anyone else...

  • Paul White (5/5/2009)


I'm pretty sure it is the case that a portion of the log will still be active if the database is switched from FULL to SIMPLE if, and only if, the database has never been backed up.

    If a database has never been backed up, it'll be in auto-truncate mode anyway, regardless of what recovery model it's actually in.

    The active portion of the log (log records for transactions that have not been committed and had the data pages hardened to disk) will never be truncated from the log. It's only the inactive log records (records for transactions committed and hardened to disk) that can be truncated, and that is regardless of recovery model.
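To see which portion of the log is currently active, the undocumented but widely used DBCC LOGINFO can be run in the database in question; this is an illustration, not something from the thread:

```sql
-- Returns one row per virtual log file (VLF) in the current database's log.
-- Status = 2 marks a VLF as active (cannot be truncated yet);
-- Status = 0 marks it as reusable.
DBCC LOGINFO;
```

A long run of Status = 2 rows is consistent with an open transaction (or pending checkpoint) pinning the log.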

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
I'm talking about 10,000,000 rows:

a few int, float and datetime columns

The transaction log grows up to 10 GB, and then there is no more hard disk space.

Thank you, I already learned a lot today. My biggest problem is still the one statement which is one implicit transaction. I wonder if the size of the log is reasonable (see above).

    There is a clustered index as well.

Oh yeah, that's it! I knew there was some funny behaviour around log backups if a full backup has never been taken. Sadly I remembered it exactly backwards: the database stays in auto-truncate mode until the first full backup, which makes sense because there is no base.

    Oh well - that must mean it's time for some sleep. After some more editing....

  • GilaMonster (5/5/2009)


    tobe_ha (5/5/2009)


Is there a history where I can see what recovery models were set in the past?

    It doesn't appear to be stored in the default trace (I just checked), so, unless you're keeping a history yourself, no.

Correction: it is stored in the default trace, like all ALTER DATABASE commands, but not in a form that will tell you anything useful.

See the attached screenshot: that's me altering a database from simple to full and back to simple. Four lines are logged, with no information about what alteration was done (the hidden columns are either login or server name info, or are blank for these rows).
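For anyone wanting to inspect those default-trace rows themselves, something along these lines should work; the filter is deliberately loose because, as noted, the rows don't record which option was changed:

```sql
-- Locate the default trace file and read the events captured for a
-- given database. Column names come from fn_trace_gettable's output.
DECLARE @path nvarchar(260);
SELECT @path = path FROM sys.traces WHERE is_default = 1;

SELECT StartTime, EventClass, DatabaseName, LoginName, HostName
FROM fn_trace_gettable(@path, DEFAULT)
WHERE DatabaseName = N'your_db_name'
ORDER BY StartTime DESC;
```

You'll see that the ALTER DATABASE rows carry timing and login detail but nothing about the recovery model itself.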

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • GilaMonster (5/5/2009)


    tobe_ha (5/5/2009)


Is there a history where I can see what recovery models were set in the past?

    It doesn't appear to be stored in the default trace (I just checked), so, unless you're keeping a history yourself, no.

    It's logged to the SQL Server Error Log, and to the Windows Application Event Log.

Date     6/05/2009 12:30:38 a.m.

Log      SQL Server (Current - 6/05/2009 12:30:00 a.m.)

Source   spid53

Message  Setting database option RECOVERY to FULL for database Test.

    edit: added Windows Application Event Log.
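To search the error log for those entries from T-SQL, the commonly used (though undocumented) sp_readerrorlog procedure takes a log number (0 = current), a log type (1 = SQL Server error log), and a search string:

```sql
-- Search the current SQL Server error log for lines mentioning the
-- RECOVERY option, e.g. "Setting database option RECOVERY to FULL ...".
EXEC sp_readerrorlog 0, 1, N'RECOVERY';
```

Note the error log is recycled on each instance restart (and by sp_cycle_errorlog), so only a limited history is retained.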

  • tobe_ha (5/5/2009)


My biggest problem is still the one statement which is one implicit transaction. I wonder if the size of the log is reasonable (see above).

    There is a clustered index as well.

Hang on. Are you doing SELECT ... INTO (in which case the destination table doesn't exist prior to the execution), or INSERT INTO ... SELECT (where the destination table does)?

Where's the clustered index that you're talking about? On the source or the destination table?
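The distinction matters because the two forms log very differently. A minimal illustration of the two patterns being asked about, with placeholder table names:

```sql
-- SELECT ... INTO: creates dbo.b as part of the statement. In SIMPLE or
-- BULK_LOGGED recovery this is minimally logged.
SELECT * INTO dbo.b FROM dbo.a;

-- INSERT ... SELECT: dbo.c must already exist and is normally fully
-- logged. (On SQL Server 2008, inserting with a TABLOCK hint into an
-- empty heap can also qualify for minimal logging.)
INSERT INTO dbo.c WITH (TABLOCK)
SELECT * FROM dbo.a;
```

A pre-existing clustered index on the destination of an INSERT ... SELECT is one of the things that can push the operation back to full logging.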

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
