Transaction log size when moving large amounts of data

  • CREATE PROCEDURE dbo.mysp_archive   -- created in, and run from, the archive DB
        @dtTo datetime
    AS
    BEGIN
        DECLARE @iRowsAffected int, @iRows int
        SET @iRowsAffected = 1
        SET @iRows = 10000                    -- batch size per iteration

        WHILE @iRowsAffected <> 0
        BEGIN
            -- copy the next batch of old orders into the archive table
            INSERT INTO a_tblOrders
            SELECT TOP (@iRows) * FROM LiveDB..tblOrders WHERE [date] < @dtTo

            -- remove the rows just archived from the live table
            DELETE LiveDB..tblOrders
            FROM a_tblOrders AS tArchive
            WHERE tArchive.shopid = LiveDB..tblOrders.shopid
              AND tArchive.orderID = LiveDB..tblOrders.orderID

            SET @iRowsAffected = @@ROWCOUNT
            SELECT @iRowsAffected AS 'rows'

            CHECKPOINT                        -- checkpoint the archive DB
            EXEC('USE LiveDB; CHECKPOINT')    -- checkpoint the source DB as well
        END
    END

    Hi

    I'm going to archive lots of old orders to a new DB. We may run out of disk space soon, so I want a separate DB holding the very old orders on another DB server.

    So moving the data must not grow the log file too much.

    The SP is run on the new DB. Both DBs use the simple recovery model, and "truncate log on checkpoint" is ON for both.

    If I did the insert into the new table and the delete from the old table in one go, it would fill the log file, so I've tried to break it down into several smaller moves of 10,000 orders each.

    I expected the log file to grow during the first iterations, but once it had the space, to reuse that space as the data is written to disk.

    What I'm seeing instead is the log file of the LiveDB (the source of the data) continuously growing in equal increments, and its final size doesn't decrease once the SP has finished.

    I have to issue a DBCC SHRINKFILE (livedb); to get the size back down.
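
    To see what is stopping the log from being reused while the SP runs, here is a quick diagnostic sketch (database names follow the post above; it is not part of the procedure itself):

        -- how full is each transaction log, as a percentage?
        DBCC SQLPERF(LOGSPACE);

        -- what, if anything, is preventing log truncation?
        SELECT name, log_reuse_wait_desc
        FROM sys.databases
        WHERE name IN ('LiveDB', 'archivedb');

    If log_reuse_wait_desc reports ACTIVE_TRANSACTION, an open transaction is pinning the log, and CHECKPOINT alone will not free the space.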

  • Try building in a small delay between iterations - two or three seconds, maybe. This may allow time for the transaction log to clear before the next iteration. I would also advise wrapping the INSERT and DELETE in a transaction: if the delete fails, you want the insert to roll back. Something like the sketch below.
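
    A minimal sketch of that wrapping inside the loop (the TRY/CATCH handling and the 2-second delay are illustrative; table and variable names follow the procedure above):

        BEGIN TRY
            BEGIN TRAN

            INSERT INTO a_tblOrders
            SELECT TOP (@iRows) * FROM LiveDB..tblOrders WHERE [date] < @dtTo

            DELETE LiveDB..tblOrders
            FROM a_tblOrders AS tArchive
            WHERE tArchive.shopid = LiveDB..tblOrders.shopid
              AND tArchive.orderID = LiveDB..tblOrders.orderID

            SET @iRowsAffected = @@ROWCOUNT

            COMMIT TRAN
        END TRY
        BEGIN CATCH
            IF @@TRANCOUNT > 0 ROLLBACK TRAN   -- the delete failed, so the insert is undone too
            SET @iRowsAffected = 0             -- stop the loop rather than retry blindly
        END CATCH

        WAITFOR DELAY '00:00:02'   -- small pause to let the log clear before the next batch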

    The size of the log will not decrease at the end of processing, unless you set the auto shrink option on (not recommended). You should only shrink it if you are confident it isn't going to grow to that size again in the normal course of events.
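
    For reference, DBCC SHRINKFILE takes the logical file name rather than the database name, so it is worth looking the name up first (the name LiveDB_log below is a guess):

        USE LiveDB;
        SELECT name FROM sys.database_files WHERE type_desc = 'LOG';   -- find the logical log file name
        DBCC SHRINKFILE (LiveDB_log, 100);                             -- target size in MB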

    John

  • Thanks, will do the transaction.

  • Hi,

    After each transaction, issue a checkpoint, wait for a minute or two, and then start the next batch.
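
    In the loop, that could look like this (the one-minute delay is illustrative; the cross-database CHECKPOINT mirrors the procedure above):

        -- after the batch's COMMIT:
        CHECKPOINT                        -- flush the archive DB so its log can truncate
        EXEC('USE LiveDB; CHECKPOINT')    -- checkpoint the source DB as well
        WAITFOR DELAY '00:01:00'          -- wait a minute before starting the next batch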

    Regards

  • What recovery model?

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Both DBs are in the simple recovery model.

  • I will not be able to post for the next 3 days - not being rude 🙂

  • Just stick a CHECKPOINT in between the iterations. No need for a delay.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
