August 13, 2010 at 7:26 am
CREATE PROCEDURE mysp_archive   -- created in ArchiveDB
    @dtTo datetime
AS
DECLARE @iRowsAffected int, @iRows int
SET @iRowsAffected = 1
SET @iRows = 10000
WHILE @iRowsAffected <> 0
BEGIN
    -- Copy the next batch of old orders into the archive table
    INSERT INTO a_tblOrders
    SELECT TOP (@iRows) * FROM LiveDB..tblOrders WHERE [date] < @dtTo

    -- Remove the archived rows from the live table
    DELETE LiveDB..tblOrders
    FROM a_tblOrders AS tArchive
    WHERE tArchive.shopID = LiveDB..tblOrders.shopID
      AND tArchive.orderID = LiveDB..tblOrders.orderID

    SET @iRowsAffected = @@ROWCOUNT
    SELECT @iRowsAffected AS 'rows'

    -- Checkpoint both databases so the log space can be reused
    CHECKPOINT
    EXEC('USE LiveDB; CHECKPOINT')
END
Hi
I'm going to archive lots of old orders to a new database. The live server may run out of disk space soon, so I want a separate database of very old orders on another DB server. That means moving the data can't grow the log file too much.

The SP is run on the new DB. Both databases use the simple recovery model, so the log is truncated on checkpoints.

If I did the insert into the new table and the delete from the old table in one go, it would fill the log file, so I've tried to break it down into several smaller moves of 10,000 orders each. I expected the log file to grow on the first iterations, but once it had the space, to reuse that space as data is written to disk.

What I'm actually seeing is the log file of the LiveDB (the source of the data) continuously growing in equal increments, and its final size doesn't decrease once the SP is finished.
I have to issue a DBCC SHRINKFILE (livedb);
August 13, 2010 at 7:50 am
Try building in a small delay between iterations - two or three seconds, maybe. This may allow time for the transaction log to clear before the next iteration. I would also advise wrapping the INSERT and DELETE in a transaction. If the delete fails, you want the insert to roll back.
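A sketch of one iteration with the transaction and delay added, reusing the table and column names from the procedure above. SET XACT_ABORT ON is an assumption on my part; it makes any run-time error roll back the open transaction, so a failed DELETE also undoes the INSERT:

```sql
SET XACT_ABORT ON  -- assumed: any run-time error rolls back the whole batch

WHILE @iRowsAffected <> 0
BEGIN
    BEGIN TRANSACTION

    INSERT INTO a_tblOrders
    SELECT TOP (@iRows) * FROM LiveDB..tblOrders WHERE [date] < @dtTo

    DELETE LiveDB..tblOrders
    FROM a_tblOrders AS tArchive
    WHERE tArchive.shopID = LiveDB..tblOrders.shopID
      AND tArchive.orderID = LiveDB..tblOrders.orderID

    SET @iRowsAffected = @@ROWCOUNT

    COMMIT TRANSACTION

    CHECKPOINT
    EXEC('USE LiveDB; CHECKPOINT')

    WAITFOR DELAY '00:00:03'  -- pause so the log can clear before the next batch
END
```

The delay value is only a suggestion; tune it to how quickly your log clears.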
The size of the log will not decrease at the end of processing, unless you set the auto shrink option on (not recommended). You should only shrink it if you are confident it isn't going to grow to that size again in the normal course of events.
John
August 13, 2010 at 8:35 am
Thanks, will wrap it in a transaction.
August 17, 2010 at 10:01 pm
Hi,
After the transaction commits, issue a checkpoint, wait a minute or two, and then start the next batch.
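For example, at the end of each batch (the two-minute delay is just the suggestion above, not a requirement):

```sql
CHECKPOINT                      -- flush dirty pages; SIMPLE recovery can then reuse the log
EXEC('USE LiveDB; CHECKPOINT')  -- do the same in the source database
WAITFOR DELAY '00:02:00'        -- wait a couple of minutes before the next batch
```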
Regards
August 17, 2010 at 11:17 pm
What recovery model?
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
August 19, 2010 at 3:13 pm
Both DBs use the simple recovery model.
August 19, 2010 at 3:48 pm
I will not be able to post for the next 3 days - not being rude 🙂
August 19, 2010 at 3:58 pm
Just stick a CHECKPOINT in between the iterations. No need to put a delay in.
Gail Shaw