Transaction log growing too large

  • Here is what I am trying to do.

    We have a database whose columns use non-unicode data types.  Our latest version of the product now supports international languages, so we need to convert existing customer databases to support this as well.  As part of the upgrade to the latest version, we are converting all non-unicode columns to unicode data types to support international languages.

    While upgrading, the transaction log grows terribly large and runs out of disk space, depending on the amount of data in the customer databases.  Can we change the database settings at runtime so that the upgrade does not log any operations to the transaction log?

    Though the transaction log is set to Truncate on Checkpoint, it keeps growing without truncating, I guess because a large portion of it is active and hence cannot be truncated.
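
    For context, each column conversion presumably comes down to a statement like the one below (the table, column and length are placeholders, not from this thread); every such ALTER rewrites the whole column and is logged as a single transaction, which is what drives the log growth:

    ALTER TABLE dbo.Customers ALTER COLUMN CustomerName nvarchar(100) NULL   -- varchar -> nvarchar, fully logged in one transaction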

     

  • Breaking your job into smaller portions is an option.

  • I agree with wz700; try to convert table-by-table. What do you mean by the word UPGRADE? An application upgrade? A SQL version upgrade? What do the scripts do? Is there a "Begin Transaction" statement at the beginning of the conversion script and a "Commit Transaction" at the end?

    If it is not possible to divide the job into smaller parts, then get a server with BIG storage and upgrade the customer databases on your own hardware.

    Regards, Yelena Varsha

  • Depending on your recovery requirements, you could set your database to "Simple Recovery" temporarily while you run some of your conversions, then set it back to "Full Recovery".  Or you could do frequent transaction log backups.
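
    On SQL 2000 or later that switch could be scripted along these lines (MyCustomerDB and the backup path are placeholders):

    ALTER DATABASE MyCustomerDB SET RECOVERY SIMPLE    -- log space is reused at each checkpoint

    -- ... run the conversion here, ideally in batches ...

    ALTER DATABASE MyCustomerDB SET RECOVERY FULL      -- switch back when the conversion is done
    BACKUP DATABASE MyCustomerDB TO DISK = 'D:\Backups\MyCustomerDB.bak'   -- full backup to restart the log backup chain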

  • "Simple Recovery" or "frequent transaction log backups" won't help if the conversion is one large transaction.

    Simple Recovery mode will clear the log once a checkpoint is reached, e.g. after a transaction commits. A transaction log backup only clears the inactive transaction records off the log; it doesn't shrink the file.

    Changing data types using an ALTER TABLE is going to update all the data in the table in one large transaction. I would create a second table that mirrors the first table but with the new data types, then "move" records across in batches, e.g. copy a few thousand records to the new table, then delete them from the old table. Rinse and repeat until all records are transferred (a rough sketch follows this post). This will keep your transaction size down.

     

    --------------------
    Colt 45 - the original point and click interface
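
    A minimal sketch of that batch-move approach, assuming a hypothetical dbo.Customers (varchar columns) being migrated into a pre-created dbo.Customers_New with the same columns as nvarchar and a key column CustomerID; the names, batch size and backup path are illustrative only:

    DECLARE @rows int
    SET @rows = 1
    SET ROWCOUNT 5000                            -- cap each pass at 5000 rows

    WHILE @rows > 0
    BEGIN
        BEGIN TRANSACTION

        -- copy the next batch of rows that have not been moved yet
        INSERT INTO dbo.Customers_New (CustomerID, CustomerName)
        SELECT c.CustomerID, c.CustomerName
        FROM dbo.Customers c
        WHERE NOT EXISTS (SELECT 1 FROM dbo.Customers_New n
                          WHERE n.CustomerID = c.CustomerID)

        SET @rows = @@ROWCOUNT                   -- rows copied in this pass

        -- remove the copied rows from the old table
        DELETE c
        FROM dbo.Customers c
        JOIN dbo.Customers_New n ON n.CustomerID = c.CustomerID

        COMMIT TRANSACTION

        -- in full recovery, back up the log between batches so the space can be reused
        BACKUP LOG MyCustomerDB TO DISK = 'D:\Backups\MyCustomerDB_log.bak'
    END

    SET ROWCOUNT 0                               -- back to unlimited row counts

    Once the old table is empty it could be dropped and the new table renamed into place with sp_rename.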

  • I like Phil's suggestion. You need to break the conversion into batches, which will not fill up the log in simple mode, or run a log backup or dump between batches (an example follows).
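
    The "dump" variant of that between-batch log clear, as it existed on SQL 2000 (deprecated and later removed; the database name is a placeholder), would be:

    BACKUP LOG MyCustomerDB WITH TRUNCATE_ONLY   -- discards inactive log records without keeping a log backup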

  • If you use DTS to move data, you can move one large table but specify a batch size, so that SQL will break it into batches for you. Then you don't need any special coding or logic to create the batches.

  • Dear friend,

    Use the following options and run them in SQL Query Analyzer.

    In place of 'pubs', use your database name. This will set the autoshrink database option on and also set the database to truncate the log on checkpoint. This truncate on checkpoint only works when the autoshrink option is set on.

    EXEC sp_dboption 'pubs', 'trunc. log on chkpt.', 'on'   -- truncate the log at every checkpoint

    EXEC sp_dboption 'pubs', 'autoshrink', 'on'             -- shrink the database files automatically

    DBCC SHRINKDATABASE ('pubs', 0, TRUNCATEONLY)           -- release unused space at the end of the files

    cheers

    leoaugust
