DTS - Table Lock with Use Fast Load

  • I've read in many places that it is usually best to use the "Table lock" option with the "Use Fast Load" option.  My question is this:  which table gets locked - the source or the destination table?  None of the documentation seems to mention this, so I guess it is just assumed it would be the source table - but I've learned in this business to NEVER assume!

    I am pulling data from the Data Warehouse and do not want to lock the source table.  The destination table I would definitely want to lock if this helps performance and reduces the logging.  Thanks for your help!

  • The table lock option controls whether row locking or table locking is used for the destination table.

    If the source is a table (rather than a text file, spreadsheet, etc.), it will also be locked using shared read locks.  These may be row locks, but they may escalate to page or table locks.  This is normal behavior that is beyond your control.

    So bad assumption #1 was that the table lock option refers to the source table, when the other options on that tab refer to the destination table (as spelled out in BOL).  Bad assumption #2 is that you don't want to lock the source table.  Not all locks are exclusive; you usually want a shared lock on the source so that updates or deletes can't change the data while you're reading it.  There are also schema locks that prevent the table from being dropped while you read it.
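
    For illustration, here is one way to watch those shared locks happen (a minimal sketch assuming a SQL Server 2000-era instance, since DTS is involved; the table name is made up and sp_lock is the old lock-listing procedure):

        -- Session 1: hold the read open in a transaction so the locks stay visible.
        -- HOLDLOCK keeps the shared locks until the transaction ends.
        BEGIN TRAN;
        SELECT COUNT(*) FROM dbo.SourceTable WITH (HOLDLOCK);

        -- Session 2: list the locks currently held on the server.
        -- Expect S (shared) locks on keys, pages, or the whole table for session 1,
        -- depending on how far SQL Server escalated them.
        EXEC sp_lock;

        -- Session 1: release the locks.
        COMMIT TRAN;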

  • Thanks for the input, Scott.  So if I don't have the "Table lock" option checked, would this cause each insert to be logged?  This is what I am really trying to avoid...

  • The inserts will be logged regardless of whether row locking or table locking is used.

    If this is a serious problem, you might consider these approaches:

    Dump the data warehouse records to a text file, then use a Bulk Insert task to load them into a database that uses the simple or bulk-logged recovery model.  Put the dump file on the destination server and run the DTS package on the destination server to minimize network traffic and maximize import performance.  Bulk insert operations are minimally logged if several conditions are met (see BOL).
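
    The DTS Bulk Insert task essentially runs the T-SQL BULK INSERT statement, so the equivalent statement looks roughly like this (a sketch only; the table name, file path, and terminators are made up and must match your dump file):

        -- Run this (or the Bulk Insert task) on the destination server,
        -- against a database in the simple or bulk-logged recovery model.
        BULK INSERT dbo.DestinationTable
        FROM 'D:\loads\warehouse_extract.txt'
        WITH (
            FIELDTERMINATOR = '\t',   -- adjust to the dump file's delimiters
            ROWTERMINATOR   = '\n',
            TABLOCK,                  -- table lock on the destination, one of the minimal-logging conditions in BOL
            BATCHSIZE       = 50000   -- optional: commit in chunks rather than one giant batch
        );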

    Change the destination database to the simple recovery model (if it isn't already), then use a data pump task with a fixed batch size to commit every n rows.  The best value for n depends on your system and must be determined by experiment.  This avoids having the entire load pile up in the transaction log: it doesn't change the amount of data written to the log, but it doesn't require the entire insert to be active in the log at one time.  If the database was not in simple recovery originally, you will have to change back to the original recovery model and perform a full backup after the load is finished.
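
    A sketch of the recovery-model housekeeping around that load (the database and backup file names are made up; assumes the database normally runs in the full recovery model):

        -- Before the load: let the log truncate between batch commits.
        ALTER DATABASE DestinationDB SET RECOVERY SIMPLE;

        -- ... run the DTS data pump task here with its fixed batch size ...

        -- After the load: restore the original recovery model and take a full
        -- backup so log backups can resume from a clean starting point.
        ALTER DATABASE DestinationDB SET RECOVERY FULL;
        BACKUP DATABASE DestinationDB
            TO DISK = 'D:\backups\DestinationDB_postload.bak'
            WITH INIT;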
