Managing Transaction Logs

  • So might multiplexing the transaction log decrease performance? I mean, are transactions spread across the files, or are they written to one log (for one transaction) and then to the second log (for the next transaction, or maybe another transaction running in parallel)?

  • sameer.kasi200x (7/8/2010)


    So might multiplexing the transaction log decrease performance?

    No. It won't have any effect on performance.

    I mean, are transactions spread across the files, or are they written to one log (for one transaction) and then to the second log (for the next transaction, or maybe another transaction running in parallel)?

    Neither.

    Say there are two transaction log files for a DB. SQL uses one, beginning to end, then uses the second one, beginning to end, then goes back to the first one and starts again from the beginning (assuming that portion of the log has been marked as reusable). SQL will not use two log files in parallel.
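
    To see the physical log files and how the log is divided internally, a quick sketch (the database name is hypothetical):

```sql
-- List the log files for a database (MyDatabase is a hypothetical name)
USE MyDatabase;
SELECT name, type_desc, size * 8 / 1024 AS size_mb
FROM sys.database_files
WHERE type_desc = 'LOG';

-- DBCC LOGINFO lists the virtual log files (VLFs); SQL Server fills
-- them in sequence across the physical files, never in parallel
DBCC LOGINFO;
```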

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Am I right in thinking, then, that data doesn't "move" from the transaction log (ldf) to the data file (mdf)? The database engine modifies data pages in the buffer; these changes are written to the log cache, which must be recorded in the log file before the dirty pages can be flushed to the data file. Locking mechanisms are used to prevent system users from reading uncommitted data from the database, unless the system is specifically designed to allow this via specific isolation levels. Then, once committed, are the locks released?

    Apologies for asking what to many must be elementary questions.

    P.S. For those as ill-informed as myself, this SQL Server 2000 I/O basics article proved helpful. Old technology, but I'm hoping the principles are similar.

    http://technet.microsoft.com/en-us/library/cc966500.aspx

  • Chris Houghton (7/10/2010)


    Am I right in thinking, then, that data doesn't "move" from the transaction log (ldf) to the data file (mdf)? The database engine modifies data pages in the buffer; these changes are written to the log cache, which must be recorded in the log file before the dirty pages can be flushed to the data file.

    Yup. Absolutely.

    The log records in the log cache will be written to disk before the data pages are or when the transaction is committed, whichever happens first. The data pages will be written to the data file at some point later.
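
    The write-ahead behaviour described above can be observed with a small sketch (a read-only DMV query; no table names are assumed):

```sql
-- Count dirty pages (modified in memory, not yet flushed) for the
-- current database; their log records must be hardened to the log
-- file before the pages themselves can be written to the data file
SELECT COUNT(*) AS dirty_pages
FROM sys.dm_os_buffer_descriptors
WHERE database_id = DB_ID()
  AND is_modified = 1;

-- CHECKPOINT flushes dirty pages to the data file; the corresponding
-- log records were already written, per write-ahead logging
CHECKPOINT;
```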

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • GilaMonster (7/10/2010)


    Chris Houghton (7/10/2010)


    Am I right in thinking, then, that data doesn't "move" from the transaction log (ldf) to the data file (mdf)? The database engine modifies data pages in the buffer; these changes are written to the log cache, which must be recorded in the log file before the dirty pages can be flushed to the data file.

    Yup. Absolutely.

    The log records in the log cache will be written to disk before the data pages are or when the transaction is committed, whichever happens first. The data pages will be written to the data file at some point later.

    😀 Thank you, I think it's starting to sink in.

  • Gail, a minor typo:

    In full recovery model transaction log entries are kept for both database integrity and database recovery purposes. Inactive log records are retained in the transaction log until a lob backup occurs.

    Thanks for a great article,

    Rich

  • This is a very good article, and most of the information in it transcends all versions of SQL Server, so the fact that it is over 3 years old is moot. However, the statement "The growth increment must not be an exact multiple of 4GB. There's a bug in SQL if that exact size is used." piqued my curiosity. To which version of SQL Server does THIS statement apply? Certainly not all versions! Wouldn't it have been fixed by now? And what happens if one does specify a growth increment that is an exact multiple of 4GB? That would be very useful information. Does the growth increment limitation apply only to t-logs, or does it also affect database files?

  • Robert J. Browning (1/3/2012)


    This is a very good article, and most of the information in it transcends all versions of SQL Server, so the fact it is over 3 years old is moot.

    It's actually not 3 years old. This re-publication was a complete rewrite of the article. The original was a little too vague on many points and left too much out. So what you read is brand new (well, written in late October last year).

    However, the statement "The growth increment must not be an exact multiple of 4GB. There's a bug in SQL if that exact size is used." piqued my curiosity. To which version of SQL Server does THIS statement apply? Certainly not all versions!

    Yes, all current versions. (I haven't checked whether it's still there in 2012, and probably not the really old versions, i.e. pre-SQL 7.)

    http://www.sqlskills.com/BLOGS/PAUL/post/Bug-log-file-growth-broken-for-multiples-of-4GB.aspx

    That blog post is from May 2010.
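
    To steer clear of the bug, the growth increment just needs to miss the exact multiple; a sketch with hypothetical names:

```sql
-- Set the log autogrowth increment to 4000 MB rather than 4096 MB,
-- avoiding the exact-multiple-of-4GB growth bug referenced above
ALTER DATABASE MyDatabase
MODIFY FILE (NAME = MyDatabase_log, FILEGROWTH = 4000MB);
```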

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • I wrote a stored procedure that generates a "restore script" from a date-time parameter. (We can also do this with an LSN or a transaction mark.) I also have a version that can change the location and database name to turn it into a QA/Test refresh script. I will publish them at SQLSaturday #107. Oracle does this automatically; an Oracle DBA doesn't need to be concerned with which backup file to use. Restore by datetime, SCN, or restore point. Oracle is one leg up here.

    If you take transaction log backups every 15 minutes (and they succeed), your maximum data loss can only be up to 15 minutes.
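
    A point-in-time restore of the kind described can be sketched as follows (file paths, names, and the target time are all hypothetical):

```sql
-- Restore the last full backup without recovering,
-- then roll the log backups forward to a point in time
RESTORE DATABASE MyDatabase
FROM DISK = 'D:\Backups\MyDatabase_full.bak'
WITH NORECOVERY;

RESTORE LOG MyDatabase
FROM DISK = 'D:\Backups\MyDatabase_log1.trn'
WITH NORECOVERY, STOPAT = '2012-01-03 14:30:00';

-- The final log restore recovers the database at the target time
RESTORE LOG MyDatabase
FROM DISK = 'D:\Backups\MyDatabase_log2.trn'
WITH RECOVERY, STOPAT = '2012-01-03 14:30:00';
```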

    Re-writing is good. BOL has also been rewritten over the years; some earlier editions did not have as much information.

    Be careful. She is black-belt. 😛

    Jason

    http://dbace.us


  • This article again shows SQL Server's shortcomings.

    1) It defaults to full recovery model (Oracle is the opposite).

    2) It is not able to free unused space without side effects (Oracle does not have this problem).

    3) Restore commands have to actually specify the names of backup files (Oracle RMAN does not need to).

    4) A select can block an insert, update, etc. (not in Oracle).

    Despite this, some people again and again preach "don't use existing features". None of the SQL Server professionals really try to convince Microsoft to make the product better. If I pay for space usage and I see space is not going to be reused, I will want to free up the unused space allocation and keep the db running smoothly. I am not talking about the auto-shrink/auto-grow yo-yo; I am talking about getting a steady db to allocate adequate space (say, 20% free for growth) so it works well until the next maintenance window and you pay less (especially in a virtual environment).

    Jason

    http://dbace.us


  • Great Article Gail.

    Jason...AKA CirqueDeSQLeil
    _______________________________________________
    I have given a name to my pain...MCM SQL Server, MVP
    SQL RNNR
    Posting Performance Based Questions - Gail Shaw
    Learn Extended Events

  • The only time a log should be shrunk is if some abnormal database activity (or failed log backups) has resulted in the log growing far beyond the size it needs to be for the database activity. In this situation, the log can be shrunk as a once-off operation, reducing it back to the size that it was before the abnormal activity.

    I didn't see a recommendation for it, but what is the best method or T-SQL command to shrink the log back to its normal working size while keeping log file fragmentation to a minimum? Or, put another way, what is the best method to recreate a new log?

    Thanks,

    Bob

  • RLB (1/3/2012)


    The only time a log should be shrunk is if some abnormal database activity (or failed log backups) has resulted in the log growing far beyond the size it needs to be for the database activity. In this situation, the log can be shrunk as a once-off operation, reducing it back to the size that it was before the abnormal activity.

    I didn't see a recommendation for it, but what is the best method or T-SQL command to shrink the log back to its normal working size while keeping log file fragmentation to a minimum? Or, put another way, what is the best method to recreate a new log?

    It's in Kimberly's article that I referenced.

    Shrink the log to 0, then regrow it (maybe in a couple of steps) to the desired size.
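
    As a sketch of that once-off operation (file names and sizes are hypothetical):

```sql
USE MyDatabase;

-- Shrink the log file as close to 0 as possible
DBCC SHRINKFILE (MyDatabase_log, 0);

-- Regrow in a couple of steps to the desired size, so the
-- virtual log files are created at sensible sizes
ALTER DATABASE MyDatabase
MODIFY FILE (NAME = MyDatabase_log, SIZE = 4000MB);

ALTER DATABASE MyDatabase
MODIFY FILE (NAME = MyDatabase_log, SIZE = 8000MB);
```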

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • jswong05 (1/3/2012)


    This article again shows SQL Server's shortcomings.

    1) default to full recovery mode (Oracle is opposite)

    The default is whatever model is set to. If you want new databases to default to simple recovery, set model to simple recovery.
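
    For example:

```sql
-- New databases inherit their recovery model from the model database
ALTER DATABASE model SET RECOVERY SIMPLE;
```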

    2) not able to free unused space without side effect. (Oracle does not have this problem)

    What exactly do you mean by side effect? Fragmentation of the data files?

    3) restore commands have to actually specify name of backup files (Oracle RMAN does not need to)

    So how do you choose which backup you want to restore? (Especially if restoring a backup from another server.)

    And there is a way in SQL to restore without specifying file names: backup devices. Also, Management Studio can automatically pick up lists of backups if you select 'database' rather than 'device' and the backup history is intact. SQL 2012 is even better in that regard.
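
    A backup-device sketch (device and database names are hypothetical):

```sql
-- Register a named backup device pointing at a file,
-- so backup and restore can reference the device instead of a path
EXEC sp_addumpdevice 'disk', 'MyDbDevice', 'D:\Backups\MyDatabase.bak';

BACKUP DATABASE MyDatabase TO MyDbDevice;
RESTORE DATABASE MyDatabase FROM MyDbDevice WITH REPLACE;
```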

    4) select can block insert, update .... (not in Oracle)

    Read committed snapshot and snapshot isolation levels, introduced in SQL 2005. Readers no longer block writers and vice versa. It's your choice now whether you want locking or row versioning for isolation.
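
    Row versioning is a per-database setting; a sketch with a hypothetical database name:

```sql
-- Make plain READ COMMITTED use row versioning, so readers no
-- longer block writers (needs exclusive access to take effect)
ALTER DATABASE MyDatabase SET READ_COMMITTED_SNAPSHOT ON;

-- Or allow sessions to opt in to snapshot isolation explicitly:
ALTER DATABASE MyDatabase SET ALLOW_SNAPSHOT_ISOLATION ON;
-- then, in a session:
-- SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
```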

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Can you recommend a good algorithm for shrinking the log as a once-off operation, reducing it back to the size that it was before the abnormal activity?

