Puzzled by simple recovery model and transaction log

  • Guys,

    Please correct me if my understanding is wrong about the database using the simple recovery model.  Below is my understanding:

If the database is using the simple recovery model, the transaction log is truncated each time a CHECKPOINT is issued.  As far as I know, the system can issue a CHECKPOINT at any time (whenever certain criteria are satisfied).  This implies that backing up the transaction log of a database set to the simple recovery model is pretty useless, no?

    If so, then why does SQL Server allow us to back up the transaction log (e.g. through a maintenance plan) for databases using the simple recovery model?  Why doesn't it let us know that it will not do us much good to do so?

    Thanks a lot

  • Hi there,

Your BACKUP LOG statement will fail, as you expect.  I'm not quite sure why MSFT lets you set up a maintenance plan which does this - but the backups will fail.
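If you want to see this for yourself, here is a minimal sketch - the database and path names are made up:

```sql
-- Check the recovery model first (SQL Server 2005+):
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = N'MyDatabase';

-- Under SIMPLE recovery this fails with error 4208:
-- "The statement BACKUP LOG is not allowed while the recovery model is SIMPLE."
BACKUP LOG MyDatabase TO DISK = N'C:\Backups\MyDatabase_log.trn';
```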

    Hope this helps,

    - James

    --
    James Moore
    Red Gate Software Ltd

  • Thanks James - you are correct - I actually learned it here:

    http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=79030

You are almost right about transaction log behaviour in Simple mode.  You need to take into account the impact of virtual log files (see BOL for details of these) as these can affect the amount of disk space in use.

    SQL keeps track of the log sequence number (LSN) of the oldest open transaction.  The virtual log file containing the oldest open transaction cannot be truncated, and neither can any virtual log files containing later LSNs, even if there are no active transactions using those virtual log files.  This can result in an old but open transaction keeping a large proportion of the physical log file(s) in use.

    When the oldest open transaction ends (commit or rollback), SQL notes which virtual log file is now retained by the new oldest transaction.  Any virtual log files between the current retention point and the previous one are then marked as eligible for truncation.  At the next checkpoint the log is truncated to a virtual log file boundary and the newly released virtual log files can be reused.
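If you suspect an old open transaction is pinning the log, here is a quick way to check (database name is made up; the catalog view needs SQL Server 2005 or later):

```sql
-- What is preventing log truncation?  ACTIVE_TRANSACTION points at an open transaction.
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'MyDatabase';

-- Show the oldest active transaction in the current database, with its SPID and start time:
USE MyDatabase;
DBCC OPENTRAN;
```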

Original author: https://github.com/SQL-FineBuild/Common/wiki/ 1-click install and best practice configuration of SQL Server 2019, 2017, 2016, 2014, 2012, 2008 R2, 2008 and 2005.

    When I give food to the poor they call me a saint. When I ask why they are poor they call me a communist - Archbishop Hélder Câmara

  • Guys,

    Do we ever need to shrink the transaction log of a database following a simple recovery model?

I am assuming that the transaction log can potentially grow a lot, for example if we perform a huge insert into a table.  Or would that not be logged in this case?

    Please let me know whether I am correct about this.

    Thanks

  • The normal recommendation about shrinking any database file is: do NOT do it.

    It is only worth shrinking a database file if you expect the smaller size to remain long-term.  If the file is likely to expand within 3 months then it is best to not shrink it.

Repeated shrinking and expansion of files will cause bad disk fragmentation and will harm performance.  Disk fragmentation is different from index fragmentation and can only be cured by a disk defrag.  Many installations find it difficult to get a window to do disk defragmentation, so performance issues caused by this get left long-term.

    You should size your database files to cope with expected growth for the next year.  The growth increment should then be set as appropriate for the file size, and is best specified in MB so you have control over space usage.

    For SQL Server 2005, look at 'instant file initialization' in BOL, and set the appropriate Windows right to get this behaviour.
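As a sketch of the above, with made-up database, file names and sizes, pre-sizing the files and setting a fixed MB growth increment looks like this:

```sql
-- Pre-size data and log files for a year's expected growth,
-- with the growth increment in MB rather than percent:
ALTER DATABASE MyDatabase
MODIFY FILE (NAME = MyDatabase_data, SIZE = 20480MB, FILEGROWTH = 1024MB);

ALTER DATABASE MyDatabase
MODIFY FILE (NAME = MyDatabase_log, SIZE = 4096MB, FILEGROWTH = 512MB);
```

Note that instant file initialization only applies to data files - log files are always zero-initialized when they grow, which is another reason to pre-size the log rather than rely on autogrowth.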


  • Thanks Ed.  I actually learned more about it and here is the summary:

For a database using the simple recovery model the transaction log CAN physically grow - and although it will be truncated every time a CHECKPOINT is executed, the file size will remain the same.

    If it grows due to periodic maintenance work that we expect to happen again and again (e.g. a reindexing job), then we should not shrink it, as the space will be needed by the transaction log again, and shrinking it now will waste resources both now and later, when it has to be extended again.

    If, however, it grows unexpectedly, because we made a huge delete that we would not expect to happen often, we might consider shrinking the transaction log file to reclaim disk space - that decision has to come after weighing the pros (disk space) against the cons (resource utilization during the shrink) and making the proper call!
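If you do decide a one-off shrink is justified, a sketch (the logical file name and target size are made up):

```sql
USE MyDatabase;

-- Find the logical name and current size of the log file:
SELECT name, size/128 AS size_mb
FROM sys.database_files
WHERE type_desc = 'LOG';

-- Shrink the log to a 1 GB target (the target is given in MB):
DBCC SHRINKFILE (MyDatabase_log, 1024);
```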

  • sql_er,

do not do a huge delete! If you expect to delete all rows in a table, use TRUNCATE TABLE; it is a minimally logged operation (it logs page deallocations rather than individual row deletes). If you expect to delete many but not all rows, it may be better to limit the number of rows participating in each transaction using ROWCOUNT, as in these posts:

    For Delete (see reply of Greg Johnson)

    http://qa.sqlservercentral.com/forums/shwmessage.aspx?forumid=8&messageid=152875&p=2

    For Update

    http://qa.sqlservercentral.com/forums/shwmessage.aspx?forumid=5&messageid=315644
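The batching idea in those links can be sketched like this (table, column and batch size are all made up; DELETE TOP needs SQL Server 2005, while the SET ROWCOUNT approach in the linked posts also works on 2000):

```sql
-- Delete in small batches so each transaction stays short and, under SIMPLE
-- recovery, the log space can be reused at the checkpoints between batches.
DECLARE @rows INT;
SET @rows = 1;

WHILE @rows > 0
BEGIN
    DELETE TOP (10000) FROM dbo.BigTable
    WHERE CreatedDate < '20060101';

    SET @rows = @@ROWCOUNT;
END;
```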


Regards,
    Yelena Varsha

  • The only time I usually shrink a database is when I've backed up a Prod DB (set to FULL recovery) and restored it down to a Dev box.  I shrink the DB first, then change it to SIMPLE recovery because I don't need to record the transactions I create in DEV.  I just need to have the data to make sure my "fixes" work correctly.
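That routine might look something like this (database, logical file names and paths are all made up):

```sql
-- Restore the production backup onto the dev box:
RESTORE DATABASE DevCopy
FROM DISK = N'C:\Backups\Prod_full.bak'
WITH MOVE N'Prod_data' TO N'D:\DevData\DevCopy.mdf',
     MOVE N'Prod_log'  TO N'D:\DevData\DevCopy_log.ldf',
     REPLACE;

-- Shrink first, then switch to SIMPLE so dev transactions don't pile up in the log:
USE DevCopy;
DBCC SHRINKFILE (Prod_log, 512);   -- the logical file name keeps its prod name after restore
ALTER DATABASE DevCopy SET RECOVERY SIMPLE;
```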

Brandie Tarvin, MCITP Database Administrator
    LiveJournal Blog: http://brandietarvin.livejournal.com/
    On LinkedIn!, Google+, and Twitter.
    Freelance Writer: Shadowrun
    Latchkeys: Nevermore, Latchkeys: The Bootleg War, and Latchkeys: Roscoes in the Night are now available on Nook and Kindle.

I know a good reason to shrink log files. A database was set up for full recovery, but no regular t-log backups were scheduled (the engineer who set it up thought the log was some sort of history and figured he didn't need to back it up). The system on the client site ran for several years (without any oversight from a DBA), filled a 20 GB hard drive with t-log, and crashed. That's when I was called in to look at it. It was a 10 GB t-log for a 1 GB database! Yes, I shrunk that one! Then I sent an email to the other engineers explaining the care and feeding of transaction logs.

    This story is also a very good argument for having a dba at least tangentially responsible for every database server, no matter how small and simple it appears to be.
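For anyone cleaning up a similar mess, a sketch of the fix (database name and paths hypothetical; SQL 2005-era syntax):

```sql
-- Switch to SIMPLE to break the log chain and allow truncation at the next checkpoint:
ALTER DATABASE ClientDB SET RECOVERY SIMPLE;

USE ClientDB;
CHECKPOINT;
DBCC SHRINKFILE (ClientDB_log, 512);

-- Either stay in SIMPLE, or go back to FULL and immediately restart the backup chain
-- with a full backup, followed by scheduled t-log backups:
ALTER DATABASE ClientDB SET RECOVERY FULL;
BACKUP DATABASE ClientDB TO DISK = N'C:\Backups\ClientDB_full.bak';
```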


  • Stephanie,

No kidding!  I had a 6-week contract once for a company that only had developers on staff.  And my very first job was creating translog backups.  They had started having hardware problems and finally figured out they were losing a LOT of transactions when they had to restore to a new server.

    Once businesses figure out the cost of not having a trans log, it's usually not too hard to convince them to keep a few around as pets. @=)

