Transaction Log

Stairway to Transaction Log Management

Stairway to Transaction Log Management in SQL Server, Level 9: Monitoring the Transaction Log

  • Stairway Step

Our major goal in terms of log maintenance for all databases under our care is to optimize for write performance, in order to support all activities that require SQL Server to write to the log, including data modifications, data loads, index rebuilds, and so on. However, it's also important to keep an eye on possible log fragmentation, which, as described previously, can affect the performance of processes that need to read the log, such as log backups and the crash recovery process.
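
A quick, hedged illustration of the kind of check this level walks through (not taken from the article itself; it assumes SQL Server 2012 or later for the sys.dm_db_log_space_usage DMV):

-- Log size and usage for the current database (SQL Server 2012+)
SELECT total_log_size_in_bytes / 1048576.0 AS log_size_mb,
       used_log_space_in_bytes / 1048576.0 AS log_used_mb,
       used_log_space_in_percent
FROM   sys.dm_db_log_space_usage;

-- Log size and percentage used for every database on the instance
DBCC SQLPERF(LOGSPACE);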

2013-04-24

7,164 reads

SQL Server Transaction Log Management eBook Download

SQL Server Transaction Log Management by Tony Davis and Gail Shaw

  • Book

When a SQL Server database is operating smoothly and performing well, there is no need to be particularly aware of the transaction log, beyond ensuring that every database has an appropriate backup regime and restore plan in place. When things go wrong, however, a DBA's reputation depends on a deeper understanding of the transaction log, both what it does and how it works. An effective response to a crisis requires rapid decisions, and those decisions rest on understanding the log's role in ensuring data integrity.
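
The most basic element of such a backup regime is a regular transaction log backup for every database not using the SIMPLE recovery model. A minimal sketch, with a hypothetical database name (SalesDB) and backup path:

-- Routine log backup, assuming the FULL recovery model
BACKUP LOG SalesDB
TO DISK = N'D:\Backups\SalesDB_log.trn'
WITH CHECKSUM;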

2012-11-12

4,328 reads

Stairway to Transaction Log Management

Stairway to Transaction Log Management in SQL Server, Level 6: Managing the Log in BULK_LOGGED Recovery Model

  • Stairway Step

A DBA may consider switching a database to the BULK_LOGGED recovery model for short periods, for example during bulk load operations. When a database is operating in the BULK_LOGGED model, these operations, along with a few others such as index rebuilds, can be minimally logged and will therefore use much less space in the log.
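
A hedged sketch of that pattern: switch to BULK_LOGGED just for the load, switch back to FULL, and back up the log straight afterwards. The database name SalesDB, the table dbo.StagingOrders, and the file paths below are hypothetical.

ALTER DATABASE SalesDB SET RECOVERY BULK_LOGGED;

-- TABLOCK is one of the conditions required for the load to be minimally logged
BULK INSERT dbo.StagingOrders
FROM 'D:\Loads\orders.csv'
WITH (TABLOCK, FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');

ALTER DATABASE SalesDB SET RECOVERY FULL;

-- Take a log backup straight away; point-in-time restore is limited for any
-- log backup that contains minimally logged operations
BACKUP LOG SalesDB TO DISK = N'D:\Backups\SalesDB_log.trn';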

2012-11-07

5,061 reads

External Article

SQL Server Transaction Log Fragmentation: a Primer

  • Article

Generally, you will have no need to worry about the number of virtual log files in your transaction log. However, if you use the default settings for 'auto-grow', you can end up with enough 'fragmentation' in your transaction log to affect performance noticeably. How can this be avoided? How can you tell it's a problem? What do you do about it? Greg explains.
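
A minimal sketch of the kind of check and remedy involved (not drawn from the article; it assumes SQL Server 2016 SP2 or 2017+ for sys.dm_db_log_info, and the database name SalesDB, logical log file name SalesDB_log, and sizes are illustrative):

-- One row per virtual log file; a count running into the thousands is usually a warning sign
SELECT COUNT(*) AS vlf_count
FROM   sys.dm_db_log_info(DB_ID('SalesDB'));

-- To tidy an already-fragmented log: shrink it, then regrow it in large steps,
-- and set a fixed autogrowth increment instead of the default growth settings
USE SalesDB;
DBCC SHRINKFILE (SalesDB_log, 1);
ALTER DATABASE SalesDB
MODIFY FILE (NAME = SalesDB_log, SIZE = 8GB, FILEGROWTH = 512MB);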

2012-11-23

7,126 reads
