Transaction Log Activity

  • Our organization takes daily full backups, and hourly t-log backups.

    It's not uncommon to see size characteristics like these on some of our bigger DBs:

    Data file: 15GB

    T-log file: 25GB, 99.67% free

    The databases only seem to use that log space during the weekly index maintenance.

    Is this unusual? I'm getting heat from the SAN team...they are salivating over the whitespace. Any way to deduce what is using the space on the weekends? (One sampling approach is sketched below.)

    Thanks!
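
    One way to deduce it is to sample DBCC SQLPERF(LOGSPACE) on a schedule across the weekend and see exactly when the log-used percentage jumps, then line the spikes up with job history. A minimal sketch (the history table name is made up; run the INSERT from an Agent job every 15-30 minutes during the maintenance window):

    CREATE TABLE dbo.LogSpaceHistory
    (
          CaptureTime     DATETIME       NOT NULL DEFAULT GETDATE()
        , DatabaseName    SYSNAME        NOT NULL
        , LogSizeMB       DECIMAL(18, 2) NOT NULL
        , LogSpaceUsedPct DECIMAL(5, 2)  NOT NULL
        , [Status]        INT            NOT NULL
    );

    -- DBCC SQLPERF(LOGSPACE) returns one row per database:
    -- Database Name, Log Size (MB), Log Space Used (%), Status
    INSERT INTO dbo.LogSpaceHistory (DatabaseName, LogSizeMB, LogSpaceUsedPct, [Status])
    EXEC ('DBCC SQLPERF(LOGSPACE)');

    Correlate the capture times of the spikes with the job history and you know which step is chewing through the log.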

  • Easy, don't blindly rebuild / reorg every single darn index in the DB (which is what the canned maintenance plans do). A simplified sketch of the filtering idea is at the end of this post.

    This is what I use:

    http://sqlfool.com/2010/04/index-defrag-script-v4-0/

    Over time I was able to cut the log use by 99%.

    My new rule of thumb for the log size is 2.5x the biggest table in the system. So I went from 20 GB to 5 GB... a nice saving, compounded by the fact that you don't have an extra 20 GB t-log backup lying around every week.
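
    This isn't the linked script itself, just a minimal sketch of the filtering idea it relies on (SQL Server 2005+). The 10% / 1,000-page filters are the ones that come up later in this discussion; 30% is the usual rebuild-vs-reorganize cutoff:

    SELECT
          OBJECT_NAME(ips.[object_id])          AS TableName
        , i.name                                AS IndexName
        , ips.avg_fragmentation_in_percent
        , ips.page_count
        , CASE WHEN ips.avg_fragmentation_in_percent >= 30
               THEN 'REBUILD'
               ELSE 'REORGANIZE'
          END                                   AS SuggestedAction
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i
          ON  i.[object_id] = ips.[object_id]
          AND i.index_id    = ips.index_id
    WHERE ips.avg_fragmentation_in_percent >= 10  -- minimum fragmentation filter
      AND ips.page_count >= 1000                  -- ignore tiny indexes
      AND i.index_id > 0                          -- skip heaps
    ORDER BY ips.avg_fragmentation_in_percent DESC;

    Only the indexes that survive those filters get touched, which is what keeps the weekly log activity (and the t-log backups) small.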

  • We use something similar...It's not blindly rebuilding them.

  • Here's my full thought process to get there.

    http://qa.sqlservercentral.com/Forums/Topic1126147-146-1.aspx

  • Grubb (7/27/2011)


    We use something similar...It's not blindly rebuilding them.

    This should give you the max t-log backup size per hour per day (or whatever your schedule is) for the last month; a weekday-grouped variant follows the query below.

    What's the max value you get?

    Do you have a logging history to see what indexes get defragmented on each run?

    Any big ETL process that loads / deletes / updates a ton of data during the weekend?

    SELECT
        bs.database_name
        -- , DATEADD(D, 0, DATEDIFF(D, 0, bs.backup_start_date)) AS BackupDate
        , CONVERT(VARCHAR(50), DATEADD(s, -1 * DATEPART(s, bs.backup_start_date), bs.backup_start_date), 108) AS BackupDate_minutes
        -- , bs.backup_start_date
        -- , CONVERT(DECIMAL(18, 1), DATEDIFF(s, bs.backup_start_date, bs.backup_finish_date) / 60.0) AS MinutesForBackup
        , SUM(CONVERT(DECIMAL(18, 3), bs.backup_size / 1024 / 1024 / 1024)) AS Total_GB_backup_size
        , COUNT(*) AS Cnt
        , AVG(CONVERT(DECIMAL(18, 3), bs.backup_size / 1024 / 1024)) AS Average
        , MAX(CONVERT(DECIMAL(18, 3), bs.backup_size / 1024 / 1024)) AS Maximum
        -- , ROW_NUMBER() OVER ( PARTITION BY bs.database_name ORDER BY bs.backup_start_date ) AS seqFirst
        -- , ROW_NUMBER() OVER ( PARTITION BY bs.database_name ORDER BY bs.backup_start_date DESC ) AS seqLast
        -- , bs.[type]
    FROM
        msdb.dbo.backupset bs
    WHERE
        bs.[type] = 'L'
        -- AND name IS NULL
        AND bs.backup_start_date >= '2011-06-24'
        -- AND bs.backup_size > 1024 * 1024 * 25 -- min size in MB
        -- AND DATEPART(n, bs.backup_start_date) = 15
    GROUP BY bs.database_name, CONVERT(VARCHAR(50), DATEADD(s, -1 * DATEPART(s, bs.backup_start_date), bs.backup_start_date), 108)
    -- HAVING AVG(CONVERT(DECIMAL(18, 3), bs.backup_size / 1024 / 1024)) > 25
    ORDER BY CONVERT(VARCHAR(50), DATEADD(s, -1 * DATEPART(s, bs.backup_start_date), bs.backup_start_date), 108)
    -- ORDER BY bs.backup_start_date
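
    If the question is specifically what happens on the weekends, a variant of the same idea (grouped by weekday instead of time of day; a sketch, not tested against your history) makes the pattern obvious:

    SELECT
          bs.database_name
        , DATENAME(weekday, bs.backup_start_date)                           AS BackupDay
        , SUM(CONVERT(DECIMAL(18, 3), bs.backup_size / 1024 / 1024 / 1024)) AS Total_GB
        , MAX(CONVERT(DECIMAL(18, 3), bs.backup_size / 1024 / 1024))        AS Max_MB
    FROM msdb.dbo.backupset bs
    WHERE bs.[type] = 'L'
      AND bs.backup_start_date >= DATEADD(MONTH, -1, GETDATE())
    GROUP BY bs.database_name, DATENAME(weekday, bs.backup_start_date)
    ORDER BY bs.database_name, Total_GB DESC;

    If Saturday or Sunday dwarfs the weekdays, the index maintenance window is what's driving the log size.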

  • Thanks for the suggestions...

    So, it's not unusual, then, to see such log usage during index rebuilds, based on the size of the large tables.....

  • Grubb (7/27/2011)


    Thanks for the suggestions...

    So, it's not unusual, then, to see such log usage during index rebuilds, based on the size of the large tables.....

    If you tell me that your DB only has one table with 15 GB of data in it, then yes, it's normal.

    I still think there's something fishy going on. Maybe your filters for selecting candidate indexes are too aggressive (like a 5% fragmentation threshold), so almost everything qualifies on every run.

  • Yes, the low filter is 5%....Probably a bit low, eh? Perhaps 10%...

    So, the difference between 5 and 10 is enough to really bloat it?

    OK, so this was a general discussion. I do have logs, and I can tell you the max t-log sizes, but for that, I'll have to zero in on a particular database. I think I'll do that...Maybe we can solve this. I'd like to be on good terms with the DASD guys, but not at the expense of performance.

    Thanks, Ninja!

  • Grubb (7/28/2011)


    Yes, the low filter is 5%....Probably a bit low, eh? Perhaps 10%...

    So, the difference between 5 and 10 is enough to really bloat it?

    OK, so this was a general discussion. I do have logs, and I can tell you the max t-log sizes, but for that, I'll have to zero in on a particular database. I think I'll do that...Maybe we can solve this. I'd like to be on good terms with the DASD guys, but not at the expense of performance.

    Thanks, Ninja!

    It's not the filter itself, it's what it works on. If the fill factor is wrong on your top 5-10 tables, they will be rebuilt / reorganized EVERY TIME (a quick fill-factor check is sketched at the end of this post).

    Most of my tables go weeks or months (if ever) between maintenance runs.

    I use these settings: 10% minimum fragmentation, 1,000 pages minimum. Both of those could be set a little higher (confirm with a load test).

    Please read the thread I linked to for the whole process I went through.
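
    A quick way to check the fill-factor angle (a sketch; adjust TOP and the filters as needed): list the fill factor on the biggest indexes and see whether your largest tables are the ones qualifying for maintenance every run.

    SELECT TOP (10)
          OBJECT_NAME(i.[object_id])  AS TableName
        , i.name                      AS IndexName
        , i.fill_factor               -- 0 means the server default (effectively 100)
        , SUM(ps.used_page_count)     AS used_pages
    FROM sys.indexes AS i
    JOIN sys.dm_db_partition_stats AS ps
          ON  ps.[object_id] = i.[object_id]
          AND ps.index_id    = i.index_id
    WHERE i.index_id > 0              -- skip heaps
    GROUP BY i.[object_id], i.name, i.fill_factor
    ORDER BY used_pages DESC;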

  • I'll re-read it today, with this information in mind....
