TRN Log File Size Assistance

  • Hi everyone. I have a problem and am not sure what the best course of action is. I understand that log file size settings are configuration- and utilization-specific, but I need a decent jumping-off point.

    Initially, our qa database was set up in the simple recovery model. This was before I came to the group. When I was given DBA responsibilities, I moved all of our mission-critical databases to the full recovery model with hourly trn (transaction log) backups. This has been our process for the last 6 or 7 months.

    Our mdf file is currently 53GB and our log file is 115GB. I know this can't be right. I checked the forums here and found a recommendation from Gail Shaw to run DBCC SQLPERF(LogSpace) for someone having a shrink file issue, so I ran this as well. I'm not sure if I should shrink the file or not, because I seem to remember an article or post indicating that shrinking the trn file is a bad thing to do. Can someone confirm (or not) this for me?

    My results are:

    Database Name   Log Size (MB)   Log Space Used (%)   Status
    master               21.61719        30.84568           0
    tempdb              600.4922         36.57912           0
    model                 0.4921875      76.19048           0
    msdb                 19.61719        31.34209           0
    qa               112358.2             0.1941266         0
    performance        2909.117           1.228996          0
    Queue              1004.367           2.415039          0
    Metrics            3935.18            0.9837766         0
    RMS                8500.68            6.151278          0
    Monitor             508.3047          2.417753          0
    Release              19.67969        23.04734           0
    Maintenance        2480.992           6.195976          0
    MemCheck             15.99219        21.5651            0

    The qa database is the problem right now, but most of my other utilization percentages appear to be awfully low. Should I be making a habit of shrinking the log file after I back it up?

    Also, the initial-size settings for the mdf and ldf files now show the current sizes (53GB for the mdf and 115GB for the ldf), but I am sure they didn't start out this way. Should these settings be changing on their own?

    Any assistance would be greatly appreciated. Please let me know if I should provide any other info.

    Cathy

    **update: The files are set to autogrow by a specified amount, not by a percentage. mdf by 5GB and ldf by 2GB.

  • Cathy DePaolo (7/15/2009)


    I'm not sure if I should shrink the file or not, because I seem to remember an article or post indicating that shrinking the trn file is a bad thing to do. Can someone confirm (or not) this for me?

    You can definitely shrink the log file, there is no harm in it, but what the articles probably pointed out is that you should not shrink it regularly or as a scheduled job, because that increases file-level fragmentation.

    But I am a little bit confused here: you said this database has been set to the full recovery model for the past 6 or 7 months, so does that mean you have not been taking log backups regularly?

    My results are:

    Database Name   Log Size (MB)   Log Space Used (%)   Status
    qa               112358.2             0.1941266         0

    The qa database is the problem right now, but most of my other utilization percentages appear to be awfully low.

    Only a minute part of the file is in use.

    Should I be making a habit of shrinking the log file after I back it up?

    No, you do not have to shrink the log file. When you take a log backup, the space inside the log is marked as reusable and new log records overwrite it, so there is no point in shrinking the file.

    Also, the initial settings for the mdf and ldf files are set to the current sizes (53GB for mdf and 115GB for ldf, but I am sure they didn't start out this way). Should these settings be changing on their own?

    Sorry, I'm not sure what you meant here.

    **update: The files are set to autogrow by a specified amount, not by a percentage. mdf by 5GB and ldf by 2GB.

    OK, that's a good way to go, but how frequently are your files growing?
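If it helps, one way to check the current sizes and growth settings is to query sys.master_files (this assumes SQL Server 2005 or later; on SQL 2000 you would look at sysfiles instead):

```sql
-- Current size and autogrow settings for every file of the qa database.
-- size and growth are stored in 8 KB pages (growth is a percentage
-- when is_percent_growth = 1).
SELECT name              AS logical_name,
       type_desc,                        -- ROWS (mdf/ndf) or LOG (ldf)
       size * 8 / 1024   AS size_mb,
       growth * 8 / 1024 AS growth_mb,
       is_percent_growth
FROM sys.master_files
WHERE database_id = DB_ID('qa');
```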

  • Hi Krishna. Thank you for your reply.

    Krishna Potlakayala (7/15/2009)


    Cathy DePaolo (7/15/2009)


    I'm not sure if I should shrink the file or not, because I seem to remember an article or post indicating that shrinking the trn file is a bad thing to do. Can someone confirm (or not) this for me?

    You can definitely shrink the log file, there is no harm in it, but what the articles probably pointed out is that you should not shrink it regularly or as a scheduled job, because that increases file-level fragmentation.

    But I am a little bit confused here: you said this database has been set to the full recovery model for the past 6 or 7 months, so does that mean you have not been taking log backups regularly?

    I perform a full backup every evening, followed by a transaction log backup every hour. Previously, the group was only backing up the database once a day, in the evening, and I believe they were actually truncating the log. They weren't even in the simple recovery model, but should have been, because they were always having log issues and were not aware of the backup process for trn files.

    My results are:

    Database Name   Log Size (MB)   Log Space Used (%)   Status
    qa               112358.2             0.1941266         0

    The qa database is the problem right now, but most of my other utilization percentages appear to be awfully low.

    Should I be making a habit of shrinking the log file after I back it up?

    No, you do not have to shrink the log file. When you take a log backup, the space inside the log is marked as reusable and new log records overwrite it, so there is no point in shrinking the file.

    Also, the initial settings for the mdf and ldf files are set to the current sizes (53GB for mdf and 115GB for ldf, but I am sure they didn't start out this way). Should these settings be changing on their own?

    Sorry, I'm not sure what you meant here.

    In the database properties settings, the initial size used to be around 30GB for the mdf and the ldf if my memory serves correctly. Now, they have been bumped up to the current sizes. At this point, the initial settings are not accurate because the database did not start out with such a large mdf or ldf file. Does that explain this better?

    **update: The files are set to autogrow by a specified amount, not by a percentage. mdf by 5GB and ldf by 2GB.

    OK, that's a good way to go, but how frequently are your files growing?

    I have only been tracking growth for two months, unfortunately, but I can give you the data that I have. On June 12th the mdf was 52365 MB, and it was still 52365 MB on July 15th. On June 12th the ldf was 36358 MB, but by July 15th it had jumped to 100358 MB. Today, the ldf is up over 115GB.

  • - You can expect the log file to be as big as the data file, depending on how "intelligently" you perform index rebuilds.

    - If someone performed massive updates, e.g. implemented a db model upgrade, your log may actually have grown. If the upgrade was still running during your log backup, chances are the open transaction would still have needed extra log file space.

    - Code loops (yes, they occur) may cause this log issue too.

    Best is to figure out what actually happened when it grew so huge.

    Should you shrink?

    That depends on whether you can find the cause of the growth, and whether that cause is likely to recur within a reasonable time.

    Johan


  • Since it is a QA box, why don't you set the recovery model to simple and back it up nightly? We only back up the system databases on our QA boxes, because the data gets restored from Production when needed. All the QA databases are set to the simple recovery model.

    We do not shrink our log files unless we know that an unusual data event has occurred. The thought is the file is the size it is because it needs to be that large to handle the normal workload.
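For reference, the switch Lee describes is a one-liner (take a final log backup first if you want log-chain coverage up to that point):

```sql
-- Switch the qa database to the simple recovery model.
-- Log records become reusable at each checkpoint, so hourly log
-- backups are no longer needed (or possible) afterwards.
ALTER DATABASE qa SET RECOVERY SIMPLE;
```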

  • I'd second Lee that a test/QA database will likely be fine in simple mode.

  • Cathy DePaolo (7/15/2009)


    Hi Krishna. Thank you for your reply.

    Krishna Potlakayala (7/15/2009)


    Cathy DePaolo (7/15/2009)


    I'm not sure if I should shrink the file or not, because I seem to remember an article or post indicating that shrinking the trn file is a bad thing to do. Can someone confirm (or not) this for me?

    You can definitely shrink the log file, there is no harm in it, but what the articles probably pointed out is that you should not shrink it regularly or as a scheduled job, because that increases file-level fragmentation.

    But I am a little bit confused here: you said this database has been set to the full recovery model for the past 6 or 7 months, so does that mean you have not been taking log backups regularly?

    I perform a full backup every evening, followed by a transaction log backup every hour. Previously, the group was only backing up the database once a day, in the evening, and I believe they were actually truncating the log. They weren't even in the simple recovery model, but should have been, because they were always having log issues and were not aware of the backup process for trn files.

    Are there any open transactions against this database? What operations do you perform against it? If you are taking a log backup every hour, the log file should not be this huge. Some usual/unusual operation must be the cause of such a huge log file.

    Also, the initial settings for the mdf and ldf files are set to the current sizes (53GB for mdf and 115GB for ldf, but I am sure they didn't start out this way). Should these settings be changing on their own?

    Sorry, I'm not sure what you meant here.

    In the database properties settings, the initial size used to be around 30GB for the mdf and the ldf if my memory serves correctly. Now, they have been bumped up to the current sizes. At this point, the initial settings are not accurate because the database did not start out with such a large mdf or ldf file. Does that explain this better?

    Yes, that's just the current size of the files; the "initial size" shown in the properties dialog increases with each autogrowth.

    **update: The files are set to autogrow by a specified amount, not by a percentage. mdf by 5GB and ldf by 2GB.

    OK, that's a good way to go, but how frequently are your files growing?

    I have only been tracking growth for two months, unfortunately, but I can give you the data that I have. On June 12th the mdf was 52365 MB, and it was still 52365 MB on July 15th. On June 12th the ldf was 36358 MB, but by July 15th it had jumped to 100358 MB. Today, the ldf is up over 115GB.

    These are small figures. It is always a better idea to track your database growth and set the autogrowth accordingly.
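A simple way to do that tracking is a daily snapshot job. A minimal sketch (the table name FileSizeHistory is my own invention, any name works):

```sql
-- Run once to create the history table, then schedule the INSERT
-- as a daily job to build a growth record per file.
CREATE TABLE dbo.FileSizeHistory (
    capture_date  date         NOT NULL,
    database_name sysname      NOT NULL,
    logical_name  sysname      NOT NULL,
    type_desc     nvarchar(60) NOT NULL,   -- ROWS or LOG
    size_mb       bigint       NOT NULL
);

INSERT dbo.FileSizeHistory
SELECT CONVERT(date, GETDATE()),
       DB_NAME(database_id),
       name,
       type_desc,
       size * 8 / 1024        -- size is stored in 8 KB pages
FROM sys.master_files;
```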

    But why is a qa database critical to your environment? Usually these databases are set to simple, and at the end of each night, or whenever the need arises, a fresh copy of the backup is restored.

  • Thank you everyone for your responses. My answers to your questions are below:

    Type of database:

    The database is named qa, but it's a production database on a production box.

    Indexing

    My index rebuilds are only performed when fragmentation is greater than 40%; otherwise, I perform a reorg. I then have a job that runs sp_updatestats after the rebuild/reorg procedure. Finally, I just added a new job that runs UPDATE STATISTICS WITH FULLSCAN every evening.
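Roughly, that rebuild-vs-reorg decision can be driven by sys.dm_db_index_physical_stats (SQL Server 2005+). A simplified sketch; the 5% floor below is illustrative, not part of the job described above:

```sql
-- List indexes in qa with their fragmentation, so a maintenance job
-- can decide REBUILD (> 40%) vs REORGANIZE.
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name                     AS index_name,
       ips.avg_fragmentation_in_percent,
       CASE WHEN ips.avg_fragmentation_in_percent > 40
            THEN 'REBUILD' ELSE 'REORGANIZE' END AS suggested_action
FROM sys.dm_db_index_physical_stats(DB_ID('qa'), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 5;  -- ignore trivially fragmented indexes
```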

    Types of transactions

    The db model never changes. We have many updates/deletes/inserts occurring throughout the day. There are no open transactions when running DBCC OPENTRAN('qa').
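Alongside DBCC OPENTRAN, on SQL Server 2005 and later sys.databases can report why log space is not being reused:

```sql
-- log_reuse_wait_desc explains what is currently holding the log:
-- NOTHING, LOG_BACKUP, ACTIVE_TRANSACTION, REPLICATION, etc.
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'qa';
```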

  • I just wanted to post an update. I did shrink the log file, and it is now around 48GB. I'll go through our other databases and do the same thing now.

    I was also only tracking db growth on a monthly basis. My mindset was focused on what we were using the database for, not on maintaining the database. I will bump that up to daily so I can tie growth to a particular day. If anyone has recommendations on managing the size of the files, I would love to hear them, especially as they relate to the results of DBCC SQLPERF(LogSpace). Perhaps there is a certain utilization percentage that indicates a shrink is required? Again, any tidbits here would be of tremendous help.

    Thank you again for the help.

    Cathy

  • Cathy,

    DBCC SQLPERF(logspace) will give the percentage of the log file currently occupied with log records. The percentage you see for a particular database will depend on the level of activity and the transaction log backup interval. There is no threshold value that would indicate a shrink is needed.

    A general rule of thumb is that the ldf file can be around 1/4 the size of the mdf file, although this may not hold in some situations. In your case, as the mdf file is 53 GB, the ldf file could be sized to around 12 GB.

    You may leave the autogrow option enabled with the file growth set in MB. Please select the growth increment carefully, so that you don't end up with too many virtual log files in the ldf.
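Setting both the size and the growth increment explicitly looks like this. The logical name qa_log is a placeholder; run sp_helpfile in the qa database to find the real one:

```sql
-- Resize the log to ~12 GB and grow in fixed 2 GB steps thereafter.
-- 'qa_log' is a placeholder logical file name, not taken from the thread.
ALTER DATABASE qa
    MODIFY FILE (NAME = qa_log, SIZE = 12GB, FILEGROWTH = 2GB);
```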

  • Thank you, ravivarma. Do you know why a standard DBCC SHRINKFILE on the log file would have stopped at 48GB if the file could safely be 12GB? I did not set a target size because I was concerned I would not set a value high enough for transactional processing, resulting in a severe performance hit.

    What scenarios am I looking at if I shrink the file again, this time setting the target size to 12GB? In other words, what are the negatives, and what risks am I taking? Am I better off leaving the log at its current size, or should I try to keep it down to 1/4 of the mdf unless activity on the database proves otherwise?

    Cathy

  • Shrinking a log file will not move log data!

    A log file can only be shrunk back to the last active virtual log file!

    Hence, you may have to execute the shrink procedure more than once!

    e.g.

    --INF: How to Shrink the SQL Server 7.0 Transaction Log
    -- SQL7    http://support.microsoft.com/support/kb/articles/q256/6/50.asp?id=256650
    -- SQL7    http://www.support.microsoft.com/kb/256650
    -- SQL2000 http://support.microsoft.com/kb/272318/en-us
    -- SQL2005 http://support.microsoft.com/kb/907511/en-us

    -- select db_name()
    -- select * from sysfiles

    -- THIS SCRIPT IS NOT INTENDED FOR SCHEDULED USAGE !! READ BOL and the urls !!

    SET NOCOUNT ON

    DECLARE @LogicalFileName sysname,
            @MaxMinutes INT,
            @NewSize INT

    -- *** MAKE SURE TO CHANGE THE NEXT 3 LINES WITH YOUR CRITERIA. ***
    SELECT @LogicalFileName = 'logicalname',  -- Use sp_helpfile to identify the logical file name that you want to shrink.
           @MaxMinutes = 10,                  -- Limit on time allowed to wrap log.
           @NewSize = 100                     -- in MB

    -- Setup / initialize
    DECLARE @OriginalSize int
    SELECT @OriginalSize = size  -- in 8K pages
    FROM sysfiles
    WHERE name = @LogicalFileName

    SELECT 'Original Size of ' + db_name() + ' LOG is ' +
           CONVERT(VARCHAR(30), @OriginalSize) + ' 8K pages or ' +
           CONVERT(VARCHAR(30), (@OriginalSize * 8 / 1024)) + 'MB'
    FROM sysfiles
    WHERE name = @LogicalFileName

    CREATE TABLE DummyTrans (DummyColumn char(8000) NOT NULL)

    -- Wrap log and truncate it.
    DECLARE @Counter INT,
            @StartTime DATETIME,
            @TruncLog VARCHAR(255)
    SELECT @StartTime = GETDATE(),
           @TruncLog = 'BACKUP LOG ' + db_name() + ' WITH TRUNCATE_ONLY'

    -- Try an initial shrink.
    DBCC SHRINKFILE (@LogicalFileName, @NewSize)
    EXEC (@TruncLog)

    -- Wrap the log if necessary.
    WHILE @MaxMinutes > DATEDIFF(mi, @StartTime, GETDATE())                          -- time has not expired
      AND @OriginalSize = (SELECT size FROM sysfiles WHERE name = @LogicalFileName)  -- the log has not shrunk
      AND (@OriginalSize * 8 / 1024) > @NewSize                                      -- the requested size is still smaller than the current size
    BEGIN  -- Outer loop.
        SELECT @Counter = 0
        WHILE ((@Counter < @OriginalSize / 16) AND (@Counter < 50000))
        BEGIN  -- update
            INSERT DummyTrans VALUES ('Fill Log')  -- Because it is a char field it inserts 8000 bytes.
            DELETE DummyTrans
            SELECT @Counter = @Counter + 1
        END  -- update
        EXEC (@TruncLog)  -- See if a truncation of the log shrinks it.
    END  -- outer loop

    SELECT 'Final Size of ' + db_name() + ' LOG is ' +
           CONVERT(VARCHAR(30), size) + ' 8K pages or ' +
           CONVERT(VARCHAR(30), (size * 8 / 1024)) + 'MB'
    FROM sysfiles
    WHERE name = @LogicalFileName

    DROP TABLE DummyTrans
    PRINT '*** Perform a full database backup ***'
    SET NOCOUNT OFF

    Johan


  • Cathy,

    We have good information in the BOL regarding this:

    Shrinking a log depends on first truncating the log. Log truncation does not reduce the size of a physical log file. However, it does reduce the size of the logical log and marks as inactive the virtual logs that do not hold any part of the logical log. A log shrink operation removes enough inactive virtual logs to reduce the log file to the requested size. The unit of the size reduction is a virtual log file.

    Virtual log files that hold part of the logical log cannot be freed. If all the virtual log files in a log file hold parts of the logical log, the file cannot be shrunk until a truncation marks as inactive one or more of the virtual logs at the end of the physical log.

    Now, how do you look at the virtual logs and their status? You can run DBCC LOGINFO('dbname'). Your output will be similar to what I have copied here: several rows, each row denoting one virtual log file. The extent to which a log file (ldf) can be shrunk can be determined from the 'Status' column. Any rows (virtual log files) with a value of '0' can be freed, but please note that the shrink happens from the end of the log file.

    If you look at the output I have copied below, you will see a value of '2' in the Status column six rows from the bottom. That means if I run DBCC SHRINKFILE now, I will be able to eliminate only the last five virtual log files. At that point you can run BACKUP LOG and then retry the shrink.

    FileId  FileSize   StartOffset  FSeqNo  Status  Parity  CreateLSN
    2       131006464  8192         161     0       64      0
    2       196608000  3080192000   174     0       64      168000018795700001
    2       196608000  3276800000   175     0       64      168000018795700001
    2       196608000  3473408000   176     0       64      168000018795700001
    2       196608000  3670016000   177     0       64      168000018795700001
    2       196608000  3866624000   178     2       64      168000018795700001   <-- last active VLF
    2       196608000  4063232000   0       0       0       168000018795700001
    2       196608000  4259840000   0       0       0       168000018795700001
    2       196608000  4456448000   0       0       0       168000018795700001
    2       196608000  4653056000   0       0       0       168000018795700001
    2       196608000  4849664000   0       0       0       168000018795700001
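The backup-then-retry cycle described above, in code. The disk path and the logical name qa_log are placeholders, not values from this thread:

```sql
-- Repeat until DBCC LOGINFO shows the active VLF near the front of the file:
BACKUP LOG qa TO DISK = 'X:\backups\qa_log.trn';  -- placeholder path

DBCC SHRINKFILE (qa_log, 12000);   -- target size in MB; 'qa_log' is a
                                   -- placeholder logical file name

DBCC LOGINFO('qa');                -- check how far the active VLF moved
```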

    Hope that was helpful.

    Regards,

    Ravi

  • What scenarios am I looking at if I shrink the file again, this time setting the target size to 12GB? In other words, what are the negatives, and what risks am I taking?

    There are no real negatives to shrinking your ldf file down to 12 GB. Just ensure that autogrow is enabled and the growth increment is sufficiently large, so that you don't end up with several hundred virtual log files in your ldf.

    It is not guaranteed that your log file will never grow over 12 GB. The 1/4-of-the-mdf figure is just a general rule, and you do come across exceptions. Take a look at the sizes of your transaction log backups over the last several days; that should give you a good idea of the amount of activity. Depending on these backup sizes, you can adjust your ldf file size.
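Those log backup sizes are recorded in msdb, so you don't have to dig through the backup files themselves:

```sql
-- Size of each transaction log backup of qa over the last 7 days.
-- backup_size is in bytes; type = 'L' marks a log backup.
SELECT backup_start_date,
       backup_size / 1048576.0 AS backup_size_mb
FROM msdb.dbo.backupset
WHERE database_name = 'qa'
  AND type = 'L'
  AND backup_start_date > DATEADD(day, -7, GETDATE())
ORDER BY backup_start_date;
```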

    Am I better off leaving the log at its current size, or should I try to keep it down to 1/4 of the mdf unless activity on the database proves otherwise?

    It depends. Is the current size causing a shortage of free space on your server drive?

    Ravi

  • Ravi (7/16/2009)


    Cathy,

    We have good information in the BOL regarding this:

    Shrinking a log depends on first truncating the log. Log truncation does not reduce the size of a physical log file. However, it does reduce the size of the logical log and marks as inactive the virtual logs that do not hold any part of the logical log. A log shrink operation removes enough inactive virtual logs to reduce the log file to the requested size. The unit of the size reduction is a virtual log file.

    Ravi, I did read BOL, and I go there often. I knew that truncating does not shrink the file. My confusion was over whether or not I was supposed to shrink. I had read in other forums that shrinking was not a good idea, but I came across postings here that recommended shrinking. I didn't know when, or whether, I should shrink, so I came here for recommendations.

    Virtual log files that hold part of the logical log cannot be freed. If all the virtual log files in a log file hold parts of the logical log, the file cannot be shrunk until a truncation marks as inactive one or more of the virtual logs at the end of the physical log.

    Now, how do you look at the virtual logs and their status? You can run DBCC LOGINFO('dbname'). Your output will be similar to what I have copied here: several rows, each row denoting one virtual log file. The extent to which a log file (ldf) can be shrunk can be determined from the 'Status' column. Any rows (virtual log files) with a value of '0' can be freed, but please note that the shrink happens from the end of the log file.

    If you look at the output I have copied below, you will see a value of '2' in the Status column six rows from the bottom. That means if I run DBCC SHRINKFILE now, I will be able to eliminate only the last five virtual log files. At that point you can run BACKUP LOG and then retry the shrink.

    FileId  FileSize   StartOffset  FSeqNo  Status  Parity  CreateLSN
    2       131006464  8192         161     0       64      0
    2       196608000  3080192000   174     0       64      168000018795700001
    2       196608000  3276800000   175     0       64      168000018795700001
    2       196608000  3473408000   176     0       64      168000018795700001
    2       196608000  3670016000   177     0       64      168000018795700001
    2       196608000  3866624000   178     2       64      168000018795700001   <-- last active VLF
    2       196608000  4063232000   0       0       0       168000018795700001
    2       196608000  4259840000   0       0       0       168000018795700001
    2       196608000  4456448000   0       0       0       168000018795700001
    2       196608000  4653056000   0       0       0       168000018795700001
    2       196608000  4849664000   0       0       0       168000018795700001

    Hope that was helpful.

    Regards,

    Ravi

    This was helpful, but it did raise more questions for me :-D. I ran the DBCC command and came up with 1008 rows, none of which have a status of 2. Does this indicate that I am at a decent size and should not need to shrink again for the time being? Since there are so many rows, I won't post them here, but if you think I should, please let me know. Thanks, Ravi.

    Cathy
