Log files are getting huge

  • I've got SQL Server 2005 Developer Edition installed on my development PC.  I've also got copies of our databases installed on my PC for testing purposes.  Three of these databases are pretty small as far as the data they contain is concerned.  However, I have discovered that their log files have gotten huge.  As in 13 GB in size on average!  This is ridiculous, as I hardly, if ever, use these databases.  I have tried truncating the log files, but that doesn't seem to help.

    So, what can I do, please, to cut down the size of these log files?

     

    Kindest Regards, Rod
    Connect with me on LinkedIn.

  • As these databases are on your development PC, the first thing I would do is change the recovery model from full to simple.  This will allow SQL Server to truncate the log as soon as any transactions you run against them are checkpointed.  Second, after changing the recovery model to simple, shrink the database and log files.  This should give you back the space you have lost to the large transaction log files.
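    For example, something along these lines - just a sketch, so swap in your actual database name and the logical name of your log file (you can look that up in sys.database_files):

    USE [master];
    GO
    -- switch to the simple recovery model so the log is truncated at checkpoint
    -- ([MyDevDatabase] and the log file name below are placeholders)
    ALTER DATABASE [MyDevDatabase] SET RECOVERY SIMPLE;
    GO
    USE [MyDevDatabase];
    GO
    -- find the logical name of the log file
    SELECT name, type_desc, size FROM sys.database_files;
    -- shrink the log file back down (target size in MB)
    DBCC SHRINKFILE (N'MyDevDatabase_log', 100);
    GO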

    hth!

  • OK, that did the trick, Lynn, but what exactly does setting the recovery model to simple do?

    And I've noticed that some of the other databases, which aren't giving me any problems, have bulk-logged as the recovery model - what is that?

     

    Kindest Regards, Rod
    Connect with me on LinkedIn.

  • Never mind telling me about the difference between simple, full and bulk-logged recovery.  I looked it up.

    More importantly: in a production environment, is full or bulk-logged recovery to be preferred?

     

    Kindest Regards, Rod
    Connect with me on LinkedIn.

  • #1 DBA Response: It depends.

    Honestly, some production systems don't need up-to-the-second transaction recovery, so we set the recovery model to simple.  On other production systems we care about every bit of data that goes into them, so we set the recovery model to full and then perform regular backups of the logs, both to protect the data and to keep the logs from growing to an obnoxious size.

    Lots more detail on this topic of course.

    ----------------------------------------------------
    The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood... Theodore Roosevelt
    The Scary DBA
    Author of: SQL Server 2017 Query Performance Tuning, 5th Edition and SQL Server Execution Plans, 3rd Edition
    Product Evangelist for Red Gate Software

  • "It depends" is the correct response.   Myself, I'd always choose FULL unless I had a really strong reason not to.

    I suspect your trans logs are huge because you are not backing them up in a manner that allows them to clean up.  Set up a job to do transaction dumps several times a day, and a job to do a full dump once a day.  That will allow the transaction log to release records.  

    (Your backup scheme can be more complicated if you want - I do trans dumps every 10 minutes, differentials 5 nights a week, and fulls one night a week.  But in FULL recovery the backups are the key to allowing the transaction log to release records.)
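    The statements behind those jobs are nothing fancy.  A rough sketch (the database name and backup paths here are just placeholders):

    -- nightly full backup
    BACKUP DATABASE [MyDatabase] TO DISK = N'D:\Backups\MyDatabase_full.bak' WITH INIT;
    -- transaction log backup, run several times a day by a scheduled job
    BACKUP LOG [MyDatabase] TO DISK = N'D:\Backups\MyDatabase_log.trn';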

     

    Roger L Reid

  • These two steps always work for me in SQL 2000 (back up the log, then shrink it):

    BACKUP LOG <myDatabaseName> WITH TRUNCATE_ONLY

    DBCC SHRINKFILE('myDatabaseLogName')


  • But, of course, never on a production system where we need to have up-to-the-minute data recovery, right?

    ----------------------------------------------------
    The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood... Theodore Roosevelt
    The Scary DBA
    Author of: SQL Server 2017 Query Performance Tuning, 5th Edition and SQL Server Execution Plans, 3rd Edition
    Product Evangelist for Red Gate Software

  • And hopefully, this is also followed immediately by a full backup.
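    Something like this, for example (the database name and path are only placeholders):

    -- full backup to re-establish the backup chain after the log truncation
    BACKUP DATABASE [myDatabaseName] TO DISK = N'D:\Backups\myDatabaseName_full.bak' WITH INIT;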

     
