File Size Issues

  • MD (5/14/2008)


    John is correct, this can be done without the use of backup devices.

    To add...

    If you're using a device, the performance of the backup job (not the backup operation itself) gets slower as you append more backups to the device. Every time you append a backup, the device is queried for existing backup sets and other information so that it can append correctly.

    Your friendly High-Tech Janitor... 🙂

  • OK, I've got it: if you set the database to full recovery and don't do any transaction log backups, the .ldf file just keeps growing until failure. When you do eventually get around to taking a transaction log backup, it will back up all the transactions that have accumulated in the .ldf file, hence the large backup size.
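
    You can confirm which recovery model a database is in with a quick query (MyDB is a placeholder for the database name):

    SELECT name, recovery_model_desc
    FROM sys.databases
    WHERE name = N'MyDB';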

    I have now set up a backup plan on the largest database, which has done a full backup (8GB) followed by a differential (75MB) 30 minutes later. Every 20 minutes there have been transaction log backups, each around 20MB. All seems to be running as expected.
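
    Roughly the T-SQL equivalent of what the plan runs, assuming backups to disk (the paths and the MyDB name are placeholders):

    BACKUP DATABASE MyDB TO DISK = N'D:\Backups\MyDB_full.bak';                   -- full, ~8GB
    BACKUP DATABASE MyDB TO DISK = N'D:\Backups\MyDB_diff.bak' WITH DIFFERENTIAL; -- differential, ~75MB
    BACKUP LOG MyDB TO DISK = N'D:\Backups\MyDB_log.trn';                         -- every 20 minutes, ~20MB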

    However, my .ldf file still shows a physical size of 19GB, whereas I believe the amount actually in use is a lot less. My question is: how do I now shrink the physical size of the .ldf file?

  • Only shrink your log file if you're sure it's never going to grow to its current size again. Otherwise you waste processor and disk resources when it re-grows and you cause physical fragmentation on your disk. If you still want to proceed, you can do it either through the GUI or with T-SQL. I think the command is DBCC SHRINKFILE - you'll still be able to use it in SQL Server 2005 but it may have been superseded by something else. Search in Books Online for "Shrinking a file" and it'll tell you how to do it.
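
    A minimal sketch, assuming the log file's logical name is MyDB_Log (run sp_helpfile in the database to check) and that you want it down to about 1GB; the target size is in MB:

    USE MyDB;
    GO
    -- Shrink the log file to roughly 1GB; pick a target it will realistically stay under
    DBCC SHRINKFILE (N'MyDB_Log', 1024);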

    John

  • Also, make sure to take a full backup after you shrink the file. VERY IMPORTANT. You do this to maintain point-in-time recovery capability.
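
    Something like this, assuming a backup to disk (the path is a placeholder):

    -- Re-establish the recovery base after the shrink; WITH INIT overwrites the backup file
    BACKUP DATABASE MyDB TO DISK = N'D:\Backups\MyDB_full.bak' WITH INIT;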

    Marvin Dillard
    Senior Consultant
    Claraview Inc

  • OK, I opened the Databases node, right-clicked on the database and selected Tasks|Shrink|Files to open the shrink dialogue. I have tried both "Release unused space" and "Shrink file to", but the file has remained at 19GB. What am I doing wrong? Do I have to wait?

    Edit: Oh and yes I have done a full backup too!

  • Was there a transaction running at the time you tried shrinking? What was the original size of the log file (i.e. when you created the database)?

    John

  • I ran everything again just to make sure there wasn't a transaction running and, lo and behold, the file has shrunk.

    Many thanks.

    Darryl

  • Now that the file has shrunk, perform a new full backup. This will be your starting point for any recovery after the shrinking of the file.

    Marvin Dillard
    Senior Consultant
    Claraview Inc

  • I have discovered what is causing my problem. There is a weekly task which runs through checking database integrity, shrinking the database, reorganising indexes, rebuilding indexes, updating statistics, cleaning up history and a maintenance clean-up task. This was set up on the database by someone else, so it's not me! Anyway, this ran last night (last week it failed) and, as you can imagine, the ldf file is now huge (19GB). The transaction log backup has then also tried to back up all these transactions, causing my device file to reach 25GB. I have rescheduled the maintenance task to run before the main backup, and as the main backup overwrites the device, it should never attempt to back up the large transaction log.

    My questions are as follows:

    Is there a way to stop the ldf file from logging these transactions?

    Is this maintenance task necessary?

    Can anyone see a problem with the maintenance being scheduled before the full backup?

  • Reindexing automatically updates the statistics, so you don't need to include both. I don't know exactly how a maintenance plan decides what it is going to reindex, but if you design it yourself you can have a greater amount of control. You can look at the fragmentation on each index and decide whether to reorganise, rebuild or leave alone based on that. Also, if you allow a ten second pause, for example, between rebuilds of individual indexes, that will give the log a chance to clear down.

    John

  • I've taken a look at the fragmentation using sys.dm_db_index_physical_stats, and the top 50 indexes are between 75% and 87% fragmented. It looks like the maintenance plan just reindexes everything. Like you say, I would like to target the rebuild/reorganise where it is needed, as I know that there are tables in the database that are no longer used.
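
    The query was along these lines (a sketch against the SQL Server 2005 DMVs):

    SELECT OBJECT_NAME(ips.object_id) AS TableName,
           i.name AS IndexName,
           ips.avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i ON i.object_id = ips.object_id AND i.index_id = ips.index_id
    WHERE i.name IS NOT NULL
    ORDER BY ips.avg_fragmentation_in_percent DESC;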

    Books Online recommends rebuilding where fragmentation is over 30%. Do you or anyone else know how to script this?

    Also, the database is only in use during the day, so transaction log backups are only taken between 8:10AM and 9:10PM. The maintenance task runs at 2AM, so the log file won't get a chance to clear down.

  • One way of doing it is to build a script to perform the reindexing and then run it. Here's a rough sketch; yours will no doubt end up more sophisticated than this. For example, you may still want to update statistics on tables that you don't rebuild. And if you rebuild the clustered index then I think I'm right in saying you don't need to waste time rebuilding the nonclustered indexes on the same table.

    USE MyDB;  -- MyDB is a placeholder for your database name
    GO
    DECLARE @sql nvarchar(max);
    SET @sql = N'';

    -- Rebuild heavily fragmented indexes (> 70%), truncating the log after each one
    SELECT @sql = @sql + N'ALTER INDEX ' + QUOTENAME(i.name)
        + N' ON ' + QUOTENAME(SCHEMA_NAME(o.schema_id)) + N'.' + QUOTENAME(o.name)
        + N' REBUILD; BACKUP LOG MyDB WITH TRUNCATE_ONLY; '
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i ON i.object_id = ips.object_id AND i.index_id = ips.index_id
    JOIN sys.objects AS o ON o.object_id = ips.object_id
    WHERE ips.avg_fragmentation_in_percent > 70 AND i.name IS NOT NULL;

    EXEC sp_executesql @sql;

    -- Reorganise moderately fragmented indexes (30-70%)
    SET @sql = N'';
    SELECT @sql = @sql + N'ALTER INDEX ' + QUOTENAME(i.name)
        + N' ON ' + QUOTENAME(SCHEMA_NAME(o.schema_id)) + N'.' + QUOTENAME(o.name)
        + N' REORGANIZE; '
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i ON i.object_id = ips.object_id AND i.index_id = ips.index_id
    JOIN sys.objects AS o ON o.object_id = ips.object_id
    WHERE ips.avg_fragmentation_in_percent BETWEEN 30 AND 70 AND i.name IS NOT NULL;

    EXEC sp_executesql @sql;

    The BACKUP LOG ... WITH TRUNCATE_ONLY after each rebuild takes care of the fact that your log isn't being backed up during the reindex. Truncating the log breaks the log backup chain, though, so remember to run a full backup immediately afterwards if you do it like this.

    John

  • Yes, thanks for this.
