how to shrink this DB without suffering downtime?

  • I've been wandering in circles looking at a bunch of postings without finding a conclusive recommendation. We've got a DB with a lot of unused space and the datacenter is whomping us with a big disk space bill. I'm thinking I'd like to do a micro-shrink over time and rebuild indexes after each micro-shrink. Ideas?

    I can't take this DB offline for any time at all, but in the evenings, the usage drops and I don't have to worry so much about impacting performance. What would be a non-crushing method of shrinking the unused space in this DB? Please see attached image for an idea of what I'm trying to reduce. (BTW- this was generated using the sp_SDS stored procedure).

    Any advice appreciated!

    mtf

    PS- the attached figures are in MB.

  • Sounds weird, but I might look more at the backup strategy first. This is usually where most of the space is "wasted".

    How much space can you really get back without needing to grow again anytime soon?
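    To gauge that, a plain sp_spaceused call shows how much of the database is currently unallocated (a generic check, not tied to the sp_SDS output above):

    -- No arguments: reports size and unallocated space for the current database.
    EXEC sp_spaceused
    -- The "unallocated space" column in the first result set is roughly what a
    -- shrink could hand back to the operating system before the files would
    -- need to grow again.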

  • Schedule a SQL Agent job to run during a particular time window to shrink the file, and another job to stop the running shrink job once that window has passed. This way some amount of free space is released to the operating system every day. I have tested this scenario on a SQL Server 2000 system in production.
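
    If you go that route, the stop job can be little more than a call to sp_stop_job in msdb against the shrink job (a rough sketch; the job name is just a placeholder):

    -- Assumes a SQL Agent job named 'Nightly shrink' runs the shrink; the name is a placeholder.
    EXEC msdb.dbo.sp_stop_job @job_name = N'Nightly shrink'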

  • Please decide on database shrinking wisely.

    Best Practices:

    Consider the following information when you plan to shrink a database:

    • A shrink operation is most effective after an operation that creates lots of unused space, such as a truncate table or a drop table operation.

    • Most databases require some free space to be available for regular day-to-day operations. If you shrink a database repeatedly and notice that the database size grows again, this indicates that the space that was shrunk is required for regular operations. In these cases, repeatedly shrinking the database is a wasted operation.

    • A shrink operation does not preserve the fragmentation state of indexes in the database, and generally increases fragmentation to a degree. This is another reason not to repeatedly shrink the database.

    • Unless you have a specific requirement, do not set the AUTO_SHRINK database option to ON.

    For More: http://msdn.microsoft.com/en-us/library/ms190488.aspx
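
    As a footnote to the last point, the AUTO_SHRINK setting can be checked and turned off with T-SQL (a minimal sketch; the database name is a placeholder, and sys.databases assumes SQL Server 2005 or later):

    -- Check whether AUTO_SHRINK is enabled.
    SELECT name, is_auto_shrink_on FROM sys.databases WHERE name = N'YourDatabase'

    -- Turn it off if it is on.
    ALTER DATABASE [YourDatabase] SET AUTO_SHRINK OFF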

  • I have been successful in shrinking log files during production hours without impacting performance too much through locking issues. I use a script that shrinks the file in small increments. If the file has 50 GB of free space, I never shrink it in one shot; instead I do it in small increments of maybe 1 GB or 500 MB. Use the attached script, modifying the logical file name, the desired amount of free space at the end, and the increment.
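
    For anyone who cannot get to the attachment, the approach looks roughly like the sketch below (this is not the attached script itself; the logical file name and sizes are placeholders, and sys.database_files assumes SQL Server 2005 or later):

    -- Rough sketch of an incremental shrink, not the attached script.
    DECLARE @DBFileName sysname, @TargetFreeMB int, @ShrinkIncrementMB int
    DECLARE @CurrentSizeMB int, @UsedMB int, @NewSizeMB int, @PrevSizeMB int

    SET @DBFileName = N'YourDB_Data'      -- logical file name (placeholder)
    SET @TargetFreeMB = 10240             -- free space to leave in the file (placeholder)
    SET @ShrinkIncrementMB = 1024         -- shrink this much per pass

    -- Current size and used space, in MB (both values are stored as 8 KB pages).
    SELECT @CurrentSizeMB = size / 128,
           @UsedMB = FILEPROPERTY(name, 'SpaceUsed') / 128
    FROM sys.database_files
    WHERE name = @DBFileName

    WHILE @CurrentSizeMB > @UsedMB + @TargetFreeMB
    BEGIN
        -- Step the file down by one increment, but never below used space + target free space.
        SET @NewSizeMB = @CurrentSizeMB - @ShrinkIncrementMB
        IF @NewSizeMB < @UsedMB + @TargetFreeMB
            SET @NewSizeMB = @UsedMB + @TargetFreeMB

        DBCC SHRINKFILE (@DBFileName, @NewSizeMB)

        SET @PrevSizeMB = @CurrentSizeMB

        -- Re-read the sizes before the next pass.
        SELECT @CurrentSizeMB = size / 128,
               @UsedMB = FILEPROPERTY(name, 'SpaceUsed') / 128
        FROM sys.database_files
        WHERE name = @DBFileName

        -- Stop if the last pass could not release any space (e.g. active log at the end of the file).
        IF @CurrentSizeMB >= @PrevSizeMB BREAK
    END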

    Blog
    http://saveadba.blogspot.com/

  • Thanks for this script. What values would you use to call the script if you wanted to shrink it in 1 GB increments?

    -- mtf

  • At the beginning of the script there are three lines that you can edit.

    -- Set Name of Database file to shrink

    set @DBFileName = 'yourlogicalfilename'

    -- Set Desired file free space in MB after shrink

    set @TargetFreeMB = 1024

    -- Set Increment to shrink file by in MB

    set @ShrinkIncrementMB = 1024

    @TargetFreeMB is the amount of free space to be retained in the data file after the shrink. This depends on the used space; I would keep at least 10-20% free space.

    Set @ShrinkIncrementMB to 1024 if you want to shrink in increments of 1 GB. If shrinking 1 GB is taking too long, drop it to 500 MB, and so on.
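
    To pick a sensible @TargetFreeMB, it helps to look at the current used and free space per file first (a generic query, not part of the script; assumes SQL Server 2005 or later):

    -- Used vs. free space per file in the current database, in MB.
    SELECT name,
           size / 128 AS SizeMB,
           FILEPROPERTY(name, 'SpaceUsed') / 128 AS UsedMB,
           (size - FILEPROPERTY(name, 'SpaceUsed')) / 128 AS FreeMB
    FROM sys.database_files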

    Blog
    http://saveadba.blogspot.com/

  • Thanks for the info. I think this is my last question....

    Should I rebuild the indexes after every incremental shrink?

    -- mtf

  • mrTexasFreedom (12/14/2011)


    Thanks for the info. I think this is my last question....

    Should I rebuild the indexes after every incremental shrink?

    -- mtf

    It takes scant seconds to shrink a 50 GB log file during "quiet" times. You shouldn't have a problem with your log file.

    If your shrink is also a "reorganize" instead of just a "free unused space", then you're fragmenting indexes, and you could end up in a vicious cycle. Rebuilding an index needs about an extra 1.5 times the size of the index.
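
    For what it's worth, the "free unused space" flavor corresponds to the TRUNCATEONLY option of DBCC SHRINKFILE, which releases space from the end of the file without moving pages (a minimal sketch; the file and table names are placeholders):

    -- Releases free space at the end of the file without moving pages,
    -- so it does not add fragmentation (but it also cannot reclaim space
    -- trapped in the middle of the file).
    DBCC SHRINKFILE (N'YourDB_Data', TRUNCATEONLY)

    -- If a page-moving shrink was unavoidable, rebuild the affected indexes afterwards
    -- (remember the extra ~1.5x space mentioned above):
    ALTER INDEX ALL ON dbo.YourTable REBUILD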

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.
    "Change is inevitable... change for the better is not".

    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
    Intro to Tally Tables and Functions
