SHRINKFILE all logs from one stored procedure

  • I'd like to run SHRINKFILE for all the logs in my SQL Server from one stored procedure.

    I can run the code below in Query Analyzer against the database I'm trying to shrink, but if I run it for the log of a database other than the one the query window is connected to, I get the message "Could not locate file 'csvlongterm_log' in sysfiles."

    -- Truncate the log without backing it up (WITH NO_LOG; SQL Server 2005 and earlier)
    BACKUP LOG csvlongterm WITH NO_LOG

    -- Shrink the log file back down (target size 5 MB)
    DBCC SHRINKFILE ('csvlongterm_log', 5, TRUNCATEONLY)

  • I think I figured it out.  You need to do a USE xxxxx first, because DBCC SHRINKFILE only works in the context of the database that owns the file you want to shrink.
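    A minimal sketch of doing that from a single stored procedure, assuming SQL Server 2005 or later (the procedure name and parameter are made up for illustration): it builds a USE ... DBCC SHRINKFILE batch for each online database and shrinks every log file it finds.

    CREATE PROCEDURE dbo.usp_ShrinkAllLogs
        @TargetSizeMB INT = 5
    AS
    BEGIN
        DECLARE @sql NVARCHAR(MAX);
        SET @sql = N'';

        -- Build one USE ... DBCC SHRINKFILE ... batch per log file
        SELECT @sql = @sql
            + N'USE ' + QUOTENAME(d.name) + N'; '
            + N'DBCC SHRINKFILE (' + QUOTENAME(mf.name, '''') + N', '
            + CAST(@TargetSizeMB AS NVARCHAR(10)) + N');'
            + CHAR(13) + CHAR(10)
        FROM sys.databases AS d
        JOIN sys.master_files AS mf
            ON mf.database_id = d.database_id
        WHERE mf.type_desc = 'LOG'
          AND d.state_desc = 'ONLINE'
          AND d.database_id > 4;   -- skip the system databases

        EXEC sys.sp_executesql @sql;
    END;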

  • Just a note... doing a shrinkfile may actually kill your performance in the near future... the log is going to grow again; it didn't get that big by accident.  When it grows, whatever process is trying to use the log will have to wait while it grows.  It will also cause some pretty unreasonable disk fragmentation, which will also kill performance over time.
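    As a quick sanity check before shrinking anything, DBCC SQLPERF (LOGSPACE) shows how big each log is and how much of it is actually in use, and sys.databases (SQL Server 2005 and later) shows why a log can't be truncated:

    -- Log size and percent used for every database on the instance
    DBCC SQLPERF (LOGSPACE);

    -- Recovery model and the reason the log is waiting to be reused
    SELECT name, recovery_model_desc, log_reuse_wait_desc
    FROM sys.databases;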

     

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.
    "Change is inevitable... change for the better is not".

    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
    Intro to Tally Tables and Functions

  • Thanks Jeff.  I'm actually doing this on a development machine, but I have a question about the production machine.  One of my SQL applications moves in close to a million transactions a day.  That data is processed, then used.  At the end of the day almost every table is truncated and only about 30,000 small records are kept.  If I didn't do a shrinkfile on that log, it would get massive.

    My question is, how can I maximize performance and still keep the log file at a reasonable size?  As for the disk fragmentation, I'm assuming a weekly defrag will take care of that?

  • Would I be better off on that database just changing the recovery model from full to simple?
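    For reference, switching the recovery model is a one-line ALTER DATABASE (shown here with the database name from the earlier post):

    -- In SIMPLE recovery the log truncates on checkpoint, but point-in-time restore is no longer possible
    ALTER DATABASE csvlongterm SET RECOVERY SIMPLE;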

  • Yeah... kinda... I normally set up a separate database for "staging tables" for large volumes of imported data.  I set the recovery model to "Simple", as you suggest, and I don't back up any of the data... it's all temporary.  Since it's on the same box/instance as the "real" data, there's not much of a performance hit in moving the gleaned data.  Since I use BULK INSERT or BCP to pull the data in, there's not much log activity, either.

    If your transactions originate from a GUI, this may still help: all of that data is "temporary" until your processes glean what you need to store permanently, so you can set that staging DB to "Simple" without giving up point-in-time recovery on the main DB.
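    A rough sketch of that kind of staging setup, with all names, paths, and columns made up for illustration: a separate database in SIMPLE recovery, loaded with BULK INSERT so the load is minimally logged.

    -- Hypothetical throw-away staging database for the daily import
    CREATE DATABASE Staging;
    ALTER DATABASE Staging SET RECOVERY SIMPLE;
    GO

    USE Staging;
    GO

    CREATE TABLE dbo.DailyFeed
    (
        TransactionID INT           NOT NULL,
        TransactionTS DATETIME      NOT NULL,
        Amount        DECIMAL(18,2) NOT NULL
    );
    GO

    -- Bulk load from a flat file; TABLOCK helps keep the load minimally logged
    BULK INSERT dbo.DailyFeed
    FROM 'C:\feeds\daily.csv'
    WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', TABLOCK);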

    Either way, the best bet is to correctly size the MDF and LDF files AND to correctly size TempDB as well.  Currently, we force TempDB to a size of 9 gig on bootup... it never has to grow.  In most cases, our MDFs and LDFs never have to grow, either.  We evaluate the DBs twice a year and, if there's less than 6 months of estimated growth left, we resize them for a year's worth of growth during a "quiet" time.
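    Pre-sizing the files as described above is just ALTER DATABASE ... MODIFY FILE; the logical file names and sizes below are assumptions for illustration, not the actual ones:

    -- Size the data and log files up front so they don't have to autogrow during the day
    ALTER DATABASE csvlongterm MODIFY FILE (NAME = csvlongterm_data, SIZE = 20GB);
    ALTER DATABASE csvlongterm MODIFY FILE (NAME = csvlongterm_log,  SIZE = 5GB);

    -- TempDB can be pre-sized the same way; it is recreated at that size on startup
    ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 9GB);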

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.
    "Change is inevitable... change for the better is not".

    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
    Intro to Tally Tables and Functions
