MDF File Structure?

  • Afternoon all,

    I was wondering if anyone could help me out or offer some advice.

    I am trying to maintain a large MDF file, which contains approx 240 GB of data and has roughly 175 GB reserved for indexes.

    I would therefore expect the file itself to be not much bigger than 430 GB, including some space to allow for growth.

    I can shrink the file down to approximately this size using DBCC SHRINKFILE, leaving ~5% free space.

    However, my problem is that after any reindex and stats update, the file jumps up to 670 GB in size, yet when I query the file it still shows the same table allocations as above.

    None of the tables shows a large amount of unallocated space, so can anyone advise where all the space in the file may be going, or suggest a good method of tracking down what is causing this significant additional growth (see the space-check sketch below)?

    I don't like to carry out a shrink without a good reason, and not without updating the statistics afterwards, but 240 GB+ of unaccounted-for file space seems excessive, particularly when this carries over onto test and dev.

    Thanks in advance for any answers!

    (PS: I already posted this in the SQL 2000 forum, but I suspect there is very little difference with respect to the actual mdf file, and only the potential solutions may be limited by the older version of SQL in use.)
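
    As a point of reference for tracking where the space inside the file is going, here is a minimal sketch, assuming hypothetical object names (YourDatabase, dbo.YourLargeTable), of a query that compares allocated vs. used space per file. sysfiles and FILEPROPERTY are available on SQL 2000 as well as later versions; on SQL 2005+ sys.database_files carries the same information.

        -- Minimal sketch: allocated vs. used space per file in the current database.
        -- Sizes in sysfiles are stored as 8 KB pages, hence the divide by 128.0 to get MB.
        USE YourDatabase;   -- hypothetical database name
        GO
        SELECT  name                                              AS logical_file_name,
                size / 128.0                                      AS allocated_mb,
                FILEPROPERTY(name, 'SpaceUsed') / 128.0           AS used_mb,
                (size - FILEPROPERTY(name, 'SpaceUsed')) / 128.0  AS free_mb
        FROM    sysfiles;
        GO
        -- Reserved / data / index / unused breakdown for a single table:
        EXEC sp_spaceused 'dbo.YourLargeTable';   -- hypothetical table name

    Running sp_spaceused against individual tables gives the reserved / data / index_size / unused split, which helps separate table-level slack from free space held at the file level.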

  • Rebuilding indexes takes space in the file, unless you specify doing the sort work in tempdb (check the documentation on index rebuilds for the SORT_IN_TEMPDB option; a sketch of the syntax follows below). Once it's done, it leaves that space empty so that it can be used again the next time you rebuild. That results in less file fragmentation at the drive level.

    The best bet, usually, is to leave the file alone if it grows; don't shrink it repeatedly.

    - Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
    Property of The Thread

    "Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon

  • GSquared (1/4/2011)


    Rebuilding indexes takes space in the file, unless you specify doing the sort work in tempdb (check the documentation on index rebuilds for the SORT_IN_TEMPDB option). Once it's done, it leaves that space empty so that it can be used again the next time you rebuild. That results in less file fragmentation at the drive level.

    The best bet, usually, is to leave the file alone if it grows; don't shrink it repeatedly.

    OK thanks a lot for the reply!

    I was hoping to find some sort of balance between disk usage and a static file size, but I guess it is better to avoid regularly shrinking the file manually to reclaim the space.

    One issue I may have is that there is currently only 70 GB of free space available on the drive.

    If I set a maximum file size on the mdf, would the reindex and stats update job eventually run into problems, or would it be fine working within the limits of the set file size?

    (i.e. with autogrowth turned off, would the operations still require the additional space, or would they simply work within the available limits?)

  • If you set a max file size and a rebuild runs out of space, it will crash and burn to one extent or another (a sketch of setting a cap follows below). It's usually recoverable, but it will cause problems whenever the available space runs out.

    - Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
    Property of The Thread

    "Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon
