Afternoon all,
Was wondering if anyone could help me out or advise.
I am trying to maintain a large MDF file, which contains approx 240 GB of data and has roughly 175 GB reserved for indexes.
I would therefore expect the file itself to be not much bigger than 430 GB, including some room for growth.
I can shrink the file down to this approximate size using DBCC SHRINKFILE, leaving ~5% free space.
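For reference, this is roughly the shrink I run (the database and logical file names here are placeholders):

```sql
USE MyDatabase;  -- placeholder database name
-- Shrink the data file to ~450 GB (target size is given in MB),
-- which leaves roughly 5% free space over the ~430 GB of allocated data
DBCC SHRINKFILE (MyDatabase_Data, 460800);
```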
However, my problem is that after any index rebuild and stats update, the file jumps to around 670 GB, yet when I query the file it still shows the same table allocations as above.
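When I say "query the file", I mean something along these lines to compare the file size against the space actually in use (FILEPROPERTY reports 8 KB pages, hence the divide by 128 to get MB):

```sql
SELECT name AS logical_file_name,
       size / 128 AS size_mb,                                    -- total file size
       FILEPROPERTY(name, 'SpaceUsed') / 128 AS used_mb,         -- pages actually in use
       (size - FILEPROPERTY(name, 'SpaceUsed')) / 128 AS free_mb -- unallocated space
FROM sys.database_files;
```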
None of the tables has a large amount of unallocated space, so can anyone advise where all the space in the file may be going, or suggest a good method of tracking down what is causing this significant additional growth?
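The per-table figures I'm looking at come from a query roughly like this (it covers in-row and row-overflow allocations; LOB pages may need an extra join on hobt_id):

```sql
SELECT o.name AS table_name,
       SUM(au.total_pages) * 8 / 1024 AS reserved_mb,           -- all pages reserved
       SUM(au.used_pages)  * 8 / 1024 AS used_mb,               -- pages holding data
       SUM(au.total_pages - au.used_pages) * 8 / 1024 AS unused_mb
FROM sys.allocation_units AS au
JOIN sys.partitions AS p ON au.container_id = p.partition_id
JOIN sys.objects    AS o ON p.object_id = o.object_id
WHERE o.is_ms_shipped = 0                                        -- skip system objects
GROUP BY o.name
ORDER BY reserved_mb DESC;
```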
I don't like to run a shrinkfile without good reason, and not without updating statistics afterwards, but 240 GB+ of unaccounted-for file space seems excessive, particularly as this carries over to test and dev.
Thanks in advance for any answers!