Defragmenting indices on tables with a small number of pages

  • I'm looking to use the Microsoft-provided script in Books Online for sys.dm_db_index_physical_stats (part D). I'm modifying the section near the top (my proposed change is the last line shown):

    ...

    SELECT
        object_id AS objectid,
        index_id AS indexid,
        partition_number AS partitionnum,
        avg_fragmentation_in_percent AS frag
    INTO #work_to_do
    FROM sys.dm_db_index_physical_stats (DB_ID(), NULL, NULL, NULL, 'LIMITED')
    WHERE avg_fragmentation_in_percent > 10.0 AND index_id > 0
        AND page_count > SOME_MINIMUM_NUMBER; -- proposed modification

    ...

    I've heard it said that if the page count is too low, it's not worth doing a rebuild or even a reorganize, but I'm not sure if there is a rule of thumb for a minimum value to use. Does anyone use this or similar scripts in their DB maintenance plans/jobs?

    Thanks.

    Gaby
    ________________________________________________________________
    "In theory, theory and practice are the same. In practice, they are not."
    - Albert Einstein

  • I read somewhere that the limit was 8, but I'm not sure how universal that is.

    - Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
    Property of The Thread

    "Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon

  • I've seen the figure of 1,000 pages quoted, because even at that size the table would only be 8 MB, which is pretty small.

    I suggest you find some small tables and see what happens when you de-frag them.
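
    Something along these lines would show you the before and after (dbo.SomeSmallTable is just a stand-in for one of your own tables):

    -- fragmentation before
    SELECT index_id, avg_fragmentation_in_percent, page_count
    FROM sys.dm_db_index_physical_stats
         (DB_ID(), OBJECT_ID('dbo.SomeSmallTable'), NULL, NULL, 'LIMITED');

    -- de-frag it (try REBUILD instead of REORGANIZE if you want to compare both)
    ALTER INDEX ALL ON dbo.SomeSmallTable REORGANIZE;

    -- fragmentation after
    SELECT index_id, avg_fragmentation_in_percent, page_count
    FROM sys.dm_db_index_physical_stats
         (DB_ID(), OBJECT_ID('dbo.SomeSmallTable'), NULL, NULL, 'LIMITED');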

    ---------------------------------------------------------------------

  • George's recommendation of 1,000 pages seems to be in line with MS recommendations. For anything smaller than 1,000 pages, it's just about as fast to do a table scan as to use the index.

    From MS article:

    http://technet.microsoft.com/en-us/library/cc966523.aspx

    Generally, you should not be concerned with fragmentation levels of indexes with less than 1,000 pages. In the tests, indexes containing more than 10,000 pages realized performance gains, with the biggest gains on indexes with significantly more pages (greater than 50,000 pages).
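
    If you go with that figure, the placeholder in your script just becomes the 1,000-page threshold, e.g.:

    WHERE avg_fragmentation_in_percent > 10.0 AND index_id > 0
        AND page_count > 1000; -- the 1,000-page figure from the article above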

  • 1000 pages is the usual rule of thumb. Smaller than that and the table will probably be in memory. Fragmentation is a disk issue; it doesn't affect pages in memory.

    The other thing to consider is how the table's used. Fragmentation causes problems when the table (or part of the table) is scanned. It is not an issue if all that's ever done are single-row seeks.
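
    If you want to see which of those fragmented indexes actually get scanned, rather than only hit by single-row seeks, something like this is one way to check (the thresholds are just illustrative, and the usage counters reset when the instance restarts):

    SELECT OBJECT_NAME(ps.object_id) AS table_name,
           ps.index_id,
           ps.avg_fragmentation_in_percent,
           ps.page_count,
           us.user_scans,
           us.user_seeks
    FROM sys.dm_db_index_physical_stats (DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ps
    LEFT JOIN sys.dm_db_index_usage_stats AS us
           ON us.database_id = DB_ID()
          AND us.object_id = ps.object_id
          AND us.index_id = ps.index_id
    WHERE ps.avg_fragmentation_in_percent > 10.0
      AND ps.index_id > 0
      AND ps.page_count > 1000
    ORDER BY us.user_scans DESC;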

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • GilaMonster (2/26/2009)


    1000 pages is the usual rule of thumb. Smaller than that and the table will probably be in memory. Fragmentation is a disk issue; it doesn't affect pages in memory.

    The other thing to consider is how the table's used. Fragmentation causes problems when the table (or part of the table) is scanned. It is not an issue if all that's ever done are single-row seeks.

    Thanks Gail. I guess there is no wholesale approach to defragmentation (as I found out when trying to rebuild indexes online, which doesn't work for certain column types). At the very least, I plan to schedule it to run quarterly, once a month if warranted, and in rare cases once a week. We have an older database (from before I came) that does a rebuild once a week. Fortunately it's mainly a reporting/extraction server, so it doesn't impact much of the business.
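
    For the record, the online limitation I ran into looks roughly like this (the index and table names are just placeholders): on our 2005/2008 instances an index that contains LOB columns (text, ntext, image, varchar(max) and the like) won't rebuild with ONLINE = ON, so those have to fall back to an offline rebuild or a reorganize.

    -- fine when the index contains no LOB columns (placeholder names)
    ALTER INDEX IX_SomeIndex ON dbo.SomeTable REBUILD WITH (ONLINE = ON);

    -- indexes with LOB columns have to go offline, or be reorganized instead
    ALTER INDEX IX_LobIndex ON dbo.LobTable REBUILD;      -- offline rebuild
    ALTER INDEX IX_LobIndex ON dbo.LobTable REORGANIZE;   -- reorganize stays online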

    Gaby
    ________________________________________________________________
    "In theory, theory and practice are the same. In practice, they are not."
    - Albert Einstein
