DBCC DBREINDEX v. DROP & CREATE

  • I am a newbie here, so no flaming please.  I have been working with one of our programmers to rewrite the job that reindexes the large tables in our main production database.  The old job dropped and recreated each table's indexes one by one, a process that took about 40 minutes.  With the new job we decided to try DBCC DBREINDEX and just let it rebuild all the indexes on each table.  The new job completes successfully, but takes only 1 second, which I don't think can be right.  So, I have two questions:  1.) Should I expect to see an improvement in performance going from DROP / CREATE to DBCC DBREINDEX? 2.) Is there a way to check whether the job is indeed rebuilding the indexes?
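
    For reference, the two approaches look roughly like this (dbo.Orders, IX_Orders_Date, and OrderDate are made-up names, not our actual schema):

    -- Old job: drop and recreate each index by hand, one pair of statements per index
    DROP INDEX Orders.IX_Orders_Date
    CREATE INDEX IX_Orders_Date ON dbo.Orders (OrderDate)

    -- New job: rebuild every index on a table in one statement
    -- ('' means all indexes; 90 is a sample fill factor)
    DBCC DBREINDEX ('dbo.Orders', '', 90)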

     

    Thanks!!

  • First you should try DBCC INDEXDEFRAG, because it won't hold long-term locks on the index while it defragments.

    With DBCC SHOWCONTIG you can see which indexes are fragmented. My rule has been that if the Logical Scan Fragmentation is below 70, then the index is fragmented, and I run DBCC INDEXDEFRAG.

    DBCC INDEXDEFRAG won't always work. If, after running it a few times, you see that the index is still fragmented, then you should use DBCC DBREINDEX.

    For help with the parameters, check BOL, or ask again.
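
    For example, usage looks roughly like this (the database, table, and index names are placeholders you would substitute):

    -- Report fragmentation for one index
    DBCC SHOWCONTIG ('dbo.Orders', 'IX_Orders_Date')

    -- Defragment it without long-term locks; arguments are database, table, index
    DBCC INDEXDEFRAG (MyDatabase, 'dbo.Orders', 'IX_Orders_Date')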

  • Thanks racosta!!

     

    I will give your suggestions a try.  Any idea whether I should be seeing that large a difference in time between reindex and drop / create??

  • Racosta,

    Sorry to ask, but don't you mean a Logical Scan Fragmentation above 70%? Or you might have meant above 30%, actually.

    I would think a Logical Scan Fragmentation value at 70% is bad as well. It is my belief you want this number to be as close to 0.0% as possible.

    Also remember that the Logical Scan Fragmentation value should be ignored for tables that do not have a clustered index.

     

    Kevin Holcomb

    SA/DBA

  • You should only use DBCC INDEXDEFRAG on indexes with fewer than 1,000 pages; if an index has more than 1,000 pages, then defragment when Logical Scan Fragmentation > 20% OR Avg. Page Density < 60% (small db size) or < 55% (small-to-large db size). I think Racosta was getting mixed up with page density, which should be defragged when BELOW 60%.

    See Microsoft's official index defragmentation white paper for more info.

  • You mean here: http://www.microsoft.com/technet/prodtechnol/sql/2000/maintain/ss2kidbp.mspx ?

    Where it states almost the exact opposite, or am I missing your point?

    When analyzing the output from DBCC SHOWCONTIG, consider the following:

    Fragmentation affects disk I/O. Therefore, focus on the larger indexes because their pages are less likely to be cached by SQL Server. Use the page count reported by DBCC SHOWCONTIG to get an idea of the size of the indexes (each page is 8 KB in size). Generally, you should not be concerned with fragmentation levels of indexes with less than 1,000 pages. In the tests, indexes containing more than 10,000 pages realized performance gains, with the biggest gains on indexes with significantly more pages (greater than 50,000 pages).

    High values for logical scan fragmentation can lead to degraded performance of index scans. In the tests, workload performance increased after defragmenting when clustered indexes had logical fragmentation greater than 10 percent, and significant increases were attained when logical fragmentation levels were greater than 20 percent. Consider defragmenting indexes with 20 percent or more logical fragmentation. Remember that this value is meaningless when reporting on a heap (Index ID = 0).
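
    Following the layout of the sample script in that white paper, a rough sketch of applying those thresholds to one table's indexes might look like this (dbo.Orders is a placeholder; treat it as illustrative, not a finished maintenance job):

    -- Capture DBCC SHOWCONTIG output in tabular form
    CREATE TABLE #fraglist (
        ObjectName char(255), ObjectId int, IndexName char(255), IndexId int,
        Lvl int, CountPages int, CountRows int, MinRecSize int, MaxRecSize int,
        AvgRecSize int, ForRecCount int, Extents int, ExtentSwitches int,
        AvgFreeBytes int, AvgPageDensity int, ScanDensity decimal,
        BestCount int, ActualCount int, LogicalFrag decimal, ExtentFrag decimal)

    INSERT INTO #fraglist
    EXEC ('DBCC SHOWCONTIG (''dbo.Orders'') WITH TABLERESULTS, ALL_INDEXES, NO_INFOMSGS')

    -- Apply the rules of thumb above: ignore small indexes, and flag
    -- anything with more than 20 percent logical scan fragmentation
    SELECT ObjectName, IndexName, CountPages, LogicalFrag
    FROM #fraglist
    WHERE CountPages >= 1000 AND LogicalFrag > 20
    ORDER BY LogicalFrag DESC

    DROP TABLE #fraglist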

     

  • Read the following article and scripts... modify the script to write the fragmentation info to a table before and after the reindex, and the reindex execution times too...

    http://www.mssqlcity.com/Articles/Adm/index_fragmentation.htm

    You may or may not see a performance difference... because it depends on the db/server usage...
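
    A minimal sketch of that kind of logging (dbo.IndexMaintLog and dbo.Orders are made-up names for illustration; the article's script would supply the fragmentation capture):

    -- Hypothetical log table for rebuild timings
    CREATE TABLE dbo.IndexMaintLog (
        LogTime datetime NOT NULL DEFAULT GETDATE(),
        TableName sysname NOT NULL,
        DurationSec int NOT NULL)

    DECLARE @start datetime
    SET @start = GETDATE()

    -- record fragmentation info here (before), then rebuild, then record again (after)
    DBCC DBREINDEX ('dbo.Orders')

    INSERT INTO dbo.IndexMaintLog (TableName, DurationSec)
    VALUES ('dbo.Orders', DATEDIFF(second, @start, GETDATE()))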

     

    MohammedU
    Microsoft SQL Server MVP

  • I sense a measure of confusion here.  A DBCC DBREINDEX against a table which has a clustered index will drop and recreate the clustered index and all secondary indexes. If the job is running that quickly, I'd guess it's been coded wrong - here's a simple loop which runs DBCC DBREINDEX against every user table in a database:

    set nocount on

    -- collect the user table names into a table variable
    declare @table1 table (Numkey int identity(1,1), Tname sysname)
    declare @count int, @cmdstring sysname  -- sysname rather than varchar(100), so long table names are not truncated

    insert into @table1 (Tname)
    select name from dbo.sysobjects with (nolock) where xtype = 'U'

    -- walk the list and rebuild every index on each table
    set @count = 1
    while @count <= (select max(Numkey) from @table1)
    BEGIN
        select @cmdstring = Tname from @table1 where Numkey = @count
        print @cmdstring
        dbcc dbreindex(@cmdstring)
        set @count = @count + 1
    END
    --endwhile

    Index defrag is not the same and may not be suitable for you - it can also produce a much bigger transaction log.  Practical tests have shown BOL to be inaccurate in its description of index defrag.

    I usually figure anything with >10% fragmentation on a clustered index and >20% on a secondary index needs rebuilding, and if I have the window I'll rebuild every index regardless every night. If you start rebuilding individual indexes, don't forget to update stats: dbreindex "tablename" does all stats on a table; any other variation does not.
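
    For example (table and index names are hypothetical):

    -- rebuilding a single index refreshes only that index's statistics...
    dbcc dbreindex ('dbo.Orders', 'IX_Orders_Date', 90)

    -- ...so refresh the rest of the table's statistics explicitly
    UPDATE STATISTICS dbo.Orders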


    The GrumpyOldDBA
    www.grumpyolddba.co.uk
    http://sqlblogcasts.com/blogs/grumpyolddba/
