Selective Index Maintenance

  • Hello,

    Does anyone have any suggestions on how to handle index fragmentation on large DBs?

We have a database of around 200GB which has fragmentation problems. Given the size of the database, the available maintenance window and the fact that we are log shipping, we are not in a position to re-index the whole database in one go, so we need to look at scheduling maintenance over the week.

I've created a script which will allow me to do this based on the fragmentation levels of the respective indexes, but I have a question about the best way to decide which indexes to target. At the moment I'm just taking the first x number per day and working through them. Is this the best option, or should we pick the most frequently used indexes (based on user_seeks, user_scans and user_lookups) and work on these first?

    I'm also unclear on the impact of the following:

1. Where a table has a large number of indexes, is it better to reduce fragmentation across all indexes, or does it not matter if only a subset of the indexes are rebuilt?

2. We have a large number of unused indexes (the removal of which is the next project). Is there any benefit in rebuilding these indexes where fragmentation exists? My guess is not, if the index is not used, but I wouldn't mind someone clarifying that for me.

    Has anyone else been in this position before and would like to share their experience?

    Many thanks in advance.
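A hypothetical sketch of the usage-based selection described above, joining the physical-stats and usage-stats DMVs so the busiest fragmented indexes come out first (the 10% floor and TOP 20 are illustrative, not recommendations):

```sql
-- Rank fragmented indexes in the current database by read activity,
-- so the most frequently used ones can be maintained first.
SELECT TOP (20)
       OBJECT_NAME(ps.object_id)                  AS table_name,
       i.name                                     AS index_name,
       ps.avg_fragmentation_in_percent            AS frag_pct,
       ISNULL(us.user_seeks, 0) + ISNULL(us.user_scans, 0)
         + ISNULL(us.user_lookups, 0)             AS total_reads
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ps
JOIN sys.indexes i
  ON i.object_id = ps.object_id AND i.index_id = ps.index_id
LEFT JOIN sys.dm_db_index_usage_stats us
  ON us.object_id = ps.object_id AND us.index_id = ps.index_id
 AND us.database_id = DB_ID()
WHERE ps.avg_fragmentation_in_percent > 10  -- illustrative threshold
ORDER BY total_reads DESC;
```

Note that usage-stats counters reset on instance restart, so the read totals only reflect activity since SQL Server last started.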

If an index is not used, its fragmentation does not matter as far as query performance is concerned. It would be better to target the indexes which are frequently used and reindex them based on their fragmentation level.

Also have a look at Ola Hallengren's index optimization script. It provides you with a lot of options.

    Pradeep Adiga
    Blog: sqldbadiaries.com
    Twitter: @pradeepadiga

  • Hi Adiga,

    Thanks for the info.

We were using Ola's script for a while, but as our databases grew over time the need to batch the rebuilds made it unsuitable in our situation, unless of course it's now possible to restrict rebuild activity in later versions.

You can also set up a maintenance task or job to rebuild/reorganize only the required indexes, or to run against the required database instead of selecting all DBs.

    -------Bhuvnesh----------
    I work only to learn Sql Server...though my company pays me for getting their stuff done;-)

Due to the size of our database and the small maintenance window we have, it has been most beneficial to rebuild only those indexes above a threshold of, say, 30% fragmentation. This sort of maintenance plan will hit the most commonly used indexes and those most desperately in need of a rebuild.

    Chris Powell

    George: You're kidding.
    Elroy: Nope.
    George: Then lie to me and say you're kidding.

In addition to limiting the indexes that you perform maintenance on based on fragmentation, I'd also limit based on the number of pages. Defragging or rebuilding an index less than 8 pages in size is a complete waste of time, regardless of that index's fragmentation, and depending on your system you could probably bump that threshold up even higher. Microsoft recommends 1000 pages. I'd probably go a bit more conservative and say 100 pages, but you should test it out on your system. That will help you reduce the amount of processing you perform.

----------------------------------------------------
The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood... Theodore Roosevelt
The Scary DBA
Author of: SQL Server 2017 Query Performance Tuning, 5th Edition and SQL Server Execution Plans, 3rd Edition
Product Evangelist for Red Gate Software
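A page-count filter along the lines suggested above might look like the following sketch (the 100-page floor is the conservative figure mentioned; tune it for your system):

```sql
-- List only indexes big enough to be worth maintaining:
-- skip anything under 100 pages, however fragmented it is.
SELECT OBJECT_NAME(ps.object_id)       AS table_name,
       i.name                          AS index_name,
       ps.page_count,
       ps.avg_fragmentation_in_percent AS frag_pct
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ps
JOIN sys.indexes i
  ON i.object_id = ps.object_id AND i.index_id = ps.index_id
WHERE ps.page_count >= 100                       -- ignore tiny indexes
  AND ps.avg_fragmentation_in_percent > 10
ORDER BY ps.page_count DESC;
```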

  • ola.hallengren.com!! Full maintenance suite, highly configurable and documented! Best of all it is FREE!

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service

  • Thanks for the feedback everyone.

Grant, your comment on the number of pages is a very valid one. We've settled on the 250-page mark for the moment. The recommendation of 1000 has proven too high for us in the past, especially on tables of around 800 pages; it took us a while to work out why performance wasn't improving after the reindex work.

So the plan is: select all indexes that are fragmented more than 10% and have more than 250 pages, then split these into 2 separate groups, one for re-org where the fragmentation is between 10 and 30%, and a full rebuild where it is more than 30%. However, the resulting workload is still too big to fit into the maintenance window, so the work needs to be batched over a number of days.

The intention is then to take the top 20% from each group and process them on a nightly basis (until the remaining number is less than 10 for rebuild and 20 for re-org), as this seems to be the number we can process in the time available.

Thanks to everyone who recommended Ola Hallengren's maintenance script. It is a great process and we are using it on other systems; however, it's not suitable for this particular application because we cannot process all indexes in one batch. The main issue for us is the log shipping: the secondary server doesn't have the capacity to handle huge log files.
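The batching plan described above could be sketched roughly as follows. The staging table dbo.IndexFragSnapshot is hypothetical: assume it is refreshed daily from sys.dm_db_index_physical_stats with the >10% fragmentation and >250-page filters already applied, and that table_name is schema-qualified.

```sql
-- Generate the night's maintenance commands: the worst 20% of the
-- outstanding work, rebuilding above 30% fragmentation and
-- reorganizing between 10 and 30%.
SELECT TOP (20) PERCENT
       CASE WHEN frag_pct >= 30
            THEN 'ALTER INDEX ' + QUOTENAME(index_name)
                 + ' ON ' + table_name + ' REBUILD;'
            ELSE 'ALTER INDEX ' + QUOTENAME(index_name)
                 + ' ON ' + table_name + ' REORGANIZE;'
       END AS maintenance_cmd
FROM dbo.IndexFragSnapshot
ORDER BY frag_pct DESC;
```

The generated commands would then be executed one at a time by the nightly job, which keeps each transaction (and the shipped log) small.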

  • The latest version of Ola's index maintenance sproc now has the ability to specify indexes, so you can set up rolling jobs to cycle through them and not crush your tlog.

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service

  • I'm curious how you determine that 1000 pages is too high. Paul Randal mentioned this week that he made up that number, as an educated guess when talking with various customers, but he also said that he might recommend it to be higher these days.

    However, the caveat, as always, is your mileage can, and will, vary on the recommendation.

    I'm just curious what has led you to 250 pages, and do you have a good methodology that you might share?

  • Thanks Kevin, I'll have to revisit Ola's script to check out the latest version it would appear.

Steve - no real science involved at all. We have the same base application supporting different markets, with database sizes ranging from 10GB to 200GB. As I mentioned in an earlier post, we have been using Ola's maintenance with default settings and found that the 1000-page limit meant that indexes were being excluded from the maintenance job on the smaller systems, with performance suffering as a result.

This is what led us to use the figure of 250, which is a suck-it-and-see number based on the generally smaller sizes of these systems. The change has yet to go into production, but since we will obviously be maintaining more indexes with the lower limit, performance should hopefully improve. On these smaller systems the issue of log shipping is less of a problem, so we are able to keep on top of the maintenance and don't need to worry quite so much. It's the bigger databases that are causing the problems and need to be batched.

  • Thanks for the update. Hope it works out.

    So 250 is a WAG, and you are monitoring, is that right? Are you also monitoring page splits?
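One simple way to watch page splits, as asked above, is the cumulative Access Methods counter; this is a sketch, and since the value only ever grows since instance startup, you would sample it twice and take the difference to get a rate:

```sql
-- Cumulative page splits since the instance started.
SELECT cntr_value AS page_splits_since_startup
FROM sys.dm_os_performance_counters
WHERE counter_name LIKE 'Page Splits/sec%'
  AND object_name LIKE '%Access Methods%';
```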

  • Smellypom (11/4/2010)


    Thanks for the feedback everyone.

So the plan is, select all indexes that are fragmented more than 10% and have more than 250 pages. Split these into 2 separate groups, one for re-org where the fragmentation is between 10 and 30%; where more than 30% they get a full rebuild.

    I spent a considerable amount of time studying a Microsoft White Paper on the subjects of rebuilding and/or reorganizing indexes. Some things stand out in my memory:

    Regarding Index Reorganization (Defragmentation):

    1. Index reorganization takes approximately 8 times longer to execute than an index rebuild operation.

    2. It will not defragment index pages if they are "write locked" by another transaction.

    3. It does not rebuild/recalculate data distribution statistics.

    Regarding Index Rebuilding:

    1. It produces improved query performance for indexes of over 1000 pages that are at least 20% fragmented.

    2. It automatically rebuilds/recalculates the data distribution statistics associated with the corresponding table.

    Through experimentation, I've learned that:

    1. An appropriate setting of your index fill factor for each index will help reduce page splits and improve overall server performance.

    2. It does not appear to be useful to set your fill factor to less than 40%.

3. Fill factors of 0 and 100 are synonymous: both mean the pages are filled 100%.

    4. Always set SORT_IN_TEMPDB = ON to prevent use of your database data file for the index rebuilding operation if there is insufficient RAM to hold the entire source table and its index.

    5. I use an Administration database to maintain a persistent record of the calculated fill factors every time our automatic indexing jobs are executed. I use a proprietary formula to calculate a "floating fill factor" whose value is determined by experience with the results produced by setting prior fill factors. Each successive execution of the automatic index jobs refines this number to an optimum value for the index. This optimum value can change with the execution of each index rebuilding job depending on the current load on the server and the database, and the type of data being inserted, updated, or deleted from the source table.

    Hope this helps,

    LC
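A rebuild along the lines LC describes, with an explicit fill factor and SORT_IN_TEMPDB, might look like this. The index and table names are hypothetical, and the fill factor of 80 is purely illustrative; LC's point is that the right value has to be found per index over successive runs.

```sql
-- Rebuild with an explicit fill factor, sorting in tempdb so the
-- sort runs don't land in the database's own data file.
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders
REBUILD WITH (FILLFACTOR = 80, SORT_IN_TEMPDB = ON);
```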

WAG? If that's what I think it means then yes, most definitely! However, I am open to suggestions all round, which is the reason for the post in the first place...

    As for monitoring, we are tracking index state on a daily basis. Page splits was a no but will soon be a yes 🙂 anything else we should be looking out for?

LC - Thanks for your comments. Should I read that as rebuild should be used in place of re-org, given the extra time taken and the need to run update stats?

  • Smellypom (11/5/2010)


    As for monitoring, we are tracking index state on a daily basis. Page splits was a no but will soon be a yes 🙂 anything else we should be looking out for?

LC - Thanks for your comments, should I read that as rebuild should be used in place of re-org, given the extra time taken and the need to run update stats?

    Currently, index fragmentation is all I've based my algorithms on. Index fragmentation is, to a large degree, a by-product of page splits. If you rebuild your indexes with an appropriate fill factor, you are helping to reduce the number of page splits which improves overall server performance.

    In answer to your final question: Yes, definitely. I would only rebuild your indexes and never reorganize (defragment) them. If you rebuild your indexes, you won't need to, at that point in time, run a job specifically to update your data statistics. You may wish to update your data statistics independently of your index rebuilding jobs; for various reasons, this may be necessary. If so, I recommend that you use the "sp_UpdateStats" stored procedure. This is an "intelligent" stored procedure that, from what I've read about it, only updates data statistics if they need to be updated. For this reason, it is very efficient and generally executes very quickly.

    An example of my company's SQL job script to execute this stored procedure:

USE Master
EXEC sp_UpdateStats

USE Model
EXEC sp_UpdateStats

USE msdb
EXEC sp_UpdateStats

USE TempDB
EXEC sp_UpdateStats

USE Database1
EXEC sp_UpdateStats

USE Database2
EXEC sp_UpdateStats

USE Database3
EXEC sp_UpdateStats

etc.

    _____________

    We execute this stored procedure on all of our databases every 4 hours.

    LC
