Slow queries after upgrading from SQL 2000 to SQL 2008

  • just one small idea - grab the STATISTICS IO output. That will at least point at the really heavy tables - it might only be one of them. That may give a clue as to how you can limit the whole result set in the first place and trim a lot of fat. Then look at the fields from those tables that are used, and the restriction clauses. Once you have that info, look at the indexes to see where they could be used but aren't.

    This may mean you need to go beyond a hint and give it a full rewrite in order to grab the smallest number of rows first - e.g. the submittals for the whole period, perhaps - and bung 'em in a temp table, but don't worry about that till you need to!

    I found on my upgrade that it was especially bad on any views - we ended up doing a bunch of table and index scans that shouldn't have been there, and forcing it to use a more appropriate index made a massive difference. Good luck - it looks like a nasty query to rewrite :s
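
    The STATISTICS IO suggestion above can be tried as a sketch like this (the table, columns and query are placeholders, not from the thread):

    ```sql
    -- Turn on per-table IO and timing info for the session, then run
    -- the slow query; the Messages tab then shows "logical reads" per
    -- table, so the really heavy tables stand out.
    SET STATISTICS IO ON;
    SET STATISTICS TIME ON;

    SELECT s.SubmittalID, s.SubmittedOn   -- placeholder columns
    FROM dbo.Submittals AS s              -- placeholder table
    WHERE s.SubmittedOn >= '20091101';

    SET STATISTICS IO OFF;
    SET STATISTICS TIME OFF;
    ```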

  • If it's one specific query, I would look at the Database Tuning Advisor. From my experience of upgrading to SQL 2008, this is what you should do.

    Update statistics on every table with fullscan; this should only be done after the upgrade. It will take some time and will also hit your database server, depending on how big it is.

    Rebuild each and every index in your database - rebuild, not reorganise. Note that you should update statistics before rebuilding indexes.

    I did an upgrade to SQL 2008 as well, and in one case a query that used to take seconds to run started taking 18 minutes. In the end it was a very badly written query that still worked, but rather than trying to reinvent the wheel, I used the DTA and got it down to less than a minute.

    So as part of the upgrade, some aspects will run faster, some will be slower and some will remain the same - just make sure testing covers as much as possible.
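
    The sequence described above (fullscan statistics, then a full rebuild rather than a reorganise) might be sketched per table like this; dbo.Submittals is a placeholder name:

    ```sql
    -- Update all statistics on the table by scanning every row,
    -- then rebuild (not reorganise) all of its indexes.
    UPDATE STATISTICS dbo.Submittals WITH FULLSCAN;
    ALTER INDEX ALL ON dbo.Submittals REBUILD;
    ```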

  • Thanks, guys! I'll try checkdb, updateusage, and updatestatistics again. I already did that, but I did it using a maintenance plan in SSMS, so maybe doing it this way in SQL will make the difference.

    Again, though - I really don't want to go down the road of optimizing this query. I appreciate the suggestions, but I can handle that myself. The issue is that this is not the only query I've identified that is orders of magnitude slower. I've found 4 such queries so far, and we have hundreds that we use in our application. So even if I optimized all the queries I know of, there might be others that I don't know of. I'm looking for an underlying issue that will fix all of these. Otherwise I may have queries that used to take 1 second and now take 5 seconds, and nobody complains and we never know.

  • Of course it is good to try to resolve the general findings for similar objects.

    On the other hand, consider it like an actual engine: the more high-performing it is, the bigger the penalty for not tuning it the proper way.

    Another "common" issue after an upgrade is the engine's sensitivity to the effects of parameter sniffing.

    Since we don't have the full sproc, there is no way for us to check whether this may come into scope.
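
    For reference, two common mitigations for parameter sniffing in SQL 2008 look like this; the procedure and table names are made up for illustration:

    ```sql
    -- OPTIMIZE FOR UNKNOWN makes the optimizer plan from average
    -- density instead of the sniffed parameter value; swapping it
    -- for OPTION (RECOMPILE) would instead compile a fresh plan
    -- per execution.
    CREATE PROCEDURE dbo.GetSubmittals      -- hypothetical procedure
        @FromDate datetime
    AS
    BEGIN
        SELECT SubmittalID, SubmittedOn
        FROM dbo.Submittals                 -- hypothetical table
        WHERE SubmittedOn >= @FromDate
        OPTION (OPTIMIZE FOR UNKNOWN);
    END;
    ```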

    Johan

    Learn to play, play to learn !

    Dont drive faster than your guardian angel can fly ...
    but keeping both feet on the ground wont get you anywhere :w00t:

    - How to post Performance Problems
    - How to post data/code to get the best help

    - How to prevent a sore throat after hours of presenting ppt

    press F1 for solution, press shift+F1 for urgent solution 😀

    Need a bit of Powershell? How about this

    Who am I ? Sometimes this is me but most of the time this is me

  • dpt,

    I have a suggestion: run the Performance Dashboard to find the missing indexes, create all the missing indexes, and test it in your testing environment.

    If it's good, go ahead.
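
    The missing-index suggestions that the Performance Dashboard surfaces come from DMVs, so a sketch like this lists them directly; review them by hand rather than creating them all:

    ```sql
    -- List missing-index suggestions recorded since the last restart,
    -- roughly ordered by estimated usefulness.
    SELECT TOP (25)
           d.statement AS table_name,
           d.equality_columns,
           d.inequality_columns,
           d.included_columns,
           s.user_seeks,
           s.avg_user_impact
    FROM sys.dm_db_missing_index_details AS d
    JOIN sys.dm_db_missing_index_groups AS g
         ON g.index_handle = d.index_handle
    JOIN sys.dm_db_missing_index_group_stats AS s
         ON s.group_handle = g.index_group_handle
    ORDER BY s.user_seeks * s.avg_user_impact DESC;
    ```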

    Muthukkumaran Kaliyamoorthy
    https://www.sqlserverblogforum.com/

  • muthukkumaran (12/16/2009)


    dpt,

    ... create all the missing indexes and test it in your testing environment ...

    I wouldn't do that - you'll end up with way too many similar indexes!

    You need to investigate the index proposals and pick the one(s) you think will serve your system best.

    Check the suggested ones having the same key columns, and investigate / add the included columns later.

    Creating indexes using only the key columns may already give very good filtering. Adding the included columns may avoid physical IO to support your predicates.

    Johan


  • I wouldn't do that - you'll end up with way too many similar indexes!

    Yes, I agree with you.

    You need to investigate the index proposals and pick the one(s) you think will serve your system best.

    Check the suggested ones having the same key columns, and investigate / add the included columns later.

    Creating indexes using only the key columns may already give very good filtering. Adding the included columns may avoid physical IO to support your predicates.

    Thanks for info ALZDBA

    That's why I told dpt to try it in your testing server first - because we don't know his application (SPs) 😀

    Muthukkumaran Kaliyamoorthy
    https://www.sqlserverblogforum.com/

  • Thanks, but again - please, no more suggestions about how to optimize this particular query. I don't want to go down that road.

    I am going to try Steve's suggestion earlier:

    1. DBCC CHECKDB([dbname]) with data_purity

    2. DBCC UPDATEUSAGE([dbname]) -- I saw you ran this already, but this is usually required when there are data purity issues.

    3. UPDATE STATISTICS [tableschema].[tablename] WITH FULLSCAN -- I saw you updated statistics, but if you used sp_updatestats, it would only use a sample of the data, thus not allowing SQL to build the most optimal query plan.

    I am hopeful that will help. I did the update stats with fullscan before, but didn't do the checkdb with data purity. I have to wait until the weekend to do this; maybe it will help. If not, I guess I'll open a ticket with Microsoft to see if they can point to an underlying issue that will either fix this or help identify the general class of queries that I need to focus on for a rewrite. This is pretty annoying, though - that SQL 2008 seems to not be fully backward compatible with 2000.

    - Dave

  • dpt (12/10/2009)


    I upgraded a server from 2000 to 2008. For the most part, everything has gone fine. There is one issue. We have a number of long running queries that are used in reports. These queries run overnight and select their results into tables that the reports feed off of. Most of these queries run fine on 2008. There are two in particular, though that are waaay slower - they went from 30 seconds or so on average to over an hour. Any idea why this would happen or how I can fix? I did rebuild all the indexes on the affected database and also updated statistics. Anything else I should do? Again, other queries are fine, including some that used to take longer than these two.

    I did try the "Database Engine Tuning Advisor" - and that recommended a lot of indexes. I'm not familiar with the DETA - I'm used to the Index Tuning Wizard in SQL 2000. The DETA recommended a ton of indexes and I'm not sure how to tell which ones are most important. But I'm thinking this shouldn't even be neccessary since the queries had all the indexes they needed to run quickly on SQL 2000...

    One thing that may be relevant - the "select into" selects from tables in one database into tables in another database. Did something change in SQL 2008 that would cause problems in the execution plans for this type of query?

    Any advice?

    - Dave

    Quoting your original post for a reason. Just because everything worked properly under SQL Server 2000 with the indexing scheme there doesn't mean there couldn't be changes with the move to SQL Server 2008.

    When we moved from SQL Server 2000 to SQL Server 2005 we experienced application time-outs when opening PO's in our finance system in SQL Server 2005 where we had no problems in SQL Server 2000. I used DTA in an attempt to help solve the problem but not with very much luck. It took our PeopleSoft Admin using PeopleSoft trace to trace the app from a different point of view. That trace found a table that had NO INDEXES defined. Once we found that table and properly indexed it, our time-out problem went away.

  • S.K. (12/15/2009)


    I apologize if I missed some of the information posted earlier, but as a general practice when upgrading, we do the following at my company:

    1. DBCC CHECKDB([dbname]) with data_purity

    2. DBCC UPDATEUSAGE([dbname]) -- I saw you ran this already, but this is usually required when there are data purity issues.

    3. UPDATE STATISTICS [tableschema].[tablename] WITH FULLSCAN -- I saw you updated statistics, but if you used sp_updatestats, it would only use a sample of the data, thus not allowing SQL to build the most optimal query plan.

    This query is database specific, but will generate all of the update statistics statements for the particular database:

    select 'UPDATE STATISTICS ' + QUOTENAME(s.name) + '.' + quotename(t.name) + ' with fullscan'
    from sys.tables t
    inner join sys.schemas s
        on t.schema_id = s.schema_id

    4. Rebuild all indexes.

    In my experience, the update statistics with fullscan has been the most effective when doing upgrades and seeing extreme slowness right after the upgrade.

    Regards,

    Steve

    How would you suggest I do the index rebuilds? I normally just do it with a maintenance plan (see the attached screenshot). Does this work, or would you suggest I do it a more advanced way via T-SQL?
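
    One T-SQL alternative to the maintenance plan, sketched in the same generate-a-script style as the UPDATE STATISTICS query quoted above, would be:

    ```sql
    -- Generate one ALTER INDEX ... REBUILD statement per user table;
    -- run the generated output during a maintenance window.
    SELECT 'ALTER INDEX ALL ON ' + QUOTENAME(s.name) + '.'
           + QUOTENAME(t.name) + ' REBUILD;'
    FROM sys.tables AS t
    INNER JOIN sys.schemas AS s
            ON t.schema_id = s.schema_id;
    ```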

  • Hello, did you ever find a resolution for this issue?

    thank you

  • I'm having a very similar issue.

    We have migrated from 2005 to 2008, though the database is still at compatibility level 80.

    Some views have become slow and have different execution plans - they changed from an index seek to an index scan on the same index.

    An nvarchar parameter is passed to the view.

    And when I change the type to varchar, the execution plan goes back to an index seek.

    I have investigated the server, database and login SET options (e.g. ANSI_NULLS, ANSI_PADDING) and they are all equal. But when the parameter used with the view is nvarchar, I noticed that the SET options implicitly change.

    Taking the query underneath the view and passing the nvarchar parameter directly, the plan goes back to using the index seek and the SET options go back to being equal.

    Another strange behavior: when I take the query underneath the view and use it as a derived table, passing the nvarchar parameter (e.g. select * from (viewquery) where column = @nvarcharparameter), the SET options again implicitly change between the versions.

    So I think it is something to do with CTEs, views and derived tables (subqueries) and tempdb.

    Does anyone have any suggestions as to what it could be?
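
    One possible explanation for the seek-to-scan change described above is an implicit conversion: when a varchar column is compared to an nvarchar parameter, varchar has lower data type precedence, so the column side gets converted and the index seek can be lost. A minimal sketch (all names are placeholders):

    ```sql
    DECLARE @p nvarchar(50);
    SET @p = N'ABC';

    -- Column is varchar, parameter is nvarchar: the column may be
    -- implicitly converted, which can force an index scan.
    SELECT * FROM dbo.SomeTable WHERE VarcharColumn = @p;

    -- Converting the parameter to match the column type keeps the
    -- comparison sargable, so a seek is possible again.
    SELECT * FROM dbo.SomeTable WHERE VarcharColumn = CONVERT(varchar(50), @p);
    ```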
