Weird query plan, should I flush my cache?

  • Yesterday a very complex query started behaving badly.

    After 2 hours I stopped the query, flushed the proc cache and ran the query again. It executed correctly in 5 seconds.

    Indexes and stats were up to date, so I guess that the plan in the cache was either calculated when data volumes were different or based upon outdated statistics.

    This database is published with merge replication and gets populated twice a day by an ETL process. There is no other activity on that database: it is used simply as a collector to distribute data to subscribers. All tables except 10 are download-only, so I decided to append to the ETL process a new step to perform index maintenance and statistics refresh.

    Now I wonder if I should also flush the plan cache in that step, to ensure that the plans are always calculated upon reliable statistics.
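    Just to make the idea concrete, this is roughly what I have in mind for that step (names are placeholders, and the commented-out FREEPROCCACHE call is exactly the part I'm unsure about):

    -- Rough sketch of the post-ETL maintenance step (database name is a placeholder)
    USE CollectorDB;
    -- index maintenance would go here (e.g. ALTER INDEX ... REBUILD per table)
    EXEC sp_updatestats;          -- refresh statistics on all user tables
    -- Optionally clear the plan cache so new plans compile against the fresh statistics:
    -- DBCC FREEPROCCACHE;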

    Suggestions are welcome.

    -- Gianluca Sartori

    If you have decided to flush the proc cache, make sure you flush only the specific cache entries that are problematic (as per your investigation), and leave the other cached plans in place so that you don't affect performance.

    If you flush the whole cache, then each time SQL Server executes a query it has to compile the proc or SQL again, which might hurt performance.

    HTH

    ---------------------------------------------------
    "Thare are only 10 types of people in the world:
    Those who understand binary, and those who don't."

  • free_mascot (2/8/2011)


    If you have decided to flush the proc cache, make sure you flush only the specific cache entries that are problematic (as per your investigation), and leave the other cached plans in place so that you don't affect performance.

    Thank you for your suggestion. I'm running SQL Server 2005, so the only option is running DBCC FREEPROCCACHE for a specific database. The other possibility is marking procedures for recompilation with sp_recompile, but, since I can't determine which procedures might be affected by the changes in statistics and data volumes, I would end up calling it for each procedure in the database, which is not any different from calling FREEPROCCACHE.
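    Just to illustrate what I mean, the per-procedure route would look more or less like this (sketch only, walking sys.procedures with a cursor):

    -- Sketch only: mark every stored procedure in the current database for recompilation
    DECLARE @proc nvarchar(776);
    DECLARE proc_cur CURSOR LOCAL FAST_FORWARD FOR
        SELECT SCHEMA_NAME(schema_id) + N'.' + name FROM sys.procedures;
    OPEN proc_cur;
    FETCH NEXT FROM proc_cur INTO @proc;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        EXEC sp_recompile @objname = @proc;
        FETCH NEXT FROM proc_cur INTO @proc;
    END
    CLOSE proc_cur;
    DEALLOCATE proc_cur;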

    If you flush the whole cache, then each time SQL Server executes a query it has to compile the proc or SQL again, which might hurt performance.

    Thanks for the heads up. It's something I considered to be acceptable in this kind of scenario, especially because there is no transactional activity on this database. The only processes that access and modify this data are ETL and merge replication processes.

    -- Gianluca Sartori

  • sp_recompile should be your preferred method of regenerating a QEP. In a production environment you should rarely if ever have a need to flush the whole cache.
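    For example (the table name below is just a placeholder), pointing sp_recompile at a table marks every procedure and trigger that references it for recompilation on its next execution:

    -- Placeholder table name; affected procs/triggers get a fresh plan on next run
    EXEC sp_recompile N'dbo.BigStagingTable';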

  • If you know which procedure is causing problems, get that query's plan handle and use DBCC FREEPROCCACHE(plan_handle). That way you're not nuking the cache, but going for surgical strikes.

    ----------------------------------------------------
    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood..." - Theodore Roosevelt
    The Scary DBA
    Author of: SQL Server 2017 Query Performance Tuning, 5th Edition and SQL Server Execution Plans, 3rd Edition
    Product Evangelist for Red Gate Software

  • You can get the plan_handle using this.

    SELECT p.plan_handle
         , p.usecounts
         , p.cacheobjtype
         , p.objtype
         , st.[text]
         , qp.query_plan AS QEP
         , DB_NAME(qp.dbid) AS [Database]
    FROM sys.dm_exec_cached_plans AS p
    CROSS APPLY sys.dm_exec_sql_text(p.plan_handle) AS st
    CROSS APPLY sys.dm_exec_query_plan(p.plan_handle) AS qp
    WHERE p.cacheobjtype = 'Compiled Plan'
      AND p.objtype = 'Proc'
      -- optionally filter st.[text] to find the procedure you're after, e.g.
      -- AND st.[text] LIKE '%MyProcName%'
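    On SQL Server 2008 and later the handle returned above can then be passed straight to DBCC FREEPROCCACHE to evict just that plan (the procedure name below is a placeholder):

    -- SQL Server 2008+ only; the procedure name is a placeholder
    DECLARE @plan_handle varbinary(64);
    SELECT TOP (1) @plan_handle = p.plan_handle
    FROM sys.dm_exec_cached_plans AS p
    CROSS APPLY sys.dm_exec_sql_text(p.plan_handle) AS st
    WHERE p.objtype = 'Proc'
      AND st.[text] LIKE N'%MyProcName%';
    IF @plan_handle IS NOT NULL
        DBCC FREEPROCCACHE (@plan_handle);   -- evicts just that one plan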

  • Grant Fritchey (2/9/2011)


    If you know which procedure is causing problems, get that query's plan handle and use DBCC FREEPROCCACHE(plan_handle). That way you're not nuking the cache, but going for surgical strikes.

    Thanks for your suggestion, Grant. Unfortunately, I'm still on SQL Server 2005 and I can't use DBCC FREEPROCCACHE with a specific plan handle. I'm bound to use DBCC FREEPROCCACHE or sp_recompile.

    On the other hand, I can't even predict which procedure will go nuts. I think this is because we're still populating the database from scratch, and things will be much more stable once the db is filled with a typical data load.

    -- Gianluca Sartori

  • MysteryJimbo (2/9/2011)


    You can get the plan_handle using this.

    ...

    Thank you, Jimbo. I know how to retrieve the query plan, what I can't predict is whether it's going to be good or bad...

    -- Gianluca Sartori

  • Gianluca Sartori (2/9/2011)


    Grant Fritchey (2/9/2011)


    If you know which procedure is causing problems, get that query's plan handle and use DBCC FREEPROCCACHE(plan_handle). That way you're not nuking the cache, but going for surgical strikes.

    Thanks for your suggestion, Grant. Unfortunately, I'm still on SQL Server 2005 and I can't use DBCC FREEPROCCACHE with a specific plan handle. I'm bound to use DBCC FREEPROCCACHE or sp_recompile.

    On the other hand, I can't even predict which procedure will go nuts. I think this is because we're still populating the database from scratch, and things will be much more stable once the db is filled with a typical data load.

    Darn old versions of SQL. Upgrade already!

    ----------------------------------------------------
    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood..." - Theodore Roosevelt
    The Scary DBA
    Author of: SQL Server 2017 Query Performance Tuning, 5th Edition and SQL Server Execution Plans, 3rd Edition
    Product Evangelist for Red Gate Software

  • Being an ETL system, it sounds like all of your queries are for batch jobs. If that's true, forcing each query to recompile (as some of the others suggested) wouldn't cost that much time and you probably wouldn't run into this problem again.
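    For example (the procedure name is a placeholder), the batch procedures could simply be executed or defined WITH RECOMPILE so a fresh plan is built on every run:

    -- Placeholder procedure name; a new plan is compiled on every execution
    EXEC dbo.usp_LoadDailyExtract WITH RECOMPILE;
    -- or bake it into the definition:
    -- CREATE PROCEDURE dbo.usp_LoadDailyExtract WITH RECOMPILE AS ...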

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.
    "Change is inevitable... change for the better is not".

    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
    Intro to Tally Tables and Functions

  • Thank you, Jeff. I will try and keep you posted.

    -- Gianluca Sartori

  • I spent the last 2 days investigating the issue and I found something really interesting.

    BOL states that updating statistics causes queries to recompile:

    UPDATE STATISTICS:


    Updating statistics ensures that queries compile with up-to-date statistics. However, updating statistics causes queries to recompile. We recommend not updating statistics too frequently because there is a performance tradeoff between improving query plans and the time it takes to recompile queries. The specific tradeoffs depend on your application.

    OK, again, trust but verify.

    Here's how I tried to verify:

    -- CREATE A TEMP TABLE TO HOLD CACHED PLANS DATA
    CREATE TABLE #stats (
        DatabaseName      nvarchar(128),
        ObjType           nvarchar(16),
        CachedPlansBefore int,
        CachedPlansAfter  int
    )

    -- POPULATE TEMP TABLE WITH CACHED PLANS COUNTS BY DATABASE / OBJECT TYPE
    INSERT INTO #stats (DatabaseName, ObjType, CachedPlansBefore)
    SELECT ISNULL(DB_NAME(dbid), '<NO DATABASE>') AS DatabaseName, objtype, COUNT(*) AS CachedPlans
    FROM sys.dm_exec_cached_plans AS A
    CROSS APPLY sys.dm_exec_sql_text(plan_handle) AS B
    WHERE cacheobjtype = 'Compiled Plan'
    GROUP BY DB_NAME(dbid), objtype

    -- UPDATE STATISTICS: it should also cause recompilation
    EXEC sp_updatestats 'RESAMPLE'

    -- REFRESH CACHED PLANS COUNTS
    UPDATE st
    SET CachedPlansAfter = ISNULL(CachedPlans, 0)
    FROM #stats AS st
    LEFT JOIN (
        SELECT ISNULL(DB_NAME(dbid), '<NO DATABASE>') AS DatabaseName, objtype, COUNT(*) AS CachedPlans
        FROM sys.dm_exec_cached_plans AS A
        CROSS APPLY sys.dm_exec_sql_text(plan_handle) AS B
        WHERE cacheobjtype = 'Compiled Plan'
        GROUP BY DB_NAME(dbid), objtype
    ) AS upd
        ON upd.DatabaseName = st.DatabaseName
       AND upd.ObjType = st.ObjType

    -- SELECT OUT RESULTS
    SELECT *
    FROM #stats
    ORDER BY 1, 2

    After running this code, I saw NO CHANGE in cached plans counts.

    Am I missing something? Maybe invalidating plans doesn't mean deleting them from the cache?
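    One way to check, I suppose (sketch only, assuming the counter behaves as documented), would be to look at plan_generation_num in sys.dm_exec_query_stats, which should increase when a cached plan is recompiled in place rather than thrown away:

    -- Plans that have been recompiled in place show plan_generation_num > 1
    SELECT TOP (50)
           qs.plan_generation_num,
           qs.execution_count,
           st.[text]
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    WHERE qs.plan_generation_num > 1
    ORDER BY qs.plan_generation_num DESC;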

    -- Gianluca Sartori

  • Gianluca Sartori (2/9/2011)


    Thank you, Jeff. I will try and keep you posted.

    Just a bit of feedback... I ran into a similar problem with an SCD2 trigger I was building. OPTION(RECOMPILE) fixed it just fine. I don't know if that will fix it for you, but it is another option.
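    To show where the hint goes (table and column names are made up), it just sits at the end of the statement:

    -- Placeholder names; the hint forces a fresh plan for this statement every time
    SELECT c.CustomerKey, c.CustomerName
    FROM dbo.DimCustomer AS c
    WHERE c.IsCurrent = 1
    OPTION (RECOMPILE);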

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.
    "Change is inevitable... change for the better is not".

    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
    Intro to Tally Tables and Functions
