Need to update stats when CPU is high for a particular Query

  • Hi Folks,

I need some suggestions on this task. A query keeps getting a bad plan, which drives CPU and the number of reads very high.

Updating stats and rebuilding the indexes resolves the issue.

So now I need to trigger the update-stats job whenever this particular query's CPU time is greater than 500ms. Any suggestions on implementing this?

    Thanks in advance

  • nagkarjun1 (4/15/2016)


Hi Folks,

I need some suggestions on this task. A query keeps getting a bad plan, which drives CPU and the number of reads very high.

Updating stats and rebuilding the indexes resolves the issue.

So now I need to trigger the update-stats job whenever this particular query's CPU time is greater than 500ms. Any suggestions on implementing this?

    Thanks in advance

Don't manage indexes and statistics based on CPU usage; do it based on the table's activity. If indexes get too fragmented, you should consider redesigning them.
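As a sketch of what "based on the table's activity" can look like in practice: sys.dm_db_stats_properties (SQL Server 2008 R2 SP2 and later) exposes a per-statistic modification counter. The table name dbo.Orders below is a placeholder.

```sql
-- List each statistic on a table with its last update time and the number of
-- row modifications since then; a large modification_counter suggests the
-- statistic is stale. dbo.Orders is a placeholder table name.
SELECT  s.name               AS stats_name,
        sp.last_updated,
        sp.rows,
        sp.modification_counter
FROM    sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE   s.object_id = OBJECT_ID(N'dbo.Orders')
ORDER BY sp.modification_counter DESC;
```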

    Luis C.
    General Disclaimer:
    Are you seriously taking the advice and code from someone from the internet without testing it? Do you at least understand it? Or can it easily kill your server?

    How to post data/code on a forum to get the best help: Option 1 / Option 2
  • nagkarjun1 (4/15/2016)


Hi Folks,

I need some suggestions on this task. A query keeps getting a bad plan, which drives CPU and the number of reads very high.

Updating stats and rebuilding the indexes resolves the issue.

So now I need to trigger the update-stats job whenever this particular query's CPU time is greater than 500ms. Any suggestions on implementing this?

    Thanks in advance

1. Are you sure that "updating stats and rebuilding indexes" is fixing the problem? I suspect one of those two things is solving it, not both.

Does just updating the stats resolve the problem? Why not try updating the stats first? If that works, you've identified the problem; if not, then do the index rebuild. Bad stats will cause a bad query plan; badly fragmented indexes will cause the optimizer not to use an index that would otherwise improve performance.

2. Are your stats set to update automatically? If not, your solution may be as simple as changing that one setting.
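To illustrate the order of operations suggested above (a sketch only; dbo.Orders is a placeholder table name):

```sql
-- Step 1: update statistics only, with a full scan, and retest the query.
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;

-- Step 2 (only if step 1 didn't help): rebuild the indexes. Note that a
-- rebuild also refreshes the statistics on the rebuilt indexes.
ALTER INDEX ALL ON dbo.Orders REBUILD;

-- And to check whether auto-update of statistics is on for the database:
SELECT DATABASEPROPERTYEX(DB_NAME(), 'IsAutoUpdateStatistics') AS auto_update_stats;
```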

"I can't stress enough the importance of switching from a sequential files mindset to set-based thinking. After you make the switch, you can spend your time tuning and optimizing your queries instead of maintaining lengthy, poor-performing code."

    -- Itzik Ben-Gan 2001

Thanks for the replies.

There is heavy usage on the tables, and manually updating stats resolves the issue. It's happening frequently, so I'm thinking about automating it.

The idea is to trigger the update-stats job whenever the query's CPU climbs.

  • Alan.B (4/15/2016)


    nagkarjun1 (4/15/2016)


Hi Folks,

I need some suggestions on this task. A query keeps getting a bad plan, which drives CPU and the number of reads very high.

Updating stats and rebuilding the indexes resolves the issue.

So now I need to trigger the update-stats job whenever this particular query's CPU time is greater than 500ms. Any suggestions on implementing this?

    Thanks in advance

1. Are you sure that "updating stats and rebuilding indexes" is fixing the problem? I suspect one of those two things is solving it, not both.

Does just updating the stats resolve the problem? Why not try updating the stats first? If that works, you've identified the problem; if not, then do the index rebuild. Bad stats will cause a bad query plan; badly fragmented indexes will cause the optimizer not to use an index that would otherwise improve performance.

Just a side note: rebuilding indexes also updates stats. That's why the suggestion is to update stats first and see if that fixes the problem; if an index is rebuilt, there's no need to update its stats manually.

2. Are your stats set to update automatically? If not, your solution may be as simple as changing that one setting.

Be aware that this option is not 100% reliable. By default, stats are only auto-updated after roughly 20% of the rows plus 500 have been modified; for a million-row table, that means about 200,500 changed rows before an automatic update kicks in. Even with AUTO_UPDATE_STATISTICS set to ON, it's a good idea to update them manually.

    EDIT: Further reference on statistics maintenance https://www.pythian.com/blog/sql-server-statistics-maintenance-and-best-practices/
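The classic threshold described above can be compared against the live modification counters (a sketch; dbo.Orders is a placeholder, and the exact behaviour varies by version and trace flag 2371, so treat the formula as an approximation):

```sql
-- Classic auto-update threshold: about 500 rows plus 20% of the table.
-- For a 1,000,000-row table: 500 + 0.20 * 1,000,000 = 200,500 modified rows.
SELECT  s.name               AS stats_name,
        sp.rows,
        sp.modification_counter,
        500 + 0.20 * sp.rows AS approx_auto_update_threshold
FROM    sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE   s.object_id = OBJECT_ID(N'dbo.Orders');
```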

Hey Luis - Auto update stats is already set to true, but the CPU and number of reads for the bad plan are high enough to push overall CPU to 100%, which causes performance issues.

I'm updating stats manually whenever this bad plan causes trouble. Now I want the update-stats job to fire automatically whenever the query's CPU time reaches 500ms.
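One way the check described here could be sketched (as a scheduled Agent job step, not a true trigger): read the cached plan's average CPU from sys.dm_exec_query_stats and start the maintenance job when it crosses the threshold. The LIKE pattern and job name are placeholders, and this is untested against a live workload:

```sql
-- Average worker time for one identifiable query, converted from
-- microseconds to milliseconds.
DECLARE @avg_cpu_ms bigint;

SELECT TOP (1)
       @avg_cpu_ms = qs.total_worker_time / qs.execution_count / 1000
FROM   sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE  st.text LIKE N'%/* problem query */%'  -- placeholder: text that identifies the query
ORDER BY qs.last_execution_time DESC;

-- Kick off the stats-update job when the average crosses 500ms.
IF @avg_cpu_ms > 500
    EXEC msdb.dbo.sp_start_job @job_name = N'UpdateStats_Orders';  -- placeholder job name
```

Note this inherits all the caveats raised in the thread: plan cache entries can be evicted, and the job only reacts after the query has already run slowly.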

  • nagkarjun1 (4/15/2016)


Hey Luis - Auto update stats is already set to true, but the CPU and number of reads for the bad plan are high enough to push overall CPU to 100%, which causes performance issues.

I'm updating stats manually whenever this bad plan causes trouble. Now I want the update-stats job to fire automatically whenever the query's CPU time reaches 500ms.

Your time may be better spent examining how the query is constructed and, possibly, making it less reliant on the stats being "perfect".

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.
    "Change is inevitable... change for the better is not".

    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
    Intro to Tally Tables and Functions

  • Jeff Moden (4/15/2016)


    nagkarjun1 (4/15/2016)


Hey Luis - Auto update stats is already set to true, but the CPU and number of reads for the bad plan are high enough to push overall CPU to 100%, which causes performance issues.

I'm updating stats manually whenever this bad plan causes trouble. Now I want the update-stats job to fire automatically whenever the query's CPU time reaches 500ms.

Your time may be better spent examining how the query is constructed and, possibly, making it less reliant on the stats being "perfect".

    My thoughts exactly. Performance tuning the query may take some work, but it'll probably be worth it in the end. On the other hand, it could be a simple fix.

If you attempt to update statistics as part of the procedure, you're going to have to wait until they're updated before continuing with your query. Yes, you might standardize the wait to a more predictable value, but my guess is that it'll be worse than it has to be. Besides, if you want to know how long a query takes to run, doesn't that mean you have to run it to find out? 😛

    This brings us back to tuning the query itself.

  • Luis Cazares (4/15/2016)


    Alan.B (4/15/2016)


    nagkarjun1 (4/15/2016)


Hi Folks,

I need some suggestions on this task. A query keeps getting a bad plan, which drives CPU and the number of reads very high.

Updating stats and rebuilding the indexes resolves the issue.

So now I need to trigger the update-stats job whenever this particular query's CPU time is greater than 500ms. Any suggestions on implementing this?

    Thanks in advance

1. Are you sure that "updating stats and rebuilding indexes" is fixing the problem? I suspect one of those two things is solving it, not both.

Does just updating the stats resolve the problem? Why not try updating the stats first? If that works, you've identified the problem; if not, then do the index rebuild. Bad stats will cause a bad query plan; badly fragmented indexes will cause the optimizer not to use an index that would otherwise improve performance.

Just a side note: rebuilding indexes also updates stats. That's why the suggestion is to update stats first and see if that fixes the problem; if an index is rebuilt, there's no need to update its stats manually.

2. Are your stats set to update automatically? If not, your solution may be as simple as changing that one setting.

Be aware that this option is not 100% reliable. By default, stats are only auto-updated after roughly 20% of the rows plus 500 have been modified; for a million-row table, that means about 200,500 changed rows before an automatic update kicks in. Even with AUTO_UPDATE_STATISTICS set to ON, it's a good idea to update them manually.

    EDIT: Further reference on statistics maintenance https://www.pythian.com/blog/sql-server-statistics-maintenance-and-best-practices/

Yep and yep. Your reply was not there when I started mine. I agree with everything you said. I generally recommend that that setting be on and that stats also be updated manually on a regular basis (as part of nightly maintenance at a minimum).

    For this issue there's also AUTO_UPDATE_STATISTICS_ASYNC which could be a solution (I haven't played around with it though which is why I didn't recommend it).

    The article you linked to is a good read BTW.
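For reference, enabling the asynchronous option mentioned above is a one-line database setting (shown against a placeholder database name); the query that trips the stats threshold then runs with the old statistics instead of waiting for the update to finish:

```sql
ALTER DATABASE [YourDatabase]   -- placeholder database name
SET AUTO_UPDATE_STATISTICS_ASYNC ON;
```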

