Updating Stats

  • GilaMonster (11/10/2011)


    Start with the auto_stats events. See how often they really happen.

    For queries affected by poor stats, use normal monitoring and look for queries with erratic durations. Investigate and see whether stats updates fix them (this takes time). For those that you've identified, schedule more frequent stats updates.

    It looks like the auto stats event occurs on average about twice a minute (of course this was only a small trace; it should be rerun at a time when volume is higher), with a duration of 3 microseconds. This covers the progression through EventSubClass 2, 3, and 4 with asynchronous updates on. Updating stats takes about 7 minutes, which we do daily in the early morning.
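    For reference, here is a minimal sketch of pulling those events out of a server-side trace file so their frequency and duration can be measured; the file path is an assumption, and EventClass 58 is the Auto Stats event:

    -- Read the trace file and filter to Auto Stats events (EventClass 58).
    -- Duration is reported in microseconds in trace files.
    SELECT StartTime, Duration, EventSubClass, DatabaseName, ObjectID
    FROM fn_trace_gettable(N'C:\Traces\auto_stats.trc', DEFAULT)
    WHERE EventClass = 58
    ORDER BY StartTime;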

    EDIT: Before async was enabled, reports ran really fast directly after statistics were updated, but after about 2 hours (2 million rows on average) the queries slowed down. She determined that it was a wait on the statistics update, and the change we made has dramatically improved performance.
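    For anyone following along, the change described above is the database-level asynchronous statistics option; a minimal sketch (the database name is a placeholder):

    -- Let queries compile with the existing statistics instead of waiting
    -- for the update to finish; the update runs in the background.
    ALTER DATABASE [YourDatabase] SET AUTO_UPDATE_STATISTICS_ASYNC ON;
    -- AUTO_UPDATE_STATISTICS must also be ON for the async option to take effect.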

    Thanks,

    Jared
    CE - Microsoft

  • Thanks a lot Gail. It is always nice to see your detailed responses, which are so easy to understand.

    It is getting interesting, as I expected. Since the read uncommitted isolation level can produce dirty reads, then, if I am not getting this wrong, even updating stats with a full scan could lead to a bad execution plan?

  • Extremely, extremely unlikely that you'd get a bad plan from a dirty read during a stats recalculation. Remember that your million-row table is going to get condensed into a histogram with hundreds of rows (see the sketch after the example below). A handful of dirty reads is noise. There are scenarios you could construct that would produce a bad plan, but you'd have to do something like this:

    BEGIN TRAN
    DELETE FROM t1 WHERE date_col < '1/1/2011'
    -- after the delete finishes, update stats in a different connection, then roll back
    ROLLBACK TRAN
    GO

    -- query that might get a bad plan
    SELECT ... FROM t1 INNER JOIN t2 ON ...
    WHERE date_col BETWEEN '9/1/2010' AND '12/31/2010'

    Pretty contrived example.
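    To see the condensation mentioned above for yourself, here is a sketch (the table and statistics names are placeholders); the histogram holds at most 200 steps no matter how many rows the table has:

    -- Show the histogram for one statistics object; note how few steps
    -- summarize the entire table.
    DBCC SHOW_STATISTICS ('dbo.t1', 'ix_t1_date_col') WITH HISTOGRAM;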

    Much more common is OLTP data that is inserted in ascending order, and stats being out of date for the largest values. See trace flags 2389 and 2390.
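    A sketch of turning those flags on globally (they can also be set as startup parameters, e.g. -T2389); the comments reflect their commonly documented behavior:

    -- 2389: build quick stats for leading index columns branded ascending
    -- 2390: extend the same behavior to columns whose trend is still unknown
    DBCC TRACEON (2389, 2390, -1);  -- -1 applies the flags globally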

  • Usman Butt (11/10/2011)


    Since the read uncommitted isolation level can produce dirty reads, then, if I am not getting this wrong, even updating stats with a full scan could lead to a bad execution plan?

    As I said earlier, "Doesn't matter. Stats are an inaccurate approximation of the data anyway"
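    For completeness, the full-scan update being discussed looks like this sketch (the table name is a placeholder):

    -- Rebuild the statistics by reading every row instead of a sample;
    -- dirty reads during the scan still only shift the histogram by noise.
    UPDATE STATISTICS dbo.t1 WITH FULLSCAN;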

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
