Profiler and execution plan giving different results

  • Hi All,

    I'm stuck on an interesting scenario.

    I have a procedure that I'm trying to optimize; below is what I did.

    I simply copied the code out of the procedure and added marker SELECTs between every SELECT, UPDATE, DELETE, and every other proc or UDF that this proc calls. I've also started a trace for this SPID, capturing the two events below:

    SP:StmtCompleted

    SQL:StmtCompleted

    with the TextData, StartTime, EndTime, Duration, CPU, Reads, etc. columns.
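
    To make it concrete, the shape of the instrumented batch is roughly the sketch below; the table, proc, and variable names here are simplified placeholders, not the real objects.

    -- Marker SELECTs between the real statements make each one easy to
    -- match up against the StmtCompleted rows in the trace.
    SELECT 'step 1: load rows into the table variable' AS marker;

    DECLARE @Stage TABLE (OrderID INT, Amount MONEY);

    INSERT INTO @Stage (OrderID, Amount)
    SELECT o.OrderID, o.Amount
    FROM dbo.Orders AS o
    JOIN dbo.Customers AS c ON c.CustomerID = o.CustomerID;

    SELECT 'step 2: call the nested proc that contains the UPDATE' AS marker;

    EXEC dbo.usp_AdjustOrderStatus;

    SELECT 'step 3: done' AS marker;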

    I've also enabled showing the execution plan in SSMS. When I run all of this as a batch from SSMS, one UPDATE statement in the execution plan pane shows a query cost relative to the batch of 42%. But when I go back to the profiler trace data, that same UPDATE took 0 ms of CPU time, barely 20 reads, and 0 writes, while a few SELECTs and INSERTs into table variables are taking over 400 ms of CPU time and over 20,000 reads.

    The UPDATE statement I'm referring to above belongs to another proc that we call from within this proc.

    What is puzzling me is that when I look at those same SELECTs and INSERTs in the execution plan pane, they show only 10% or 12% of the total batch cost.

    I'm not sure if I'm misinterpreting something.

    Can anyone help me understand this?

  • Hi,

    The query cost relative to batch is based upon the estimated cost, not the actual.

    You mention UDFs; the estimated cost of a UDF is 0.

    That will be the difference.
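
    A minimal sketch to see the effect for yourself (hypothetical objects, not your code): the scalar UDF below burns CPU on every call, but the optimizer prices the call at essentially nothing, so the statement that uses it looks cheap relative to the rest of the batch even though the trace shows the real cost.

    -- Scalar UDF that does real work per call; the plan costs it at ~0.
    CREATE FUNCTION dbo.fn_SlowPerRow (@i INT)
    RETURNS INT
    AS
    BEGIN
        DECLARE @x INT = 0, @n INT = 0;
        WHILE @n < 1000
        BEGIN
            SET @x = @x + @i;
            SET @n = @n + 1;
        END;
        RETURN @x;
    END;
    GO

    -- Looks cheap in the execution plan, expensive in the profiler trace.
    SELECT dbo.fn_SlowPerRow(o.OrderID)
    FROM dbo.Orders AS o;                -- placeholder table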



    Clear Sky SQL
    My Blog

  • Dave Ballantyne (9/27/2011)


    Hi,

    The query cost relative to batch is based upon the estimated cost, not the actual.

    You mention UDFs; the estimated cost of a UDF is 0.

    That will be the difference.

    I know that, but this statement is showing as 42% of the batch cost, and that figure is the largest; none of the other statements show more than 12% of the batch cost. That's as per the execution plan.

    On the other hand, when I check the profiler trace output for the same UPDATE statement, it shows 0 ms of CPU time and 20 reads.

    My question is: how can a part of the batch that accounts for 42% of the total batch cost take 0 ms of CPU and 20 reads, while other queries in that same batch take 20,000+ reads and 400+ ms of CPU time?

  • The costs are estimates. They are generated at compile time based on estimated row counts and there are many, many ways for them to be wrong (stale statistics, parameter sniffing, table variables, user defined functions, etc, etc.)

    Something in your queries, probably those table variables, is causing incorrect estimations and, as a result, incorrect query costs.
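
    A minimal sketch of the table variable problem (made-up object names): the join below is costed as if the table variable held a single row, however many rows are actually in it, and every operator above it inherits that guess.

    DECLARE @Ids TABLE (Id INT PRIMARY KEY);

    -- Load a non-trivial number of rows.
    INSERT INTO @Ids (Id)
    SELECT TOP (100000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
    FROM sys.all_columns AS a
    CROSS JOIN sys.all_columns AS b;

    -- Costed as if @Ids held 1 row, so the join strategy and the
    -- statement's "cost relative to batch" are based on that guess.
    SELECT o.*
    FROM dbo.Orders AS o                 -- placeholder table
    JOIN @Ids AS i ON i.Id = o.OrderID;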

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • GilaMonster (9/27/2011)


    The costs are estimates. They are generated at compile time based on estimated row counts and there are many, many ways for them to be wrong (stale statistics, parameter sniffing, table variables, user defined functions, etc, etc.)

    Something in your queries, probably those table variables, is causing incorrect estimations and, as a result, incorrect query costs.

    This makes a bit more sense now. I'm testing on my test box; if I rebuild all the indexes on the tables involved and update statistics, can I expect to get a better plan, or should I just move ahead and try tuning the queries with the highest CPU and reads as per the profiler trace?

  • Ramji29 (9/27/2011)


    This makes a bit more sense now. I'm testing on my test box; if I rebuild all the indexes on the tables involved and update statistics, can I expect to get a better plan, or should I just move ahead and try tuning the queries with the highest CPU and reads as per the profiler trace?

    Probably not, not while you still have table variables in use. They will cause mis-estimations, incorrect costings and probably sub-optimal plans.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • GilaMonster (9/27/2011)


    Ramji29 (9/27/2011)


    This makes a bit more sense now. I'm testing on my test box; if I rebuild all the indexes on the tables involved and update statistics, can I expect to get a better plan, or should I just move ahead and try tuning the queries with the highest CPU and reads as per the profiler trace?

    Probably not, not while you still have table variables in use. They will cause mis-estimations, incorrect costings and probably sub-optimal plans.

    Then is there any way to handle this?

  • Don't use table variables unless you're using them in a way that doesn't affect the costings (insert into @var and a straight select * from @var). Joins, filters, aggregations, etc. are all going to mess up the estimation, and the estimation errors multiply through the plan.
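
    A sketch of the two patterns (placeholder table names): the first is the use that doesn't hurt the costings, the second is where the one-row guess starts to matter.

    DECLARE @Rows TABLE (CustomerID INT, Amount MONEY);

    INSERT INTO @Rows (CustomerID, Amount)
    SELECT CustomerID, Amount
    FROM dbo.Orders;                     -- placeholder table

    -- Fine: a straight SELECT, no plan decisions depend on the row count.
    SELECT * FROM @Rows;

    -- Risky: the join is costed as if @Rows held one row, so the join type,
    -- memory grant and everything above it are built on that guess.
    SELECT r.CustomerID, c.CustomerName
    FROM @Rows AS r
    JOIN dbo.Customers AS c ON c.CustomerID = r.CustomerID;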

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • GilaMonster (9/27/2011)


    Don't use table variables unless you're using them in a way that doesn't affect the costings (insert into @var and a straight select * from @var). Joins, filters, aggregations, etc. are all going to mess up the estimation, and the estimation errors multiply through the plan.

    Okay, but at least for the first load I need to select from tables using joins to insert into the variable, and from there onwards I'm using that table variable to insert into other table variables.

  • As I said, if you want more accurate estimations, don't use table variables. The insert probably has a fairly accurate cost, but using the table variable to insert into other table variables is going to have an inaccurate estimation and probably a sub-optimal plan.
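
    A sketch of what that chaining looks like to the optimizer (placeholder names): the first INSERT is costed from the statistics on the base tables, but the second one reads from a table variable, so it is costed off the one-row guess regardless of how many rows actually landed in @Stage1, and anything built on @Stage2 afterwards inherits that error.

    DECLARE @Stage1 TABLE (CustomerID INT, Amount MONEY);
    DECLARE @Stage2 TABLE (CustomerID INT, Amount MONEY);

    -- Costed from real statistics on the base tables.
    INSERT INTO @Stage1 (CustomerID, Amount)
    SELECT c.CustomerID, o.Amount
    FROM dbo.Customers AS c              -- placeholder tables
    JOIN dbo.Orders AS o ON o.CustomerID = c.CustomerID;

    -- Costed as if @Stage1 held a single row, however many it really holds.
    INSERT INTO @Stage2 (CustomerID, Amount)
    SELECT CustomerID, Amount
    FROM @Stage1
    WHERE Amount > 1000;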

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
