Times stats are different for a query

  • Hi all,

    I'd like to add new / tune existing indexes on a specific table in order to achieve some performance improvements.

    The problem is that the time stats returned varies each time I execute the query. For instance:

    CPU time = 0 ms, elapsed time = 21 ms.

    then

    CPU time = 0 ms, elapsed time = 115 ms.

    then

    CPU time = 16 ms, elapsed time = 18 ms.

    etc.

    So how do I compare the performance (CPU and elapsed time) of the same query if I make an index modification?

    Thanks.

    __________________________
    Allzu viel ist ungesund...

  • You'll never see exactly the same behavior. So, you tune in combination with looking at the scans and reads. Looking at both you'll start to see whether or not the changes you're making are having an impact.

    Also, it's not a way of measuring performance at all, but you should look at the execution plan to see how it's changing as you modify the code or indexes. That will also give you an indication if you're on the right track.

    Finally, I usually run a query three or four times and record the lowest measure and the average. That gives you something to compare.
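    For reference, a minimal way to capture both the timing and the read counts being discussed is SSMS's statistics output (the table and filter values below are illustrative, not from a real schema):

    ```sql
    -- Print per-statement timing and I/O to the Messages tab
    SET STATISTICS TIME ON;
    SET STATISTICS IO ON;

    -- Run the query several times: elapsed time will fluctuate,
    -- but logical reads should stay stable for the same plan
    SELECT *
    FROM dbo.MyTable
    WHERE CompanyId = 1 AND ProductId = 2 AND UserId = 'Tom';

    SET STATISTICS TIME OFF;
    SET STATISTICS IO OFF;
    ```

    The "logical reads" figure from STATISTICS IO is usually the more repeatable number to compare before and after an index change, since it doesn't depend on what else the server is doing at the time.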

    ----------------------------------------------------
    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood..." Theodore Roosevelt
    The Scary DBA
    Author of: SQL Server 2017 Query Performance Tuning, 5th Edition and SQL Server Execution Plans, 3rd Edition
    Product Evangelist for Red Gate Software

  • Thanks.

    So when do you usually look at the time stats, if they are "not a way of measuring performance at all"? Thank you.

    __________________________
    Allzu viel ist ungesund...

    They are an *indicator*, as are Reads, Writes, and CPU usage in a trace session.

    But those durations are quite short, so:

    A) Is the quantity of data the expected quantity of a live system?

    B) How do the plan(s) look? Reasonable?

    C) Are estimated row counts in line with actual row counts?

    D) Are you using covering indexes?
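    On point D, a covering index is one that carries every column the query touches, so the engine never has to do a key lookup back to the clustered index. A sketch, with hypothetical table and column names:

    ```sql
    -- Keys support the WHERE clause; INCLUDE carries the remaining
    -- columns the SELECT list needs, so no key lookup is required
    CREATE NONCLUSTERED INDEX IX_MyTable_Covering
    ON dbo.MyTable (CompanyId, ProductId, UserId)
    INCLUDE (OrderDate, Amount);
    ```

    Note that a query doing SELECT * can only be covered if the index includes every column in the table, which is rarely worth it.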



    Clear Sky SQL
    My Blog

  • Mr. Holio (10/3/2011)


    Thanks.

    So when do you usually look at the time stats, if they are "not a way of measuring performance at all"? Thank you.

    I probably wasn't sufficiently clear. I meant that an execution plan is not a way of "measuring" performance, not that the time it takes to run a query isn't. Run time is an indication of performance, absolutely.

    ----------------------------------------------------
    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood..." Theodore Roosevelt
    The Scary DBA
    Author of: SQL Server 2017 Query Performance Tuning, 5th Edition and SQL Server Execution Plans, 3rd Edition
    Product Evangelist for Red Gate Software

  • Dave Ballantyne (10/3/2011)


    They are an *indicator*, as are Reads, Writes, and CPU usage in a trace session.

    But those durations are quite short, so:

    A) Is the quantity of data the expected quantity of a live system?

    B) How do the plan(s) look? Reasonable?

    C) Are estimated row counts in line with actual row counts?

    D) Are you using covering indexes?

    I can't remember exactly what I did, but it was something very simple, like:

    SELECT * FROM MyTable

    WHERE CompanyId = 1 and ProductId = 2 and UserId = 'Tom'

    There is only a clustered index on one column (the PK), so I was thinking that adding a non-clustered index (on CompanyId, ProductId and UserId) would reduce the reads and the time. However, I see no improvement in time, perhaps because the table is very small (20K records)?

    Instead of doing a clustered index scan...
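    For clarity, the index being described would be something along these lines (table and column names taken from the query above, otherwise illustrative):

    ```sql
    -- Non-clustered index matching the three equality predicates
    CREATE NONCLUSTERED INDEX IX_MyTable_Company_Product_User
    ON dbo.MyTable (CompanyId, ProductId, UserId);
    ```

    Because the query is SELECT *, every row found through this index still needs a key lookup into the clustered index to fetch the remaining columns, which is one reason the optimizer may keep scanning instead.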

    __________________________
    Allzu viel ist ungesund...

    First off, look at the execution plan to see what has happened, rather than guessing.

    If you are not up to speed, then I would recommend Grant's free (yes, amazingly, free) ebook at http://qa.sqlservercentral.com/articles/books/65831/?utm_content=Grant080623 ; it really is *the* reference book for plans.

    What you are seeing could be for any number of different reasons. My guess here is that the nonclustered index is going to return a large percentage of the data, which means that the cost of doing the clustered index scan is cheaper than navigating the NC index and doing a key lookup for each row that is found.

    {edit}

    Yes, 20K really is too small a number of rows; bulk it up to at least 1 million to get a half-decent idea of performance costs and comparisons.
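    One quick way to bulk a test table up to a million rows is to cross join a system catalog against itself as a row source. A sketch, assuming the same illustrative table and that its PK is an IDENTITY column that fills itself in:

    ```sql
    -- Insert ~1,000,000 synthetic rows with randomized key values
    INSERT INTO dbo.MyTable (CompanyId, ProductId, UserId)
    SELECT TOP (1000000)
           ABS(CHECKSUM(NEWID())) % 100 + 1,    -- CompanyId in 1-100
           ABS(CHECKSUM(NEWID())) % 1000 + 1,   -- ProductId in 1-1000
           'User' + CAST(ABS(CHECKSUM(NEWID())) % 5000 AS varchar(10))
    FROM sys.all_columns AS a
    CROSS JOIN sys.all_columns AS b;             -- cheap large row source
    ```

    Remember to rebuild or update statistics afterwards so the optimizer's estimates reflect the new data volume.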



    Clear Sky SQL
    My Blog

  • Dave Ballantyne (10/3/2011)


    First off, look at the execution plan to see what has happened, rather than guessing.

    If you are not up to speed, then I would recommend Grant's free (yes, amazingly, free) ebook at http://qa.sqlservercentral.com/articles/books/65831/?utm_content=Grant080623 ; it really is *the* reference book for plans.

    What you are seeing could be for any number of different reasons. My guess here is that the nonclustered index is going to return a large percentage of the data, which means that the cost of doing the clustered index scan is cheaper than navigating the NC index and doing a key lookup for each row that is found.

    {edit}

    Yes, 20K really is too small a number of rows; bulk it up to at least 1 million to get a half-decent idea of performance costs and comparisons.

    The operation it was doing was a clustered index scan, as I mentioned earlier; that was not a guess. Adding a non-clustered index seemed to help a little, as the query returns only one row.

    Thanks for the valuable advice; I will check out that book too.

    __________________________
    Allzu viel ist ungesund...

  • Sounds like it *could* have been using an old cached plan.

    After you created the NC index, did you flush the cache (DBCC FREEPROCCACHE)?
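    For anyone following along, flushing the cache looks like the first statement below, but note it clears *every* cached plan on the instance, so it belongs on a dev box only. A narrower alternative is to force a fresh plan for just the query under test (table and filter values are illustrative):

    ```sql
    -- Dev/test only: discards the entire plan cache for the instance
    DBCC FREEPROCCACHE;

    -- Narrower option: compile a fresh plan for this one statement
    SELECT *
    FROM dbo.MyTable
    WHERE CompanyId = 1 AND ProductId = 2 AND UserId = 'Tom'
    OPTION (RECOMPILE);
    ```

    In practice, creating or dropping an index on a table already invalidates cached plans that reference it, so a stale plan is less likely than a plan that simply doesn't benefit from the new index.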



    Clear Sky SQL
    My Blog

    No, I did not... :unsure:

    __________________________
    Allzu viel ist ungesund...

  • Ok ......

    Sounds like a good enough reason then.



    Clear Sky SQL
    My Blog

  • Mr. Holio (9/30/2011)


    Hi all,

    I'd like to add new / tune existing indexes on a specific table in order to achieve some performance improvements.

    The problem is that the time stats returned varies each time I execute the query. For instance:

    CPU time = 0 ms, elapsed time = 21 ms.

    then

    CPU time = 0 ms, elapsed time = 115 ms.

    then

    CPU time = 16 ms, elapsed time = 18 ms.

    etc.

    So how do I compare the performance (CPU and elapsed time) of the same query if I make an index modification?

    Thanks.

    Are you testing this query in an isolated development or testing environment where you can ensure that there are no queries running on other connections, SQL Agent jobs running, etc.? If you haven't controlled the activity on SQL Server from test run to test run, you can't be sure that differences in elapsed time aren't caused by something external to the query being tested (e.g., I/O operations or waits necessitated by the processing of another query).

    Jason Wolfkill
