Future Forecast for performance

  • Hello friends, how can we estimate how much time a particular sproc will take if the tables involved in that sproc get 10 or 100 times bigger?

  • The only way to understand what could happen is to test it. I've set up tests where I ran a set of procs against a database at 1x, 10x, and 100x data volume, and for each set ran the tests with 1, 2, 5, 10, and 20 simultaneous connections. With that kind of testing, it was possible to know exactly when procedures started to bind and why.
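
    For example, a rough way to build the 10x copy for that kind of test is to cross join the existing rows against a small multiplier set. This is only a sketch; dbo.Orders and its columns are placeholder names, and you'd need to handle keys and identities for your own schema:

        -- Illustrative sketch: build a 10x copy of an existing table for volume testing.
        -- dbo.Orders and its columns are placeholders; adjust key/identity handling as needed.
        SELECT o.CustomerID,
               o.OrderDate,
               o.Amount
        INTO   dbo.Orders_10x
        FROM   dbo.Orders AS o
        CROSS JOIN (VALUES (1),(2),(3),(4),(5),(6),(7),(8),(9),(10)) AS mult(n);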

    Second best choice is projection based on baselines. If you have a baseline from six months ago and one from today, you can graph the change in behavior over that time and then project it into the future. This is somewhat inexact because data and data distribution affect the statistics, which affect the execution plan, which affects performance. You can't count on straight-line performance projections.
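
    If you want a simple number to baseline and graph, one option (a sketch, assuming SQL Server 2008 or later; usp_MyProc is a placeholder name) is to snapshot the procedure-stats DMV at regular intervals:

        -- Sketch: capture average elapsed time per execution for the proc being baselined.
        -- Persist these rows over time so you can graph the trend as data volumes grow.
        SELECT OBJECT_NAME(ps.object_id, ps.database_id) AS proc_name,
               ps.execution_count,
               ps.total_elapsed_time / ps.execution_count AS avg_elapsed_microseconds,
               GETDATE() AS captured_at
        FROM   sys.dm_exec_procedure_stats AS ps
        WHERE  OBJECT_NAME(ps.object_id, ps.database_id) = 'usp_MyProc';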

    ----------------------------------------------------
    The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood... Theodore Roosevelt
    The Scary DBA
    Author of: SQL Server 2017 Query Performance Tuning, 5th Edition and SQL Server Execution Plans, 3rd Edition
    Product Evangelist for Red Gate Software

  • To support @Grant's post, I would do some benchmarking to know exactly what future performance would be.

    I use an excellent tool that will allow you to exec the proc multiple times on multiple threads.

    http://www.datamanipulation.net/SQLQueryStress/
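
    For a quick single-connection sanity run before putting it under SQLQueryStress, something as simple as the following works (just a sketch; dbo.usp_MyProc and its parameter are placeholders, and the tool above is what adds the multi-threaded part):

        -- Sketch: a plain single-connection loop for a quick timing run.
        -- SQLQueryStress (linked above) adds the multi-threaded execution.
        DECLARE @i int = 1;
        WHILE @i <= 100
        BEGIN
            EXEC dbo.usp_MyProc @SomeParam = @i;  -- placeholder proc and parameter
            SET @i += 1;
        END;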

  • I read somewhere that we can perform stress testing by playing with a table's statistics. Does anyone know about that?

  • Not exactly stress testing. Fiddling with the statistics will make the optimiser produce execution plans as though the row count were different; it won't make the queries run as though there were more rows.

    Where that's useful is for analysing execution plans on a machine that has no data while having the optimiser think there are production volumes. It's not useful for actual execution-time testing.
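
    For anyone who wants to try that plan-only approach, the usual trick is to fake the row and page counts on a table's statistics. A sketch follows; ROWCOUNT and PAGECOUNT are undocumented options, dbo.Orders is a placeholder, and this only changes plan choice, not actual execution time, exactly as described above:

        -- Sketch: make the optimiser believe the table holds production-size volumes.
        UPDATE STATISTICS dbo.Orders WITH ROWCOUNT = 10000000, PAGECOUNT = 200000;

        -- Put the real counts back afterwards (or rebuild the indexes).
        DBCC UPDATEUSAGE (0, 'dbo.Orders') WITH COUNT_ROWS;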

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • sql_lock (1/12/2011)


    To support @Grant's post, I would do some benchmarking to know exactly what future performance would be.

    I use an excellent tool that will allow you to exec the proc multiple times on multiple threads.

    http://www.datamanipulation.net/SQLQueryStress/

    Just what I was looking for!

  • Gila, but I think it would be a pretty good idea to do future forecasting for performance. Analyzing those execution plans can tell a lot of the story.

    Any alternative ways to do that ?

  • sql_butterfly (1/13/2011)


    Gila, but I think it would be a pretty good idea to do future forecasting for performance. Analyzing those execution plans can tell a lot of the story.

    Any alternative ways to do that ?

    The problem is, there is not a 100% direct correlation between an execution plan and a precise execution time. You can say that certain operations within a plan can cause queries to run longer, but how much longer? So, while it's useful for certain types of troubleshooting, it's not that useful for testing performance.

    ----------------------------------------------------
    The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood... Theodore Roosevelt
    The Scary DBA
    Author of: SQL Server 2017 Query Performance Tuning, 5th Edition and SQL Server Execution Plans, 3rd Edition
    Product Evangelist for Red Gate Software

  • To amplify a bit on what Grant wrote: it's very difficult to estimate the future performance of a query based on its execution plan. A query may perform quite well when only 50 to 100 rows are returned and turn into a real dog on anything over 100 rows. You see this quite often in queries that use correlated sub-queries, especially if the sub-query has a triangular join in it (Jeff Moden has a great article here on triangular joins).
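
    To make the triangular-join point concrete, here's a minimal sketch (table and column names are made up) of the correlated sub-query pattern whose work grows roughly with the square of the row count:

        -- Sketch: a running total via a correlated sub-query, i.e. a triangular join.
        -- For each row, the inner query re-reads every earlier row, so the work grows
        -- roughly as N*N/2 - fine at a few hundred rows, painful at a few hundred thousand.
        SELECT o.OrderID,
               o.Amount,
               (SELECT SUM(o2.Amount)
                FROM   dbo.Orders AS o2
                WHERE  o2.OrderID <= o.OrderID) AS RunningTotal
        FROM   dbo.Orders AS o;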

    Many queries have a 'tipping point'. Everything works just fine until you hit it and then performance goes into the toilet very quickly. I've seen the correlated sub-query/triangular join tipping point hit several systems I've worked on.

    A big tipping point for any query is when the recordset that satisfies the outer part of a join (the left side, if you will) will no longer fit into memory and has to be paged out. This also can happen quite suddenly.

    The best thing to do is to make sure your queries are tight and don't have non-optimal code in them. Then use DMVs to monitor wait states and such. Monitor memory usage, etc.
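
    As a starting point for that DMV monitoring, a sketch like the following lists the top waits on the instance (filtering out the benign wait types is left out for brevity):

        -- Sketch: top wait types by total wait time since the last service restart.
        SELECT TOP (10)
               ws.wait_type,
               ws.wait_time_ms,
               ws.waiting_tasks_count,
               ws.signal_wait_time_ms
        FROM   sys.dm_os_wait_stats AS ws
        ORDER BY ws.wait_time_ms DESC;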

    Todd Fifield

  • I would like to add that additional tuning is often required as data volumes grow, and you can't really know that without actually testing with the larger volumes. Additional indexes might be helpful, existing indexes may become unused, clustered key changes might help or hurt, the query may need refactoring, etc.
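
    As one concrete check when volumes grow, a sketch along these lines (current database only; usage counters reset on a service restart) flags indexes that are being written to but rarely read:

        -- Sketch: indexes with writes but little or no read activity in the current database.
        SELECT OBJECT_NAME(ius.object_id) AS table_name,
               i.name                     AS index_name,
               ius.user_seeks + ius.user_scans + ius.user_lookups AS reads,
               ius.user_updates           AS writes
        FROM   sys.dm_db_index_usage_stats AS ius
        JOIN   sys.indexes AS i
               ON i.object_id = ius.object_id
              AND i.index_id  = ius.index_id
        WHERE  ius.database_id = DB_ID()
        ORDER BY reads ASC, writes DESC;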

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service
