Synthetic Transactions - Input please

  • Guys, what is a good query to run on a database that qualifies as a synthetic transaction? We simply want the elapsed time measured from start to end of execution of the query. One huge consideration is that we do not control the database, so the query should be generic and still measure db performance.


    "The way to get things done is not to mind who gets the credit for doing them." - Benjamin Howett

  • To achieve exactly what? Anything which uses a client not on the server is subject to fluctuations in connection speed/bandwidth. For example, in an RDP session on the server I can run a query in, say, 3 seconds, whereas a client in China finds the same query (which returns the result set in a thick client) takes nearly 30 seconds. Which one is the performance?

    For benchmarking we took a given database and a set of operations and timed a series of runs. To compare performance 18 months down the line we'll restore the database and run the same tests.

    We can also run some tests against the current database, but if the database is 4 times bigger and a backup takes 3 times longer, can we actually come to any conclusion?

    Think you need to be a bit clearer on what you want to achieve for people to be able to help you.

    [font="Comic Sans MS"]The GrumpyOldDBA[/font]
    www.grumpyolddba.co.uk
    http://sqlblogcasts.com/blogs/grumpyolddba/

  • :Whistling:


    [font="Times New Roman"]rfr.ferrari[/font]
    DBA - SQL Server 2008
    MCITP | MCTS

    remember: live it or suffer twice!
    the period in which you are growing fastest is the most difficult period of your life!
  • Sorry for not being clearer about the objective. We just want to measure disk performance. I know this is a very crude and inaccurate approach, but right now we couldn't think of anything that does not involve Windows Performance Counters. The server folks told us not to use them because, given the hardware architecture, the results would be very misleading. We have to dig down to the instance level. What we have in mind is simply to execute a query that looks like the one below. To minimize trips to the server, the query should also measure CPU utilization. Again, this might not be the best query to execute, but the fact that we don't control the client DBs precludes us from writing the best synthetic transaction.

    -- Capture wall-clock time either side of the probe query
    DECLARE @start DATETIME, @end DATETIME;

    SET @start = GETDATE();

    <query here>

    SET @end = GETDATE();

    -- Return the timings plus the elapsed milliseconds
    SELECT @start AS StartDT,
           @end AS EndDT,
           DATEDIFF(ms, @start, @end) AS ElapsedMs;
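
    For the CPU side, something along these lines is what we had in mind - just a sketch, using the cpu_time column of sys.dm_exec_sessions for the current session, with <query here> still standing in for whatever probe query we settle on:

    DECLARE @start DATETIME, @end DATETIME;
    DECLARE @cpu_start INT, @cpu_end INT;

    -- cpu_time is the cumulative CPU time (ms) used by this session so far
    SELECT @cpu_start = cpu_time FROM sys.dm_exec_sessions WHERE session_id = @@SPID;
    SET @start = GETDATE();

    <query here>

    SET @end = GETDATE();
    SELECT @cpu_end = cpu_time FROM sys.dm_exec_sessions WHERE session_id = @@SPID;

    -- Wall-clock time vs. CPU time consumed by the session during the run
    SELECT DATEDIFF(ms, @start, @end) AS ElapsedMs,
           @cpu_end - @cpu_start      AS CpuMs;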

    Thanks again guys.


    "The way to get things done is not to mind who gets the credit for doing them." - Benjamin Howett

  • OK - tricky. There are a number of posts on my blog about benchmarking and testing storage, and the basic code/tests I use are here: http://www.grumpyolddba.co.uk/infrastructure/sqlbenchmark.htm

    Essentially we did the sequential load and random IO tests. We timed backups, restores, full index rebuilds and some specific processes from our application. These were all run against a fixed database, many hundreds of times, 24 hours a day, with various configurations - too complicated to explain in a forum post exactly why we did what we did, although you might get an idea from my blog posts!
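
    To give a flavour, one of the simpler timed operations was a backup. A rough sketch of the sort of thing (a database called TestDB is used purely for illustration, and backing up to the NUL device measures read throughput without writing a real backup file):

    DECLARE @start DATETIME, @end DATETIME;
    SET @start = GETDATE();

    -- COPY_ONLY so the test doesn't disturb the normal backup chain
    BACKUP DATABASE TestDB TO DISK = 'NUL' WITH COPY_ONLY;

    SET @end = GETDATE();
    SELECT DATEDIFF(ss, @start, @end) AS BackupSeconds;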

    As well as timing the tests, we also collected various performance counters and produced real-time graphs showing latency (now called service time) and IOPS at 5-second granularity. We also measured HBA throughput.
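
    If you want something similar from inside the instance rather than from the OS counters, the virtual file stats DMV gives cumulative I/O counts and stalls per database file - a sketch (the numbers are cumulative since the instance last started, so you would sample twice and take the difference over your test window):

    -- Average read/write latency and I/O counts per database file
    SELECT DB_NAME(vfs.database_id) AS database_name,
           vfs.file_id,
           vfs.num_of_reads,
           vfs.num_of_writes,
           vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_ms,
           vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
    FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
    ORDER BY database_name, vfs.file_id;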

    Essentially, though, you will find all sorts of reasons why what you actually accept might not be ideal; try explaining latency, IOPS and bandwidth to non-technical people.

    A slightly different approach is to have an automated client task which you monitor; this is meant to show application response.
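
    As a sketch of what such a task might execute (run from whatever client box you schedule it on, via sqlcmd or similar), logging the elapsed time to a hypothetical table so you can trend response over time - <query here> again stands in for the probe query:

    -- One-time setup: hypothetical log table for trending response times
    CREATE TABLE dbo.SyntheticTransactionLog (
        run_time   DATETIME NOT NULL DEFAULT GETDATE(),
        elapsed_ms INT      NOT NULL
    );

    -- The scheduled task then runs the timed portion on each execution
    DECLARE @start DATETIME;
    SET @start = GETDATE();

    <query here>

    INSERT INTO dbo.SyntheticTransactionLog (elapsed_ms)
    VALUES (DATEDIFF(ms, @start, GETDATE()));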

    [font="Comic Sans MS"]The GrumpyOldDBA[/font]
    www.grumpyolddba.co.uk
    http://sqlblogcasts.com/blogs/grumpyolddba/
