impact of "permanent" sql trace on production server?

  • Dear all,

    I would like to set up a trace on my production server to track long-running queries.

    Any idea what the impact would be of running a trace that looks for any query lasting more than 15 seconds (for instance)?

    I would expect very few queries to match this criterion, and I would set up the trace through the sp_trace_... stored procedures (no Profiler).

    If that works, I would even be tempted to set up a stored proc with the "Startup" option to do this every time we restart our server.
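
    Something along these lines, I suppose (sketch only: usp_StartLongQueryTrace is a made-up name, and the proc would have to live in master and be owned by dbo):

    -- Flag a (hypothetical) trace-creating procedure to run at every SQL Server startup
    USE master;
    GO
    EXEC sp_procoption
        @ProcName = N'dbo.usp_StartLongQueryTrace',  -- proc that would call sp_trace_create etc.
        @OptionName = 'startup',
        @OptionValue = 'on';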

    Does that sound like a stupid idea? :laugh:

    Eric

  • I guess I can't say for sure what impact that will have on the server, but I do know that when I tried to run a trace to look for locking issues, I brought the server to its knees.

    What I do now is run Profiler on a separate box and log the long-running queries to a table.

    If you do decide to try the trace, I would try it in a dev or test environment first before putting it in production.

  • Depends on your server and the activity. If you do set up a permanent trace there are a few things to be careful of.

    1 - Don't trace the very high-volume events (lock acquired, lock released, statement starting, ...) as you will easily overwhelm your server.

    2 - Make sure the trace is writing to a high-speed disk that does NOT also hold data or log files for your DBs.

    3 - Make sure the drive has sufficient space, and that you have an automated plan in place for archiving/deleting older files (see the query sketched after this list for a quick way to check on your traces).

    4 - Trace the minimum number of columns possible.
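
    For points 2 and 3, a quick way to keep an eye on what your traces are doing (assuming SQL 2005, where the sys.traces catalog view is available) is something like:

    -- One row per trace: where it writes, how big it may grow, whether it rolls over
    SELECT id, status, path, max_size, max_files, is_rollover,
        start_time, last_event_time, event_count
    FROM sys.traces;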

    As an alternative, perhaps you could poll the plan cache once or twice a day, looking for queries that have a max execution time of more than 15 seconds.

    SELECT execution_count, total_worker_time, total_physical_reads,
        total_logical_reads, total_elapsed_time, max_elapsed_time,
        creation_time, st.text
    FROM sys.dm_exec_query_stats qs
    INNER JOIN sys.dm_exec_cached_plans cp ON qs.plan_handle = cp.plan_handle
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
    WHERE max_elapsed_time > 15000000 -- 15 seconds. Elapsed time is in microseconds

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Yes, I was planning to select very few events and columns as well as putting a filter on duration.

    I also like the idea of looking for long-running queries in the cache from time to time.

    Thanks 🙂

  • Mark,

    I read that running Profiler was not a good idea, even on a separate box.

    I understand you would have been better off "doing it yourself" through the trace stored procedures (sp_trace_create, etc.).

    I could supply an example if you are interested.
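
    Something along these lines, for instance (sketch only: the file path, events and columns below are just placeholders to show the idea):

    DECLARE @TraceID int, @maxsize bigint, @dur bigint, @on bit;
    SET @maxsize = 50;       -- MB per trace file before rolling over
    SET @dur = 15000000;     -- 15 seconds, in microseconds (SQL 2005)
    SET @on = 1;

    -- Option 2 = TRACE_FILE_ROLLOVER; the .trc extension is added automatically
    EXEC sp_trace_create @TraceID OUTPUT, 2, N'X:\Traces\LongQueries', @maxsize;

    -- A handful of events/columns only: completed batches and RPCs, with text, duration and start time
    EXEC sp_trace_setevent @TraceID, 10, 1,  @on;  -- RPC:Completed, TextData
    EXEC sp_trace_setevent @TraceID, 10, 13, @on;  -- RPC:Completed, Duration
    EXEC sp_trace_setevent @TraceID, 10, 14, @on;  -- RPC:Completed, StartTime
    EXEC sp_trace_setevent @TraceID, 12, 1,  @on;  -- SQL:BatchCompleted, TextData
    EXEC sp_trace_setevent @TraceID, 12, 13, @on;  -- SQL:BatchCompleted, Duration
    EXEC sp_trace_setevent @TraceID, 12, 14, @on;  -- SQL:BatchCompleted, StartTime

    -- Keep only events that ran 15 seconds or longer (column 13 = Duration, operator 4 = >=)
    EXEC sp_trace_setfilter @TraceID, 13, 0, 4, @dur;

    -- Start the trace
    EXEC sp_trace_setstatus @TraceID, 1;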

    Eric

  • Agreed. Don't use the profiler GUI when tracing against production servers. The server-side trace procs are a much better idea.

    Basically, you don't want to be sending the trace info across the network if you can help it.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • emamet (11/9/2007)


    Yes, I was planning to select very few events and columns as well as putting a filter on duration.

    When you set up your filter, remember that durations are measured in microseconds in SQL 2005.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Eric

    I guess there are mixed opinions on this out there, because another DBA told me this was the preferred method and it seems to work well for me.

    That said, there's always something new to learn, and I would certainly be interested in seeing some examples if there is a better way.

    Mark

  • Mark,

    My "source" is Itzik Ben-Gan's (highly recommended) "Inside Sql Server 2005: T-Sql Querying" book and he seemed pretty convinced (and convincing).

    I'm only repeating... 🙂

  • I haven't gotten through that book yet (been writing too many essays for school) 🙂

    I'll have to get back into it and see what it says.

    Thanks for the info.

    Mark

    I set up a nice little system of traces on my servers and have noticed, at best, a negligible performance hit.

    As an example, I have one trace that logs logins, logouts and failures.

    I have an SP that starts a one-hour trace to a file; once the new trace is created, it reads in the trace file from the previous hour, summarizes it and stores it in a table, and then deletes the old file.

    Then every couple of months I drop off some of the older data.
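
    The read-and-summarize step is roughly this (the file path and the summary table are made up for the sake of the example, assuming SQL 2005's sys.fn_trace_gettable):

    -- Load the previous hour's (now closed) trace file and summarize it into a table
    INSERT INTO dbo.LoginAuditSummary (LoginName, EventClass, EventCount, FirstEvent, LastEvent)
    SELECT LoginName, EventClass, COUNT(*), MIN(StartTime), MAX(StartTime)
    FROM sys.fn_trace_gettable(N'X:\Traces\LoginAudit_previous_hour.trc', DEFAULT)
    GROUP BY LoginName, EventClass;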

    I definitely agree, though, that it's better to run the trace from an SP rather than the GUI, if for no other reason than that the GUI needs a connection kept open from your workstation (or whatever other box is running it) to the server.

    Kenneth Fisher
    I was once offered a wizard's hat but it got in the way of my dunce cap.
    For better, quicker answers on T-SQL questions, click on the following... http://www.sqlservercentral.com/articles/Best+Practices/61537/
    For better answers on performance questions, click on the following... http://www.sqlservercentral.com/articles/SQLServerCentral/66909/
    Link to my Blog Post --> www.SQLStudies.com

  • Mark Manis (11/9/2007)


    I guess there are mixed opinions on this out there, because another DBA told me this was the preferred method and it seems to work well for me.

    There are a number of reasons you don't want to use the Profiler front end on a busy prod server.

    Using the GUI requires sending all the trace events across the network. If your network is busy, this can slow the server down, as it has to queue up all the events along with all the normal network traffic.

    In addition, when using the GUI, the server may drop events if the queue of events to send gets too long. This can result in you missing important information.

    I've seen reports on the impact of traces on a server, and the lowest impact comes from a server-side trace (using the sp_trace procs) writing to a fast drive on the server. Next is the Profiler GUI on a remote machine writing to a file on a drive. Finally, with the highest impact, is the Profiler GUI writing into a table, especially if the table is on the server that you're profiling.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • We run server-side traces on all of our SQL boxes for auditing purposes, and I am amazed at how little the systems are impacted. They run 24x7 on SQL 2000 & 2005 and some are very busy servers. I was very hesitant to do this, but it was not my decision to make. I would recommend lots of testing first before doing the same though.

    As long as your filters are "narrow" enough and you are writing to an independent drive, you should be fine!


    * Noel

