SQL Profiler kills performance

  • Hi,

    I've inherited a number of SQL Servers that all have some inherent performance problems. I've started looking into these by running a trace from one of the standard trace templates.

    When I save the trace results to a file, the trace runs quite happily and saves the data with no issues.

    When I save the data to a trace table on the SQL Server instead, the whole server immediately slows down to the point where the mouse moves sluggishly, and any command typed in SSMS appears letter by letter on the screen.

    We're running SQL 2005 Standard (x64) with 16 GB RAM, and the databases are in SQL 2000 compatibility mode. The drives are in a RAID 10 configuration.

    I originally thought that it could be a disk bottleneck, but it's a bit of a guess.

    Any suggestions?

  • Profiler always creates a performance hit on the server, and saving the data directly to tables is the worst option in that regard.

    The best bet for this kind of thing is to run server-side traces instead of running the trace through Profiler.

    You can get Profiler to generate a trace script for you, and then use that to create the trace without the GUI overhead, or you can look into sp_trace_create and the related system procs that create and modify traces. The procs give you better flexibility and control over the trace, but having Profiler do it for you is easier while you learn the ropes on the subject. (The usual difference between using a "wizard" and writing your own scripts/code.)
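
    To give you a rough idea, a bare-bones scripted trace looks something like the following. (This is only a sketch: the file path is just an example, and the events and columns are a minimal set. Profiler's File > Export > Script Trace Definition menu will generate the full version for you.)

        -- Minimal server-side trace (SQL 2005). Requires ALTER TRACE permission.
        -- E:\Traces is an example path; point it at a fast drive that SQL Server
        -- isn't already using for data or log files.
        DECLARE @TraceID int, @MaxFileSize bigint, @On bit;
        SET @MaxFileSize = 100;  -- MB per rollover file
        SET @On = 1;             -- sp_trace_setevent insists on a genuine bit parameter

        -- Option 2 = TRACE_FILE_ROLLOVER. The .trc extension is added automatically.
        EXEC sp_trace_create @TraceID OUTPUT, 2, N'E:\Traces\MyTrace', @MaxFileSize, NULL;

        -- Event 10 = RPC:Completed, 12 = SQL:BatchCompleted
        -- Column 1 = TextData, 12 = SPID, 13 = Duration, 14 = StartTime
        EXEC sp_trace_setevent @TraceID, 10, 1,  @On;
        EXEC sp_trace_setevent @TraceID, 10, 12, @On;
        EXEC sp_trace_setevent @TraceID, 10, 13, @On;
        EXEC sp_trace_setevent @TraceID, 10, 14, @On;
        EXEC sp_trace_setevent @TraceID, 12, 1,  @On;
        EXEC sp_trace_setevent @TraceID, 12, 12, @On;
        EXEC sp_trace_setevent @TraceID, 12, 13, @On;
        EXEC sp_trace_setevent @TraceID, 12, 14, @On;

        -- Status 1 = start (0 = stop, 2 = close and delete the definition)
        EXEC sp_trace_setstatus @TraceID, 1;
        SELECT @TraceID AS TraceID;  -- keep this so you can stop the trace later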

    - Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
    Property of The Thread

    "Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon

  • Thanks GSquared

    I will try scripting the trace out and running it that way.

    Any idea why it doesn't affect performance when saving to flat file, but does when going to the SQL DB?

  • Because it's simply faster to save unstructured data in a straight write to disk than it is to write it to a database. You're dealing with all the overhead of SQL Server when you try to capture very large amounts of information (depending on the columns and events you have defined) to a table. And, I'm assuming, that's on the same system that is being monitored, so resource contention is higher because of the other connections to that server.

    ----------------------------------------------------
    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood..." - Theodore Roosevelt
    The Scary DBA
    Author of: SQL Server 2017 Query Performance Tuning, 5th Edition and SQL Server Execution Plans, 3rd Edition
    Product Evangelist for Red Gate Software

  • Just to expand on what Grant wrote:

    SQL table storage requires writing to the transaction log, checking locks, resolving locks, writing to the table, marking the transaction as committed in the log, and so on.

    Writing to a flat file requires writing some binary data to an allocated disk space, and nothing else.
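
    That's also why the usual pattern is to trace to a file and only load it into a table afterwards, when you actually want to query it. A sketch, assuming the trace file from the earlier example (the path and table name are just examples, and ideally you'd run the import on a non-production server):

        -- Import a trace file (plus any rollover files, via DEFAULT) into a table.
        SELECT *
        INTO dbo.TraceResults
        FROM sys.fn_trace_gettable(N'E:\Traces\MyTrace.trc', DEFAULT);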

    - Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
    Property of The Thread

    "Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon

  • Ideally, you'd save the trace data to a flat file on another drive that SQL Server isn't using.

  • Are you perchance doing statement-level completed tracing (RPC or TSQL)? Those events can be DEVASTATINGLY HARD on the server if you have UDFs in the plan, or cursors, triggers, etc.
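
    If you're not sure what your existing traces are collecting, something like this should list the events per trace (per Books Online, the statement-level completed events are 41 = SQL:StmtCompleted and 45 = SP:StmtCompleted):

        -- List the events each running trace collects. Run this from a database
        -- at compatibility level 90 (e.g. master), since CROSS APPLY isn't
        -- available under the SQL 2000 (80) compatibility mode your DBs use.
        SELECT t.id AS trace_id, e.eventid, te.name
        FROM sys.traces AS t
        CROSS APPLY sys.fn_trace_geteventinfo(t.id) AS e
        JOIN sys.trace_events AS te ON te.trace_event_id = e.eventid
        GROUP BY t.id, e.eventid, te.name
        ORDER BY t.id, e.eventid;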

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service
