Instrumenting Code

  • Comments posted to this topic are about the item Instrumenting Code

  • It's a "Catch 22". Only Developers that actually concern themselves with things like performance, scalability, and documented code would ever take the time to add such instrumentation to the code. BUT, their code is the least likely to ever cause a problem that would require such instrumentation.

    It's for that reason that such things as Profiler and other tools have been made... for the Developers who are likely to write performance/resource-challenged code and who would also never take the time to deploy instrumentation-laden code.

    Me? I write non-switchable instrumentation into my code to write to "performance log/status tables" all the time. It allows me to see even "gentle" degradation should it ever happen. It also gives me a method of tracking when and what a proc did so I can answer questions from management should a problem ever occur. It also works great at pay review time... "Look! This proc used to take 24 hours to sometimes fail. It now runs in 11 minutes and hasn't failed yet." They love it when that happens. It's also saved my butt a couple of times. 😉 (A rough sketch of this kind of logging appears at the end of this post.)

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.
    "Change is inevitable... change for the better is not".

    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
    Intro to Tally Tables and Functions
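
    A minimal sketch of what that kind of always-on logging might look like (the table, column, and procedure names below are illustrative, not Jeff's actual schema):

    CREATE TABLE dbo.ProcRunLog
    (
        RunLogID  INT IDENTITY(1,1) PRIMARY KEY,
        ProcName  SYSNAME      NOT NULL,
        StepName  VARCHAR(100) NULL,
        StartTime DATETIME     NOT NULL,
        EndTime   DATETIME     NULL,
        RowCnt    INT          NULL
    );
    GO

    CREATE PROCEDURE dbo.usp_NightlyLoad
    AS
    BEGIN
        DECLARE @LogID INT, @RowCnt INT;

        -- Record the start of this step.
        INSERT dbo.ProcRunLog (ProcName, StepName, StartTime)
        VALUES (OBJECT_NAME(@@PROCID), 'Load staging table', GETDATE());
        SET @LogID = SCOPE_IDENTITY();

        -- ... the real work goes here; capture @@ROWCOUNT immediately after
        -- the statement whose row count matters ...
        SET @RowCnt = @@ROWCOUNT;

        -- Record completion and row count so even "gentle" degradation
        -- shows up when the log table is queried over time.
        UPDATE dbo.ProcRunLog
           SET EndTime = GETDATE(),
               RowCnt  = @RowCnt
         WHERE RunLogID = @LogID;
    END;
    GO

    Because the logging is not switchable, every production run leaves a row behind, which is what makes trend analysis (and the pay-review anecdotes) possible.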

  • Where I currently work, we have a lot of dynamic code generated and run from within stored procedures (for good reasons, but don't ask--you really don't want to feel my pain). For those procedures we've added an @Debug parameter that allows the caller to merely "print" out the dynamically generated script, run the code "properly", or both (run and print). Without that, debugging was a nightmare. (A bare-bones sketch of the pattern appears at the end of this post.)

    And, like Jeff, we log significant and time/performance critical actions for system reviews and CYA situations.

    Otherwise, what does DBMS_APPLICATION_INFO do that Profiler + System Monitor + dynamic management views + et al. do not? (That's a serious question, not sarcasm.) It's a lot of work to set up a Profiler/System Monitor trace to figure out what's going on, but you only do that when you need to--and over time you build up templates and libraries for recurring issues. Does it take as much time to prep DBMS_yadayada? And would you not have to do it for *every* piece of code? I'd guess you'd need to be very familiar with both systems to get past the fallback "it depends" answer.

    Philip
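
    A bare-bones sketch of that @Debug pattern (the procedure, table, and parameter names here are made up for illustration):

    CREATE PROCEDURE dbo.usp_BuildReport
        @CustomerID INT,
        @Debug      TINYINT = 0   -- 0 = run, 1 = print only, 2 = print and run
    AS
    BEGIN
        DECLARE @sql NVARCHAR(4000);

        SET @sql = N'SELECT * FROM dbo.Orders WHERE CustomerID = '
                 + CAST(@CustomerID AS NVARCHAR(10)) + N';';

        -- Print the generated statement so it can be inspected or run by hand.
        IF @Debug IN (1, 2)
            PRINT @sql;

        -- Execute it "properly" unless we are in print-only mode.
        IF @Debug IN (0, 2)
            EXEC sp_executesql @sql;
    END;
    GO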

  • Some of the stored procs that my company uses to detect problems with application performance are actually the main cause of the performance problems.

    The dynamic management views are quite good at pinning down performance problems in the DB layer, but to be honest I would sooner leave instrumentation to the layer that calls the database.

  • I do what Jeff does. I know of no better way to proactively monitor the actual performance of my code without some form of instrumentation.

    :{> Andy

    Andy Leonard, Chief Data Engineer, Enterprise Data & Analytics

  • For production stored procedures, I usually have a section of code that can be switched in or out very quickly that provides debug information. The routine information I need is timings for each section of code within the stored procedure, and I imagine these are very similar to the way Jeff does it. I also have code that records each time the procedure is called and the parameters passed. The beauty with this is that one can then replay the procedure with the same parameters to try to replicate a bug or performance problem. If you combine all the information, you can do load simulations on your test system to try to replicate really nasty bugs.

    How do I switch this sort of data gathering on or off? Via settings in a 'global' table (a rough sketch of the idea appears at the end of this post).

    Curiously, these two sources of information are generally enough to flush out most bugs without the need for any more sophisticated instrumentation.

    Best wishes,
    Phil Factor
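
    A rough illustration of the switch-via-'global'-table idea (all object names here are invented, not Phil's actual code):

    CREATE TABLE dbo.GlobalSettings
    (
        SettingName  VARCHAR(50) PRIMARY KEY,
        SettingValue VARCHAR(50) NOT NULL
    );
    INSERT dbo.GlobalSettings (SettingName, SettingValue) VALUES ('DebugTimings', '1');

    CREATE TABLE dbo.ProcTimings
    (
        ProcName   SYSNAME     NOT NULL,
        Section    VARCHAR(50) NOT NULL,
        DurationMs INT         NOT NULL,
        LoggedAt   DATETIME    NOT NULL DEFAULT GETDATE()
    );
    GO

    CREATE PROCEDURE dbo.usp_LongProcess
    AS
    BEGIN
        DECLARE @Debug BIT, @t0 DATETIME;

        -- Read the switch once at the top of the procedure.
        SELECT @Debug = CASE SettingValue WHEN '1' THEN 1 ELSE 0 END
          FROM dbo.GlobalSettings
         WHERE SettingName = 'DebugTimings';

        SET @t0 = GETDATE();
        -- ... section 1 of the procedure ...
        IF @Debug = 1
            INSERT dbo.ProcTimings (ProcName, Section, DurationMs)
            VALUES (OBJECT_NAME(@@PROCID), 'Section 1', DATEDIFF(ms, @t0, GETDATE()));

        SET @t0 = GETDATE();
        -- ... section 2 ...
        IF @Debug = 1
            INSERT dbo.ProcTimings (ProcName, Section, DurationMs)
            VALUES (OBJECT_NAME(@@PROCID), 'Section 2', DATEDIFF(ms, @t0, GETDATE()));
    END;
    GO

    Logging the call itself with its parameter values in the same way is what makes the replay trick possible: re-run the procedure with the captured parameters against a test system and compare the timings.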

  • I was tempted to reply "Yeah - What Phil said." But, in all honesty, I can't.

    Based on the client, instrumentation may or may not be worthwhile. I have had clients that didn't want instrumentation simply because they wouldn't know what to do with the information once I was finished and moved on to another project. I have also had clients with whom I've spent days explaining (in agonizing detail) not only how to interpret the logged information, but how to use tools such as SQL Server Profiler effectively.

    I recently did some analysis work on an "enterprise" application (a nationally used database that some of you may even be consuming today) for a large, Redmond, WA based firm through one of its subsidiaries. I identified several missing "critical" indices, some candidates for table partitioning and some service broker modifications - strictly through the use of DMVs, Profiler and some common sense. While instrumentation would have been a "nice to have" - it didn't fit the bill for their requirements.

    Bottom line - if you're forced into a situation of performing analysis on a poorly behaved stored procedure, UDF or (heavens forbid) poorly designed database, instrumentation is a nice to have, but not necessary. DMVs and Profiler can give you 90% of what you need to analyze a problem and take effective corrective action. You just need to have someone on staff that knows what to do!

    -----------------------

    milimol

    Football is a mistake. It combines the two worst elements of American life. Violence and committee meetings.

    - George F. Will

  • There are always user-configurable trace events - something akin to DebugOut in API calls... they sit there putting out info to anything that hooks in, e.g. Profiler / DebugView. (A minimal example appears at the end of this post.)

    Used it before in not-so-tight sprocs...didn't feel right 🙂

    Personally I stick a section of non-destructive test code at the bottom of every sproc & trigger and optionally use Phil's debug parameter idea - right-click in SSMS & choose modify & you're mostly there.

    And Profiler for identifying locks & blocking & performance hogs.

    /*** Test Code

    BEGIN TRAN

    DECLARE @rc INT

    EXEC @rc = dbo.uspDoSomething
        @Debug = 0

    SELECT @rc      AS [RETURN_CODE]
         , @@ERROR  AS [@@ERROR]

    SELECT *
    FROM dbo.MyTable

    IF (@@TRANCOUNT >= 1)
        ROLLBACK TRAN

    ***/


    Regards
    Andy Davies
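
    For what it's worth, the T-SQL side of raising a user-configurable event is just sp_trace_generateevent; a minimal example (event id 82 corresponds to UserConfigurable:0, and the message text is made up):

    -- Raise UserConfigurable:0 so any trace or Profiler session that
    -- includes that event sees the message.
    DECLARE @info NVARCHAR(128);
    SET @info = N'uspDoSomething: finished loading staging table';
    EXEC sp_trace_generateevent @eventid = 82, @userinfo = @info;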

  • We're moving away from stored procedures in part because there is no such feature. There's simply no way of identifying problems with the current system.

    For example, a critical customer complained that a report they ran a while ago was really slow. Just how do I track that down with the tools provided? If things could be reasonably logged (eg every stored procedure executed, elapsed time, and success or failure to the event log), it would be at least possible.

    This is a consistent problem with Microsoft software. They are good about identifying common problems and providing easy ways to solve them. If they haven't anticipated your problem, however, you're screwed. Even when there are industry standard solutions, they fiddle with things to make it difficult or impossible to do your job. (For example, in Windows, just try to replace the word "Bush" in every filename with "Obama".)

    This is one of the reasons I avoid recommending Microsoft solutions to clients. Even when they have compelling products--C# is really good--I know that there will be things I need to do that I won't be able to because that's Microsoft's way of doing business. Sure, it's easy to detach a database. But can anyone tell me why the error message complains about "IF UPDATE" missing in a CREATE DATABASE statement if you put ON PRIMARY in the wrong spot?

    Alas, I'm stuck with terabytes of SQL2000 databases loaded with stored procedures. Paying tens of thousands of dollars to move to SQL2008 isn't going to make anyone's life easier.

  • We have an application where all SQL access is via a standard data access layer. I'm trying to get them to build in a use of CONTEXT_INFO to record which part of the application is calling through the DB access layer (a minimal sketch appears at the end of this post). It's not instrumentation as such but it helps link the parts of the application to the related parts of the DB. This info is then available to other performance gathering tools we use such as Quest's Performance Analysis Server. I assume it is available in Profiler too but to be honest I've never had to look.

    ken.
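
    A minimal sketch of the CONTEXT_INFO idea, as it might look if the data access layer tagged each call (the tag text and the query are illustrative; the DMV is SQL Server 2005+):

    -- In the data access layer, tag the session with the calling module
    -- before issuing the actual database call.
    DECLARE @tag VARBINARY(128);
    SET @tag = CAST('OrderEntry.SaveOrder' AS VARBINARY(128));
    SET CONTEXT_INFO @tag;

    -- Any monitoring query (or trace column) can then read the tag back,
    -- linking the session to the part of the application that opened it.
    SELECT session_id,
           CAST(context_info AS VARCHAR(128)) AS calling_module
    FROM   sys.dm_exec_sessions
    WHERE  session_id = @@SPID;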

  • Exactly what is the advantage of DBMS_APPLICATION_INFO over a profiler? What can DBMS_APPLICATION_INFO do that a profiler can't? It just seems like a lot of work for nothing...

  • Mark wrote:

    For example, a critical customer complained that a report they ran a while ago was really slow. Just how do I track that down with the tools provided? If things could be reasonably logged (eg every stored procedure executed, elapsed time, and success or failure to the event log), it would be at least possible.

    Mark, there are three places you should look - look for a missing index with the DMVs (a typical missing-index query is sketched at the end of this post), look at the performance of each statement in the stored procedure(s) generating the report with SQL Server Profiler, and (if your report is generated by SSRS) look at how long it takes to render the report. All of these items can be determined easily with the tools provided by Microsoft with the SQL Server product. One caveat - DMVs are only available in SQL Server 2005 and SQL Server 2008, but you could look at all joins (inner and outer) and look at the list of available indices for those tables to determine if there's a missing index; it's just a little more time consuming without DMVs.

    I'll be the first to say that Microsoft could make the job a little easier, but they do provide the tools necessary to do the job.

    There is one other possibility that folks seem to regularly overlook - the size of the transaction log. Large transaction logs can incur penalties for inserts and deletes. But, that's a matter of database hygiene that should already be handled if the DBA is doing his/her job properly.
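
    For anyone who has not used the missing-index DMVs, a typical query along those lines looks something like this (SQL Server 2005+; the TOP value and "impact" formula are just conventions, not anything official):

    SELECT TOP (25)
           mid.[statement]          AS table_name,
           mid.equality_columns,
           mid.inequality_columns,
           mid.included_columns,
           migs.user_seeks,
           migs.avg_total_user_cost * migs.avg_user_impact
               * (migs.user_seeks + migs.user_scans) AS estimated_impact
    FROM   sys.dm_db_missing_index_details     AS mid
    JOIN   sys.dm_db_missing_index_groups      AS mig
           ON mig.index_handle = mid.index_handle
    JOIN   sys.dm_db_missing_index_group_stats AS migs
           ON migs.group_handle = mig.index_group_handle
    ORDER BY estimated_impact DESC;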

  • I've been a database developer for about 20 years and am generally disappointed by the inability of programmers to understand simple debugging practices. I blame all those newfangled debugger things that make things easy.

    I've never used T-SQL debugging, which will no doubt have some screaming, 'Why?', but there isn't a good reason; it's just that these things were not available when I learned my trade.

    I learned procedural SQL as PL/SQL and used DBMS_APPLICATION_INFO to inform a person debugging what was going on and where. It worked very well.

    We recently developed what I would consider to be a very large set of T-SQL procedures (in the 1000s of lines) to perform some complex processing. It was divided into about 10 distinct processes, each of which was called from the main procedure; I still miss PL/SQL packages, by the way. Some of our data was a bit 'unpredictable' and it was arriving from different sources, so there was big scope for issues in the middle of the code, like unexpected null values.

    In order to provide debugging and trace information, I had a debug flag which, when set, created a temporary table into which debug lines were placed so that they could be queried at the end of the batch (a bare-bones sketch appears at the end of this post). This is very simple and it's easy to add more info as you track the problem down, plus it works anywhere. We have 11 'client' sites and it's great for that, because you don't need lots of tools running, just SSMS or the Query Analyser.

    Probably the biggest advantage is you can say what you are doing in processing terms, rather than starting from the standpoint of looking at what is happening technically. In my opinion, there is an important difference.

    After reading some stuff about SQL Profiler, I've started to use it recently and it does help for some purposes, but a few PRINTs or SELECTs for feedback can be pretty useful too.
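
    As a rough illustration of that temp-table approach (all names invented for the example):

    CREATE PROCEDURE dbo.usp_ComplexProcess
        @Debug BIT = 0
    AS
    BEGIN
        -- When debugging, collect trace lines into a temp table instead of
        -- scattering PRINTs through the batch.
        IF @Debug = 1
            CREATE TABLE #DebugLog
            (
                LogTime DATETIME     NOT NULL DEFAULT GETDATE(),
                Msg     VARCHAR(500) NOT NULL
            );

        IF @Debug = 1
            INSERT #DebugLog (Msg) VALUES ('Starting step 1: load source rows');

        -- ... step 1 of the processing ...

        IF @Debug = 1
            INSERT #DebugLog (Msg) VALUES ('Step 1 complete; starting step 2');

        -- ... step 2 ...

        -- At the end of the batch, return whatever was collected.
        IF @Debug = 1
            SELECT LogTime, Msg FROM #DebugLog ORDER BY LogTime;
    END;
    GO

    Because it only needs SSMS or the Query Analyser, the same technique works unchanged at every client site.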

  • What an appropriate topic for me! I have been an Oracle DBA since 1982: worked on the first port from DEC VMS to Data General Eclipse in the Pentagon. I have only worked with SQL Server since 1998, so I admire and respect the people I personally know here.

    Last Friday I found 18 packages compiled with debug in production and immediately sent a notice that this was unacceptable... and I got full agreement from both local and remote development teams, and also from the SQL Server DBA.

    Then I got home and read this post - and I currently only read the weekly email... no longer involved in day-to-day activities here. I did not read the full thread until just now.

    I have never used or recommended the dbms_application_info package. I doubt that any of our developers even know about it.

    I came to work today thinking it might be time to change my career-long opinion on the subject; but, the general feeling here is that… I was right, always have been…

    No debugging in production.

    Thank-you,
    David Russell
    Any Cloud, Any Database, Oracle since 1982

  • I'm definitely a fan of instrumentation for debugging purposes. Quite a while ago I wrote something called TSQLAssert (http://datamanipulation.net/tsqlmacro/) for that very purpose... It's based on the assertion frameworks found in C and C++, and like them is built on top of a macro preprocessor (called TSQLMacro). These tools proved to be totally unpopular with most SQL Server people I've introduced them to, but I'm still quite happy with them 😎

    --
    Adam Machanic
    whoisactive
