Twenty tips to write a good stored procedure

  • Ion Freeman (8/10/2009)


    Thanks, Gail. You saved me a lot of time 😉

Pleasure. What exactly did I save you time in regard to?

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
We stand on the bridge and no one may pass

  • timothyawiseman (8/10/2009)


I am not clear on whether temp tables generate better execution plans than table variables. I have not formally tested it, but intuitively I would expect the opposite. Eventually I will find time to test it, but if you have test results (or references) I would be very interested in seeing them.

They usually do, and the reason is statistics. Temp tables can have column statistics; table variables cannot. The optimiser uses statistics to estimate the number of rows that various parts of the query will process. Without stats, it estimates 1 row. If the table variable has close to 1 row in it, all is well. If it has lots of rows, the query optimiser's estimates are off, so its costing will be off, and the plan it thinks is optimal may be very far from optimal.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
We stand on the bridge and no one may pass

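Editor's note: Gail's point about statistics can be seen directly. The sketch below (table and column names are illustrative) assumes SQL Server of the era under discussion; run it with "Include Actual Execution Plan" and compare the estimated versus actual row counts on each scan.

```sql
-- Populate a temp table and a table variable with the same rows.
CREATE TABLE #temp (id INT PRIMARY KEY, val CHAR(10));
DECLARE @tv TABLE (id INT PRIMARY KEY, val CHAR(10));

INSERT INTO #temp (id, val)
SELECT TOP (100000)
       ROW_NUMBER() OVER (ORDER BY (SELECT NULL)), 'x'
FROM   sys.all_objects a CROSS JOIN sys.all_objects b;

INSERT INTO @tv (id, val)
SELECT id, val FROM #temp;

-- The temp table gets column statistics, so the optimiser's
-- estimate for this predicate will be close to 100000 rows.
SELECT COUNT(*) FROM #temp WHERE val = 'x';

-- The table variable has no statistics; the optimiser
-- estimates 1 row regardless of how many rows it holds.
SELECT COUNT(*) FROM @tv WHERE val = 'x';

DROP TABLE #temp;
```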
  • Lynn Pettis (8/10/2009)


So, are you volunteering to be part of a forced peer review group for articles? Then, the more important question: how long will you stay with it if you do volunteer? I know I don't have the time to volunteer for such a group. Between work and life, I barely have time for the things I want to do.

Yes, I would volunteer to be part of a peer review group, and if someone were to send me an article asking for comments unsolicited I would offer to send back my comments (no guarantee as to the accuracy, usefulness, or timeliness of the comments, and I may quickly put limits on it if I were to get deluged, but for now....).

Keep in mind most of us talking in this thread read the article and posted often fairly lengthy comments. If one of us had done that for the author prior to its publication instead of after the fact, he would have come out looking better, the article would be better, and perhaps some inexperienced reader would be spared reading through 10 pages of comments to figure out what was good and what was not in the article. In fact, if I may beg your forgiveness for being direct, you commented several times in this thread. If you had received the article prepublication, you probably would have spent less time giving good points directly to the author (and you have made some good points) than you did in the more general discussion.

Just to be clear, I would like to thank Mr Chakraborty for the article. It is clear he put time into it and much of the advice is good, but this is one very clear-cut case where a little peer review ahead of time would have helped. And again, I would be happy to provide it to at least a limited extent, as long as I were to get my submissions peer reviewed ahead of time in exchange. I certainly don't have time to do it for a lot of articles, but I could fit in one a week easily, especially since I would have read and commented on many of them after the fact anyway.

    ---
    Timothy A Wiseman
    SQL Blog: http://timothyawiseman.wordpress.com/

  • GilaMonster (8/10/2009)


    timothyawiseman (8/10/2009)


I am not clear on whether temp tables generate better execution plans than table variables. I have not formally tested it, but intuitively I would expect the opposite. Eventually I will find time to test it, but if you have test results (or references) I would be very interested in seeing them.

They usually do, and the reason is statistics. Temp tables can have column statistics; table variables cannot. The optimiser uses statistics to estimate the number of rows that various parts of the query will process. Without stats, it estimates 1 row. If the table variable has close to 1 row in it, all is well. If it has lots of rows, the query optimiser's estimates are off, so its costing will be off, and the plan it thinks is optimal may be very far from optimal.

That makes sense. Is it safe to say, then, that for small data sets a table variable will be more efficient and for large data sets a temp table will be faster?

    ---
    Timothy A Wiseman
    SQL Blog: http://timothyawiseman.wordpress.com/

  • timothyawiseman (8/10/2009)


    Off the cuff, one thought might be to circulate an article through a few volunteers prior to posting. They could send their comments to the author and send SSC an "approved" or "requires modification" rating. It would certainly take a substantial amount of initial set up, but once set up it could be mostly automated and require only limited intervention from our dear editors here.

That has been discussed before here. I don't recall if there was a consensus, but a common concern was lack of time. It's all well and good to say 'sure, I'll help edit', but when you're looking at an article a day getting published, it ends up being a lot of work to do in the limited spare time that we all have.

I've had a couple of articles posted elsewhere that went through an editing process. For one of them, it went back and forth about four times in the course of a week before we were happy with it. It was for a site that publishes far less than SSC does, and my one article took a significant portion of the editor's time for three days.

The other concern is subject matter knowledge. I'm in no way competent to edit an article on data mining or MDX, as a quick example.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
We stand on the bridge and no one may pass

  • timothyawiseman (8/10/2009)


    GilaMonster (8/10/2009)


    timothyawiseman (8/10/2009)


I am not clear on whether temp tables generate better execution plans than table variables. I have not formally tested it, but intuitively I would expect the opposite. Eventually I will find time to test it, but if you have test results (or references) I would be very interested in seeing them.

They usually do, and the reason is statistics. Temp tables can have column statistics; table variables cannot. The optimiser uses statistics to estimate the number of rows that various parts of the query will process. Without stats, it estimates 1 row. If the table variable has close to 1 row in it, all is well. If it has lots of rows, the query optimiser's estimates are off, so its costing will be off, and the plan it thinks is optimal may be very far from optimal.

That makes sense. Is it safe to say, then, that for small data sets a table variable will be more efficient and for large data sets a temp table will be faster?

    Maybe. 😀

    This is one of those areas where you need to test both ways and see what works best.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
We stand on the bridge and no one may pass

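Editor's note: "test both ways" can be as simple as running the same workload against each construct and comparing the timings. A minimal sketch (object names are illustrative):

```sql
-- Report CPU and elapsed time for each statement.
SET STATISTICS TIME ON;

-- Version A: table variable.
DECLARE @tv TABLE (id INT PRIMARY KEY);
INSERT INTO @tv (id)
SELECT TOP (50000)
       ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
FROM   sys.all_objects a CROSS JOIN sys.all_objects b;

SELECT COUNT(*) FROM @tv t1 JOIN @tv t2 ON t1.id = t2.id;

-- Version B: temp table with the same data.
CREATE TABLE #t (id INT PRIMARY KEY);
INSERT INTO #t (id) SELECT id FROM @tv;

SELECT COUNT(*) FROM #t t1 JOIN #t t2 ON t1.id = t2.id;

DROP TABLE #t;
SET STATISTICS TIME OFF;
```

Compare the elapsed times in the messages pane, and repeat at the row counts your procedure actually sees; the crossover point varies by workload.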
  • I've seen a lot of good debate today. The problem here, it seems, is the rating system. The displayed number of stars is viewed as authoritative, but in practical terms is arrived at by mere democratic vote. (The sheep outnumber the shepherds.) I think everyone would be happy if they could reliably expect that bad articles show 1 star and good articles show 5 stars.

Can the displayed rating be *weighted* according to the experience level of the raters?
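Editor's note: the weighting idea fits in a single query. Everything here is hypothetical: the article_ratings table, its columns, and the weights are assumptions for illustration only.

```sql
-- Hypothetical schema: one row per rating, carrying the stars
-- given and a weight derived from the rater's experience level
-- (e.g. 1 for a new member, larger factors for trusted reviewers).
SELECT   article_id,
         SUM(stars * weight) * 1.0 / SUM(weight) AS weighted_stars
FROM     article_ratings
GROUP BY article_id;
```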

  • steve_melchert (8/10/2009)


    I've seen a lot of good debate today. The problem here, it seems, is the rating system. The displayed number of stars is viewed as authoritative, but in practical terms is arrived at by mere democratic vote. (The sheep outnumber the shepherds.) I think everyone would be happy if they could reliably expect that bad articles show 1 star and good articles show 5 stars.

    Can the displayed rating be *weighted* according to the experience level of the raters?

Oh, now that would be interesting... say Jeff, Gsquared, Gila, Lynn, etc. get a weighting factor of 10/100/1000 for whatever star rating they give? But who watches the watchers? Then we have a cabal where, if they like each other's articles, nobody can stop them.

    Granted, I wouldn't expect this from any of them... but I can see it being raised.



    --Mark Tassin
    MCITP - SQL Server DBA
    Proud member of the Anti-RBAR alliance.
For help with Performance click this link
For tips on how to post your problems

  • GilaMonster (8/10/2009)


That has been discussed before here. I don't recall if there was a consensus, but a common concern was lack of time. It's all well and good to say 'sure, I'll help edit', but when you're looking at an article a day getting published, it ends up being a lot of work to do in the limited spare time that we all have.

I've had a couple of articles posted elsewhere that went through an editing process. For one of them, it went back and forth about four times in the course of a week before we were happy with it. It was for a site that publishes far less than SSC does, and my one article took a significant portion of the editor's time for three days.

The other concern is subject matter knowledge. I'm in no way competent to edit an article on data mining or MDX, as a quick example.

    You as always have excellent points, but they are both addressable.

As to the time, that is a very valid concern. I certainly would quit quickly if asked to edit an article a day as a volunteer. But compare that with the amount of time spent in the comments. These articles could be much better as articles (with all the benefits that brings for the author, the site, and the community) if a small part of that time had been spent with the author ahead of time. If there were enough volunteers (a big if, I know), I suspect we could pass each article through two reviewers prior to publication and ask each reviewer to only look at one a week or one every other week.

    As to the second, one way of addressing it is to have each volunteer identify the types of articles they can handle and then use the tags already in place to filter what they are sent.

As I said before, I think the site is excellent as it is, but I do think instituting peer review, if done properly, could make it much better.

    ---
    Timothy A Wiseman
    SQL Blog: http://timothyawiseman.wordpress.com/

  • GilaMonster (8/10/2009)


    Ion Freeman (8/10/2009)


    Thanks, Gail. You saved me a lot of time 😉

Pleasure. What exactly did I save you time in regard to?

I had been planning to respond to this article myself. I'm seeing a lot of discussion on how we can keep this sort of thing out of the information stream; maybe we could use a Wikipedia model: a 'this content has problems noted in the comments' tag.

  • mtassin (8/10/2009)


    But who watches the watchers? Then we have a cabal where if they like each other's articles then nobody can stop them.

    And who determines who should be in the group? If it's forum ranking alone, there are several people (who shall remain unnamed) with over 1000 points and (based on the questions they ask) very little knowledge of SQL.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
We stand on the bridge and no one may pass

  • GilaMonster (8/10/2009)


    mtassin (8/10/2009)

    And who determines who should be in the group? If it's forum ranking alone, there are several people (who shall remain unnamed) with over 1000 points and (based on the questions they ask) very little knowledge of SQL.

Very true... I'd like the record to show that I didn't put myself in the group...



    --Mark Tassin
    MCITP - SQL Server DBA
    Proud member of the Anti-RBAR alliance.
For help with Performance click this link
For tips on how to post your problems

  • GilaMonster (8/10/2009)


    Wanderer (8/10/2009)


Anyway: one useful piece of information I got from a Kalen Delaney course. sp_ is NOT Microsoft's abbreviation for "stored procedure". It actually means "special", and it works for tables as well. Try creating a table sp_table in master, then selecting from that table from a user database.

    Now that's interesting. Didn't realise it worked for anything other than procedures. That certainly has possibilities.....

Yes, it does have some possibilities, as long as you decide to allow modification of master. Bear in mind (I know that you know this) that rebuilding master, or restoring master, could lose your data, so keep your code and data backed up....
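Editor's sketch of the experiment described above. The object name and the user database name are placeholders, and the resolution behaviour is as claimed in the thread (reported from a Kalen Delaney course), not independently verified here:

```sql
-- Create an sp_-prefixed table in master.
USE master;
GO
CREATE TABLE dbo.sp_demo (msg VARCHAR(50));
INSERT INTO dbo.sp_demo (msg) VALUES ('resolved from master');
GO

-- Switch to any user database (name is a placeholder)...
USE SomeUserDatabase;
GO
-- ...and select with no database qualifier: per the claim,
-- the special-name resolution finds master.dbo.sp_demo.
SELECT msg FROM sp_demo;
GO

-- Clean up.
USE master;
DROP TABLE dbo.sp_demo;
```

As the reply notes, this ties your code to objects living in master, with all the rebuild/restore risks that brings.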

  • mtassin (8/10/2009)


    GilaMonster (8/10/2009)


    mtassin (8/10/2009)

    And who determines who should be in the group? If it's forum ranking alone, there are several people (who shall remain unnamed) with over 1000 points and (based on the questions they ask) very little knowledge of SQL.

Very true... I'd like the record to show that I didn't put myself in the group...

I'm sure she wasn't hinting at you. I'm pretty sure I have an idea or two about who she was thinking of as she wrote that particular post.

  • Love it when folks incorrectly post about stuff that may be deprecated. After all that info is so hard to come by:

    http://tinyurl.com/kr7tvb (lmgtfy link) 🙂

    Eddie Wuerch
    MCM: SQL

Viewing 15 posts - 106 through 120 (of 244 total)
