PERFORMANCE QUESTION

  • Hi,

    I am new to SQL Server 2005 and I have had two or three interviews.

    I got this question repeated every time, and I have searched Google and everywhere else but didn't find the answer. I am sure I will get it here.

    The questions are:

    1) You created a query/stored proc and the output result came back in 1-2 seconds; everything was fine, but when you came back the next day the same code was taking 1 hour. What could be the reason?

    2) You created a stored proc and for you it's taking 1-2 seconds, whereas for other users the same code is taking 1 hour. What could be the reason?

    So please do answer, experts.

    Thanks a lot

  • Since they're interview questions, I'm not going into great detail; you can research more yourself.

    irfan.sql (12/4/2007)


    1) You created a query/stored proc and the output result came back in 1-2 seconds; everything was fine, but when you came back the next day the same code was taking 1 hour. What could be the reason?

    Could be out-of-date statistics; could be parameter sniffing.

    2) You created a stored proc and for you it's taking 1-2 seconds, whereas for other users the same code is taking 1 hour. What could be the reason?

    Could be related to object ownership. Could be parameter sniffing. Otherwise, not sure.
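    If parameter sniffing is the suspect, a common mitigation on SQL 2005 is a statement-level recompile hint, so the plan is built for the actual parameter value on each call. A minimal sketch (the proc, table, and column names here are hypothetical, not from the original question):

    ```sql
    -- Hypothetical proc: a plan sniffed for a very selective parameter value
    -- performs badly for a non-selective one. OPTION (RECOMPILE) on SQL 2005
    -- recompiles this statement for the actual parameter value each execution.
    CREATE PROCEDURE dbo.usp_GetEmpRange
        @StartEno int
    AS
    BEGIN
        SELECT Eno, Ename
        FROM dbo.Emp
        WHERE Eno >= @StartEno
        OPTION (RECOMPILE);
    END;
    ```

    The trade-off is compilation cost on every call, so this suits procs whose parameter selectivity varies wildly, not hot paths called thousands of times a second.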

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • My suggestions:

    1) Check the indexes on the table, i.e. your query should perform an index seek or scan while executing.
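    A simple way to check whether a query seeks or scans, without executing it, is the showplan setting (a sketch; the table and predicate are hypothetical):

    ```sql
    -- Return the estimated plan as text instead of running the query;
    -- look for 'Index Seek' vs 'Index Scan' / 'Clustered Index Scan' operators
    SET SHOWPLAN_TEXT ON;
    GO
    SELECT Eno, Ename FROM dbo.Emp WHERE Eno = 42;
    GO
    SET SHOWPLAN_TEXT OFF;
    GO
    ```

    Note that SET SHOWPLAN_TEXT must be the only statement in its batch, hence the GO separators.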

  • Ananth (12/5/2007)


    My suggestions:-

    1) Check the indexes on the table, i.e. your query should perform an index seek or scan while executing.

    Seeks preferably, as few scans as possible.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • I would answer both questions: a possible blocking problem, out-of-date statistics, or indexes.

  • Irfan,

    Might be a lock acquired on the table.

    Say, for example:

    Yesterday's scenario:

    ---------------------------------------------

    Table name: Emp

    Your query:

    select Eno, Ename from Emp

    Table status:

    Assume nobody performed any operations (inserts, updates) on this table.

    Assume it gave you the result within 2 seconds.

    Today's scenario:

    ---------------------------------------------

    Table name: Emp

    Your query:

    select Eno, Ename from Emp

    Table status:

    Assume some users perform updates on this table; then your query would take some time to process due to lock contention.
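    If lock contention is suspected, the SQL 2005 DMVs can show who is blocked and by whom (a sketch; the session ids you see will obviously vary):

    ```sql
    -- List requests that are currently blocked and the session blocking them
    SELECT r.session_id,
           r.blocking_session_id,
           r.wait_type,
           r.wait_time          -- milliseconds spent waiting so far
    FROM sys.dm_exec_requests AS r
    WHERE r.blocking_session_id <> 0;
    ```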

    karthik

  • Actually, the answer and solution I'd give, having suffered from this problem, is that the result sets returned by this proc vary so much that the cached plan is not optimal for all procedure calls. The solution is to add a plan guide which recompiles the procedure plan (for SQL 2005); for 2000 you'd need to change the proc or add a recompile hint in the procedure itself.

    Just to qualify this in simple terms: your query selects from the PK (clustered); the query is essentially select blah from table where key >= param. If your table is 300 million rows and the parameter is such that it may bring back a single row or a few rows (< 5%, say), then you'll get a seek; however, if the result set is much larger, you get a scan. Whichever way you go, if your plan is for a seek and you need a scan, performance will be bad; if the plan is for a scan, then all your seeks will be bad. This is really very basic stuff, and if I were interviewing a DBA this is the answer I'd be looking for, not the stats answer. (Database auto stats is supposed to avoid this type of issue.)

    It's highly unlikely the stats would get that far out of date in a day, unless the query is against a staging table or one which is emptied and reloaded each night. (A possible answer to question 2.)

    Parameter sniffing is likely to give consistently bad performance from the word go.
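    A plan guide of the kind described might look like this on SQL 2005. This is a sketch only: the guide, proc, table, and parameter names are hypothetical, and @stmt must match the statement text inside the module exactly for the guide to apply.

    ```sql
    -- Attach a RECOMPILE hint to one statement inside an existing proc
    -- without editing the proc itself
    EXEC sp_create_plan_guide
        @name            = N'PG_EmpRange_Recompile',
        @stmt            = N'SELECT Eno, Ename FROM dbo.Emp WHERE Eno >= @StartEno',
        @type            = N'OBJECT',
        @module_or_batch = N'dbo.usp_GetEmpRange',
        @params          = NULL,
        @hints           = N'OPTION (RECOMPILE)';
    ```

    This is handy when you cannot change the procedure source, e.g. in a third-party application database.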

    The GrumpyOldDBA
    www.grumpyolddba.co.uk
    http://sqlblogcasts.com/blogs/grumpyolddba/

  • If you would like to find out more about plan guides goto

    http://technet.microsoft.com/en-us/library/ms190417.aspx

    In addition, I would like to add that index fragmentation may also add to your problems if, for example, the scan is an ordered clustered index scan.

    An ordered index scan is a full scan of the leaf level of the clustered index following the linked list; the performance of an ordered scan will depend on the fragmentation level of the index.
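    Fragmentation of that clustered index can be checked with the SQL 2005 DMF (table name hypothetical):

    ```sql
    -- LIMITED mode reads only the pages above the leaf level, so it is cheap;
    -- high avg_fragmentation_in_percent hurts ordered scans most
    SELECT index_id,
           avg_fragmentation_in_percent,
           page_count
    FROM sys.dm_db_index_physical_stats(
             DB_ID(), OBJECT_ID('dbo.Emp'), NULL, NULL, 'LIMITED');
    ```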

    CodeMinkey

  • I sometimes like to play devil's advocate, especially with some of the answers, as I often feel many SQL myths are born this way!

    I have never ever improved or changed the performance of a database query by updating stats, issuing a full stat scan, or rebuilding indexes in a database where there have been regular daily index rebuilds and statistics updates AND auto update stats is turned on. Database sizes: a few MB to 1.5 TB.

    So let's think before we post, rather than making lots of suggestions which, let's be honest, are posted so often it's not true. How about posting when you've actually had real experience of a problem and found a solution?

    Now to the stats. I have found issues with stats, especially in third-party application databases and staging/DW/BI databases: loading and reloading entire tables, never actually having any maintenance plans to optimise the database, turning off auto create and auto update stats. In these types of situations updating stats may lead to performance improvements; however, in nearly all cases this would not happen overnight, except perhaps where a table is being loaded/reloaded with data every night and the volume of data has changed such that the plans would be way off.

    There's another point too here: generally, if you rebuild the clustered index on a table, all the plans in cache that refer to that table get marked for recompilation.

    As you might guess, you probably wouldn't want to be interviewed by me for a DBA job! BUT it was an excellent question to post, and the answer is just so much more than might first be imagined; this is why database performance tuning is such a black art. Remember, plan guides and recompilation hints don't exist in SQL 2000.

    The GrumpyOldDBA
    www.grumpyolddba.co.uk
    http://sqlblogcasts.com/blogs/grumpyolddba/

  • We haven't answered question 2 fully, I note.

    This is trickier, and I'm not sure it's actually a very fair question. I have some ideas, as it's happened to me a number of times!

    1. SET options issued by the connection force the query to go bad.

    2. Client machine problems; I think someone mentioned NIC settings — typically the client card gets set down to 10 Mb and/or half duplex.

    3. Problems from the client through a middle tier, which you as DBA probably wouldn't encounter.

    4. The user sends different parameters to those you tested with.

    5. The user is on the other side of the world, connecting by dial-up modem!

    What I'd actually ask is whether we're talking about proc execution time or the time to return results to the client.
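    Point 1 can be verified directly: compare the SET options of your session and the slow user's session, since a mismatch (ARITHABORT is the classic culprit from Management Studio vs. application connections) gives each option combination its own cached plan. A sketch; the session ids are hypothetical:

    ```sql
    -- A plan is cached per combination of these options, so two sessions
    -- running the same proc can get very different plans
    SELECT session_id, arithabort, ansi_nulls, quoted_identifier
    FROM sys.dm_exec_sessions
    WHERE session_id IN (51, 67);
    ```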

    Hope this helps.

    The GrumpyOldDBA
    www.grumpyolddba.co.uk
    http://sqlblogcasts.com/blogs/grumpyolddba/

  • colin Leversuch-Roberts (12/21/2007)


    I have never ever improved or changed the performance of a database query by updating stats, issuing a full stat scan, or rebuilding indexes in a database where there have been regular daily index rebuilds and statistics updates AND auto update stats is turned on.

    Agreed. The one time I've seen perf problems from bad stats was on a large table (50 million rows) with the clustered index on an EffectiveDate column, and most queries doing a range search on effectivedate < getdate()-1. About 100 000 rows per day got added and no updates were done.

    The system had a custom index rebuild job that, once a week, rebuilt indexes with fragmentation > 10%. This particular table, because the cluster was on an ascending column, didn't get fragmented. I found that the auto stats didn't kick in frequently enough because of the range query done.

    Once the stats got more than about 3 days out of date, performance would go through the floor, because the optimiser assumed 0 rows but would get 100 000.

    I'll often suggest people check the stats first, because it's easy to do. Check, that is; not update unless it looks necessary.
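    Checking (rather than blindly updating) the stats is indeed cheap. A sketch, with a hypothetical table name standing in for the big table described above:

    ```sql
    -- When were the statistics on this table last updated?
    SELECT name,
           STATS_DATE(object_id, stats_id) AS last_updated
    FROM sys.stats
    WHERE object_id = OBJECT_ID('dbo.BigTable');

    -- Only if they look stale for the data volume:
    UPDATE STATISTICS dbo.BigTable WITH FULLSCAN;
    ```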

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • I wasn't having a go; it's just that I can't believe a production database wouldn't have an update stats job run every day (night / quiet period / whatever), as it's something I've done from way back. In fact, in Ingres 6 (my first proper RDBMS), until you ran an update stats the optimiser didn't even see an index which had been created. With 2005, update stats is even better, as it doesn't just blanket update.

    However, it never fails to amaze me, when I investigate poorly performing applications/databases, how often there are no proper maintenance routines. (I'm also quite amazed how many senior IT "managers" want to disable stats and index rebuilds to "improve" performance; this often comes from very unusual sources: Microsoft Navision support believe that for best performance you should disable auto update and auto create stats to avoid blocking!)

    The GrumpyOldDBA
    www.grumpyolddba.co.uk
    http://sqlblogcasts.com/blogs/grumpyolddba/

  • I have visited many, many client sites where absolutely no maintenance plans exist, and in some cases they have a very small window in their 24/7 operation to update stats or defrag or rebuild indexes.

    In my world you have to work with what you have and hope to influence or change the opinions of the in-house DBA or senior managers. In these cases it's a matter of identify and solve in as short a time as possible, with little or no degradation in production performance while doing maintenance, and a large improvement when you are finished.

    Agreed that in an ideal world all databases should be well maintained, but the reality is that a lot of them are not.

    CodeMinkey

  • Oh, and with regard to interviews: I try to remember that the person had to prepare, get suited and booted, and go to the expense and stress of travelling to the interview, not to mention the stress of the interview process as a whole, and that I am not doing them a favor by interviewing them.

    Now, I am not suggesting that you should hire someone who is technically incompetent, or that your interview process should not highlight this; it should.

    But being unduly harsh or trying to intimidate someone for the sake of it is something that should not be encouraged or condoned.

    It's always worth remembering that while you might be interviewing someone today, that same person could be interviewing you some time in the future, or at the very least be in a position to influence opinion; and while they may have been impressed by your technical knowledge, they may not have been overly impressed by your manner or the way they were treated.

    CodeMinkey

  • colin Leversuch-Roberts (12/21/2007)


    I wasn't having a go; it's just that I can't believe a production database wouldn't have an update stats job run every day (night / quiet period / whatever).

    Oh, I can believe it. At my current job there was no reindexing job when I started, not to mention that all clustered indexes were on GUIDs. The fragmentation was actually fairly constant on all tables: 99.9%.

    However it never fails to amaze me that when I investigate poorly performing applications/databases how often there are no proper maintenance routines.

    But didn't you know? SQL maintains and tunes itself. <innocent look>

    I'm scared by how often I hear "We don't need a DBA; the databases maintain themselves." I mentioned to a friend recently that I wouldn't want to work on her database (the C# front-end auto-generates all the SQL), and her reply was along the lines of "But we'd never hire you anyway. The database is running fine and we don't need any DBAs."

    They're a credit-card processing company.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
