Slow-performing SP fixed by sp_recompile

  • I'd appreciate it if someone could help me explain an odd problem I'm facing.

    I have a SQL2K SP3a server with 8 GB of memory and 8 processors in a cluster environment, with max memory for SQL set to 7 GB. The database has auto create statistics turned on (the rest are unchecked). The DB size is 30 GB data / 10 GB log. The system is transactional from 7:00 am to 7:00 pm, and from 8:00 pm to 4:00 am it runs batch processing. During the batch we take 'controlled' transaction log backups through the critical path of the batch. The first transaction log backup normally accumulates 5 GB+ of volume.

    Here is my problem, and where I need someone's help. After the first transaction log backup completes, the second batch sometimes runs OK, but other times it does not. I've narrowed it down to a slow-running SP. If the batch slows down and I run sp_recompile, the problem goes away. I'm wondering what magic sp_recompile does. I can't come up with a logical reason that makes sense to me, let alone one that would convince the IT manager. I ran a trace on the process, but no sp_recompile shows up on either good days or bad days. Can someone help me explain how sp_recompile could work this magic, and how I can prove it?
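
    For reference, this is roughly what I run when the batch slows down (the procedure name here is only a placeholder for the real object):

    -- Placeholder name; substitute the procedure the slow batch step calls.
    -- sp_recompile only marks the procedure, so the plan is actually rebuilt
    -- the next time the procedure executes.
    EXEC sp_recompile N'dbo.usp_BatchStep2';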


  • After the first batch update, a large amount of data has changed, which affects the statistics of the database, but the original execution plan of the second SP doesn't reflect the latest statistics. By recompiling the SP, you let it reoptimize against the current statistics.
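
    If you want to see whether the statistics really are stale after the first batch, something like this shows when each index's stats were last updated and how many rows have changed since (the table name is just an example; use the tables the slow SP reads):

    -- SQL 2000: sysindexes.rowmodctr counts row modifications since the
    -- last statistics update for each index.
    SELECT  name                  AS index_name,
            STATS_DATE(id, indid) AS stats_last_updated,
            rowmodctr             AS rows_modified_since_update,
            rowcnt                AS table_rows
    FROM    sysindexes
    WHERE   id = OBJECT_ID('dbo.BatchTable')
      AND   indid BETWEEN 1 AND 250;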

  • I fully understand what you said. But I'm looking for some way to prove it, and to explain why it happens on one day but not another when the activity is almost the same.

  • I would add the WITH RECOMPILE option to the second SP and see how it goes over the next few days.

    You could also update the database statistics with a higher sampling percentage or a full scan after the first batch, to make the second SP aware of the statistics changes and eventually recompile itself.
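
    For example (the procedure and table names are placeholders, not your real objects):

    -- Option 1: compile a fresh plan on every execution.
    ALTER PROCEDURE dbo.usp_BatchStep2
        @BatchDate datetime
    WITH RECOMPILE
    AS
        SELECT COUNT(*) FROM dbo.BatchTable WHERE BatchDate = @BatchDate;
    GO

    -- Option 2: refresh the statistics after the first batch with a full
    -- scan (or a higher sample) so the next compile sees the new data.
    UPDATE STATISTICS dbo.BatchTable WITH FULLSCAN;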


  • I agree with Allen.

    Even when it was okay one day, the potential problem was still there; it just didn't cross the bottleneck, so you didn't see it, and SQL eventually updated the stats. On the bad days it actually crossed the bottleneck. Try WITH RECOMPILE; this sounds like the perfect place to use that option. One more thing you might want to check in Perfmon during that time is "Pages/sec" and max memory usage. I can see that you are using the AWE option, since you have more than 4 GB and kept 1 GB back, but even if you set max memory to 7 GB, SQL can use more than 7 GB. The 7 GB max memory applies only to the buffer pool; it does not cover the DLLs SQL uses, such as DTS packages and other extended procedures.
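
    A quick way to double-check the memory configuration from Query Analyzer (the advanced options have to be made visible first):

    -- Expose the advanced settings.
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;

    -- Is AWE on, and what is max server memory actually set to?
    EXEC sp_configure 'awe enabled';
    EXEC sp_configure 'max server memory (MB)';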


  • I note you have auto create stats turned on; however, unless you update them regularly you'll probably have problems as the stats go out of date.

    A number of problematic ETL loads and batch jobs have been resolved by updating stats and rebuilding indexes between and during the steps.
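
    Something along these lines between the batch steps, with your own table names substituted:

    -- Quick sampled refresh of statistics for every table in the database.
    EXEC sp_updatestats;

    -- Rebuild all indexes on a heavily-churned table (placeholder name);
    -- DBCC DBREINDEX also refreshes that table's statistics with a full scan.
    DBCC DBREINDEX ('dbo.BatchTable', '', 90);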

    [font="Comic Sans MS"]The GrumpyOldDBA[/font]
    www.grumpyolddba.co.uk
    http://sqlblogcasts.com/blogs/grumpyolddba/

  • Adding the RECOMPILE option to the SP is not a viable solution: the recompile takes 600 ms and the SP is called more than 10,000 times, so this would greatly impact my batch window. I understand that updating stats and rebuilding indexes would help, but since the underlying tables used by the SP contain more than 25 million rows, an index rebuild is not an option either. I'm wondering whether I should go with a scheduled job to recompile the SP or use hints within the SP. This problem never happens in my Oracle or Sybase environments; it seems SQL still has a long way to go.
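
    If I do go the scheduled route, I'm thinking of a single step in the batch critical path, run once right after the first log backup (the procedure name is a placeholder):

    -- One recompile per batch instead of 600 ms on each of the 10,000+ calls:
    -- mark the procedure so its plan is rebuilt on its next execution.
    EXEC sp_recompile N'dbo.usp_BatchStep2';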

  • Well... it is often my experience that blaming the server or database is usually a cop-out for poor code or processes. However, I'd suggest you turn auto update stats on; out-of-date stats are often a major factor in poor performance. I can't compare to Oracle or Sybase, but certainly when I worked with Ingres, if you didn't update stats you would sometimes suffer similar problems. 25 million rows isn't too big, and you can issue specific UPDATE STATISTICS commands against a single table, which shouldn't take too long.

    [font="Comic Sans MS"]The GrumpyOldDBA[/font]
    www.grumpyolddba.co.uk
    http://sqlblogcasts.com/blogs/grumpyolddba/

  • Try using index hints in the stored procedure to force it to always use those indexes, regardless of whether the statistics are up to date.

    By the way, your comment about this not happening on Oracle or Sybase is relative. We use Oracle here for a very large database (we just upgraded to 1 TB of disk space, and a couple of tables are over 100 million rows). DBMS_STATS recomputes the statistics as needed and our stored procedures are recompiled every night. However, we recently ran into a problem similar to yours: one day the query was fast, the next day it was slow. Although we preferred not to do it, adding index hints to the query solved our problem.
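
    In SQL Server the hint goes inline in the query inside the procedure, something like this (the procedure, table, and index names are made up for illustration):

    CREATE PROCEDURE dbo.usp_GetOrders
        @StartDate datetime,
        @EndDate   datetime
    AS
        -- The INDEX hint forces the optimizer to use IX_Orders_OrderDate
        -- even when the statistics would steer it to a different plan.
        SELECT  o.OrderID, o.OrderDate, o.Amount
        FROM    dbo.Orders o WITH (INDEX (IX_Orders_OrderDate))
        WHERE   o.OrderDate >= @StartDate
          AND   o.OrderDate <  @EndDate;
    GO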

  • Thanks for all your replies. I think I will proceed with the index hint.

