Stored Proc with table variables running very slow?

  • We have a stored procedure that had been running fine for 5 years, but now it is running very slowly. It uses table variables to load data from SELECT statements. The procedure runs quickly in UAT (which has almost equivalent data to Production), finishing in about 3 seconds, but in production it runs forever.

    If we change the table variables to temp tables, it runs in about 8 seconds in production. I was under the assumption that table variables are much better than temp tables. Also, the production server is more robust (memory and CPU) and better able to handle the load than the UAT server.

    Can anyone let me know what is wrong with the table variables? Also, is it possible that the data is more fragmented in production, causing the slowness? If so, how can I find that out?

    Thanks in advance for any replies.

  • Table variables are limited (IMO) if you are dealing with a large number of records.

    I have found that, for large record sets, a temp table can offer better performance (if tempdb is configured accordingly). The reason is that with a table variable you are unable to add additional indexes, so SELECTs used later in the SP cannot make use of them.

    I would look at the execution plan of the SP in your Prod environment and see where most of the cost is. It could be that your current indexes need rebuilding/reorganizing, or that your stats are stale!

  • It depends. If the number of rows of data in the table variables is small, then they work well. If the number of rows increases, then temp tables become more efficient. The reason is that, regardless of the number of rows actually in a table variable, the Query Optimizer treats it as if it has only 1 row, and there are no statistics on table variables either. This is not the case with temp tables.
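    You can see this for yourself with a small sketch (table and column names here are made up, not from the original proc): load the same rows into a table variable and a temp table, then compare the estimated row counts in the actual execution plans.

    ```sql
    -- Illustrative repro: compare optimizer estimates for a table variable
    -- vs. a temp table holding identical data.
    DECLARE @tv TABLE (id INT PRIMARY KEY);

    INSERT INTO @tv (id)
    SELECT TOP (10000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
    FROM sys.all_columns a CROSS JOIN sys.all_columns b;

    CREATE TABLE #tt (id INT PRIMARY KEY);
    INSERT INTO #tt (id) SELECT id FROM @tv;

    -- Turn on "Include Actual Execution Plan" in SSMS and run both:
    SELECT * FROM @tv WHERE id > 100;  -- typically estimated at 1 row (no statistics)
    SELECT * FROM #tt WHERE id > 100;  -- estimate close to actual (has statistics)

    DROP TABLE #tt;
    ```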

    Does this brief explanation help?

  • san43 (1/29/2010)


    We have a stored procedure that had been running fine for 5 years, but now it is running very slowly. It uses table variables to load data from SELECT statements. The procedure runs quickly in UAT (which has almost equivalent data to Production), finishing in about 3 seconds, but in production it runs forever.

    If we change the table variables to temp tables, it runs in about 8 seconds in production. I was under the assumption that table variables are much better than temp tables. Also, the production server is more robust (memory and CPU) and better able to handle the load than the UAT server.

    Can anyone let me know what is wrong with the table variables? Also, is it possible that the data is more fragmented in production, causing the slowness? If so, how can I find that out?

    Thanks In Advance for any replies.

    Check out this article for a comparison of table variables and temp tables.

    Comparing Table Variables to Temporary Tables

    Most likely the issue is that the optimizer can only "see" 1 row in a table variable, and the plan is built for that. When you have more rows, that plan may no longer be optimal.
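    One workaround worth testing (a sketch under the assumption that you can tolerate the extra compile cost, with invented names): adding OPTION (RECOMPILE) to the statement that reads the table variable lets the optimizer see the actual row count at execution time, even though it still has no column statistics.

    ```sql
    -- Hypothetical example: with OPTION (RECOMPILE) the statement is compiled
    -- after @Orders is populated, so the optimizer sees the real cardinality.
    DECLARE @Orders TABLE (OrderID INT PRIMARY KEY, CustomerID INT);

    INSERT INTO @Orders (OrderID, CustomerID)
    SELECT OrderID, CustomerID
    FROM dbo.Orders;                  -- dbo.Orders is illustrative

    SELECT o.OrderID, o.CustomerID
    FROM @Orders AS o
    WHERE o.CustomerID = 42
    OPTION (RECOMPILE);               -- estimate reflects actual rows, not 1
    ```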

    Wayne
    Microsoft Certified Master: SQL Server 2008
    Author - SQL Server T-SQL Recipes


    If you can't explain to another person how the code that you're copying from the internet works, then DON'T USE IT on a production system! After all, you will be the one supporting it!
    Links:
    For better assistance in answering your questions
    Performance Problems
    Common date/time routines
    Understanding and Using APPLY Part 1 & Part 2

  • Depending on the scenario, table variables can be better than temp tables, and vice versa. No single type is the perfect solution for all scenarios. Table variable performance declines when a large number of records is involved. Another factor is whether or not the table variable is indexed (only limited indexing is available). The same can be said of temp tables; however, the indexing options for temp tables are more open.
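    To illustrate the difference in indexing options (names invented for the example): on SQL Server 2008, a table variable can only get indexes through constraints declared inline, while a temp table also accepts CREATE INDEX after it has been populated.

    ```sql
    -- Table variable: indexes only via inline constraints.
    DECLARE @t TABLE (
        id   INT PRIMARY KEY,          -- clustered index via constraint
        code VARCHAR(10) UNIQUE        -- nonclustered index via constraint
    );
    -- CREATE INDEX against @t is not allowed.

    -- Temp table: full indexing options, including after population.
    CREATE TABLE #t (id INT, code VARCHAR(10), amount MONEY);
    INSERT INTO #t VALUES (1, 'A', 10.00);
    CREATE NONCLUSTERED INDEX ix_code ON #t (code) INCLUDE (amount);
    DROP TABLE #t;
    ```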

    Jason...AKA CirqueDeSQLeil
    _______________________________________________
    I have given a name to my pain...MCM SQL Server, MVP
    SQL RNNR
    Posting Performance Based Questions - Gail Shaw
    Learn Extended Events

  • As some have already stated, the optimizer thinks all Table Variables have only one row and, therefore, may not pick the best execution plan once the data finally reaches what some call the "Tipping Point". Every server has one.

    Another bad part about Table Variables is that they do not and cannot be made to use statistics in any way, shape, or form.

    Last but not least, there's a myth that Table Variables are "memory only". Nothing could be further from the truth. Table Variables and Temp Tables will both operate as "memory only" if they fit into memory. Both will use TempDB disk space if they cannot.

    I sometimes lose a tiny bit of performance by not using Table Variables... at least in the short run. But I never have to worry about an increase in scale causing what you just went through. Temp Tables are also easier to troubleshoot code with, because a Temp Table persists between batches in SSMS. You have to rebuild a Table Variable every time you want to run another test, because it has the same lifetime as any other variable.

    Not that I recommend it for everyone (I don't want to start a holy war), but the only place I use Table Variables is in User Defined Functions and the only reason why I use them there is because you can't use Temp Tables in a function.
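    If you want to check the "memory only" myth on your own system, one approach (a sketch; it requires VIEW SERVER STATE permission) is to look at the tempdb pages allocated by your session after loading a table variable:

    ```sql
    -- Both table variables and temp tables allocate pages in tempdb.
    DECLARE @big TABLE (id INT, filler CHAR(2000));

    INSERT INTO @big (id, filler)
    SELECT TOP (5000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)), 'x'
    FROM sys.all_columns a CROSS JOIN sys.all_columns b;

    -- Pages allocated in tempdb by this session: nonzero even though
    -- only a table variable was used.
    SELECT user_objects_alloc_page_count
    FROM tempdb.sys.dm_db_session_space_usage
    WHERE session_id = @@SPID;
    ```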

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.
    "Change is inevitable... change for the better is not".

    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
    Intro to Tally Tables and Functions

  • san43 (1/29/2010)


    Also, is it possible that the data is more fragmented in production, causing the slowness? If so, how can I find that out?

    My apologies... I missed that part. Look up DBCC SHOWCONTIG in Books Online. Not only will you find out how to determine the fragmentation levels of all your indexes, but there's also a reasonable script to either defragment or rebuild indexes based on the level of fragmentation.
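    On SQL Server 2005 and later, the sys.dm_db_index_physical_stats DMV is the newer alternative to DBCC SHOWCONTIG. A sketch of a fragmentation check for the current database:

    ```sql
    -- Average fragmentation per index in the current database (LIMITED scan).
    SELECT
        OBJECT_NAME(ips.object_id)  AS table_name,
        i.name                      AS index_name,
        ips.avg_fragmentation_in_percent,
        ips.page_count
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i
      ON i.object_id = ips.object_id
     AND i.index_id  = ips.index_id
    WHERE ips.page_count > 100      -- ignore tiny indexes
    ORDER BY ips.avg_fragmentation_in_percent DESC;
    -- Common rule of thumb: > 30% -> ALTER INDEX ... REBUILD,
    -- 5-30% -> ALTER INDEX ... REORGANIZE.
    ```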

    --Jeff Moden



  • Table variables are also not affected by transactions. They are good for capturing data inside transactions regardless of whether the transaction is committed or rolled back.
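    A minimal demonstration of that behavior (illustrative names):

    ```sql
    CREATE TABLE #t (id INT);
    DECLARE @t TABLE (id INT);

    BEGIN TRANSACTION;
        INSERT INTO #t (id) VALUES (1);
        INSERT INTO @t (id) VALUES (1);
    ROLLBACK TRANSACTION;

    SELECT COUNT(*) AS temp_rows FROM #t;  -- 0: the insert was rolled back
    SELECT COUNT(*) AS tv_rows   FROM @t;  -- 1: the table variable keeps its row
    DROP TABLE #t;
    ```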

  • I forgot about that, and that's very true. Have you ever seen code that actually does that, though?

    --Jeff Moden



  • Actually, I used it to capture data changes while developing a data modification process, when I needed to do destructive testing in a test environment and didn't want to have to restore the database each time.

  • Lynn Pettis (1/29/2010)


    Actually, I used it to capture data changes while developing a data modification process, when I needed to do destructive testing in a test environment and didn't want to have to restore the database each time.

    That's actually the first time I've heard of someone using it for a practical reason. Thanks, Lynn.

    --Jeff Moden



  • Jeff Moden (1/29/2010)


    Lynn Pettis (1/29/2010)


    Actually, I used it to capture data changes while developing a data modification process, when I needed to do destructive testing in a test environment and didn't want to have to restore the database each time.

    That's actually the first time I've heard of someone using it for a practical reason. Thanks, Lynn.

    It just made it easier to actually do the updates, capture the changes, roll back the update, then write the changes to another table to verify whether the changes were properly made or whether I needed to make adjustments to the criteria for the update.

    I didn't have to do any restores between tests. Saved plenty of time.
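    A sketch of that workflow (table and column names invented for illustration): do the update inside a transaction, capture the before/after values with an OUTPUT clause into a table variable, then roll back. The captured rows survive the rollback for inspection.

    ```sql
    DECLARE @changes TABLE (id INT, old_amount MONEY, new_amount MONEY);

    BEGIN TRANSACTION;

    UPDATE dbo.SomeTable                    -- hypothetical target table
    SET    amount = amount * 1.10
    OUTPUT inserted.id, deleted.amount, inserted.amount
    INTO   @changes (id, old_amount, new_amount)
    WHERE  region = 'WEST';

    ROLLBACK TRANSACTION;                   -- undo the destructive change...

    -- ...but @changes still holds the captured rows for verification.
    SELECT * FROM @changes;
    ```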

  • I completely agree with all the arguments/suggestions that everyone has made here. My question is: the same number of rows is returned for the table variable in UAT (which has less memory and CPU than production), and there it runs in 3 seconds.

    The only difference I observed, by checking sys.dm_db_index_physical_stats, is that the percentage of fragmentation in production is higher than in UAT. Could that make the difference between running for 1 hour and running in 3 seconds?

  • san43 (2/1/2010)


    I completely agree with all the arguments/suggestions that everyone has made here. My question is: the same number of rows is returned for the table variable in UAT (which has less memory and CPU than production), and there it runs in 3 seconds.

    The only difference I observed, by checking sys.dm_db_index_physical_stats, is that the percentage of fragmentation in production is higher than in UAT. Could that make the difference between running for 1 hour and running in 3 seconds?

    That could contribute to it. Is the load on the UAT server the same as on the production server? If not, that is another factor to consider.

  • The load was probably about 10% less on the day I was testing. Also, the production server is a much bigger server than UAT.
