Table variable or Temporary table (tempdb)

  • This is intended more as a discussion topic than a request for help.

    There have been many comments, both here and in other articles I have read, stating that table variables provide vastly improved performance over temporary tables. Having made extensive use of them myself, I agree.

    The issue I would like to open for discussion is "At what point (if any) should Temporary tables be used in lieu of Table variables?"

    Temporary tables are written to tempdb and therefore use disk resources. Since disk space is normally in plentiful supply, there is usually no worry about how much data you can write to a temporary table. Table variables use memory to store whatever you insert into them. SQL Server manages its memory for many things (e.g. data page caching, query plan caching, etc.), so it is reasonable to assume that in order for SQL Server to manipulate large amounts of data in a table variable, it will have to flush some of the cached data to make space available. This flushing of cached data could have serious performance implications, hence my question: "At what point should I consider using temporary tables in lieu of table variables?"
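
    For clarity, here is a minimal sketch of the two approaches I am comparing (the table and column names are just made up for illustration):

    -- table variable: declared like a local variable, scoped to the current batch or procedure
    declare @OrderTotals table
    (
        CustomerID int not null,
        TotalDue money not null
    )

    insert @OrderTotals (CustomerID, TotalDue)
    select CustomerID, sum(TotalDue)
    from dbo.Orders
    group by CustomerID

    -- temporary table: created in tempdb, visible for the lifetime of the session
    create table #OrderTotals
    (
        CustomerID int not null,
        TotalDue money not null
    )

    insert #OrderTotals (CustomerID, TotalDue)
    select CustomerID, sum(TotalDue)
    from dbo.Orders
    group by CustomerID

    drop table #OrderTotals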

  • There unfortunately is not a definitive answer. I would monitor cache hits each time you introduce a new table variable. If you notice a significant drop after one, then you may have done exactly what you are describing: flushed data from the cache.

    The problem with any answer beyond that, though, is this. Suppose you have a box with a generous 8 GB of memory, the total size of all the databases is 5 GB, and the transaction load is light; a 2 GB table variable may only have a positive effect. Conversely, suppose you have that same 8 GB of memory but your databases are hundreds of GB in size and transactions are heavy, touching large amounts of data. Then even a small table variable may have a major impact on performance, so you have to test against a temporary table as well; if in doing so you find the table variable has a bit less total impact, opt for the table variable.

    You have to test both options, consider the amount of data to be placed in the table (variable or temp), and weigh the effects on your server.

    For the most part, though, small quick table operations do better as table variables, while large sets, or sets that will be used by the client for a good period of time, do better as temporary tables.

  • It is a common misconception that table variables are in-memory structures. The data is actually written to disk. The benefit lies in the scope and locking involved in using them.

    Reasons to use table variables

    Scope is limited to the context in which they are created

    Temp tables cannot be used in functions

    Less locking

    Fewer recompiles

    Reasons to use temp tables

    access the data in a different context (sub stored procedure)

    store the results of an exec statement

    if you need to index the data in multiple ways

    The one thing to be careful of is joining to temp tables and table variables. People often don't index the table; if it is not being used as the primary filter, then you need to index it (see the sketch below).
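
    A rough sketch of what I mean (the table and column names are invented): a temp table can be indexed however your joins and filters need, whereas a table variable only gets the indexes implied by constraints declared with it.

    create table #Customers
    (
        CustomerID int not null,
        Region char(2) not null
    )

    -- temp table: add whatever indexes the joins and filters need
    create clustered index IX_Customers_CustomerID on #Customers (CustomerID)
    create nonclustered index IX_Customers_Region on #Customers (Region)

    -- table variable: only the indexes implied by primary key/unique constraints
    declare @Customers table
    (
        CustomerID int not null primary key,
        Region char(2) not null
    )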

    Simon Sabin
    SQL Server MVP
    Co-author of SQL Server 2000 XML Distilled
    http://www.amazon.co.uk/exec/obidos/ASIN/1904347088
    http://sqlblogcasts.com/blogs/simons

  • Simon, can you please clarify where table variables are stored, if not in memory (like any other variable)?

    I have run traces while using table variables and no write operations are performed. I have also checked sysobjects in tempdb, and although temporary tables obviously appear there, table variables do not. Please explain further your statement that they are written to disk.
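
    (For reference, the check I ran against tempdb was along these lines; I am not claiming this is the definitive way to spot table variables, just what I looked at.)

    -- list the user tables currently sitting in tempdb; temp tables show up
    -- under their # names
    select name, crdate
    from tempdb.dbo.sysobjects
    where type = 'U'
    order by crdate desc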

    Thanks for the explanation of the various limitations and functionality associated with table variables; I am already clear on these. It was the threshold point between table variables and temporary tables that I was seeking to discuss.

  • I hate to break into a conversation mid-topic but I was hoping to get more information on the following statement:

    "Reasons to use temp tables

    access the data in a different context (sub stored procedure)

    store the results of an exec statement

    if you need to index the data in multiple ways"

    I'm currently working on an application that was developed under SQL 6.5. We are now migrating it to SQL 7.0/2000 and finding that performance is horrible in one of our main processes. This seems to be due to the heavy usage of temp tables. These tables are all created in a "driver" stored procedure and then referenced by the chain of sub-procedures. In 6.5 this solution worked well enough to prevent user contention issues and performance was acceptable. Under SQL 7.0/2000 the rules for recompilation changed, and we are now finding that the sub-procedures recompile every time they run because they reference temp tables that were created outside of the procedure.

    I'm currently at a point where the process has been converted to use static tables instead of temp tables, which resolves some (but not all) of the recompilation issues, and yet performance is still not where it should be. A table variable is not an option because this data needs to be available across multiple procedures, and a table variable is only visible to the local procedure. I know the design of this process is poor; development was rushed and it was copied virtually verbatim from a proprietary system, so it has inherent flaws in its relational design. But if anyone has any ideas on how to retain the "localized" functionality of temp tables over a process using as many as 68 different stored procedures, without the negative impact of excessive recompilation, I'd appreciate any suggestions! Thanks.
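
    In case it helps, here is a stripped-down sketch of the pattern I'm describing (the procedure and column names are made up, not the real ones); the sub-procedure references a temp table it did not create, which is what triggers the recompile every time under 7.0/2000:

    create procedure dbo.usp_SubProcess
    as
        -- this proc references #WorkList, which is created by its caller,
        -- so under 7.0/2000 it recompiles every time it runs
        update w
        set Amount = Amount * 1.1
        from #WorkList w
    go

    create procedure dbo.usp_Driver
    as
        create table #WorkList
        (
            ItemID int not null,
            Amount money not null
        )

        insert #WorkList (ItemID, Amount)
        select ItemID, Amount from dbo.SourceItems

        exec dbo.usp_SubProcess

        select * from #WorkList

        drop table #WorkList
    go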

    -Chris

  • I have thought about the simple things you can do, and the one thing I keep coming back to is that you probably don't need that many SPs. You can do so much more in 2000 than in 6.5.

    Aside from that, are the recompiles occurring only when the SP is fired, or during execution? The first is unavoidable; the second can be eliminated by correct ordering and placement of DDL and DML statements.

    Simon Sabin
    SQL Server MVP
    Co-author of SQL Server 2000 XML Distilled
    http://www.amazon.co.uk/exec/obidos/ASIN/1904347088
    http://sqlblogcasts.com/blogs/simons

  • If you're moving to SQL Server 2000, perhaps you could consider taking advantage of the new feature of table-returning user-defined functions. For a primer on user-defined functions, read the article http://qa.sqlservercentral.com/columnists/nboyle/userdefinedfunctions.asp by Neil Boyle.

    In particular, read the section titled "Passing tables out of functions".

    This just might prove to be the grease you're looking for in your need for speed.
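
    As a rough sketch of the idea (the function and table names here are invented, just to show the shape of it), an inline table-valued function hands a result set to any caller without a shared temp table:

    create function dbo.fn_OrderTotals ()
    returns table
    as
    return
    (
        select CustomerID, sum(TotalDue) as TotalDue
        from dbo.Orders
        group by CustomerID
    )
    go

    -- callers join to the function as if it were a table
    select c.CustomerID, t.TotalDue
    from dbo.Customers c
    join dbo.fn_OrderTotals() t on t.CustomerID = c.CustomerID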

    Yours,

    Dave Owen

    Teligence Communications Inc.

    Vancouver, BC Canada

  • Another option, if you don't want to convert all sub-procs to user-defined table functions, is to still use a temp table but only reference it in the main proc. This may be a way to avoid recompiles of the sub-procs. The sub-procs would just return a single result set instead of using the main temp table. Something like this:

    
    
    -- capture the sub-procedure's result set in a temp table that only
    -- the main proc ever references
    create table #work
    (
        spid char(6),
        task char(8),
        loginame char(15),
        hostname char(20),
        blk char(4),
        command char(14),
        cpu char(12),
        [io] char(12),
        host char(5)
    )

    -- INSERT ... EXEC loads the result set returned by the sub-procedure
    insert #work exec('pro_my_spwho')

    select * from #work

    drop table #work

    PS - The server won't let you use a table variable with this method (you can't INSERT ... EXEC into a table variable).

    -Dan


  • The problem is that this is likely to cause a recompile after the temp table is populated.

    Simon Sabin
    SQL Server MVP
    Co-author of SQL Server 2000 XML Distilled
    http://www.amazon.co.uk/exec/obidos/ASIN/1904347088
    http://sqlblogcasts.com/blogs/simons

  • To reduce the recompiles, try adding

    option (keepfixed plan)

    at the end of each SQL statement in the procedures.
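
    For example (the query itself is just a placeholder), the hint goes on each individual statement, not on the procedure as a whole:

    select w.ItemID, w.Amount
    from #WorkList w
    where w.Amount > 100
    option (keepfixed plan)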

  • Would a Global temp table solve the problem?

  • I think you would still need the OPTION (KEEPFIXED PLAN) hint at the end of each statement. It tells SQL Server to keep using the existing execution plan instead of recompiling when statistics on the temp tables change.

  • Like most everything, it depends. We use table-valued user-defined functions instead of views for a lot of things, and this gives better performance in almost every case where we've used them. I would say the difference usually comes down to ease of maintenance (the real cost to be concerned about) and procedure clarity. If a temp table is best, we tend to go with it. If a table variable works better, we use that.

    We don't have any rules of thumb other than this: if you expect something like 1M rows, it's probably best not to chew up that much real memory.

  • I have run into issues in the past using OPTION (KEEPFIXED PLAN): flaky things like inconsistent recordset returns and such. It may be that I was attempting to use it inappropriately, but since then I have avoided it mostly out of habit. I didn't spend a lot of time tracking down exactly why; I simply quit using it. I developed the opinion that if SQL Server decided it needed a recompile, it had a reason for it, and by bypassing that I created problems for myself. My use of it was in exactly the context you're talking about, temp tables causing recompiles, and given the previous comments I am now wondering whether it is in fact OK to do. Have others used it successfully to reduce recompiles in this context?

  • I have used this option a number of times and had success with it. Sometimes performance improved by a factor of ten. It is something that needs to be tested on a case by case basis, though. I have also had situations where performance was slightly degraded.
