Temp tables vs. table variable speeds, temp tables faster???

  • I was curious how table variables compare to temp tables for speed. I had assumed the table variable would be processed faster than the temp table, but I was surprised to find the opposite to be true. I initially tested it on my desktop and wondered if I had too many programs running that interfered with the query, so I ran the sample script on a low-activity server we have with 16 GB of RAM (only about 1 GB in use), and still found the temp table to be faster. I even tried it on a second server, with similar results. Here are the scripts I ran. Both are identical except for the table variable/temp table syntax.

    This one runs on a temp table.

    set nocount on

    create table #test_table
    (
        rownum int identity(1,1),
        rowcheck int,
        rowvalue uniqueidentifier default newsequentialid()
    )

    insert into #test_table(rowcheck)
    values(0)

    declare @iter int
    select @iter = 0

    while @iter < 20 -- simple loop that doubles the number of rows each iteration
    begin
        insert into #test_table(rowcheck)
        select rowcheck from #test_table

        select @iter = @iter + 1
    end

    update #test_table
    set rowcheck = abs(checksum(rowvalue, rownum))

    -- drop table #test_table

    Same script, but run on a table variable.

    set nocount on

    declare @test_table table
    (
        rownum int identity(1,1),
        rowcheck int,
        rowvalue uniqueidentifier default newsequentialid()
    )

    insert into @test_table(rowcheck)
    values(0)

    declare @iter int
    select @iter = 0

    while @iter < 20 -- simple loop that doubles the number of rows each iteration
    begin
        insert into @test_table(rowcheck)
        select rowcheck from @test_table

        select @iter = @iter + 1
    end

    update @test_table
    set rowcheck = abs(checksum(rowvalue, rownum))

    The first script with #test_table took about 9 seconds, while the second script with @test_table took about 15 seconds. I had initially thought the doubling loop on #test_table would run slower, since it writes to a hard drive rather than memory, but it did not.

    Confused here and looking for some clarification.

    Thanks.

    Gaby
    ________________________________________________________________
    "In theory, theory and practice are the same. In practice, they are not."
    - Albert Einstein

  • Thanks Richard, it did shed quite a bit of light. I do have one followup on it though. If the table variable gets written to the drive by the system (so it's not necessarily memory resident), shouldn't the speeds at least be somewhat closer?
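
    (As a quick check that a table variable really is materialized in tempdb rather than kept purely in memory, here is a minimal sketch; the system-generated object name will differ on every run, and on a busy server the newest object may belong to another session:)

    declare @t table (i int)

    -- table variables get a work table in tempdb with a hex-style name;
    -- on a quiet server, the most recently created user object is the one just declared
    select top 1 name, create_date
    from tempdb.sys.objects
    where type = 'U'
    order by create_date desc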

    Gaby
    ________________________________________________________________
    "In theory, theory and practice are the same. In practice, they are not."
    - Albert Einstein

  • I would say the difference is probably attributable to statistics. If you look at the end of the process, the scans & reads for the two are VERY different. For the #temp table:

    Table 'Worktable'. Scan count 1, logical reads 1397785, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

    Table '#test_table_________________________________________________________________________________________________________000000000005'. Scan count 1, logical reads 2143, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

    and for the table variable

    Table '#117F9D94'. Scan count 1, logical reads 528567, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

    Table 'Worktable'. Scan count 1, logical reads 1397784, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

    The temporary table can make use of statistics, whereas the table variable has to simply scan the entire table every single time.
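
    For anyone who wants to reproduce those numbers, a minimal sketch, wrapping either of the scripts above (the exact figures will vary by machine):

    set statistics io on
    set statistics time on

    -- ... run either the #test_table or @test_table script here ...

    -- the Messages tab then shows the 'Scan count N, logical reads N' lines
    -- quoted above, plus CPU and elapsed times
    set statistics io off
    set statistics time off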

    ----------------------------------------------------
    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood..." - Theodore Roosevelt
    The Scary DBA
    Author of: SQL Server 2017 Query Performance Tuning, 5th Edition and SQL Server Execution Plans, 3rd Edition
    Product Evangelist for Red Gate Software

  • ...The temporary table can make use of statistics, whereas the table variable has to simply scan the entire table every single time.

    Hmmm...I get it. I guess the first rule I should learn with SQL Server (and any other DB server out there) is to never just assume. Do a test run and dig into the engine to see what makes it run the way it does.

    Thanks guys for both your tips. I'll be more judicious in using table variables and stick with temp tables for really large datasets.

    Gaby
    ________________________________________________________________
    "In theory, theory and practice are the same. In practice, they are not."
    - Albert Einstein

  • Gaby A. (12/30/2008)


    ...The temporary table can make use of statistics, whereas the table variable has to simply scan the entire table every single time.

    Hmmm...I get it. I guess the first rule I should learn with SQL Server (and any other DB server out there) is to never just assume. Do a test run and dig into the engine to see what makes it run the way it does.

    Thanks guys for both your tips. I'll be more judicious in using table variables and stick with temp tables for really large datasets.

    Your own signature does a lot of justice to what you just learned 😀


    * Noel

  • I believe the UPDATE statement at the end of each script probably has a lot to do with it. Running Profiler on these two scripts on my development server (4 GB RAM), I see fairly close durations for the INSERTs, even though the last three iterations are a little slower with the table variable method. The big difference is that the UPDATE on the table variable consistently takes nearly twice the duration of the UPDATE on the temp table.

    TextData                  CPU    Duration  Reads    Writes  RowCounts
    insert into #test_table   0      341       15       0       1
    insert into #test_table   0      304       14       0       1
    insert into #test_table   0      123       12       0       2
    insert into #test_table   0      124       18       0       4
    insert into #test_table   0      365       35       0       8
    insert into #test_table   16     301       54       0       16
    insert into #test_table   0      479       102      0       32
    insert into #test_table   0      916       199      0       64
    insert into #test_table   0      1857      424      0       128
    insert into #test_table   0      3410      798      0       256
    insert into #test_table   15     7146      1621     2       512
    insert into #test_table   0      13795     3215     4       1024
    insert into #test_table   16     27782     6503     14      2048
    insert into #test_table   47     55403     13021    28      4096
    insert into #test_table   109    111196    25993    55      8192
    insert into #test_table   235    221678    51989    110     16384
    insert into #test_table   437    442762    103963   220     32768
    insert into #test_table   891    886476    207890   156     65536
    insert into #test_table   1781   1777788   415732   0       131072
    insert into #test_table   3610   3642115   910377   224     262144
    insert into #test_table   11188  5834477   1662785  1752    524288
    update #test_table        21843  9559271   3217842  1123    1048576

    insert into @test_table   0      324       15       0       1
    insert into @test_table   0      173       9        0       1
    insert into @test_table   0      117       12       0       2
    insert into @test_table   0      148       18       0       4
    insert into @test_table   0      192       30       0       8
    insert into @test_table   0      305       54       0       16
    insert into @test_table   0      516       102      0       32
    insert into @test_table   0      909       199      0       64
    insert into @test_table   0      1816      424      0       128
    insert into @test_table   0      3557      798      0       256
    insert into @test_table   16     6976      1616     0       512
    insert into @test_table   15     13895     3215     0       1024
    insert into @test_table   16     27727     6498     0       2048
    insert into @test_table   47     55766     13016    0       4096
    insert into @test_table   109    111601    25988    0       8192
    insert into @test_table   234    224847    51984    0       16384
    insert into @test_table   438    448128    103958   0       32768
    insert into @test_table   906    899727    207881   0       65536
    insert into @test_table   1813   1830844   415725   0       131072
    insert into @test_table   3703   3697985   910319   983     262144
    insert into @test_table   7453   7479938   2004808  1099    524288
    update @test_table        14609  16804301  1052870  0       1048576

    This follows what I've seen in my own code, too. If you're just doing set-based INSERTs or SELECTs, performance will depend on the size of the data, but if you need to do UPDATEs, avoid table variables!

  • My general rule of thumb is to only use table variables for fewer than 100 rows of data. Even then, things like transaction management and the ability to use DDL are more important in deciding which to use.
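
    To illustrate the DDL point, a minimal sketch (names invented here): a temp table accepts DDL after it exists, while a table variable is fixed at declaration time.

    -- a temp table can be indexed and altered after creation
    create table #t (id int, val varchar(10))
    create index ix_t_id on #t (id)
    alter table #t add extra int
    drop table #t

    -- a table variable cannot; constraints must be declared inline,
    -- and no DDL is allowed afterwards
    declare @t table (id int primary key, val varchar(10))
    -- create index ix_v on @t (id)   -- not valid T-SQL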

    - Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
    Property of The Thread

    "Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon

  • My general rule of thumb is to never use table variables unless the system forces me to. That means the ONLY place I use them is in UDFs.

    The reason I have that particular rule of thumb is that temp tables are easier to troubleshoot from Query Analyzer and SSMS, because they persist even after a run stops. With table variables, you have to rerun everything on each attempt instead of being able to run just one section more than once.
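
    A minimal illustration of that troubleshooting point (assuming both batches run on the same connection):

    -- batch 1: run this, then let the run finish or stop partway through
    create table #debug (step varchar(50), at datetime default getdate())
    insert into #debug (step) values ('loaded staging data')

    -- batch 2: run separately, later, in the same window
    select * from #debug   -- the temp table is still there to inspect;
                           -- a table variable would already be gone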

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.
    "Change is inevitable... change for the better is not".

    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
    Intro to Tally Tables and Functions

  • My rule of thumb is the same as Jeff's with one exception.

    I've found that some queries that only ever do a full scan of the temp table can run faster with table variables. I don't have a hard-and-fast rule for applying this, but if I see nothing but scans, I will test the use of table variables in that situation.

    ----------------------------------------------------
    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood..." - Theodore Roosevelt
    The Scary DBA
    Author of: SQL Server 2017 Query Performance Tuning, 5th Edition and SQL Server Execution Plans, 3rd Edition
    Product Evangelist for Red Gate Software

  • The only reason I sometimes use table variables instead of temp tables is speed. Yeah, it's cutting milliseconds off the processing time, but sometimes that matters. Since table variables generate less logging activity, it's that much less IO, which means that much more speed.

    Over about 100 rows of data, the stats in temp tables can make up for that and give me greater overall speed. Depends on the case at hand.

    Of course, the other place to use table variables is if you need a recordset that can be added to and will still be intact after a rollback in error handling. If your Catch block has a rollback in it, inserting data into a table variable before the error occurs allows you to copy those rows from the table variable into a temp or permanent table even after the rollback, which can be quite valuable for error logging or immediate debugging. But that's true of any variable.
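
    A minimal sketch of that rollback behavior (the divide by zero is just there to trip the Catch block):

    declare @ErrorLog table (msg varchar(200))

    begin try
        begin transaction
        insert into @ErrorLog values ('reached step 1')
        select 1/0   -- force an error
        commit
    end try
    begin catch
        if @@trancount > 0 rollback  -- undoes permanent and temp tables...
        select msg from @ErrorLog    -- ...but the table variable keeps its rows
    end catch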

    - Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
    Property of The Thread

    "Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon

  • Heh... "catch block"... that's another one... used properly, it's a great tool... but for a huge number of the developers (using the term very loosely) I know, it's led to "programming by exception", and not just in T-SQL, either... "let's see what color spaghetti sticks today".

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.
    "Change is inevitable... change for the better is not".

    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
    Intro to Tally Tables and Functions

  • Jeff Moden (1/6/2009)


    Heh... "catch block"... that's another one... used properly, it's a great tool... but for a huge number of the developers (using the term very loosely) I know, it's led to "programming by exception", and not just in T-SQL, either... "let's see what color spaghetti sticks today".

    Oh trust me, I've seen some pretty horrible uses of Try...Catch. The best one (meaning, of course, the worst) was where a variable was assigned the SCOPE_IDENTITY() of an insert, and then the variable was returned after the catch. That meant the rollback in the catch undid the insert, but the variable still had the value and still returned it to the calling code, which then passed it to another module (in the front end) that tried to do things with an ID for a non-existent row.

    That one made it into production because, of course, the dev's test cases never threw an error and invoked the catch block. He didn't understand that you need to make sure errors are caused in testing, so you can see what they do.

    He spent two days trying to figure that one out, while users were having problems with it.
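
    A hypothetical reconstruction of that bug (table and column names invented here):

    create table #Orders (OrderID int identity(1,1), CustomerName varchar(50))

    declare @NewID int
    begin try
        begin transaction
        insert into #Orders (CustomerName) values ('test')
        set @NewID = scope_identity()
        raiserror('something downstream failed', 16, 1)
        commit
    end try
    begin catch
        rollback   -- the insert is undone...
    end catch

    select @NewID  -- ...but the variable still hands back the orphaned ID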

    - Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
    Property of The Thread

    "Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon

  • Here is an example I slapped together just yesterday on another thread that shows how bad table vars can be with even very FEW rows in them. Here is the thread:

    http://qa.sqlservercentral.com/Forums/Topic625172-1373-7.aspx#bm631148

    and here is the code:

    use adventureworks
    go

    dbcc freeproccache
    set statistics IO on
    set statistics time on
    go

    declare @tab table (ProductID int)

    insert @tab values (870)
    insert @tab values (873)
    insert @tab values (921)
    insert @tab values (712)
    insert @tab values (707)
    insert @tab values (711)

    select ActualCost
    from Production.TransactionHistory th
    inner join @tab t on t.ProductID = th.ProductID

    -- (16895 row(s) affected)
    -- nested loop bookmark lookup plan
    -- note the estimated rows in the join are 257, actual 16895
    -- 420 milliseconds avg duration
    -- 51789 total logical reads

    dbcc freeproccache
    go

    create table #tab (ProductID int)

    insert #tab values (870)
    insert #tab values (873)
    insert #tab values (921)
    insert #tab values (712)
    insert #tab values (707)
    insert #tab values (711)

    select ActualCost
    from Production.TransactionHistory th
    inner join #tab t on t.ProductID = th.ProductID

    -- (16895 row(s) affected)
    -- hash join plan, estimated and actual rows are equal
    -- 320 milliseconds duration
    -- 793 total logical reads

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service

  • That's a cool example, Kevin. But, if you keep telling people how bad these things are, they may actually stop using them someday and that'll be one less way to make money. :hehe:

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.
    "Change is inevitable... change for the better is not".

    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
    Intro to Tally Tables and Functions
