#Temp table indexes uncertainty

  • Hi,

    I have created a series of temp tables, the first being #table_1. I then create #table_2 by selecting INTO #table_2 from #table_1 and adding some additional columns, and I repeat this procedure several times. After creating #table_1, I index several key columns which are used to join onto other tables that will form part of #table_2, #table_3, etc. Will these indexes carry through to all the temp tables? I.e., since #table_2 is created from #table_1, does that imply #table_2 is now also indexed?

    I realize the use of temp tables is discouraged, but I'm breaking my query down into 15 parts, and temp tables seem to be a great way to track errors and make minor changes without having to rebuild the entire table from scratch.

    Also, if I drop #table_1 after #table_2 is created, then drop #table_2 after #table_3 is created, etc., will this also improve performance?

    For example (this is very simplified):

    SELECT
         al.PolicyLine
        ,al.PaymentFrequency
        ,ccx.ClaCaseID
        ,ccx.Age
        ,ccx.PaymentProfile
        ,ccx.PolicyLineVersionID
        ,ccx.ClaEventID -- needed later for the join when building #table_2
    INTO #table_1
    FROM Staging.ClaCases ccx
    LEFT JOIN Staging.PolicyLines al
        ON ccx.PolicyLineVersionID = al.PolicyLineVersionID
    WHERE ProductLine = 'CAR'
    GO

    CREATE INDEX INDEX_1 ON #table_1 (PolicyLine)
    CREATE INDEX INDEX_2 ON #table_1 (ClaCaseID)
    CREATE INDEX INDEX_3 ON #table_1 (PolicyLineVersionID)
    GO

    SELECT it.*
        ,ce.IncidentDate
        ,ce.EventTyp
    INTO #table_2
    FROM #table_1 it
    LEFT JOIN Staging.ClaEvents ce
        ON it.ClaEventID = ce.ClaEventID
    GO

    PS: I would provide sample data, but I don't really see the need for it in this case.

  • According to MSDN, indexes are not transferred:

    "Indexes, constraints, and triggers defined in the source table are not transferred to the new table, nor can they be specified in the SELECT...INTO statement. If these objects are required, you must create them after executing the SELECT...INTO statement."

    http://msdn.microsoft.com/en-us/library/ms188029.aspx
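    In practice, this means any indexes needed on the new table must be created again after each SELECT...INTO. A minimal sketch, reusing the simplified table and column names from the example above:

    ```sql
    -- #table_2 is created from #table_1 but starts with NO indexes,
    -- regardless of what was defined on #table_1
    SELECT it.*
        ,ce.IncidentDate
    INTO #table_2
    FROM #table_1 it
    LEFT JOIN Staging.ClaEvents ce
        ON it.ClaEventID = ce.ClaEventID
    GO

    -- so the useful indexes must be re-created explicitly on #table_2
    CREATE INDEX INDEX_4 ON #table_2 (ClaCaseID)
    CREATE INDEX INDEX_5 ON #table_2 (PolicyLineVersionID)
    GO
    ```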

    Need an answer? No, you need a question
    My blog at https://sqlkover.com.
    MCSE Business Intelligence - Microsoft Data Platform MVP

  • Thanks for the info!!!

    I have two related questions

    1) Can I assume dropping tables will free memory as I proceed with the query? E.g., after #table_2 is created, drop #table_1, etc.

    2) After each SELECT INTO query I end with the command GO so it proceeds to the next part of the code. Is this an effective way to do it, and will it slow down my query?

    Thanks!!!

    I thought of something else and I'd like some feedback if possible. I know very little about stored procedures, but could they possibly be used as an alternative to the GO approach, and would this improve performance?

  • mic.con87 (12/21/2011)


    Thanks for the info!!!

    I have two related questions

    1) Can I assume dropping tables will free memory as I proceed with the query? E.g., after #table_2 is created, drop #table_1, etc.

    2) After each SELECT INTO query I end with the command GO so it proceeds to the next part of the code. Is this an effective way to do it, and will it slow down my query?

    Thanks!!!

    1) Good question, I don't know 🙂

    2) The GO statement is not really a T-SQL statement. It is a batch separator recognised by client tools such as SSMS.

    http://msdn.microsoft.com/en-us/library/ms188037(v=sql.90).aspx

    Basically, it tells the client that the current batch can be sent to the SQL Server instance. The only performance impact it has is when you have a really long script (say, 1000 individual statements) and it fails: without GO it has to roll back completely, whereas with GO, SQL Server only needs to roll back the batch that failed.

    Stored procedures are something totally different. You can compare them to procedures in regular programming languages. They can improve performance, as their execution plans can be cached. But I'm not really up to date on all the SQL Server internals; I'm just a BI developer 😀
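    As a rough sketch of that alternative (the procedure name and column list are made up for illustration): the 15 steps can be wrapped in a single stored procedure, with semicolons between statements instead of GO, since GO is a client-side batch separator and cannot appear inside a procedure body:

    ```sql
    -- Hypothetical procedure wrapping the multi-step temp-table build
    CREATE PROCEDURE dbo.BuildClaimsExtract
    AS
    BEGIN
        SET NOCOUNT ON;

        SELECT ccx.ClaCaseID, ccx.PolicyLineVersionID, ccx.ClaEventID
        INTO #table_1
        FROM Staging.ClaCases ccx
        WHERE ccx.ProductLine = 'CAR';

        CREATE INDEX IX_t1 ON #table_1 (ClaEventID);

        SELECT it.*, ce.IncidentDate
        INTO #table_2
        FROM #table_1 it
        LEFT JOIN Staging.ClaEvents ce
            ON it.ClaEventID = ce.ClaEventID;

        DROP TABLE #table_1;

        -- ...remaining steps...

        SELECT * FROM #table_2;
    END
    GO

    EXEC dbo.BuildClaimsExtract;
    ```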

    Need an answer? No, you need a question
    My blog at https://sqlkover.com.
    MCSE Business Intelligence - Microsoft Data Platform MVP

    Thanks once again for the feedback!!! Hopefully someone will be able to provide some assistance with the table-dropping question :-)

    Dropping a #temp table removes it from tempdb, freeing up that disk space. Since the object has been dropped, its allocated memory becomes available for use again. The original pages from the first SELECT may or may not still be in memory, but they won't be referenced unless you touch the object again.

    SQL Server usually won't release memory (RAM) at all unless forced to.
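    So the drop-as-you-go pattern from the question is reasonable. A sketch (the OBJECT_ID check is optional; it just makes the script safely re-runnable):

    ```sql
    SELECT it.*, ce.IncidentDate
    INTO #table_2
    FROM #table_1 it
    LEFT JOIN Staging.ClaEvents ce
        ON it.ClaEventID = ce.ClaEventID
    GO

    -- #table_1 is no longer needed; release its tempdb space
    IF OBJECT_ID('tempdb..#table_1') IS NOT NULL
        DROP TABLE #table_1;
    GO
    ```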

  • Koen Verbeeck (12/21/2011)


    2) The GO statement is not really a T-SQL statement. It is a batch separator recognised by client tools such as SSMS.

    http://msdn.microsoft.com/en-us/library/ms188037(v=sql.90).aspx

    It's also useful as a quick-and-dirty way of getting repetition, such as inserting 1000 test rows.

    GO 1000

    This repeats the preceding batch 1000 times.
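    For example, a quick way to generate 1000 test rows (a throwaway sketch; #TestRows is a made-up name):

    ```sql
    CREATE TABLE #TestRows (ID int IDENTITY(1,1), CreatedAt datetime2 DEFAULT SYSDATETIME());
    GO
    INSERT INTO #TestRows DEFAULT VALUES;
    GO 1000
    -- #TestRows now holds 1000 rows
    SELECT COUNT(*) FROM #TestRows;
    GO
    ```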

  • Great thanks for the feedback;-)

  • Koen Verbeeck (12/21/2011)


    The only performance impact it has is when you have a really long script (say, 1000 individual statements) and it fails: without GO it has to roll back completely, whereas with GO, SQL Server only needs to roll back the batch that failed.

    Rollbacks are to the transaction start, not the batch start, so if a batch of 1000 statements with no begin tran loses connection on the last statement, only the last statement is rolled back. If you have 1000 batches in a single transaction (begin tran in the first one) and in the last batch the connection drops, the entire thing has to be rolled back.

    Statement failures don't (in most cases) cause a transaction to roll back.
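    A small sketch of the distinction; what rolls back on a dropped connection is bounded by the open transaction, not by GO:

    ```sql
    CREATE TABLE #t (i int);
    GO
    BEGIN TRAN;   -- transaction spans the following batches
    GO
    INSERT INTO #t VALUES (1);  -- batch 1
    GO
    INSERT INTO #t VALUES (2);  -- batch 2
    GO
    -- If the connection drops here, BOTH inserts are rolled back,
    -- because they belong to the same open transaction.
    COMMIT;
    GO
    ```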

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • GilaMonster (12/21/2011)


    Koen Verbeeck (12/21/2011)


    The only performance impact it has is when you have a really long script (say, 1000 individual statements) and it fails: without GO it has to roll back completely, whereas with GO, SQL Server only needs to roll back the batch that failed.

    Rollbacks are to the transaction start, not the batch start, so if a batch of 1000 statements with no begin tran loses connection on the last statement, only the last statement is rolled back. If you have 1000 batches in a single transaction (begin tran in the first one) and in the last batch the connection drops, the entire thing has to be rolled back.

    Statement failures don't (in most cases) cause a transaction to roll back.

    Ok, that's what I meant, but you phrased it much better. 🙂

    Need an answer? No, you need a question
    My blog at https://sqlkover.com.
    MCSE Business Intelligence - Microsoft Data Platform MVP
