Are the posted questions getting worse?

  • Hugo Kornelis - Friday, October 12, 2018 11:43 AM

    Brandie Tarvin - Friday, October 12, 2018 10:00 AM

    frederico_fonseca - Friday, October 12, 2018 9:46 AM

    Post gone - maybe one of the admins wishes to revive it, but I don't see any point in it.

    That would explain the weird hiccup I saw in Chris's last post. The thread is technically still there, but all of his posts are gone. Every last one.

    I was able to do a quick check on that thread a few hours back, but was unable to post. I wanted to put in at least one more reply, not for the sake of the OP but for the sake of others in the thread, to correct some incorrect information posted by others there. I believe most who were in that thread are here as well, so I'll just post it here.

    There was a question at one time of why writing to a permanent table took more time than writing to a temporary table, and I believe that someone said "basically yes" to the question whether this is because permanent tables start on disk and temporary tables start in memory. That is NOT correct. All tables (permanent, temporary, and even table variables) are stored in a database - tempdb in the case of the latter two. They are stored on 8K database pages. Writes to those pages, including the writes associated with creating the table, are done in memory (buffer pool) and the pages are marked dirty. A background checkpoint process occasionally wakes up, sweeps through memory to look for dirty pages, and writes them to disk. The same happens when an explicit CHECKPOINT statement runs, or when the database is shut down.

    So all tables effectively start in memory. All tables remain only there until a checkpoint runs and causes the table's data pages to be written to disk. If the table happens to have already been deleted before a checkpoint ever runs (and due to the nature of how tables are used, that is far more likely for temporary tables than for permanent tables), then they might be removed before ever having a checkpoint, in which case they indeed are never written to disk. This can happen to temp tables and table variables that are short-lived, but if you create a permanent table and drop it fast enough the same happens. And conversely, if you use temp tables or table variables and then hold your connection open for long enough, they will be checkpointed as normal.
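    A quick way to see this for yourself - a rough sketch only, to be run in a scratch database (the table name is made up, and querying the buffer descriptors DMV needs VIEW SERVER STATE permission):

```sql
-- Sketch: create a table and insert rows; at this point the pages live
-- only in the buffer pool and are marked dirty.
CREATE TABLE dbo.DemoDirtyPages (id INT NOT NULL, filler CHAR(200) NOT NULL);

INSERT INTO dbo.DemoDirtyPages (id, filler)
SELECT TOP (10000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)), 'x'
FROM sys.all_columns a CROSS JOIN sys.all_columns b;

-- Count dirty (modified, not yet written to disk) pages for this database.
SELECT COUNT(*) AS dirty_pages
FROM sys.dm_os_buffer_descriptors
WHERE database_id = DB_ID() AND is_modified = 1;

CHECKPOINT;  -- force the dirty pages of the current database to disk

-- Run the same count again: it drops to (near) zero on an idle instance.
SELECT COUNT(*) AS dirty_pages_after
FROM sys.dm_os_buffer_descriptors
WHERE database_id = DB_ID() AND is_modified = 1;
```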

    As to the different execution times that the Mystery View Guy talked about, my suspicion is that the permanent table was made in a user database and hence subject to full logging (SQL Server needs info in the log file for doing a rollback, but also for recovery if the DB goes down mid-transaction). The temporary table is by definition in tempdb, which is never recovered (it is rebuilt at every restart) - so SQL Server all by itself reduces the amount of information it logs, keeping only what is needed for a rollback. And if you then switch from temp table to table variable, you get even less logging, because those are not affected by user transactions and hence never require a rollback either.
    (I also suspect that the example Grant posted, which I didn't actually look at, used hundreds or thousands of micro-transactions instead of embedding all code within a single transaction - which explains why the logging overhead was as big as it was. The log file uses an unbuffered write-ahead system when a transaction is committed. If you do a thousand writes and wait for confirmation from the disk system every time, you will notice the time it takes. If you use a single transaction, the overhead will be far less).
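    The log-flush effect is easy to demonstrate - a sketch only, with a made-up table name (wrap each half in SET STATISTICS TIME ON, or time them yourself, to see the difference):

```sql
SET NOCOUNT ON;
CREATE TABLE dbo.LogFlushDemo (id INT NOT NULL);

-- 1,000 micro-transactions: each INSERT auto-commits and waits for its
-- own synchronous write-ahead log flush before continuing.
DECLARE @i INT = 1;
WHILE @i <= 1000
BEGIN
    INSERT INTO dbo.LogFlushDemo (id) VALUES (@i);
    SET @i += 1;
END;

-- The same work in a single transaction: one commit, so (roughly) one
-- log flush to wait on instead of a thousand.
BEGIN TRANSACTION;
DECLARE @j INT = 1;
WHILE @j <= 1000
BEGIN
    INSERT INTO dbo.LogFlushDemo (id) VALUES (@j);
    SET @j += 1;
END;
COMMIT;
```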

    You know, I believe in his original post, the "temp" tables were created as "regular" tables named "temptable1" and "temptable2".
    Note the lack of # to create actual temporary tables...

    Ah well, hindsight, etc, etc, etc...
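    For anyone who missed the original thread, the difference is just the # prefix - the names below mirror the ones from the original post:

```sql
-- A permanent table that merely sounds temporary: created in the current
-- user database, fully logged, visible to every session until dropped.
CREATE TABLE temptable1 (id INT);

-- An actual temporary table: created in tempdb, scoped to this session,
-- and cleaned up automatically when the session ends.
CREATE TABLE #temptable1 (id INT);
```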

  • I don't know where to start but this one is nothing but problems
    😎

  • Eirikur Eiriksson - Saturday, October 13, 2018 2:50 AM

    I don't know where to start but this one is nothing but problems
    😎

    ahhhh.. and 

     "I'm pretty good with SQL, so no newbs please." - from https://qa.sqlservercentral.com/Forums/2002689/Query-slow-Help-improve

     

    I replied to 2 of his posts - might turn out to be "interesting" threads

  • frederico_fonseca - Saturday, October 13, 2018 3:12 AM

    Eirikur Eiriksson - Saturday, October 13, 2018 2:50 AM

    I don't know where to start but this one is nothing but problems
    😎

    ahhhh.. and 

     "I'm pretty good with SQL, so no newbs please." - from https://qa.sqlservercentral.com/Forums/2002689/Query-slow-Help-improve

     

    I replied to 2 of his posts - might turn out to be "interesting" threads

    All I can see is a procedural programmer's approach to a set based problem, a problem waiting to happen.
    😎

  • Eirikur Eiriksson - Saturday, October 13, 2018 3:54 AM

    frederico_fonseca - Saturday, October 13, 2018 3:12 AM

    All I can see is a procedural programmer's approach to a set based problem, a problem waiting to happen.
    😎

    All I can see is an arrogant sot.  Most newbies could stuff him when it comes to writing database code.  I hope he learns his lesson from the code Frederico posted.

    Where do people with such uninformed arrogance come from?

    Ah... scratch that... I'm pretty sure that, especially since contractors are involved, the posts are actually being "role reversed" by a very smart DBA or SQL Developer that's trying to convince management that the contractor (person or company) is/are complete and total idiots when it comes to T-SQL and databases in general.  If I'm right, this man understands the human element better than any of his managers.  See my posts below and at the following.
    https://qa.sqlservercentral.com/Forums/FindPost2002914.aspx

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.
    "Change is inevitable... change for the better is not".

    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
    Intro to Tally Tables and Functions

  • Awesome timing. I'm busy updating the training for my grads next year and was looking for some quick exercises I could give them as a break from lectures. 'What's wrong with this?' with his unnormalised table (with the column names mangled a bit) will be a great discussion starter.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Jeff Moden - Saturday, October 13, 2018 9:42 AM

    Eirikur Eiriksson - Saturday, October 13, 2018 3:54 AM

    All I can see is an arrogant sot.  Most newbies could stuff him when it comes to writing database code.  I hope he learns his lesson from the code Frederico posted.

    Where do people with such uninformed arrogance come from?

    Either that or he's one very sly fox.  Consider the following scenario...

    Could it be that he's actually VERY smart about SQL Server and T-SQL and is stuck with a whole herd of "developers" (lower case very intentional, in this case) that write the kind of crap code that he posted, cannot convince them to do otherwise, and is actually getting feedback from them about how good they think they are?  And could it be that he actually made a deal with the developers that if he posted the code, even biased in their "I'm pretty good at this" direction, folks on this forum would trash the original code and then show the right way to do it?

    It would make for a VERY EFFECTIVE slap down of the people that he's fighting with.

    --Jeff Moden



  • Jeff Moden - Saturday, October 13, 2018 10:15 AM

    Jeff Moden - Saturday, October 13, 2018 9:42 AM

    It would make for a VERY EFFECTIVE slap down of the people that he's fighting with.

    It would. On the other hand, someone who is very good at SQL and is given code that runs very slowly, as he said, is capable of changing the code and showing the other developers how much easier it is to write and how much faster it runs.

    I do that all the time at work with devs newer to SQL. Don't just say it's wrong; show how it can be changed and improved. Most learn once shown the possibilities.

  • frederico_fonseca - Saturday, October 13, 2018 10:52 AM

    It would. On the other hand, someone who is very good at SQL and is given code that runs very slowly, as he said, is capable of changing the code and showing the other developers how much easier it is to write and how much faster it runs.

    I do that all the time at work with devs newer to SQL. Don't just say it's wrong; show how it can be changed and improved. Most learn once shown the possibilities.

    If what I speculate is true, they may not be giving him the opportunity to do so.  I've worked in shops such as that and sometimes drastic things must be done to get people's attention.  I do agree that if he told them that it was "crap" right out of the gate, that's not a good way to get someone's attention and doesn't exactly help people gain confidence in you.

    Based on the wording of all three of his posts, I think that he's probably a very wise solitary DBA that's trying to convince management that the contractor they hired sucks so bad they have their own gravitational field, but management won't listen.  As happens with so many DBAs, they're frequently accused of being "emotional", or told that they're not, in fact, programmers and so couldn't possibly know what they're talking about or what the code is doing.

    I could certainly be totally wrong but the more I read his threads, the more I think this is one very clever DBA or SQL Developer that's developing a case of pork chops to hit the idiots that management hired as contractors.  If that's true, then I'll also give him a bazillion thumbs up for classical DBA dry humor and appropriately used sarcasm.

    --Jeff Moden



  • Jeff Moden - Saturday, October 13, 2018 10:15 AM

    Either that or he's one very sly fox.  Consider the following scenario...

    Occam's Razor.

    Gail Shaw
  • GilaMonster - Saturday, October 13, 2018 11:43 AM

    Occam's Razor.

    In most cases, I'd agree, Gail.  This one feels different by the wording being used.  I can "feel" the sarcasm, as if he were trying to defend his position on all this stuff being stupid in a room full of determined idiot savants.  Like I said, I could be wrong, but I think he might be a professional "Idiot Whisperer", as many DBAs have to be, and he's using this forum (very well) to prove his point.

    --Jeff Moden



  • jasona.work - Friday, October 12, 2018 12:03 PM

    You know, I believe in his original post, the "temp" tables were created as "regular" tables named "temptable1" and "temptable2".
    Note the lack of # to create actual temporary tables...

    Ah well, hindsight, etc, etc, etc...

    Just adding 2 things to this.

    Did try out Grant's code, with a single transaction on each of the 3 inserts. If the insert into the permanent table is changed to "with (tablockx)", it runs in a similar time to the other 2 (temp table and table variable). Otherwise it is 4-5 times slower, as expected.
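    For reference, the change is just a table hint on the INSERT - a sketch with placeholder table and column names, not Grant's actual script. With an exclusive table lock, the insert into the permanent table becomes eligible for minimal logging (under the simple or bulk-logged recovery model, into a heap or empty target), which is why the timing closes the gap with tempdb:

```sql
-- Fully logged, row-level locking (the slow case in this scenario):
INSERT INTO dbo.PermanentTable (id, payload)
SELECT id, payload FROM dbo.SourceTable;

-- With an exclusive table lock the same insert can be minimally logged
-- (simple/bulk-logged recovery), matching the temp table and table
-- variable timings:
INSERT INTO dbo.PermanentTable WITH (TABLOCKX) (id, payload)
SELECT id, payload FROM dbo.SourceTable;
```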

    And if anyone is on Stack Overflow and also wishes to comment there, he posted the same question at https://stackoverflow.com/questions/52790300/sql-server-express-2017-insert-select-performance-issue
    and at https://codereview.stackexchange.com/questions/205483/sql-server-express-2017-insert-select-performance-issue, but the latter got put on hold immediately.

  • frederico_fonseca - Saturday, October 13, 2018 6:10 PM

    Just adding 2 things to this.

    Did try out Grant's code, with a single transaction on each of the 3 inserts. If the insert into the permanent table is changed to "with (tablockx)", it runs in a similar time to the other 2 (temp table and table variable). Otherwise it is 4-5 times slower, as expected.

    And if anyone is on Stack Overflow and also wishes to comment there, he posted the same question at https://stackoverflow.com/questions/52790300/sql-server-express-2017-insert-select-performance-issue
    and at https://codereview.stackexchange.com/questions/205483/sql-server-express-2017-insert-select-performance-issue, but the latter got put on hold immediately.

    He's feeling terribly abused over at the Microsoft thread.

    "Thanks for the sympathy. I really need it after being personally attacked so many times by the Gurus. I appreciate you for spending time reading the thread once again. Please post the metrics of the result. It only takes 5 minutes to test."

    This after he got mad at me for telling everyone that he was causing problems over at SSC and he replied "Live with the bug, it will bite you one day." (his emphasis. He bolded the whole reply.)

    He's not happy at getting called out by everyone, even over at the MS thread.

    Brandie Tarvin, MCITP Database Administrator
    LiveJournal Blog: http://brandietarvin.livejournal.com/
    On LinkedIn!, Google+, and Twitter.
    Freelance Writer: Shadowrun
    Latchkeys: Nevermore, Latchkeys: The Bootleg War, and Latchkeys: Roscoes in the Night are now available on Nook and Kindle.

  • Brandie Tarvin - Sunday, October 14, 2018 5:01 AM

    He's feeling terribly abused over at the Microsoft thread.

    "Thanks for the sympathy. I really need it after being personally attacked so many times by the Gurus. I appreciate you for spending time reading the thread once again. Please post the metrics of the result. It only takes 5 minutes to test."

    This after he got mad at me for telling everyone that he was causing problems over at SSC and he replied "Live with the bug, it will bite you one day." (his emphasis. He bolded the whole reply.)

    He's not happy at getting called out by everyone, even over at the MS thread.

    Do you have the link to the Microsoft thread?  I would like to see it.

  • Lynn Pettis - Sunday, October 14, 2018 9:15 AM

    Do you have the link to the Microsoft thread?  I would like to see it.

    He he, be my guest!
    😎

    This prat doesn't know that this is a simple lock escalation issue; add the tablockx hint and it goes away. Suggest you ignore with a non-response, otherwise it's fuel on the fire.
