TEMPDB on RAM

  • I actually didn't want to continue in this thread initially, because I don't see any fundamental differences between everyone's views (i.e., making tempdb quicker is a good thing no matter how you do it, in RAM or otherwise). But I still see some of us lacking a reality check.

    There are many 3rd-party business applications (especially some big names in the ERP domain) with poor design in both code and data architecture. As DBAs, we can only go so far with optimization (server instance / hardware / indexes / statistics, etc.), and the time spent on such optimization is tremendous; no, not tens of thousands of man-hours but perhaps tens of thousands of man-days (from the perspective of all those 3rd-party end users).

    No one will deny that nice/elegant/efficient code and data architecture should be the #1 priority, but again, that is a good wish in and of itself. We still need to handle the brutal reality, where we spend lots of time either trying to mitigate the impact of bad application performance without a hardware upgrade or searching for the most economical way to solve the problem.

    If we look at history, hardware keeps getting better and better, yet software's requirements always outgrow the hardware's capacity. If MS really had "TempDB in RAM" technology today (no, I believe it is not an easy technology, once tempdb auto-growth and the ACID properties are taken into consideration), it would have an immediate impact on thousands of those "not-so-decent-yet-of-high-business-value" 3rd-party applications.

    Time will tell whether "TempDB in RAM" is a good idea and time will tell whether it will be a selling-point in future SQL Server marketing materials.

  • sql4gnt (3/17/2014)


    We ran some tests just at the storage level and the RamDisk was 13x faster. Here are the CrystalDiskMark results

    8x15k SAS drives configured as RAID 1+0 with 100% write caching yielded:

    Random Write 4KB (QD=32) : 29.483 MB/s [ 7197.9 IOPS]

    RamDisk

    Random Write 4KB (QD=32) : 389.149 MB/s [ 95007.2 IOPS]

    Interesting.

    I see:

    1xConsumer SATA drive

    Random Write 4KB (QD=32) : 41.64 MB/s [ 10165.6 IOPS]

    Ramdisk (ImDisk) on desktop

    Random Write 4KB (QD=32) : 1049 MB/s [256242.4 IOPS]

    FYI, SQLIO shows numbers 50% higher even using two 512MB test files on the ramdisk rather than just one.
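    For what it's worth, the quoted CrystalDiskMark figures can be sanity-checked with a couple of divisions. A back-of-the-envelope sketch, using only the MB/s numbers posted above:

```python
# Speedup ratios implied by the CrystalDiskMark results quoted in this thread.
results_mb_s = {
    "8x15k SAS RAID 1+0": 29.483,   # sql4gnt's array
    "RamDisk": 389.149,             # sql4gnt's RAM disk
    "Consumer SATA": 41.64,         # Nadrek's desktop drive
    "ImDisk ramdisk": 1049.0,       # Nadrek's RAM disk
}

sas_speedup = results_mb_s["RamDisk"] / results_mb_s["8x15k SAS RAID 1+0"]
sata_speedup = results_mb_s["ImDisk ramdisk"] / results_mb_s["Consumer SATA"]

print(f"RamDisk vs SAS RAID 1+0: {sas_speedup:.1f}x")   # ~13.2x
print(f"ImDisk vs consumer SATA: {sata_speedup:.1f}x")  # ~25.2x
```

    So the "13x" claim checks out against the posted numbers, and the consumer desktop's RAM-disk advantage is even larger.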

  • Nadrek (3/17/2014)


    sql4gnt (3/17/2014)


    We ran some tests just at the storage level and the RamDisk was 13x faster. Here are the CrystalDiskMark results

    8x15k SAS drives configured as RAID 1+0 with 100% write caching yielded:

    Random Write 4KB (QD=32) : 29.483 MB/s [ 7197.9 IOPS]

    RamDisk

    Random Write 4KB (QD=32) : 389.149 MB/s [ 95007.2 IOPS]

    Interesting.

    I see:

    1xConsumer SATA drive

    Random Write 4KB (QD=32) : 41.64 MB/s [ 10165.6 IOPS]

    Ramdisk (ImDisk) on desktop

    Random Write 4KB (QD=32) : 1049 MB/s [256242.4 IOPS]

    FYI, SQLIO shows numbers 50% higher even using two 512MB test files on the ramdisk rather than just one.

    Yikes. I did 9 passes with CrystalDiskMark on 2GB files. What options did you use? My screenshots for this test are at http://www.codepimp.org/2014/02/cache-money/

    What options did you use with SQLIO? I do about 3 passes with a plethora of different options, but for this test:

    8 threads reading for 120 secs using 8KB random IOs

    For SQLIO I ended up using a 100GB test file on the RAID 1+0

    IOs/sec: 3915.31

    MBs/sec: 30.58

    and only a 3GB test file for the tempDB (I was using the free RamDisk at the time, but I might be able to get a test with a 100GB RamDisk file)

    IOs/sec: 314279.72

    MBs/sec: 2455.31

    Why is it that people who can't take advice always insist on giving it? - James Bond, Casino Royale

  • sql4gnt (3/17/2014)


    Yikes. I did 9 passes with CrystalDiskMark on 2GB files. What options did you use? My screenshots for this test are at http://www.codepimp.org/2014/02/cache-money/

    What options did you use with SQLIO? I do about 3 passes with a plethora of different options, but for this test:

    8 threads reading for 120 secs using 8KB random IOs

    For SQLIO I ended up using a 100GB test file on the RAID 1+0

    IOs/sec: 3915.31

    MBs/sec: 30.58

    and only a 3GB test file for the tempDB (I was using the free RamDisk at the time, but I might be able to get a test with a 100GB RamDisk file)

    IOs/sec: 314279.72

    MBs/sec: 2455.31

    ImDisk ramdisk, 2GB in size, a modern i7 machine.

    PortableApps.com version of CrystalDiskMark, options "5" and "1GB" - I never considered it a good benchmarking tool, so I only used it to compare with yours, since your results were so low.

    My sqlio run was very primitive, built to mimic the CrystalDiskMark results - personally, for better results I'd try both sequential and random and at least -b8, -b64, and -b256, probably with -o4 to -o16 or so.

    sqlio -kW -s6 -frandom -o32 -b4 -LS -Fram.txt timeout /T 8

    sqlio -kR -s6 -frandom -o32 -b4 -LS -Fram.txt timeout /T 8

    with a text file of:

    #PathFilename ThreadsForThatFile Mask FileSizeInMB

    R:\t1.dat 1 0x0 512

    R:\t2.dat 1 0x0 512
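    The multi-option sweep suggested above can be scripted rather than typed by hand. A hypothetical helper sketch (the flags follow SQLIO's documented usage: -k read/write, -s seconds, -f access pattern, -o outstanding I/Os, -b block size in KB; the 30-second duration is an arbitrary choice, not from the posts):

```python
# Expand the suggested SQLIO sweep (-b8/-b64/-b256, -o4..-o16, random and
# sequential, read and write) into the command lines to run.
from itertools import product

def sqlio_sweep(param_file="ram.txt", seconds=30):
    """Return one sqlio command line per combination in the sweep."""
    cmds = []
    for kind, pattern, depth, block in product(
            ("R", "W"), ("random", "sequential"), (4, 8, 16), (8, 64, 256)):
        cmds.append(f"sqlio -k{kind} -s{seconds} -f{pattern} "
                    f"-o{depth} -b{block} -LS -F{param_file}")
    return cmds

cmds = sqlio_sweep()
print(len(cmds))  # 2 kinds x 2 patterns x 3 depths x 3 block sizes = 36 runs
```

    The parameter-file format (path, threads per file, mask, size in MB) stays exactly as shown above.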

  • We should start a new thread for discussing SQLIO (totally didn't mean to hijack the original) 🙂

    I run SQLIO for read and write, sequential and random, and 8k and 64k block sizes. Then I'll take an average of the 3 runs unless something is completely way off, in which case I'll run it again and/or investigate.

    Why is it that people who can't take advice always insist on giving it? - James Bond, Casino Royale

  • jeffrey yao (3/17/2014)


    I actually didn't want to continue in this thread initially, because I don't see any fundamental differences between everyone's views (i.e., making tempdb quicker is a good thing no matter how you do it, in RAM or otherwise). But I still see some of us lacking a reality check.

    There are many 3rd-party business applications (especially some big names in the ERP domain) with poor design in both code and data architecture. As DBAs, we can only go so far with optimization (server instance / hardware / indexes / statistics, etc.), and the time spent on such optimization is tremendous; no, not tens of thousands of man-hours but perhaps tens of thousands of man-days (from the perspective of all those 3rd-party end users).

    No one will deny that nice/elegant/efficient code and data architecture should be the #1 priority, but again, that is a good wish in and of itself. We still need to handle the brutal reality, where we spend lots of time either trying to mitigate the impact of bad application performance without a hardware upgrade or searching for the most economical way to solve the problem.

    If we look at history, hardware keeps getting better and better, yet software's requirements always outgrow the hardware's capacity. If MS really had "TempDB in RAM" technology today (no, I believe it is not an easy technology, once tempdb auto-growth and the ACID properties are taken into consideration), it would have an immediate impact on thousands of those "not-so-decent-yet-of-high-business-value" 3rd-party applications.

    Time will tell whether "TempDB in RAM" is a good idea and time will tell whether it will be a selling-point in future SQL Server marketing materials.

    MS actually does have "TempDB in RAM" today. It defaults to that and only spills to disk if it gets too large for a given session. That's actually why they removed the option of "TempDB in Ram".

    I've also found that most flavors of 3rd party software don't actually use TempDB that much. It's just seriously crap code that's responsible for performance issues.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.
    "Change is inevitable... change for the better is not".

    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
    Intro to Tally Tables and Functions

  • Jeff Moden (3/18/2014)


    MS actually does have "TempDB in RAM" today. It defaults to that and only spills to disk if it gets too large for a given session. That's actually why they removed the option of "TempDB in Ram".

    I've also found that most flavors of 3rd party software don't actually use TempDB that much. It's just seriously crap code that's responsible for performance issues.

    Agree with Jeff on this - all work is done in memory, and TempDB work only spills to disk if there is memory pressure whilst carrying out the work (such as spooling and hash warnings) - granted, it's a bit more complicated than that, but you get the gist.

    When I previously worked for an online video-gaming company running some of the top MMORPG games at the time, my boss read an article about another well-known MMO game that was using a RamSAN to house TempDB to improve game performance. He was happy to pay £50k+ to get a 32-GB RamSAN (if I remember right - yep, it's a little while ago) for one of our top games' DB server. The problem was...we had no pressure on TempDB, which I was able to prove. He still went ahead and bought one and guess what...no change in performance - and of course my boss then wanted to know why it hadn't helped and what we could do to 'make it help' :crazy:.

    So, you need to know whether TempDB is having problems and/or is a performance bottleneck before you can decide which options may help alleviate the problem - such as multiple TempDB data files, uniform extent allocation, or TempDB files on SSD/RAM. If TempDB is not under pressure, then I doubt that putting TempDB files on SSD/RAM will improve performance.
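    One way to quantify that pressure is to snapshot tempdb's sys.dm_db_file_space_usage DMV and convert its page counts into MB. A minimal sketch of the arithmetic (SQL Server pages are 8 KB; the sample counts below are invented for illustration, not from any real server):

```python
# Convert page counts from tempdb's sys.dm_db_file_space_usage DMV into MB.
PAGE_KB = 8  # SQL Server page size in KB

def pages_to_mb(pages):
    """Translate a page count into megabytes."""
    return pages * PAGE_KB / 1024

snapshot = {  # hypothetical DMV output, summed across tempdb data files
    "user_object_reserved_page_count": 12_800,       # temp tables, table variables
    "internal_object_reserved_page_count": 256_000,  # sorts, hashes, spools
    "version_store_reserved_page_count": 1_280,      # row versioning
}

for counter, pages in snapshot.items():
    print(f"{counter}: {pages_to_mb(pages):,.0f} MB")
```

    A large internal-object figure captured during the slow workload (sort/hash spills) is the kind of evidence that justifies faster TempDB storage; near-zero numbers suggest, as in the RamSAN story above, that the money is better spent elsewhere.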

    HTH

  • MS actually does have "TempDB in RAM" today. It defaults to that and only spills to disk if it gets too large for a given session. That's actually why they removed the option of "TempDB in Ram".

    I've also found that most flavors of 3rd party software don't actually use TempDB that much. It's just seriously crap code that's responsible for performance issues.

    MS has "TempDB in RAM"? Maybe we can say SQL Server 2014's In-Memory OLTP technology is crap, then, because most of the time data has already been uploaded to memory and manipulated there since SQL Server 2000 or earlier.

    As for your observation that tempdb is not used much in 3rd-party software, I have to say you are really lucky. We may need to review where tempdb is used: http://technet.microsoft.com/en-us/library/ms345368(v=sql.105).aspx

    Again, I totally agree that 99% of the time it is crappy code that is the culprit of performance issues. But again, I have to say, we have to live with what we have and, whenever possible, using technology advancement to overcome/compromise the "bad things" in the real world.

    Heck, what are we arguing about? 😀

  • +1 on Jeff's comments as well. This topic of bad code seems to be cropping up more and more. If I had the best of all worlds, I'd have developers writing, testing, and performance tuning on inferior servers, which would let them "feel their own pain", with the hope it drives better design. Unfortunately that isn't always the case. One of my previous DBA positions had a development environment running on the oldest equipment. One of the hot-shot developers learned quite a number of lessons in code design as a result of that "inferior" platform. So the concept does go a long way.

    One of the other problems with aging database designs is that there are various degrees of development expertise. If no one is "watching the hen house", database designs will start to show signs of bloat.

    My current situation is rather a unique one. My company is hosting a third-party solution that manages the day-to-day operations of the core business. Lots of customization to the core code took place with no regard for how it would affect overall performance. Add to that a bunch of consultants contributing to the customization effort, with various degrees of development expertise. I've jumped into this game too late to take control of the mess that was left behind. I have started my first level of performance tuning: closely watching poorly running code and tweaking it so it performs better, and adding missing indexes and statistics. This is the phase I like to refer to as "putting lipstick on a pig". When we take delivery of the final code changes, I will go into my second phase of performance tuning: re-engineering poor code and removing a lot of the RBAR code design.

    I've been on too many systems for too long and have come to realize that when it comes time to gain greater performance in the shortest amount of time, you throw hardware dollars at the problem. It doesn't fix the problem; it just "makes the pig look good". The longer-term approach, which is the better one, is to do as Jeff pointed out: fix the bad code and teach the developers better techniques so the lonely DBA doesn't get hammered with badly written, poorly performing SQL code.

    Kurt

    Kurt W. Zimmerman
    SR DBA
    Lefrak Organization
    New York, NY

    http://www.linkedin.com/in/kurtwzimmerman

  • "putting lipstick on a pig", haha.. LOL, very well said Kurt.

    Indeed, throwing more resources at the problem (hardware or better technology, like upgrading from Standard to Enterprise edition) is usually the easiest and quickest option.

    To me, it is the driving force behind technology advancement, i.e., better technology infrastructure that tolerates the various negative factors (poor code quality, huge amounts of data, complex computation, insufficient human resources/expertise, etc.).

    Quite often, the cost of fixing the code (either teaching people to do it right in the first place or fixing the code ourselves) outweighs the cost of upgrading the technology infrastructure.

    Heck, we are off topic again (i.e., tempdb in RAM) 😛

  • jeffrey yao (3/19/2014)


    MS actually does have "TempDB in RAM" today. It defaults to that and only spills to disk if it gets too large for a given session. That's actually why they removed the option of "TempDB in Ram".

    I've also found that most flavors of 3rd party software don't actually use TempDB that much. It's just seriously crap code that's responsible for performance issues.

    MS has "TempDB in RAM"? Maybe we can say SQL Server 2014's In-Memory OLTP technology is crap, then, because most of the time data has already been uploaded to memory and manipulated there since SQL Server 2000 or earlier.

    As for your observation that tempdb is not used much in 3rd-party software, I have to say you are really lucky. We may need to review where tempdb is used: http://technet.microsoft.com/en-us/library/ms345368(v=sql.105).aspx

    Again, I totally agree that 99% of the time it is crappy code that is the culprit of performance issues. But again, I have to say, we have to live with what we have and, whenever possible, using technology advancement to overcome/compromise the "bad things" in the real world.

    Heck, what are we arguing about? 😀

    What are we arguing about? Heh... that's easy... you keep saying things like "using technology advancement to overcome/compromise the "bad things" in the real world" and I keep trying to say that's not as effective as folks would have you believe when it comes to hardware and that people shouldn't think they can fix crap code by just throwing hardware at it. 🙂

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.
    "Change is inevitable... change for the better is not".

    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
    Intro to Tally Tables and Functions

  • jeffrey yao (3/19/2014)


    "putting lipstick on a pig", haha.. LOL, very well said Kurt.

    Indeed, throwing more resources at the problem (hardware or better technology, like upgrading from Standard to Enterprise edition) is usually the easiest and quickest option.

    To me, it is the driving force behind technology advancement, i.e., better technology infrastructure that tolerates the various negative factors (poor code quality, huge amounts of data, complex computation, insufficient human resources/expertise, etc.).

    Quite often, the cost of fixing the code (either teaching people to do it right in the first place or fixing the code ourselves) outweighs the cost of upgrading the technology infrastructure.

    Heck, we are off topic again (i.e., tempdb in RAM) 😛

    See? That's what we're arguing about. 😉 A lot of people, yourself included it appears, only consider the cost of buying new hardware.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.
    "Change is inevitable... change for the better is not".

    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
    Intro to Tally Tables and Functions

  • That reminds me of yet another position I had. They had some code that had been around for a very long time; I believe it originated on SQL 7 or, if lucky, SQL 2000. Nonetheless, it was a system that worked, regardless of the fact that running some of the core pieces of SQL code would bring the system to its knees.

    Because there was only a single DBA (my predecessor, then me), and due to the fact that IT WORKED, there was no incentive to make it work better other than throwing more hardware at it. This system had seen several hardware upgrades solely to make it run faster.

    I had the opportunity to dig into the code to tidy it up (my secondary phase of performance tuning). With a few well-placed indexes and a rewrite to use CTEs instead of dozens of unindexed temp tables, to name a few changes, I was able to get processes that took over 1 hour to run down to less than 30 seconds.

    What made matters worse, this was a very large box - 64 cores, 128 GB of memory, with a RAM disk for tempdb - and the code still performed poorly. With that kind of horsepower, I can't even imagine how long it took when it was first written almost 14 years prior.

    So the real world here is: unless someone at a very high level complains about performance, working systems will continue to run no matter how badly written they are, and they will continue to perform poorly.

    That's my spin on it....

    Kurt

    Kurt W. Zimmerman
    SR DBA
    Lefrak Organization
    New York, NY

    http://www.linkedin.com/in/kurtwzimmerman

  • Jeff Moden (3/19/2014)


    jeffrey yao (3/19/2014)


    "putting lipstick on a pig", haha.. LOL, very well said Kurt.

    Indeed, throwing more resources at the problem (hardware or better technology, like upgrading from Standard to Enterprise edition) is usually the easiest and quickest option.

    To me, it is the driving force behind technology advancement, i.e., better technology infrastructure that tolerates the various negative factors (poor code quality, huge amounts of data, complex computation, insufficient human resources/expertise, etc.).

    Quite often, the cost of fixing the code (either teaching people to do it right in the first place or fixing the code ourselves) outweighs the cost of upgrading the technology infrastructure.

    Heck, we are off topic again (i.e., tempdb in RAM) 😛

    See? That's what we're arguing about. 😉 A lot of people, yourself included it appears, only consider the cost of buying new hardware.

    No, no, no. I am not addicted to hardware upgrades. I actually prefer new technology on the "software" side to mitigate the problems (so TempDB in RAM is certainly a candidate on my list). A coding fix is useful when we are talking about a few pain points, but a technology infrastructure change may have a farther-reaching impact on the whole system's well-being.

    We should acknowledge that hardware cannot solve everything; neither can software.

    Life is short; we need more time to explore other cool stuff in the SQL Server domain (to benefit a broader scope of stakeholders) than just fixing code, especially when it can be fixed by some upgrade (hardware or software). 😀

  • jeffrey yao (3/19/2014)


    Jeff Moden (3/19/2014)


    jeffrey yao (3/19/2014)


    "putting lipstick on a pig", haha.. LOL, very well said Kurt.

    Indeed, throwing more resources at the problem (hardware or better technology, like upgrading from Standard to Enterprise edition) is usually the easiest and quickest option.

    To me, it is the driving force behind technology advancement, i.e., better technology infrastructure that tolerates the various negative factors (poor code quality, huge amounts of data, complex computation, insufficient human resources/expertise, etc.).

    Quite often, the cost of fixing the code (either teaching people to do it right in the first place or fixing the code ourselves) outweighs the cost of upgrading the technology infrastructure.

    Heck, we are off topic again (i.e., tempdb in RAM) 😛

    See? That's what we're arguing about. 😉 A lot of people, yourself included it appears, only consider the cost of buying new hardware.

    No, no, no. I am not addicted to hardware upgrades. I actually prefer new technology on the "software" side to mitigate the problems (so TempDB in RAM is certainly a candidate on my list). A coding fix is useful when we are talking about a few pain points, but a technology infrastructure change may have a farther-reaching impact on the whole system's well-being.

    We should acknowledge that hardware cannot solve everything; neither can software.

    Life is short; we need more time to explore other cool stuff in the SQL Server domain (to benefit a broader scope of stakeholders) than just fixing code, especially when it can be fixed by some upgrade (hardware or software). 😀

    Heh... I agree. Some really cool stuff is spending 20 minutes with the developers to learn to write better code and not needing a Cray to join two tables or find a date. 😀

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.
    "Change is inevitable... change for the better is not".

    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
    Intro to Tally Tables and Functions
