CPU Performance and Hyperthreading

  • Does anyone have experience with hyperthreaded CPUs under heavy load?  Can SQL running on hyperthreaded CPUs hit high (>90%) CPU utilization for extended periods of time?

    I ask because, in a recent experiment, one of our developers ran a program consisting of a tight loop doing completely useless work on his development machine, which has a single hyperthreaded processor.  His desktop immediately slowed to a crawl, which isn't at all surprising given the inefficiency of the application.  What was surprising is that Task Manager on his box said the system was at 50% utilization: one logical CPU was pegged and the other wasn't doing anything, hence 50%.  However, his system wasn't behaving like a system at 50%; it was behaving like a system at or near 100% utilization.  Screen refreshes were slow, opening Task Manager was a chore, you know the drill.

    As a DBA of a 4-way hyperthreaded SQL Server, I'm concerned that the CPU stats I've been monitoring aren't telling me the whole story about my server's health.  Is it possible for my server to end up in a similar situation... horrendously slow and effectively CPU-bound but only showing 50% utilization?

    I'm blessed in that our server's normal workload only pushes the CPU utilization to 25% or so.  Even under heavy load, I've never seen it get beyond 40% utilization.  That's a good thing and I'm not complaining, but I'm all of a sudden concerned that I may be a whole lot closer to "maxing out" my CPUs than my metrics would indicate.  Can anyone put my mind at rest?  Thanks.

    Note: Both the developer's box and my production SQL Server are running Windows 2000.

    Gutz

  • None of my experience with hyperthreading has been on a SQL Server.  I have, however, tested it on various application servers.  What I found is that hyperthreading had little to no effect on time trials until I was pushing the CPU to >90% (closer to 100%).  However, once the CPU was being pushed to about 100%, hyperthreading did make a very noticeable difference in shaving some execution time off each batch process.

    Now, that being said, I'm not sure that I would ever enable hyperthreading on my SQL Server, since my testing above suggests the CPU would have to be running close to 100% before you'd see an advantage.  In that situation, I would be looking into a processor upgrade instead.

    Anyway, I'm assuming that you don't have this option.  To offer a possible answer to your workstation problem: maybe the issue was with context switching.  If context switching gets out of hand on an SMP box, the switches can degrade performance to worse than that of a single-CPU machine.  I would suggest doing the following:

    -Read up on context switching and how hyperthreading handles this.

    -Run this test on the workstation with PerfMon logging the System\Context Switches/sec counter.  You may find that it is WAY out of control.

    Good Luck.

  • We are running a physical dual 1.8 and a dual 2.8 with hyperthreading, SQL2K Standard Edition. Each looks like 4 processors to the server. We haven't had any problems.

    On the SQL Server Properties dialog, make sure it is configured to "Use all available processors".

    Just note that sometimes hyperthreading may only buy you a 30-40% increase in total CPU availability, just by the nature of the beast.



    ----------------
    Jim P.

    A little bit of this and a little byte of that can cause bloatware.

  • You might have to consider that a hyperthreaded processor is not really a dual processor, but a dual-pipelined single processor, with some tricks to allow it to use that single core more efficiently (i.e., there is normally a lot of wasted processor time while fetching and preprocessing, and hyperthreading helps fill those gaps).

    So depending on the application (and I would think a tight loop falls into this category), HT will have little effect.  It is going to be most useful on a diverse application mix.

    Somewhere, and I believe I found it initially up here in a thread about hyperthreading, there was a link to a very technical (I love those) document on how and why HT works.

     

     


    KlK

  • Jim,

    Thanks for your comments.  I'm very familiar with the "Use all available processors" setting and have found that checking this option produces bad effects in my environment with my workload.  The "Use all available processors" setting isn't referring to how many processors SQL Server can use (that setting is just above, in the Processor Control section of the tab); it is instead asking how many processors can be used for parallel query execution.

    That means that when "Use all available processors" is set and the server is confronted with a query that it can process well using parallel query execution, the server MAY decide to use every processor for just that one query.  I initially had my server configured with that setting and found that I would get large, prolonged CPU spikes, which I finally tracked down to parallel query execution.  The recommendation I have heard most often (and the one I use in my Production environment now) is to set the number of processors available for parallel query execution equal to the number of physical (not logical) processors in your server.  That's the way I'm running on my 4 x 2.8GHz Production servers and my CPU spikes have disappeared.
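
    For what it's worth, the same setting can also be changed without the GUI through sp_configure.  This is just a minimal sketch assuming 4 physical processors; adjust the value for your own hardware:

    [code]
    -- max degree of parallelism is an advanced option, so expose it first
    EXEC sp_configure 'show advanced options', 1
    RECONFIGURE
    GO
    -- limit parallel query execution to the number of physical processors (4 here)
    EXEC sp_configure 'max degree of parallelism', 4
    RECONFIGURE
    GO
    [/code]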

    BTW - I think 30-40% is generous; the numbers I've most often seen thrown around are a 10-20% increase.  It will certainly have a lot to do with your server's workload, so 30-40% isn't unthinkable, just high in my opinion.

    Steve

  • I appreciate your feedback.  Knowing that you didn't hit a "glass ceiling" induced by hyperthreading on your application server makes me feel a whole lot better about the health of my SQL Server.

    Gutz

  • It also depends on what you are doing with SQL (duh!, I know).  Anyway, we use a 4x Proliant (HT to 8x) for a data warehouse SQL Server using DTS.  Having 8 logical processors means I can run 8 DTS packages at the same time.  If you take the I/O bottleneck out of the way (we purchased a SAN), then the time to load our SQL database went down by way more than 30%.  So in this scenario HT really works well for us (and yes, SQL does use it, and no bad effects to date from pegging the processors on a regular basis).

     

  • Your comment of: "The recommendation I have heard most often (and the one I use in my Production environment now) is to set the number of processors available to use for parallel query execution equal to the number of physical (not logical) processors in your server." is quite interesting.

    I've been wondering this myself, and have come to this conclusion too. Unfortunately, I haven't been able to track down enough evidence or proof of this to be confident in making this change to a production box. Even more unfortunate, I don't have much time to test and really find out for myself. That said, is there any discussion or article you could point me to that would help solidify this? I'm about to roll out a 4 physical processor box, and I want to make sure I have it configured as tight as can be. Interesting stuff, thanks.


    -Ken

  • Ken,

    I can't point you to any whitepaper or KB article that makes recommendations about the max degree of parallelism (MAXDOP) setting because I've never seen one.  I've learned about MAXDOP primarily through groups.google.com.

    I just did a search for [ "SQL Server" MAXDOP physical ] and sorted the results by date.  There are some smart folks out there who recommend setting MAXDOP to the number of physical processors in your server.  They're also smart enough to say that results can vary depending on your workload, but I think it's a good place to start.

    If it makes you feel any better, my production server has 4 physical and 8 logical processors.  My workload consists of about a 70/30 mix of OLTP-type queries (70) to long running, parallel query candidates (30).  Originally, I left the MAXDOP setting at the default (all processors) and would semi-regularly have a rash of slow running queries caused by a single parallel query pegging 5-7 of the logical processors and leaving very little CPU for other processes.  After changing the MAXDOP setting to 4, this problem disappeared.
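
    If you want to see it happening on your own box, one rough way (just a sketch, not an official method) to spot a parallel query in flight on SQL 2000 is to look for spids that appear more than once in sysprocesses; each worker thread of a parallel query shows up as a separate row with the same spid and a different ecid:

    [code]
    -- Sketch: list sessions currently running with more than one worker
    -- thread (i.e. executing a parallel query) on SQL Server 2000.
    SELECT spid, COUNT(*) AS worker_threads
    FROM master..sysprocesses
    GROUP BY spid
    HAVING COUNT(*) > 1
    ORDER BY worker_threads DESC
    [/code]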

    Steve

  • I have production servers running with HT and am about to upgrade to 8-way + HT ( = 16 ). I can't say I've seen any real problems. It's worth looking out for CXPACKET waits, which are an indication of parallelism ( sure I've spelled that incorrectly! )
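
    Something along these lines (just a rough sketch against sysprocesses, and only a point-in-time view) will show whether anything is sitting on that wait right now:

    [code]
    -- Sketch: sessions currently waiting on CXPACKET (parallelism
    -- exchange waits) on SQL Server 2000.
    SELECT spid, ecid, lastwaittype, waittime
    FROM master..sysprocesses
    WHERE lastwaittype = 'CXPACKET'
    [/code]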

    It does depend upon circumstance, and it's amazing how better indexing sometimes makes parallel queries go away < grin >  I must admit to not running proc loads anywhere near 90%, so it's difficult to say. To me HT has always been an advantage.

    To the original post: I remember a developer who was trying to prove how much better dbase was compared to SQL Server by running a looped update of a local variable in QA ( or something very similar ), which actually brought a workstation to its knees ( the same query ran in a couple of seconds in dbase ).
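
    Something like this minimal sketch (a hypothetical reconstruction, I don't have the original) run in Query Analyzer will happily peg one logical CPU while the rest of the box looks idle:

    [code]
    -- Hypothetical "useless work" tight loop.  It runs on a single worker
    -- thread, so on a single hyperthreaded processor it pegs one logical
    -- CPU while Task Manager reports roughly 50% overall utilization.
    DECLARE @i INT, @x INT
    SET @i = 0
    SET @x = 0
    WHILE @i < 100000000
    BEGIN
        SET @x = (@x + @i) % 7
        SET @i = @i + 1
    END
    [/code]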

    The GrumpyOldDBA
    www.grumpyolddba.co.uk
    http://sqlblogcasts.com/blogs/grumpyolddba/

  • Hello,

    I've seen this on my server. The CPU utilization of sqlservr.exe doesn't exceed 50%.

    Look at http://www.microsoft.com/windows2000/server/evaluation/performance/reports/hyperthread.asp

    In the Appendix it shows that SQL Server 2000 Enterprise Edition SP3 supports only 1 logical processor on systems with 1 physical processor??? Is yours SQL Server Enterprise Edition SP3? Maybe that is the problem?

    Sorry for my poor English.

    Saulo Falcão

  • Sorry, the correct URL for the above is:

    http://www.microsoft.com/sql/howtobuy/SQLonHTT.doc

    Thanks

    Saulo Falcao

  • Saulo,

    Thanks for the reply.  I can only assume that the document you referenced must have a bug.  I suspect that it should have said 1 physical and 2 logical under the 1-Way column for SQL Server 2000 EE SP3.

    In any case, I have a 4-way system, for which the document says SQL Server 2000 EE SP3 can (and does) support 8 logical processors.

    I've come to the conclusion that there is no "glass ceiling" as I was concerned about and that my server utilization should be able to climb fully to 100% if my workload demands it.

    Gutz
