Performance 2008 vs 2005

  • We have a production environment for data warehousing and reporting consisting of Windows Server 2003 with SQL Server 2005 Enterprise Edition, both with the latest service packs. The server hardware is two Intel Xeon X5650 processors at 2.67 GHz with 4 GB RAM. The operating system is 32-bit, and so is SQL Server.

    We have a new server that we're setting up as a data warehouse and reporting server with Windows Server 2008 and SQL Server 2008 R2 Enterprise Edition, both with the latest service packs. The server hardware is also two Intel Xeon X5650 processors at 2.67 GHz, but this new server has 24 GB RAM, and both the OS and SQL Server are 64-bit.

    We have Analysis Services and Reporting Services installed on both servers.

    My problem is that we were expecting to see a significant improvement with the new server, both because it is fully 64-bit (OS and SQL) and because it has six times the RAM. But a large Analysis Services cube and a report built on that cube run at about the same speed, and some areas are even slower on the new server.

    I did not set either server up and I also did NOT configure SQL Server on either server.

    My question is: how do I go about determining why the new server is not performing as expected? Where do I look? What kinds of things do I check for? I need specifics here, since this is pretty new to me.

    Thanks in advance.

  • This level of performance tuning is not simple, as you're discovering. I'm currently investigating a possible upgrade for a production system - I posted asking about the fastest servers and got only a minimal response!

    x64 generally improves things, but it brings its own problems if you're not careful. The main gain is much better memory access; what catches many people out, though, is that in the x64 world almost every application/process can use all the memory. I remember a team struggling with SSIS packages that took all the server's memory when they ran!

    With so much running on the server I'd hope that you don't have Lock Pages in Memory set. I also hope that you have set max and min memory for SQL Server. You'll really need to monitor your resources to make sure you don't get contention. Possibly - and I have a similar issue - CPU clock speed is what limits your performance. Many servers now have almost silly numbers of cores/threads, but dropping the CPU speed and increasing the core count doesn't automatically mean everything will run quicker; in a few tests we saw that things can run much slower. You haven't increased your CPU speed, so I wouldn't expect your cube to build any quicker. Slower probably means a bottleneck somewhere else - I'd suspect memory.
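
    Something along these lines will show the current memory settings and cap them if they haven't been set. The 20480 / 2048 MB figures below are only placeholders, not a recommendation - size them for your own workload and for whatever else (SSAS, SSRS) runs on the box:

    ```sql
    -- Minimal sketch: check, and if necessary cap, SQL Server's memory settings.
    -- The 20480 / 2048 MB values are placeholders - adjust for your environment.
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;

    -- Current values
    EXEC sp_configure 'max server memory (MB)';
    EXEC sp_configure 'min server memory (MB)';

    -- Example cap, leaving headroom for the OS and the other services
    EXEC sp_configure 'max server memory (MB)', 20480;
    EXEC sp_configure 'min server memory (MB)', 2048;
    RECONFIGURE;
    ```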

    Overall for SQL Server I've found Windows 2008/SQL 2008 (and R2) run faster than Windows 2003 + SQL 2005, but I don't use Analysis Services other than as part of SharePoint, so I can't comment there.

    Faster memory should improve the throughput of in-memory operations, but you need to monitor the entire workload to see what's actually being used.

    I'd normally point out that you should leave a considerable amount of memory unallocated for the OS; with 24 GB of RAM I'd suggest at least 4 GB for the OS. Monitor free memory (MB) - it isn't really free, but make sure you have sufficient headroom. As I said, x64 allows everything to take memory. Watch your system cache - it can use lots of memory, and you may want to install a specific service to manage it. Make sure the advanced options in Windows are set to favour programs, not the system cache.
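
    A rough sketch of checking physical memory and the system cache from inside SQL Server, assuming you're on 2008 or later (which the new box is):

    ```sql
    -- Rough check of physical memory and system cache as SQL Server sees them
    -- (sys.dm_os_sys_memory is available from SQL Server 2008 onwards).
    SELECT  total_physical_memory_kb / 1024     AS total_mb,
            available_physical_memory_kb / 1024 AS available_mb,
            system_cache_kb / 1024              AS system_cache_mb,
            system_memory_state_desc
    FROM    sys.dm_os_sys_memory;
    ```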

    It's a subject area that can't really be covered fully in a forum as there are just so many options and variations - you must monitor, which means lots of perfmon counters, to see what is happening. Hope this gives you some pointers.

    ( I did work for several clients who found moving to x64 with SQL 2005 didn't quite work as they expected 🙂 )

    [font="Comic Sans MS"]The GrumpyOldDBA[/font]
    www.grumpyolddba.co.uk
    http://sqlblogcasts.com/blogs/grumpyolddba/

  • First I would check how the server is running at a basic level by looking at the following counters in Perfmon (a query sketch for the SQL Server-specific counters follows the lists):

    Hardware Resources

    The counters required are:

    • Memory: Pages/Sec

    • Memory: Available Bytes

    • Physical Disk: %Disk Time

    • Processor: %Processor Time

    Network Usage

    The counters required are:

    • Network Interface: Bytes Total/sec

    • Network Interface: Output Queue Length

    Operating System Activity

    The counters are:

    • System: Processor Queue Length

    • Process: %Processor Time (the various SQL Server service instances)

    Server Availability

    The counters are:

    • SQL Server: Buffer Manager: Buffer Cache Hit Ratio

    • SQL Server: General Statistics: User Connections

    • SQL Server: Databases: Transactions/sec (Gateway_live in my environment - substitute the database of interest)

    • SQL Server: Access Methods: Full Scans/sec

    Query Performance:

    The counters are:

    • SQL Server Locks: Average Wait Times

    • SQL Server Locks: Number of Deadlocks/sec

    • SQL Server Locks: Lock Timeouts/sec

    Personally I write all this data to a database for further analysis (a sample query against the resulting tables follows the steps):

    1. Open Performance Monitor (Perfmon.exe)

    2. Expand “Performance Logs and Alerts”

    3. Select “Counter logs” in the left hand side pane

    4. In the right hand side pane right click and select the option “New Log Settings”

    5. In the popup window that appears, enter a suitable name, e.g. Baseline - Server Availability, and click the OK button

    6. Using the “Add Counters” button, add the counters detailed above

    7. Set the sample rate to every 5 secs

    8. Set the “Run As” user to a Windows account that has administrative privileges on the server and access to both the active-node SQL Server and the monitoring SQL Server.

    9. Within the “Log Files” tab:

    10. Change the “Log file type” drop down to “SQL Database”

    11. Using the “configure” button select a suitable dsn i.e. baseline_dsn

    12. Give the “log set” a suitable name i.e. baseline_server_availability_log

    13. Set the “Log set Size” to “maximum”

    14. Save the new counter log
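
    Once the log is running, Perfmon creates CounterData, CounterDetails and DisplayToID tables in the target database. Something like this pulls the values back out for analysis - the table and column names below are what Perfmon's SQL logging normally generates, so check what actually lands in your own database:

    ```sql
    -- Pull logged counter values back out of the perfmon SQL log tables.
    SELECT  cd.CounterDateTime,
            det.ObjectName,
            det.CounterName,
            det.InstanceName,
            cd.CounterValue
    FROM    dbo.CounterData    AS cd
    JOIN    dbo.CounterDetails AS det ON det.CounterID = cd.CounterID
    WHERE   det.CounterName = 'Pages/sec'   -- pick the counter of interest
    ORDER BY cd.CounterDateTime;
    ```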

    If the values identified here suggest that SQL Server is the problem, you can then start looking at query performance using the DMVs or a server-side trace. Avoid Profiler if this is a production server, and limit your trace to certain events such as SQL:BatchCompleted and RPC:Completed/SP:Completed.
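
    As a rough illustration, a bare-bones server-side trace limited to RPC:Completed and SQL:BatchCompleted could look like the sketch below - the file path and size are placeholders, so change them for your environment:

    ```sql
    -- Minimal server-side trace: RPC:Completed (event 10) and
    -- SQL:BatchCompleted (event 12) only. Path and size are placeholders.
    DECLARE @TraceID int, @on bit, @maxsize bigint;
    SELECT  @on = 1, @maxsize = 256;

    EXEC sp_trace_create @TraceID OUTPUT, 2, N'D:\Traces\query_perf', @maxsize;

    -- Columns: 1 TextData, 13 Duration, 16 Reads, 17 Writes, 18 CPU
    EXEC sp_trace_setevent @TraceID, 10, 1,  @on;
    EXEC sp_trace_setevent @TraceID, 10, 13, @on;
    EXEC sp_trace_setevent @TraceID, 10, 16, @on;
    EXEC sp_trace_setevent @TraceID, 10, 17, @on;
    EXEC sp_trace_setevent @TraceID, 10, 18, @on;
    EXEC sp_trace_setevent @TraceID, 12, 1,  @on;
    EXEC sp_trace_setevent @TraceID, 12, 13, @on;
    EXEC sp_trace_setevent @TraceID, 12, 16, @on;
    EXEC sp_trace_setevent @TraceID, 12, 17, @on;
    EXEC sp_trace_setevent @TraceID, 12, 18, @on;

    EXEC sp_trace_setstatus @TraceID, 1;   -- start the trace
    SELECT @TraceID AS TraceID;            -- keep this to stop (0) and close (2) it later
    ```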

    As you did not set up SQL Server, you may find that things like auto create/update statistics are turned off on databases, or auto close is set to true. There are a million things it could be, but you will have to follow a logical path, narrowing down the problem, before you can get to a specific resolution.
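
    A quick sanity check of those per-database settings - this just reads sys.databases, plus the compatibility level, which is worth a glance after a move from 2005:

    ```sql
    -- Per-database settings that often bite after a move or upgrade.
    SELECT  name,
            is_auto_create_stats_on,
            is_auto_update_stats_on,
            is_auto_close_on,
            is_auto_shrink_on,
            compatibility_level
    FROM    sys.databases
    ORDER BY name;
    ```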

    MCITP SQL 2005, MCSA SQL 2012

  • Try to capture a trace and find the problem queries - to start with, the ones with heavy IO (mainly reads). SQL 2008 is not more performant per se, but it has a lot of tools to help you once you have found the problem.
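
    If you'd rather not run a trace straight away, something like this pulls the heaviest readers out of the plan cache (bearing in mind the cache is emptied on restart, so it only covers recent activity):

    ```sql
    -- Top statements by logical reads from the plan cache.
    SELECT TOP (20)
            qs.total_logical_reads,
            qs.execution_count,
            qs.total_logical_reads / qs.execution_count AS avg_logical_reads,
            SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                ((CASE qs.statement_end_offset
                      WHEN -1 THEN DATALENGTH(st.text)
                      ELSE qs.statement_end_offset
                  END - qs.statement_start_offset) / 2) + 1) AS statement_text
    FROM    sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    ORDER BY qs.total_logical_reads DESC;
    ```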

  • Check this link.

  • In addition to the above, I would take a look at the wait stats. When a query runs, it will at some point be waiting for something - disk IO, CPU, locks to be released, etc. You can find out how much time your processes are spending waiting for various resources by analysing the wait stats over a period of time. I've been using a script for a few years now which will help you here - you can find it at http://gallery.technet.microsoft.com/scriptcenter/8c90bd2e-9def-4f44-bce4-e5dae4d86f71.
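
    If you don't want the full script, a cut-down look at the same information is just a query against sys.dm_os_wait_stats with some of the more benign waits filtered out - roughly:

    ```sql
    -- Cumulative wait stats since the last restart (or since they were cleared),
    -- with a handful of benign system waits filtered out.
    SELECT TOP (20)
            wait_type,
            waiting_tasks_count,
            wait_time_ms,
            signal_wait_time_ms
    FROM    sys.dm_os_wait_stats
    WHERE   wait_type NOT IN ('SLEEP_TASK', 'LAZYWRITER_SLEEP', 'SQLTRACE_BUFFER_FLUSH',
                              'WAITFOR', 'BROKER_TASK_STOP', 'BROKER_TO_FLUSH',
                              'CLR_AUTO_EVENT', 'CLR_MANUAL_EVENT',
                              'REQUEST_FOR_DEADLOCK_SEARCH', 'XE_TIMER_EVENT',
                              'CHECKPOINT_QUEUE', 'LOGMGR_QUEUE')
    ORDER BY wait_time_ms DESC;
    ```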

  • webtekkie (8/31/2011)


    In addition to the above, I would take a look at the wait stats. When a query runs, it will at some point be waiting for something - disk IO, CPU, locks to be released, etc. You can find out how much time your processes are spending waiting for various resources by analysing the wait stats over a period of time. I've been using a script for a few years now which will help you here - you can find it at http://gallery.technet.microsoft.com/scriptcenter/8c90bd2e-9def-4f44-bce4-e5dae4d86f71.

    Great tip! Thank you. 🙂

  • Is your question about Analysis Services performance or relational engine performance? It's hard for me to tell from your post. If the former, this is the wrong forum. If the latter, did you manually run update statistics with full scan on every single index/statistic on the new server? That's a mandatory item post-upgrade.
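
    One way to do that in bulk - YourWarehouseDB is a placeholder, and sp_MSforeachtable is undocumented, so a cursor over sys.tables is the supported alternative:

    ```sql
    -- Rebuild all statistics with a full scan after the upgrade.
    USE YourWarehouseDB;   -- placeholder database name
    GO
    EXEC sp_MSforeachtable N'UPDATE STATISTICS ? WITH FULLSCAN;';
    ```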

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service

  • Ignacio A. Salom Rangel (8/31/2011)


    Check this link.

    That one was more useful 🙂

  • Arun Jebakumar (9/1/2011)


    Ignacio A. Salom Rangel (8/31/2011)


    Check this link.

    That one was more useful 🙂

    I'm glad it helped 🙂

  • It's a tricky one when you have everything on one server - and it gets even muddier when you're generating content from SharePoint.

    I'd take the original post to be a question about hardware/version differences, with the cube as an illustration of the point - it's tricky to know where to post such a question.

    I'm doubtful many of the suggestions will help very much. I've got some perfmon counter data that was put out by the SQL CAT team on my web site - have a look here for counters which include SSRS and SSAS: http://www.grumpyolddba.co.uk/monitoring/monitoring.htm

    As I said, I worked for several clients who found that upgrades to x64 didn't quite do as expected. The most common starting issue is not allocating memory correctly. Process Explorer (from Microsoft's Sysinternals) may help you here. CacheSet will also let you see whether the system cache has got out of control.

    Comparing systems is sometimes very frustrating, and making significant hardware changes at the same time only makes it more so.

    [font="Comic Sans MS"]The GrumpyOldDBA[/font]
    www.grumpyolddba.co.uk
    http://sqlblogcasts.com/blogs/grumpyolddba/
