What is better: static or dynamic memory allocation?

  • I have gotten mixed comments on this topic. I have a 64-bit machine running 64-bit Windows 2003 Standard and 64-bit SQL Server 2005 Standard with 8 GB of RAM. We want to upgrade it to 32 GB. What is the best approach: dynamic, or static by giving min and max server memory a value? And if static, what value should I use for 32 GB, knowing that this box is only being used for SQL Server?

  • Check this out:

    64 bit max memory setting for SQL 2005 in SQL Server

    Alex S
  • Here are my two cents.

    Static or dynamic settings depend on the applications on your server. If there is more than one application on the server and SQL Server is more important than the others, you may consider reserving a minimum amount of memory for SQL Server.

  • I've seen some consultants recommend 28-29 GB if this is a SQL Server-only box. You need to run some metrics and be sure that it won't cause issues. 64-bit should handle the overhead for Windows better.

  • I have run tests on a couple of production boxes. In all but one situation I ended up finding that leaving memory allocation dynamic worked better. I think it will really depend a lot on the hardware, the type of load on the server, and any other applications the server runs.

    You really need to do your own testing for your situation.

  • I am one of those consultants who recommends 28-29 GB to start on a dedicated 32 GB SQL Server box. Then monitor for paging. I have also heard (can't recall where, darn it) that min should be set a bit below max for some internal memory management reasons. Oh, and get to SQL Server 2005 SP2+ as soon as you can to avoid giving away WAY too much memory to the procedure cache!

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service
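
    As a sketch only (the specific MB values here are illustrative, not from the post above), the 28 GB max with min set a little below max could be applied like this on a dedicated 32 GB box:

    ```sql
    -- Illustrative values for a dedicated 32 GB box, per the
    -- recommendation above: max ~28 GB, min a little below max.
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;

    -- Max server memory: 28 GB = 28 * 1024 = 28672 MB
    EXEC sp_configure 'max server memory (MB)', 28672;

    -- Min server memory: slightly below max (e.g. 512 MB less)
    EXEC sp_configure 'min server memory (MB)', 28160;

    RECONFIGURE;
    ```

    After setting these, monitor paging as suggested above before treating the values as final.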

  • I too set them up at those values, and I *always* specify the max memory, BTW.


    * Noel

  • I am one of those people who have said that min memory should always be set a little bit below max memory. This is repeating feedback from PSS who said that setting min and max memory to the same value disables some internal SQL Server memory management. Even having 200 MB between min and max allows SQL to manage its internal memory better.

    Max memory should always be set for 64-bit servers. Otherwise SQL adds the page file size to the physical memory size and tries to use all of it. As the page file is on disk, performance can suffer...

    There is a separate debate about setting min memory to zero or to a specific value. Personally, I see no reason to leave it at zero if you know that SQL will have access to the max memory setting. If SQL cannot use up to the max memory setting, then maybe you have set max memory too high.

    Original author: https://github.com/SQL-FineBuild/Common/wiki/ 1-click install and best practice configuration of SQL Server 2019, 2017, 2016, 2014, 2012, 2008 R2, 2008 and 2005.

    When I give food to the poor they call me a saint. When I ask why they are poor they call me a communist - Archbishop Hélder Câmara
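
    To see what min and max server memory are actually set to on a given instance, one option (assuming SQL Server 2005 or later, where `sys.configurations` is available) is:

    ```sql
    -- Show the configured and in-effect values for min/max server memory.
    SELECT name, CAST(value AS int) AS configured_value,
           CAST(value_in_use AS int) AS value_in_use
    FROM sys.configurations
    WHERE name IN ('min server memory (MB)', 'max server memory (MB)');
    ```

    A gap of a few hundred MB between the two settings, as described above, still leaves SQL Server room to manage its internal memory.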

  • We've had a number of discussions about setting memory for 64-bit SQL Server. All I'll say is that you MUST set a maximum memory size for SQL Server to use; just work on the same basis as you did with 32-bit and AWE. I've usually set memory on 64-bit machines around 4 GB under the total memory installed, but nearly all my 64-bit machines have 64 GB of RAM. Note that the system cache is either 512 MB or 1 GB depending on settings; don't starve the OS and other apps. Don't have a dynamic page file either.

    [font="Comic Sans MS"]The GrumpyOldDBA[/font]
    www.grumpyolddba.co.uk
    http://sqlblogcasts.com/blogs/grumpyolddba/

  • Colin,

    I had heard that the 64-bit OS has less overhead than the 32-bit one because no AWE/PAE paging space is needed. Is that true?

    Does the 64-bit OS need 4 GB all the time? Meaning, if you have 8 GB on the server, give the OS 4 GB? Or does it scale up as you add more memory?

  • This is definitely a good discussion. Again, mixed feelings; I wish somebody from the MS development team would shed some light on this.

    The problem is that I've not used SQL Server 64-bit Standard, only Enterprise. There are config settings in W2K3 which affect the system cache; this is either 512 MB or 1 GB, obviously significant memory.

    The big problem I find is that typically 64-bit systems exist with FC/SAN storage, and by the time you've added those drivers, the system monitoring software, Tivoli and/or Dell OpenManage / HP bits / IBM stuff, you find a whole mass of stuff running: anti-virus, the usual suspects!! Loads of services too.

    I found initially that if you didn't set memory then SQL Server saw the page file as physical memory too. There may be problems within the service packs as well; I had some problems with memory allocation in SQL 2005, and there are/were some W2K3 issues too. I usually suggest W2K3 with SP2 at least, and the 3186 rollup for SQL Server. Other than lock pages in memory, I found many issues with 64-bit if you didn't fix maximum memory.

    As for free memory, well, mem-to-leave or out-of-process memory is still something to consider; you just don't get hemmed in by that 2 GB limit. I believe multi-page allocations still occur outside the memory allocated to SQL Server, so watching all those things is good practice. I don't know all the answers, I wish I did, but I do see that 64-bit isn't quite the answer to all problems.

    It's also difficult to test as typically you don't get an 8 way 64gb 64bit server to play with - and even if you do figuring out how to test what happens when you make changes is tricky!

    I have 64bit test servers at home but again it's difficult to reproduce work/production type problems.

    Basically I still think much as I did with sql 6.x, I don't trust dynamic anything, after all on a dedicated sql server what else would want the memory anyway?

    I don't suppose that this helps too much, really?

    [font="Comic Sans MS"]The GrumpyOldDBA[/font]
    www.grumpyolddba.co.uk
    http://sqlblogcasts.com/blogs/grumpyolddba/

  • Steve Jones - Editor (1/1/2008)


    Colin,

    I had heard that the 64-bit OS has less overhead than the 32-bit one because no AWE/PAE paging space needed for paging. Is that true?

    That was also my rough understanding. Since the 64-bit OS can natively address up to 2 TB of memory, there's no paging involved (for AWE purposes or anything like it). 4 GB was the 32-bit limit for addressable memory.

    ----------------------------------------------------------------------------------
    Your lack of planning does not constitute an emergency on my part...unless you're my manager...or a director and above...or a really loud-spoken end-user..All right - what was my emergency again?

    It's not that so much as the fact that many of the other internal memory pool sizes have been increased. I can't find the KB articles just now, but I'll look again; I think I'll put them together on my web site so I don't keep losing them! The AWE paging used to take 1 GB, I believe. What I was trying to point out is that when you allocate, say, 28 GB to SQL Server, don't always assume that all the SQL Server processes/operations will come out of this memory; many still take memory outside of it.

    Windows itself can also allocate memory.

    [font="Comic Sans MS"]The GrumpyOldDBA[/font]
    www.grumpyolddba.co.uk
    http://sqlblogcasts.com/blogs/grumpyolddba/

  • I found initially that if you didn't set memory then sql server saw the page file as physical memory too

    As far as I know, SQL Server doesn't utilise much page file in most situations. It'll keep swapping pages in and out of memory if it's under pressure.

    So, does this mean SQL Server is able to see the page/swap file under certain circumstances and utilise it? I'm very curious about this topic.

    Simon Liew
    Microsoft Certified Master: SQL Server 2008
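
    One way to investigate this, rather than guessing about page-file use, is to compare how much memory SQL Server wants with how much it currently holds, and watch page life expectancy. A sketch (assuming SQL Server 2005 or later, where `sys.dm_os_performance_counters` is available):

    ```sql
    -- Target vs Total server memory: if Total stays well below Target,
    -- SQL Server could not get the memory it wanted. A low, falling
    -- page life expectancy also suggests buffer-pool pressure.
    SELECT counter_name, cntr_value
    FROM sys.dm_os_performance_counters
    WHERE counter_name IN ('Target Server Memory (KB)',
                           'Total Server Memory (KB)',
                           'Page life expectancy');
    ```

    Sustained hard page faults at the OS level (e.g. the Memory\Pages/sec counter in Performance Monitor) would be the sign that the working set is actually being pushed to the page file.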
