Constant high memory usage on SQL Server 2012 SP3

  • Hi,

    I have about 89% of the server's 20GB RAM in use all the time.

    Is that normal usage? I mean, it's the same even at night when nobody is working.

    Intensive operations on the production DB show many transactions waiting, which slows down the application.

    Is there any way to limit RAM for SQL Server so that Windows gets more?

    Thanks

  • I give about 2GB to Windows by setting the max memory setting. SQL Server is designed to use all available memory, and giving it as much as possible is a good thing. Licenses are expensive; memory is cheap.

  • In SSMS, under Server/Instance properties, is where you can set the minimum and maximum memory settings. If the minimum memory setting is ~89% of your memory, that would explain why it's getting used.
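
    If you'd rather script it than click through SSMS, something like this should do it (the 16384MB cap is just an example for a 20GB box, not a recommendation):

    -- view the current settings (these are advanced options)
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'min server memory (MB)';
    EXEC sp_configure 'max server memory (MB)';

    -- example: cap SQL Server at 16GB, leaving ~4GB for the OS
    EXEC sp_configure 'max server memory (MB)', 16384;
    RECONFIGURE;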

    "I cant stress enough the importance of switching from a sequential files mindset to set-based thinking. After you make the switch, you can spend your time tuning and optimizing your queries instead of maintaining lengthy, poor-performing code."

    -- Itzik Ben-Gan 2001

  • kc2ine (3/1/2016)


    Hi,

    I have about 89% of the server's 20GB RAM in use all the time.

    Is that normal usage? I mean, it's the same even at night when nobody is working.

    Intensive operations on the production DB show many transactions waiting, which slows down the application.

    Is there any way to limit RAM for SQL Server so that Windows gets more?

    Thanks

    Let me ask you: do you want SQL Server to give its memory back to the OS when it isn't actively reading data? If so, it will have to read that data back into memory when it is needed again, and rotating disks are THOUSANDS of times slower than RAM!

    I note that most clients I come across do NOT set a maximum memory limit on SQL Server, which is a MUST. There is no hard-and-fast rule for the exact value, though.

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service

  • Hello,

    This 89% can be normal, but it may just be allocated; you can check how much is actually free with something like the queries below.

    The first query shows the Free Memory, and the Target and Total SQL Server memory; the second shows the total physical memory on the server.

    SELECT object_name
        ,counter_name
        ,cntr_value / 1024 AS "MB"
    FROM sys.dm_os_performance_counters
    WHERE counter_name IN (
        'Free Memory (KB)'
        ,'Total Server Memory (KB)'
        ,'Target Server Memory (KB)'
        );

    -- for SQL 2012 and later
    SELECT physical_memory_kb / 1024 AS "MB"
    FROM sys.dm_os_sys_info;

    -- for SQL 2008 and earlier
    SELECT physical_memory_in_bytes / 1048576 AS "MB"
    FROM sys.dm_os_sys_info;

    I think you may have a problem with query design. There can be a lot of table scans, and the app can also be slow because data has to be read from disk the first time, since it isn't in memory yet; and when you then select something else, the old data gets flushed out...
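
    A rough way to find the heavy readers is to look at the plan cache (a sketch using the standard DMVs; the TOP 10 cutoff is arbitrary):

    -- top 10 cached queries by total logical reads
    SELECT TOP 10
        qs.total_logical_reads
        ,qs.execution_count
        ,qs.total_logical_reads / qs.execution_count AS avg_reads
        ,SUBSTRING(st.text, 1, 200) AS query_text_start
    FROM sys.dm_exec_query_stats qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
    ORDER BY qs.total_logical_reads DESC;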

    We have 172GB, and SQL Server was given about 140GB, which was always full; after a lot of query optimization I'm now at about 100GB.

    But I have about 15 databases with a total size of about 350-400GB; there is a lot of historical data, which, as you know, is only queried occasionally.

    How big are your databases?

    And which wait types do you see? Couldn't the problem also be a wrongly sized tempdb? I had that problem too: we forgot to set the size of tempdb, so it reset after every restart, and with the default settings it takes a long time to grow back with 10% autogrowth from an initial 1MB. You can check both with the queries below.
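
    Something like this should show the top waits and the current tempdb file settings (standard DMVs; ignore obviously benign sleep/background waits in the first result):

    -- top wait types since the last restart
    SELECT TOP 10 wait_type
        ,wait_time_ms / 1000 AS wait_sec
        ,waiting_tasks_count
    FROM sys.dm_os_wait_stats
    ORDER BY wait_time_ms DESC;

    -- tempdb file sizes and growth settings
    SELECT name
        ,size * 8 / 1024 AS size_mb
        ,CASE WHEN is_percent_growth = 1
              THEN CAST(growth AS varchar(10)) + '%'
              ELSE CAST(growth * 8 / 1024 AS varchar(10)) + ' MB'
         END AS growth_setting
    FROM sys.master_files
    WHERE database_id = DB_ID('tempdb');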



    How to post data/code on a forum to get the best help: Option 1 / Option 2

  • kc2ine (3/1/2016)


    I have about 89% of the server's 20GB RAM in use all the time.

    Is that normal usage?

    Perfectly normal

    Is there any way to limit RAM for SQL Server so that Windows gets more?

    Set max server memory lower. Bear in mind that idle memory benefits no one. Having 50% or more of your memory idle and unused is not going to make SQL Server faster; probably the opposite.

    Have a read through chapter 4 of https://www.red-gate.com/community/books/accidental-dba

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Just remember that whatever memory you give to SQL Server, it's going to use all of it. It will load everything into cache and keep it there as long as it can. That's how it's designed, and that's how you want it to run. Reading from memory is faster than reading from disk. But, please, follow Gail's advice and set a max memory value.
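
    If you're curious what is actually sitting in that cache, you can break the buffer pool down by database with something like this (it can be slow on servers with a lot of memory):

    -- buffer pool pages per database, converted to MB
    SELECT DB_NAME(database_id) AS database_name
        ,COUNT(*) * 8 / 1024 AS cached_mb
    FROM sys.dm_os_buffer_descriptors
    GROUP BY database_id
    ORDER BY cached_mb DESC;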

    ----------------------------------------------------
    The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood... Theodore Roosevelt
    The Scary DBA
    Author of: SQL Server 2017 Query Performance Tuning, 5th Edition and SQL Server Execution Plans, 3rd Edition
    Product Evangelist for Red Gate Software

  • tony28 (3/2/2016)


    Hello,

    This 89% can be normal, but it may just be allocated; you can check how much is actually free with something like the queries below.

    The first query shows the Free Memory, and the Target and Total SQL Server memory; the second shows the total physical memory on the server.

    SELECT object_name
        ,counter_name
        ,cntr_value / 1024 AS "MB"
    FROM sys.dm_os_performance_counters
    WHERE counter_name IN (
        'Free Memory (KB)'
        ,'Total Server Memory (KB)'
        ,'Target Server Memory (KB)'
        );

    -- for SQL 2012 and later
    SELECT physical_memory_kb / 1024 AS "MB"
    FROM sys.dm_os_sys_info;

    -- for SQL 2008 and earlier
    SELECT physical_memory_in_bytes / 1048576 AS "MB"
    FROM sys.dm_os_sys_info;

    I think you may have a problem with query design. There can be a lot of table scans, and the app can also be slow because data has to be read from disk the first time, since it isn't in memory yet; and when you then select something else, the old data gets flushed out...

    We have 172GB, and SQL Server was given about 140GB, which was always full; after a lot of query optimization I'm now at about 100GB.

    But I have about 15 databases with a total size of about 350-400GB; there is a lot of historical data, which, as you know, is only queried occasionally.

    How big are your databases?

    And which wait types do you see? Couldn't the problem also be a wrongly sized tempdb? I had that problem too: we forgot to set the size of tempdb, so it reset after every restart, and with the default settings it takes a long time to grow back with 10% autogrowth from an initial 1MB.

    Thanks Tony, I will try to check these tomorrow.

    We don't have a lot of databases, maybe 12. The biggest is around 7GB.

    BTW, how much free disk space is recommended? Currently we have 130GB free.

    We definitely didn't set the size of tempdb, so I have to look at that.

    P.S.

    Thanks all for the tips, appreciate it.

  • I think the free disk space you need depends on your requirements and the growth trend of your databases... you have to be prepared so that you never run out of space on disk.

    And for tempdb... I don't remember where I read it and cannot find the article now... but it is good to split it into multiple data files depending on your CPU count.

    For example, I have 8 data files for tempdb and one log file... I had a bottleneck with a single file, and after reading a lot of topics this turned out to be a very good solution... it is better to follow the usual SQL Server guidelines here.
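
    If you want to try it, the idea looks something like this (the logical names, sizes, and path are placeholders - adjust them for your own server):

    -- presize the existing tempdb data file so it survives restarts
    ALTER DATABASE tempdb
    MODIFY FILE (NAME = tempdev, SIZE = 2048MB, FILEGROWTH = 512MB);

    -- add an extra, equally sized data file
    -- (common guidance: up to one file per CPU core, max 8)
    ALTER DATABASE tempdb
    ADD FILE (NAME = tempdev2,
              FILENAME = 'T:\TempDB\tempdev2.ndf',
              SIZE = 2048MB,
              FILEGROWTH = 512MB);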



    How to post data/code on a forum to get the best help: Option 1 / Option 2
