Keeping user database in memory

  • Michael Valentine Jones (1/28/2009)


    Although you cannot absolutely force data into memory, you can do things that ensure the vast majority is there, such as running queries against all tables that require table scans. Probably running a full backup just before your morning application restart would be the simplest method to get the entire database into memory. You can monitor the situation with Performance Monitor to see how many database page reads you get during the load; if the number stays at or near zero, you are good (a T-SQL sketch of this check follows below).

    I also have concerns about this application design. Although you have a controlled set of requirements, things have a way of changing, so good luck:

    Boss: The company that just bought us loves your application and wants to start using it. The new data will total around 300 GB. How many days will it take you to get this going?

    You: But, you promised the application would never have to deal with that volume of data!

    Boss: Guess I was wrong. By the way, I told the new CIO you would have this live by the end of next week.
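
    A minimal sketch of that monitoring check, assuming SQL Server 2005 or later, where the Performance Monitor counters are also exposed through a DMV:

        -- "Page reads/sec" under Buffer Manager counts physical page reads.
        -- The stored value is cumulative, so sample it before and after the
        -- application load and compare the two readings.
        SELECT object_name, counter_name, cntr_value
        FROM sys.dm_os_performance_counters
        WHERE counter_name = 'Page reads/sec'
          AND object_name LIKE '%Buffer Manager%';

    If the difference between the two samples stays at or near zero while the application loads, the pages were already in cache.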

    First of all, the application is only restarted once a week, since that is some kind of policy at this particular customer (other customers never restart the application except in case of HW failure or upgrades). However, the customer is still worried: if the application (or some part of the customer's infrastructure) should crash, they want to be up and running again within about 1 hour (this is for the actual clients). With the current setup we barely make this (achieving times of around 60 minutes). Of course the customer wants better performance, just in case...

    Running a full backup might be over the top, but running queries against the two slowest tables could be a good idea. The problem here is that it is hard to predict when the application is going to be restarted (this is controlled by the customer and the customer's infrastructure). Anyway, it's an option to investigate further... What would be THE cheapest query I can run on a table to get the desired effect?

    For the growth: the customer is a port operator. They don't just add a new dock... It takes about 10 years of planning and another 5 to build it. If they want to use the same application at another port, that should be no problem, since it would be set up on a whole new infrastructure. (And yes, it is already one of the largest ports in the world. Only implementing it without adaptation for the 4 largest ports would be an issue.)

  • LCAD (1/28/2009)


    What would be THE cheapest query I can run on a table to get the desired effect?

    SELECT * FROM TableName
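
    If returning every row to the client is too heavy, a cheaper sketch (same hypothetical TableName; the hint is an assumption, not something the schema requires). WITH (INDEX(0)) forces a scan of the heap or clustered index, so every data page is read into cache while only a single row goes back over the network:

        -- Reads every data page into the buffer pool, returns one row.
        SELECT COUNT_BIG(*)
        FROM TableName WITH (INDEX(0));  -- INDEX(0) = scan the base table

    Without the hint, the optimizer may satisfy the count from the narrowest nonclustered index and leave most of the table uncached.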

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Going a long way back to one of LCAD's initial posts:

    LCAD (1/22/2009)


    2 GB is indeed tiny, but this is only the operational data (history is now 3 GB and will continue growing 🙂 )

    Is the historical data stored in the same tables as the operational data?

    If it is, then your startup will continue to slow down as your history grows.

    (Without indexes and partitioning, SQL Server will scan all 5 GB of data, not just the 2 GB you want.)
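
    As a sketch of what that separation could look like (all object names are hypothetical, and table partitioning assumes SQL Server 2005+ Enterprise Edition):

        -- Split rows by date so a scan of operational data never touches history.
        CREATE PARTITION FUNCTION pfByYear (datetime)
            AS RANGE RIGHT FOR VALUES ('2008-01-01', '2009-01-01');

        CREATE PARTITION SCHEME psByYear
            AS PARTITION pfByYear ALL TO ([PRIMARY]);

        CREATE TABLE dbo.Movements
        (
            MovementID   int          NOT NULL,
            MovementDate datetime     NOT NULL,
            Payload      varchar(200) NULL,
            CONSTRAINT PK_Movements
                PRIMARY KEY CLUSTERED (MovementDate, MovementID)
        ) ON psByYear (MovementDate);

        -- A startup load filtered on the partitioning column reads only the
        -- current partition, not the accumulated history:
        SELECT * FROM dbo.Movements
        WHERE MovementDate >= '2009-01-01';

    On editions without partitioning, clustering the table on the date column gives a similar effect: the range predicate turns the full scan into a partial scan.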

  • Tim Wilson-Brown (1/29/2009)


    Going a long way back to one of LCAD's initial posts:

    Is the historical data stored in the same tables as the operational data?

    If it is, then your startup will continue to slow down as your history grows.

    (Without indexes and partitioning, SQL Server will scan all 5 GB of data, not just the 2 GB you want.)

    The historical data is stored on the same DBMS, but in a different DB. For these tables there should be only write actions (for now, at least; the customer wants to run reports on this data, which we fear will wreak havoc on performance).

  • I have to be honest: I've never read so much, or seen so much time wasted, over an issue that can be resolved with the addition of memory! In sheer terms of wasted time and effort, does anyone want to calculate the ROI vs. buying 8 GB of RAM?

    --

    I'd just like to make a point here, though: I think you are very, very wrong about a backup putting the DB into memory; for a large database that would flush the buffer cache (e.g. a 1 TB DB with 32 GB of RAM). I have to admit I'd never thought about a backup this way, but I'm sure the statement is incorrect.

    Finally: select * from table?? When we get to suggesting this as an optimisation tip, I really think we should give up and go home.

    The GrumpyOldDBA
    www.grumpyolddba.co.uk
    http://sqlblogcasts.com/blogs/grumpyolddba/

  • colin Leversuch-Roberts (1/30/2009)


    Finally: select * from table?? When we get to suggesting this as an optimisation tip, I really think we should give up and go home.

    Well, the OP seems to think that warming SQL's cache is the only way to fix his performance problems, and I'm tired of arguing, so...

    I can only beat my head against a brick wall for so long.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • colin Leversuch-Roberts (1/30/2009)


    I have to be honest: I've never read so much, or seen so much time wasted, over an issue that can be resolved with the addition of memory! In sheer terms of wasted time and effort, does anyone want to calculate the ROI vs. buying 8 GB of RAM?

    --

    I'd just like to make a point here, though: I think you are very, very wrong about a backup putting the DB into memory; for a large database that would flush the buffer cache (e.g. a 1 TB DB with 32 GB of RAM). I have to admit I'd never thought about a backup this way, but I'm sure the statement is incorrect.

    Finally: select * from table?? When we get to suggesting this as an optimisation tip, I really think we should give up and go home.

    First: for me at least it isn't all wasted time. I'm in a stupid situation where I have a customer who isn't going along with the 'a DBMS also needs some memory' idea, and who on the other hand wants super performance, even in the odd event of a start-up, where all tables need to be read. So all help is welcome to either:

    - find a magic setting to get the desired startup time, without the customer having to divert extra resources to this DBMS

    - get strong arguments to tell the customer that adding X amount of extra RAM or whatever resource will solve the problem

    Second: I agree that the DBMS does exactly what it is supposed to do (although a bit more control would be nice). For me it is not surprising to observe this behavior, but the customer isn't going along with this logic.

    Third: taking all posts into account, I can summarize this as:

    - The application loads all its data at once, which isn't expected by the DBMS, since this only happens once a week

    - The DBMS doesn't support pinning data, so the load can't be guaranteed to profit from having all this data in memory

    - Adding more memory would be a good idea (supposing that more of the to-be-read data would remain in cache)

    - Using backups or SELECT * isn't a good way to boost performance

    Final conclusion: this interaction scenario isn't supported by the DBMS (or at least not with optimal performance). The best way to improve the situation is to add more RAM, which is under the control of the rather unwilling client (who, by the way, has just decided to add 72 GB of RAM for the application clients, since there will be a few more users who only need to look at the data from time to time :w00t: ). So unless the customer decides to do something about the physical machine, I'm stuck!

  • OP? In theory doing a select * may seem a solution; however, having observed memory usage from many angles, including a setup with 64 GB of RAM for a 20 GB database, a good sort or a poor join will flush the DB cache so the cache can be used for tempdb, so the chances are the basic select will work fine until the first sort.
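
    That flushing is easy to observe; a minimal sketch, assuming SQL Server 2005 or later, that shows how much of each database is resident in the buffer pool:

        -- Each buffer descriptor represents one 8 KB page in cache;
        -- grouping by database shows who currently owns the buffer pool.
        SELECT DB_NAME(database_id) AS database_name,
               COUNT(*) * 8 / 1024   AS cached_mb
        FROM sys.dm_os_buffer_descriptors
        GROUP BY database_id
        ORDER BY cached_mb DESC;

    Run it after the warm-up and again after the first big sort to see how much of the warmed database survived.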

    The GrumpyOldDBA
    www.grumpyolddba.co.uk
    http://sqlblogcasts.com/blogs/grumpyolddba/

  • Do a test... build a RAM disk and put the database on it. You'll soon find out that putting the database in memory can't make up for an improperly written app or code. Oh sure... it may help performance a tiny bit... but nothing like fixing the application code would do.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.
    "Change is inevitable... change for the better is not".

    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
    Intro to Tally Tables and Functions

  • colin Leversuch-Roberts (2/1/2009)


    OP?

    Original Poster

    In theory doing a select * may seem a solution; however, having observed memory usage from many angles, including a setup with 64 GB of RAM for a 20 GB database, a good sort or a poor join will flush the DB cache so the cache can be used for tempdb, so the chances are the basic select will work fine until the first sort.

    I'm not suggesting it's a permanent solution. He asked what the simplest query that will 'warm' the cache after a restart is.

    Since the app only reads once from the DB and then caches on the app side, there probably won't be many sorts or joins.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • I am going to say this one last time, I promise. FIX THE APP!


    * Noel

  • noeld (2/2/2009)


    I am going to say this one last time, I promise. FIX THE APP!

    Heh... and I'll keep on saying it... FIX THE APP! 😛

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.
    "Change is inevitable... change for the better is not".

    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
    Intro to Tally Tables and Functions
