Small database with 1 million users

  • Now let me first say that I don't have a clue what I'm talking about here.

    But why not modify the app to go against a host of servers instead? If it's an archived database (so just reading static data), get 100 cheap workstations (you could probably do with fewer) with 2+ GB of memory and have the app choose a server based on... maybe the current time (the millisecond part).

    It would be easy to scale if the number of users grows, and you get automatic redundancy: the app can just switch to the next server in line (or have a second pool of backup servers).

    /T

  • tommyh (10/17/2012)


    tommyh, this will be a nightmare for administrators and DBAs.

    The database is small (1 GB), so it all fits in RAM and there will be no I/O bottleneck. The biggest problem is concurrent users: how many users will access the data at the same time? Going from 300 users to 1 million sounds really strange; maybe someone made a mistake. Foxconn has 1.2 million employees, but not all of them need to access reports. If this is for a web app, you should rewrite the app for that purpose, and even then you won't have that many database users. You can increase concurrency by loading data only in batches during off-peak hours so that users are not blocked. If this is for reporting purposes, almost all of your queries should be parameterized so that execution plans are reused.
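
    As a rough illustration of the parameterized-query point, here is a minimal Python/pyodbc sketch (the connection string, table, and column names are invented for the example):

        import pyodbc

        # Hypothetical connection; details would come from your environment.
        conn = pyodbc.connect(
            "DRIVER={ODBC Driver 17 for SQL Server};"
            "SERVER=myserver;DATABASE=ReportsDB;Trusted_Connection=yes;"
        )
        cursor = conn.cursor()

        # Bad: concatenating the literal into the SQL text. Every distinct
        # value compiles its own ad-hoc plan and bloats the plan cache.
        # cursor.execute("SELECT OrderID FROM dbo.Orders WHERE CustomerID = " + str(42))

        # Good: a parameterized query. SQL Server compiles one plan and
        # reuses it for every CustomerID value passed in.
        cursor.execute(
            "SELECT OrderID, OrderDate FROM dbo.Orders WHERE CustomerID = ?", 42
        )
        rows = cursor.fetchall()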

  • Sqlsavy (10/15/2012)


    Hi all,

    We have a small database (1 GB) on a clustered SQL Server 2005 instance. The business is now forecasting that the database will be accessed by a million users by early next year.

    Currently there are only 300 users using the database.

    How to plan the resources for such a massive requirement?

    Any help would be appreciated

    Thanks in advance

    Cheers

    Are you talking about 1 million actual SQL Server login accounts, or would this be more like 1 million users of a website that can potentially make an execution request?

    If this database is the backend for a website, then all you need is one service account, connection pooling set up for X connections, and content caching at the application level. It's possible to scale from 300 users of an application to 1 million users with no increase in database requests at all.
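
    Here is a rough sketch of that pattern in Python with pyodbc, not anyone's actual setup; the service account, connection string, and table names are invented:

        import pyodbc

        # One service account for the whole application; end users never get
        # their own SQL logins.
        pyodbc.pooling = True   # ODBC connection pooling (on by default in pyodbc)
        CONN_STR = (
            "DRIVER={ODBC Driver 17 for SQL Server};"
            "SERVER=myserver;DATABASE=AppDB;UID=app_service;PWD=placeholder"
        )

        def run_query(sql, *params):
            """Every web request borrows a pooled connection and gives it back."""
            conn = pyodbc.connect(CONN_STR)   # drawn from the driver's pool
            try:
                return conn.cursor().execute(sql, *params).fetchall()
            finally:
                conn.close()                  # returned to the pool, not torn down

        # A million website users can funnel through this one code path, so the
        # database itself only ever sees a small, fixed number of connections.
        rows = run_query("SELECT TOP (10) * FROM dbo.Products WHERE Price < ?", 100)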

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho

  • Sqlsavy (10/16/2012)


    Thanks Kevin.

    I'd certainly update you on the progress of this project; we might end up having MS DSE assisting us.

    Here is one more requirement from the project which is a bit tricky.

    The existing database's stored procedure hit rate during peak hours is 20,000 per hour.

    A new mobile application will be introduced on the same database, and the procedure hit rate might increase from 20,000 to 40,000–50,000 per hour.

    Would you be able to give any advice on what the impact on the database/memory/CPU will be?

    50K sproc hits per hour (that's only about 14 executions per second) is NOTHING for SQL Server to handle, assuming the sproc isn't a complex mess, is properly tuned, and the hardware is up to snuff. I have clients doing a few thousand executions per second on very modest hardware without issue. A properly tuned proc that isn't accessing lots of data rows/pages or returning lots of data to the client can scale to amazing numbers of executions per second. I know of environments with tens of thousands of executions per second on SQL Server. It truly is impressive what the SQL Server engine can do (especially given what I have seen clients try to do to it!!!). :w00t:

    But again, the key is that you simply MUST do things right to get to scale: proper design, optimal data types, optimal database/SQL Server/Windows Server/networking/etc. configuration, good tuning. Two things that are often overlooked in application concurrency/scalability efforts are a) ONLY do on SQL Server those things that MUST be done there (the SQL Server is a single silo of resources, and once you start exhausting those resources things can go south in a hurry) and b) CACHE STUFF that you can in the middle tier to avoid unnecessary hits to the database server.
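
    To illustrate point (b), a minimal middle-tier cache might look like the Python sketch below (the stored procedure name and the run_query helper are hypothetical); anything served from the cache never touches SQL Server at all:

        import time

        _cache = {}

        def cached(ttl_seconds=60):
            """Tiny time-based cache decorator for data that can be a little stale."""
            def decorator(fetch):
                def wrapper(*args):
                    entry = _cache.get((fetch.__name__, args))
                    if entry and time.time() - entry[0] < ttl_seconds:
                        return entry[1]          # cache hit: no database round trip
                    value = fetch(*args)         # cache miss: go to the database
                    _cache[(fetch.__name__, args)] = (time.time(), value)
                    return value
                return wrapper
            return decorator

        @cached(ttl_seconds=300)
        def get_dashboard_totals(region):
            # Placeholder for the real stored procedure call, e.g.:
            # return run_query("EXEC dbo.GetDashboardTotals ?", region)
            return []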

    Hope this helps.

    P.S. I am a LOT cheaper than Microsoft Premier Field Engineers! 😎

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on Google's mail service

  • This information may be dated, but I remember reading somewhere that every connection to an instance of SQL Server requires 32K of RAM.

    1,000,000 connections x 32,000 bytes = 32,000,000,000 bytes, or roughly 32 GB of RAM just for connection overhead.

    For this one reason, my best guestimate is that you are not going to be able to support this connection load on your server with the one-connection-per-client paradigm.
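
    For scale, a quick back-of-the-envelope comparison (the 32K-per-connection figure is from the post above; the 200-connection pool size is just an illustrative assumption):

        # One connection per client vs. a shared connection pool.
        BYTES_PER_CONNECTION = 32_000

        one_per_client = 1_000_000 * BYTES_PER_CONNECTION   # 32,000,000,000 bytes (~32 GB)
        pooled = 200 * BYTES_PER_CONNECTION                  # 6,400,000 bytes (~6.4 MB)

        print(f"one connection per client: {one_per_client / 1e9:.1f} GB")
        print(f"200-connection pool:       {pooled / 1e6:.1f} MB")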

  • Apologies if this was mentioned; I skimmed a bit after the first few posts...

    To add to the list of clarifications for us to help you: is this 300 -> 1 million user shift going to be concurrent connections to the FRONT END, or to the DATABASE?

    Anything over a few hundred connections starts to resemble a web application workload. Nothing external to your network should be hitting your database directly; at the very least you should expose N-tier components, preferably a secured website, for that. The website can pool connections and so on, so you're not looking at a million connections to the database tier.

    You need to go to the people making this request and get an expected architecture for the structure of the access: why they expect such an incredible boost in access, where that access is coming from, whether it is concurrent access or an overall volume increase, etc.

    That much of a jump is just plainly outrageous for anyone but an Amazon/Google/Facebook to expect when switching from a beta-test group to public usage, and those types of firms already have these sorts of questions answered in-house as to how they expect to handle usage overloads.


    - Craig Farrell

    Never stop learning, even if it hurts. Ego bruises are practically mandatory as you learn unless you've never risked enough to make a mistake.

    For better assistance in answering your questions | Forum Netiquette
    For index/tuning help, follow these directions. | Tally Tables

    Twitter: @AnyWayDBA

