DB Limitations in SQL Server

  • What are the limitations of data storage in a SQL Server DB? How will performance be if I have a database that will take in 400 GB of data per year, where all of the data has to stay in the table for the whole year before it can be archived?

    Thanks.

  • My suggestion is a big honkin' server!

    Performance is dependent on multiple factors:

    CPU, memory, disk speed, transactions, indexes, how data is added, number of users, LAN/WAN speed, the front-end application, etc.

    You really need to do research at http://msdn.microsoft.com/sql/ and http://www.microsoft.com/sql/default.mspx.

    I would also suggest looking at clustering. PM me your e-mail and I'll send you the write-up I did on clustering. Note that you need WinXX Enterprise Edition as well as SQL Server Enterprise Edition at minimum.



    ----------------
    Jim P.

    A little bit of this and a little byte of that can cause bloatware.

  • Clustering does not improve data performance, though, I thought. I was under the impression it's the logical choice for high availability, not necessarily performance tuning.

    If you've got 400 GB, then you're going to want to be properly indexed, and have that data on a good-sized RAID array with enough spindles that I/O doesn't become a bottleneck.

    We have a 50 GB database sitting on an 8-drive RAID 10 array (4 effective physical drives, acting as one logical drive).

    Grabbing data from any of the 10-million-plus-row tables takes at most a second or two if you're seeking on an indexed field.
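
    For example, the shape of what I mean (the table and column names here are made up for illustration, not our actual schema):

        -- Index the column you seek on (hypothetical names)
        CREATE NONCLUSTERED INDEX IX_Orders_OrderDate
            ON dbo.Orders (OrderDate);

        -- This range predicate can be answered with an index seek instead of a scan
        SELECT OrderID, CustomerID, OrderDate
        FROM dbo.Orders
        WHERE OrderDate >= '20050101'
          AND OrderDate < '20050201';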

    Hope this helps.




    David W. Clary
    MCSD
    Data Miner 49er
    Sr. Database Administrator, Ceiva Logic

  • I agree:

    Clustering does not improve data performance, though, I thought. I was under the impression it's the logical choice for high availability, not necessarily performance tuning.

    But if he is looking at a 400 GB database, he probably has plenty of users actively attaching to the DB. The clustering should spread the load across multiple servers. It doesn't provide true load balancing (you need to buy additional software for that), but instead of one front-end server trying to support 200 connections, you have two or three or more that then connect to the DB.

    That is why I advised him to do more research. Whether the DB is an OLTP system, a reporting DB, or a data warehouse makes a difference in server needs. How does the data come into the DB? Whether it's bulk copy, store-and-batch, individual transactions, or a combination makes a difference.
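
    For instance, a bulk load into a staging table might look roughly like this (an untested sketch; the table name and file path are hypothetical):

        -- Bulk load a flat file into a staging table (hypothetical names)
        BULK INSERT dbo.SalesStaging
        FROM 'D:\Loads\sales_20050101.csv'
        WITH (
            FIELDTERMINATOR = ',',
            ROWTERMINATOR = '\n',
            TABLOCK  -- permits minimal logging under the bulk-logged or simple recovery model
        );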

    What type of data is he storing? If it is images and CAD/CAM files of an aircraft assembly, does he really want to store the data in the DB, or just links to the files on the server?

    Is it the Accounting DB?

    Does he expect only 25 users of the DB doing large queries or is it 500 users doing row-level queries?

    Does all 400 GB have to be up and live, or can he just keep the last 3 weeks live and roll the rest into an archive DB that is used for reporting? (See the sketch below.)

    These are questions that he needs to ask/answer when designing the server/setup.
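
    For the archiving question above, the idea is roughly this (a sketch only; the database, table, and column names are invented):

        -- Move rows older than three weeks from the live DB to the archive DB
        BEGIN TRANSACTION;

        INSERT INTO ArchiveDB.dbo.Orders
        SELECT *
        FROM LiveDB.dbo.Orders
        WHERE OrderDate < DATEADD(week, -3, GETDATE());

        DELETE FROM LiveDB.dbo.Orders
        WHERE OrderDate < DATEADD(week, -3, GETDATE());

        COMMIT TRANSACTION;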

    Trust me, I've had to beat on these more than once. And then justify it to management.

    Just my $0.02.



    ----------------
    Jim P.

    A little bit of this and a little byte of that can cause bloatware.

  • True. When you're dealing with something that size, you need to spend quite a bit more time researching than a quick post to a message board.


    Pre-production planning is crucial for something that size.




    David W. Clary
    MCSD
    Data Miner 49er
    Sr. Database Administrator, Ceiva Logic

  • MEMORY and STORAGE are good candidates for where to start. If the system is going to be used for OLAP, then 64-bit may also be considered. Without doubt, a SAN is a must for such a big yearly increase.
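
    On the memory side, for example, you will likely want to cap SQL Server's memory so the OS keeps some headroom; the 16 GB figure below is purely illustrative:

        -- Cap SQL Server's memory (the value is an example; size it to your hardware)
        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'max server memory (MB)', 16384;
        RECONFIGURE;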

    And please DO YOUR HOMEWORK or you will be sorry.



    * Noel

  • This might be a big enough application of SQL Server that Microsoft would be interested in supplying you with consultancy, in exchange for being able to use you for marketing purposes. Microsoft are extremely keen on demonstrating to the world at large that SQL Server can handle giant applications, the kind that would in the past have been automatically done with mainframes or Oracle.

    Drop them a line; it can't hurt.

  • I doubt Microsoft would be interested in giving free consultancy for a 400 GB database, as this is well within SQL Server's capabilities. However, if you are struggling with how to set up such a database, there are many consultancies, including MS, you can hire to give expert advice, and I would suggest you do this.

    A few years ago I was working with a DB2 mainframe site where they described a new application as "it is small, only 2 TB". For them, 2 terabytes was small. When 2 TB is considered small for a SQL Server database, Microsoft will have cracked the 2001 mainframe market.

    Original author: https://github.com/SQL-FineBuild/Common/wiki/ (1-click install and best practice configuration of SQL Server 2019, 2017, 2016, 2014, 2012, 2008 R2, 2008 and 2005).

    When I give food to the poor they call me a saint. When I ask why they are poor they call me a communist - Archbishop Hélder Câmara

  • Heck, I'll throw out the "O" word here.

    You might even want to look at Oracle 10g grid computing. It may be a better solution depending on what you are doing.

    We're just coming off our last Oracle database. It was a little harder to set up initially, but once it came up we ran for almost 3 years with very little maintenance required. We hit 99.999% of daytime uptime requirements, and the times we missed were not Oracle's fault (power outages and other network outages).

    Just a thought.



    ----------------
    Jim P.

    A little bit of this and a little byte of that can cause bloatware.
