Data placement strategy for a huge database in terabytes

  • Hi,

    We are installing a database that is expected to grow at a rate of 100 GB/month and reach a size of a few terabytes. The DBMS is SQL Server 2000 on Windows 2000. Can you advise on the best strategy for creating such a database? Please consider database clustering and any other options that can give the best performance and easy maintenance.

    The database is used for capturing certain online data - more of an OLTP workload.

    Please advise as soon as possible.

    Thanks and Regards,

    Arun V


  • Well, clustering won't assist with backups or growth - it's purely a failover method to keep your server hardware available (note that the shared disks still represent a single point of failure).

    Data partitioning would be the way to go - probably onto filegroups using partitioned tables, or maybe by archiving data out into separate databases.
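
    On SQL Server 2000 that usually translates into a local partitioned view over monthly member tables, each sitting on its own filegroup. A rough sketch only - the database (CaptureDB), the partitioning column (CaptureDate) and every filegroup/table name below are invented, so adapt them to whatever your schema actually looks like:

    [code]
    -- One filegroup and data file per month keeps each month's data on its own disks
    ALTER DATABASE CaptureDB ADD FILEGROUP FG_2005_01
    ALTER DATABASE CaptureDB ADD FILE
        (NAME = Capture_2005_01_dat, FILENAME = 'E:\SQLData\Capture_2005_01.ndf', SIZE = 10GB)
        TO FILEGROUP FG_2005_01
    ALTER DATABASE CaptureDB ADD FILEGROUP FG_2005_02
    ALTER DATABASE CaptureDB ADD FILE
        (NAME = Capture_2005_02_dat, FILENAME = 'F:\SQLData\Capture_2005_02.ndf', SIZE = 10GB)
        TO FILEGROUP FG_2005_02
    GO

    -- One member table per month; the CHECK constraint on the partitioning
    -- column lets the optimizer touch only the months a query actually needs
    CREATE TABLE dbo.Capture_2005_01 (
        CaptureID   INT      NOT NULL,
        CaptureDate DATETIME NOT NULL
            CHECK (CaptureDate >= '20050101' AND CaptureDate < '20050201'),
        Payload     VARCHAR(2000) NULL,
        CONSTRAINT PK_Capture_2005_01 PRIMARY KEY (CaptureID, CaptureDate)
    ) ON FG_2005_01

    CREATE TABLE dbo.Capture_2005_02 (
        CaptureID   INT      NOT NULL,
        CaptureDate DATETIME NOT NULL
            CHECK (CaptureDate >= '20050201' AND CaptureDate < '20050301'),
        Payload     VARCHAR(2000) NULL,
        CONSTRAINT PK_Capture_2005_02 PRIMARY KEY (CaptureID, CaptureDate)
    ) ON FG_2005_02
    GO

    -- The partitioned view unions the member tables - add one SELECT per month
    CREATE VIEW dbo.Capture
    AS
    SELECT CaptureID, CaptureDate, Payload FROM dbo.Capture_2005_01
    UNION ALL
    SELECT CaptureID, CaptureDate, Payload FROM dbo.Capture_2005_02
    GO
    [/code]

    Old months can then be dropped out of the view and archived or backed up on their own without touching the current month's data.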

    I understand that a SAN can do snapshot backups (mine currently doesn't). However, whatever the method, large amounts of data take some time to back up - I'd always back up to disk first and then to tape; there are single tapes that will hold a terabyte or so.
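
    For the disk-then-tape approach it's just plain native backups - the database name is the same made-up CaptureDB as above and the paths are only examples:

    [code]
    -- Full backup straight to a local, multi-spindle backup array
    BACKUP DATABASE CaptureDB
        TO DISK = 'G:\Backup\CaptureDB_Full.bak'
        WITH INIT, STATS = 10

    -- Sanity-check the file before the tape software sweeps it off to tape
    RESTORE VERIFYONLY FROM DISK = 'G:\Backup\CaptureDB_Full.bak'
    [/code]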

    You might also want to consider SQL LiteSpeed, which compresses backups as it writes them and gets the whole thing done quicker; we're just about to put it in to solve our backup issues.

    Differential backups will be quicker and smaller, but they rely on a good full backup to start from and take longer to re-apply. Most SQL Server books include chapters on backups. For your backup disk arrays, use as many spindles as you can and go with RAID 10 or RAID 0 (RAID 5 puts a heavy overhead on writes) - RAID 0 only if you're going to move the backup to tape or copy it somewhere else immediately.
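
    Roughly like this, again with made-up names - a nightly differential on top of the weekly full, and both get re-applied at restore time:

    [code]
    -- Differential: only the extents changed since the last FULL backup,
    -- so it's smaller and quicker to take, but useless without that full
    BACKUP DATABASE CaptureDB
        TO DISK = 'G:\Backup\CaptureDB_Diff.bak'
        WITH DIFFERENTIAL, INIT, STATS = 10

    -- Restoring means applying both: the full first (left NORECOVERY), then the diff
    RESTORE DATABASE CaptureDB FROM DISK = 'G:\Backup\CaptureDB_Full.bak' WITH NORECOVERY
    RESTORE DATABASE CaptureDB FROM DISK = 'G:\Backup\CaptureDB_Diff.bak' WITH RECOVERY
    [/code]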

    [font="Comic Sans MS"]The GrumpyOldDBA[/font]
    www.grumpyolddba.co.uk
    http://sqlblogcasts.com/blogs/grumpyolddba/

  • If you expect your database to grow at that speed, I would spend some money on a professional consultant with proven experience in VLDB design. I wouldn't rely on online communities, as they don't know your environment.

    --
    Frank Kalis
    Microsoft SQL Server MVP
    Webmaster: http://www.insidesql.org/blogs
    My blog: http://www.insidesql.org/blogs/frankkalis/

  • Growth of 100 GB a month????

    What are you trying to do - copy the complete Internet?

  • I've heard a rumor that Microsoft is creating a search engine to compete with Google...

  • Colin, thanks a lot for the inputs. I'd like to know more about SQL LiteSpeed - can you please send me a link for it?

    Also, please let us know why RAID 5 is not advised; we were actually about to take that approach!

    Thanks and Regards

    Arun

  • http://www.dbassociatesit.com/

    I've only worked with a data warehouse of up to 3/4 of a terabyte of storage; I had that arranged in 7 x RAID 10 arrays.

    RAID 5 has an added overhead for the parity write on every logical write - Inside SQL Server 7.0/2000 and the MS Press SQL Server admin manual both have lots of info on disks and storage. There are also a couple of MS books on SQL Server 2000 high availability that may help.
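
    As rough, illustrative arithmetic only (the real numbers depend entirely on your disks, cache and controller):

    [code]
    -- RAID 5 : each logical write = read data + read parity + write data + write parity = 4 physical I/Os
    -- RAID 10: each logical write = 2 physical I/Os (one per side of the mirror)
    --
    -- e.g. 10 spindles at ~150 random I/Os per second each = ~1,500 physical I/Os per second
    --   RAID 5  : ~1,500 / 4 = ~375 writes per second
    --   RAID 10 : ~1,500 / 2 = ~750 writes per second
    [/code]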

    [font="Comic Sans MS"]The GrumpyOldDBA[/font]
    www.grumpyolddba.co.uk
    http://sqlblogcasts.com/blogs/grumpyolddba/
