Disk config

  • Hello

    I'm getting a new server 🙂 It has 5 x 73GB hard drives

    I originally asked for it to have 1 RAID1 (73GB) for Windows and LDFs, and 1 RAID5 (146GB) for MDFs. But my boss is pushing me to change this and make one single huge RAID5 using all 5 disks, then create the 3 necessary partitions.

    He claims this will equal the performance and security of the configuration I'm asking for, and says that my proposal to have a 57GB partition for LDFs is a waste of space, on top of the disk wasted by the RAID1 setup.

    He also says the RAID1 could not be hot-swapped in case of disaster, while RAID5 can be. I'm not sure about this and don't know what to respond here; the server is a Dell 2800 series.

    What do you think about this? What are the pros and cons?

    Is my boss right? (No boss is always right 🙂)

    Thanks

  • Your boss is right that RAID1 cannot be hot-swapped unless it is implemented in hardware. Economically speaking, your boss is also correct to make one single RAID5, because RAID1 is very expensive in that for each drive you need a mirror.

  • Thanks for the reply

    The server has its own RAID controller (PERC4ei), so the hot swap can be done for RAID1... right?

    Could you explain to me a bit how RAID5 can be hot-swapped while RAID1 can't?

  • RAID 1 can certainly be hot-swapped if it is a hardware implementation (which in your situation it is). If you have only got the five disks, then really you should set it up as RAID 5 across 4 disks and leave the 5th as a hot spare, so that in case of a failure the array is immediately rebuilt onto the spare - otherwise you will be running in a very dangerous state while you are waiting for the failed drive to be RMA'd. Even if you have Dell's 4-hour response with a new drive, that is still 4 hours during which your RAID 5 could potentially fail completely.

    Of course, with the 2x4 split backplane of the 2800 you could do a two-disk RAID 1 on one channel and RAID 5 on the other channel to give you better throughput - you would still need the hot spare on the RAID 5, and technically you could do with a hot spare for the RAID 1 as well - but then I guess it really depends on your budget and uptime requirements!
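
    To put rough numbers on the trade-off between the layouts discussed here, a quick sketch using the 5 x 73GB drives from the original post. Capacities are nominal (no formatting overhead), and the layouts are just the options mentioned in this thread.

    ```python
    # Rough comparison of the layouts discussed above, using 5 x 73GB drives.
    # Nominal capacities only; real usable space will be a bit lower.

    DRIVE_GB = 73

    def raid1_capacity(drives):
        return DRIVE_GB * drives // 2      # half the drives hold mirror copies

    def raid5_capacity(drives):
        return DRIVE_GB * (drives - 1)     # one drive's worth of space goes to parity

    layouts = {
        "Single RAID5 (5 drives)":      raid5_capacity(5),                      # 292GB, survives 1 failure
        "RAID1 (2) + RAID5 (3)":        raid1_capacity(2) + raid5_capacity(3),  # 73GB + 146GB = 219GB
        "RAID5 (4 drives) + hot spare": raid5_capacity(4),                      # 219GB, rebuilds onto the spare
    }

    for name, gb in layouts.items():
        print(f"{name:30s} {gb} GB usable")
    ```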

  • With so few disks there's no real way to configure an effective setup.

    Much as I hate RAID 5, I'd have to agree with your boss, but only because you have so few spindles.

    The GrumpyOldDBA
    www.grumpyolddba.co.uk
    http://sqlblogcasts.com/blogs/grumpyolddba/

  • You really need to look at your needs and what you have to work with. I have a RAID5 for my data, a RAID1 for my OS, and a RAID5 for my log files - but I have plenty to work with: 3 PowerVaults. How big is the data you are working with, and how much do you plan for it to grow? The best solution would be to have a RAID5 for your data and a separate RAID5 for your logs, but I believe you need at least 3 drives to create a RAID5. If you don't have the resources to build 2 RAID5s and 1 RAID1 (one for data, one for log files, and the OS on a separate drive), then I suggest you build 1 mirrored drive for your OS and 1 RAID5 for your data and logs.

  • Your boss is wrong on every point.  If this server will have OLAP databases, a good strategy would be to add a sixth drive, use a RAID 1 pair for executables and tran logs, and create a RAID 10 device with the other four drives for the data files.  Perhaps you can spend the same amount by using 36GB drives rather than 73GB drives (definitely use 15Krpm drives). 



    --Jonathan

  • Will you have a separate drive(s) for the OS, or is the OS going to be on one of the 5 drives? Is the OS mirrored? How big is the data you will be putting on there? What kind of server is it: file, SQL, both, or other? It all depends on your needs and the size of the data you are working with - how big the SQL DBs are, if any, and how big the other files will be as well. It does sound like you will want to go with RAID5, not because it is the best, but because you don't have much to work with and it will give you the most space.

    Best scenario is:

    1 - Mirrored drives for the OS, AND

    If SQL Server:

    1 - RAID5 for *.mdf files

    1 - RAID5 for *.ldf files

    Else:

    1 - RAID5 (if a file server or anything other than a DB server)

  • Unfortunately I can't buy any more disks (budget), and the server is already here (I can't change the HW specs anymore). With such a small number of disks I think RAID10 + spare won't be a better option than RAID1 + RAID5 (3 "wasted" disks; my boss would kill me).

    The disks are 73GB 10K RPM Ultra320. The (only) SCSI/RAID controller is an integrated PERC4ei with a 1x8 hot-plug backplane - and BTW, I don't know what the difference is between 1x8 and 2x4.

    The server will be busy with many DBs of medium activity, not a single heavily loaded one.

    The server will be entirely a SQL Server and Analysis Services box, and will be running DTS (mainly for data warehousing).

    Could you tell me something about the difference between the 1x8 and 2x4 backplanes?

  • Given these constraints, you will most likely have the best compromise of performance, disaster recoverability, and capacity by using your idea of a RAID 1 pair for logs and executables and a three-drive RAID 5 for the data.  RAID 1 is the best performing choice for log files, as striping doesn't help sequential writes.

    Attempting to optimize capacity on a SQL server using 73GB drives guarantees that you will be creating an i/o bottleneck.  If you actually have 100GB of data (three of these drives in a RAID 5 provides 146GB), the correct number of drives to use is in the dozens.
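
    As a rough sketch of that sizing argument (the per-drive IOPS figure and the workload below are illustrative assumptions, not numbers from this thread):

    ```python
    # Back-of-the-envelope spindle count driven by I/O rather than capacity.
    # Assumptions: ~125 random IOPS per 10K RPM drive, RAID 5 write penalty of 4
    # (each logical write = read data + read parity + write data + write parity).

    IOPS_PER_DRIVE = 125
    RAID5_WRITE_PENALTY = 4

    def drives_needed(read_iops, write_iops):
        backend_iops = read_iops + write_iops * RAID5_WRITE_PENALTY
        return -(-backend_iops // IOPS_PER_DRIVE)   # ceiling division

    # A hypothetical workload of 800 random reads/s and 200 writes/s:
    print(drives_needed(800, 200))   # 13 spindles -- sizing by capacity alone would say 3
    ```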

    You also need not worry about single vs. split backplanes.  It would take more than twenty 10Krpm drives to saturate an Ultra320 SCSI channel with random i/o.  A 1x8 backplane means you can plug up to eight drives into a single SCSI channel; a 2x4 backplane allows you to plug up to four drives into each of two channels.  If you had two intelligent array controllers with battery-backed cache, you could get slightly better performance with a split backplane by configuring one controller for 100% write caching and using that with the log file array, and configuring the other controller for, say, 50% read/50% write cache and using that with the data file array. 
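
    A quick back-of-the-envelope check of the channel-saturation point (the per-drive random-I/O throughput is an assumed figure, not a measurement):

    ```python
    # Why a single Ultra320 channel is rarely the bottleneck under random I/O.

    CHANNEL_MB_S = 320       # Ultra320 SCSI bus bandwidth
    PER_DRIVE_MB_S = 15      # assumed random-I/O throughput of one 10K RPM drive

    print(CHANNEL_MB_S / PER_DRIVE_MB_S)   # ~21 drives before the channel fills up
    ```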



    --Jonathan

  • Yeah, he's back again!!!

     

    ...throwing in something pseudo-thread relevant...

    Have a look here for RAID5: http://www.baarf.com/

    --
    Frank Kalis
    Microsoft SQL Server MVP
    Webmaster: http://www.insidesql.org/blogs
    My blog: http://www.insidesql.org/blogs/frankkalis/

  • Thanks, Frank.    I hadn't seen that Baarf site, but I just signed up as a member.

    Now that it's nearly impossible to find drives with less than 36GB, many SQL servers are incredibly inefficient.  If they're not disk-bound from configuring the number of drives based on database size, they're often hijacked to do double-duty as file or application servers once someone discovers all the excess space. 



    --Jonathan

  • Yes, it's an interesting link; I first noticed it in a posting by Gert Drapers.

    Jonathan, is there a general recommendation as to what is considered the "best" block size? 4K? 8K? Or something else?

    --
    Frank Kalis
    Microsoft SQL Server MVP
    Webmaster: http://www.insidesql.org/blogs
    My blog: http://www.insidesql.org/blogs/frankkalis/

  • As you put the word in quotes, you know that the correct answer is "it depends."  64K is usually a much better starting point than 4K, though.  Just be aware that, if you shrink your files (or use autogrowth), you'll need a third-party disk defrag utility unless you're using Windows Server 2003.
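
    The usual reasoning behind 64K: SQL Server stores data in 8KB pages that are allocated in extents of 8 pages, so a 64K allocation unit (or stripe size) lines up with one extent. A tiny sketch of the arithmetic:

    ```python
    # One SQL Server extent = 8 pages x 8KB = 64KB.
    PAGE_KB = 8
    PAGES_PER_EXTENT = 8

    extent_kb = PAGE_KB * PAGES_PER_EXTENT
    print(extent_kb)          # 64 -- one extent maps to a single 64K unit

    # With a 4K allocation unit, the same extent spans 16 units:
    print(extent_kb // 4)     # 16
    ```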



    --Jonathan

  • Thanks! I was looking for 64 in your answer.

    --
    Frank Kalis
    Microsoft SQL Server MVP
    Webmaster: http://www.insidesql.org/blogs
    My blog: http://www.insidesql.org/blogs/frankkalis/
