RAID and Its Impact on Your SQL Performance

  • btd (5/1/2012)


    I do agree it is a well put together document, BUT I wish we could get away from the interpretation of RAID as 'inexpensive discs'... The 'I' certainly did not mean inexpensive discs when RAID first came out way back when... 'Independent'... Why does the computer world keep changing things when it does not need to? We have plenty of change without it...

    I agree with you to some extent about these changes. We don't know who established the "Independent"/"Inexpensive" naming, but it should be something that stays standard.

    and as for 'tables' instead of files.........

    This has to do with the flat file/relational models, being different models they have different names:

    Flat--->Relational

    File--->Table

    Record--->Row

    Field--->Column

    "El" Jerry.


    "A watt of Ottawa" - Gerardo Galvan

    To better understand your help request, please follow these best practices.

  • jfogel (5/1/2012)


    I don't see ever getting on board with having a production database running on a VM. Of course, things change but I cringe at the thought of this.

    You're missing out. Modern hypervisors have a tiny percentage of penalty. Even a 1vm/1physical machine makes sense in many cases, especially for DR.

    There are lots of SQL Servers in most environments that run fine on VMs. Not necessarily the most loaded, but any that are averaging < 50% CPU with rare peaks, should be virtualized. The SQLServerCentral servers (clustered) are both virtualized. I have friends at various companies, including a large media company with the vast majority of their SQL Servers on VMs. There are a few heavily used ones on physical hosts, but it's rare.

  • jfogel (5/1/2012)


    I don't see ever getting on board with having a production database running on a VM. Of course, things change but I cringe at the thought of this.

    As long as you're careful with watching load, it works very well, unless you have very high peak performance requirements. It also makes hardware and SQL version upgrades very easy to do side-by-side, without buying additional hardware for swapping.

    In the spirit of this article, however, if you replace dedicated 20 15k RPM disk RAID 10 data RAIDsets plus dedicated 6 15k RPM disk RAID 5 log disks with a single 10 7.2k RPM disk RAID 5 RAIDset that's in use by 15 other VM's at the same time, yes, you're fairly likely to see some very significant performance degradation. This is obvious; you dropped your max IOPS capability as well as your max throughput capability, _and_ you're sharing it to multiple machines (just like the same SAN spindles used for multiple LUNs, or using your SQL data drive as a file share).
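    The IOPS drop in that comparison is easy to put rough numbers on. Below is a sketch using common rules of thumb (around 180 IOPS per 15k RPM disk, around 75 per 7.2k RPM disk, a write penalty of 2 for RAID 10 and 4 for RAID 5); these are illustrative assumptions, not measured figures for any particular hardware.

```python
# Rough functional-IOPS sketch for the two layouts described above.
# Per-disk IOPS figures and write penalties are common rules of thumb,
# not measurements.

WRITE_PENALTY = {"RAID10": 2, "RAID5": 4}

def functional_iops(disks, per_disk_iops, level, read_pct):
    """Blended IOPS an array can sustain at a given read/write mix."""
    raw = disks * per_disk_iops
    penalty = WRITE_PENALTY[level]
    write_pct = 1.0 - read_pct
    # Each logical write costs `penalty` back-end I/Os.
    return raw / (read_pct + write_pct * penalty)

# Dedicated data array: 20 x 15k RPM disks in RAID 10, assuming 70% reads.
dedicated = functional_iops(20, 180, "RAID10", 0.70)

# Shared array: 10 x 7.2k RPM disks in RAID 5, same mix,
# split across 16 VMs (15 others plus this one).
shared = functional_iops(10, 75, "RAID5", 0.70) / 16

print(f"dedicated data array: ~{dedicated:.0f} IOPS")
print(f"shared array, per VM: ~{shared:.0f} IOPS")
```

    The two orders of magnitude between those figures are the "very significant performance degradation" described above.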

  • On RAID. I've heard inexpensive going back to the early 90's. Remember that back in the day 8x100 cost less than 1x800.

  • Gregory, great article

    But I have a question about read performance for RAID 5 and RAID 10. In your table you showed that RAID 5 scores Good and RAID 10 scores Excellent for read. On the other hand, Kendal Van Dyke in his tests http://www.kendalvandyke.com/2009/02/disk-performance-hands-on-part-5-raid.html, shows that RAID 5 always outperforms RAID 10. I know that Kendal’s article is 3 years old but could you please tell me why you score RAID 10 higher than RAID 5 on read performance?

  • I haven't read all of the replies to this article yet, but I appreciated its detail on the RAID configurations, their penalties, and the calculation for the IOPS.

    The only other thing (which would be too hard to narrow down, I understand) would be the array controller. BBU? No BBU? Write cache configuration, memory on the controller, levels of RAID supported, etc.

  • Steve Jones - SSC Editor (5/1/2012)


    jfogel (5/1/2012)


    I don't see ever getting on board with having a production database running on a VM. Of course, things change but I cringe at the thought of this.

    You're missing out. Modern hypervisors have a tiny percentage of penalty. Even a 1vm/1physical machine makes sense in many cases, especially for DR.

    There are lots of SQL Servers in most environments that run fine on VMs. Not necessarily the most loaded, but any that are averaging < 50% CPU with rare peaks, should be virtualized. The SQLServerCentral servers (clustered) are both virtualized. I have friends at various companies, including a large media company with the vast majority of their SQL Servers on VMs. There are a few heavily used ones on physical hosts, but it's rare.

    I do use VMs so it isn't all lost on me but when it comes to a prod machine I still favor a box with a single task. I don't get much of a budget though and this tends to restrict my comfort level when it comes to stepping outside of what I'm comfortable with.

    Cheers

  • jfogel (5/1/2012)


    I don't see ever getting on board with having a production database running on a VM. Of course, things change but I cringe at the thought of this.

    It can work quite well under certain circumstances. The key is to make sure the hardware is capable of handling all the strain put on it. Of course it does not make sense to virtualize if the machine is barely capable of handling the workload for SQL Server itself, but that is not always the case on powerful machines with seldom-used databases.

    ---
    Timothy A Wiseman
    SQL Blog: http://timothyawiseman.wordpress.com/

    I haven't read all of the replies to this article yet, but I appreciated its detail on the RAID configurations, their penalties, and the calculation for the IOPS.

    The only other thing (which would be too hard to narrow down, I understand) would be the array controller. BBU? No BBU? Write cache configuration, memory on the controller, levels of RAID supported, etc.

    Good question...

    Typically, you want BBU Cache and as much of it as you can get. Caching of WRITES reduces the Write Penalty you pay for any RAID Level.

    For a SQL Server install, I've most often favored configuring the RAID controller for 100% write cache, as SQL Server itself is going to cache reads anyway.
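    As a rough illustration of how the write penalty (and a battery-backed write cache) feeds into spindle sizing, here is a sketch; the workload figures and the cache-absorption fraction are hypothetical, not properties of any specific controller.

```python
# Spindle-count sizing sketch using the RAID write penalty.
# Back-end IOPS = reads + writes * penalty; `write_cache_absorb`
# models the fraction of writes coalesced by a BBU write cache
# (a hypothetical figure for illustration).
import math

def spindles_needed(read_iops, write_iops, penalty, per_disk_iops,
                    write_cache_absorb=0.0):
    effective_writes = write_iops * (1.0 - write_cache_absorb)
    backend = read_iops + effective_writes * penalty
    return math.ceil(backend / per_disk_iops)

# Example: 2000 reads/s + 500 writes/s on RAID 5 (penalty 4), 180 IOPS/disk.
print(spindles_needed(2000, 500, 4, 180))        # no write cache
print(spindles_needed(2000, 500, 4, 180, 0.5))   # half of writes absorbed
```

    The point of the sketch is just that anything which reduces effective writes (cache) or the penalty (RAID level) directly reduces the spindle count you have to buy.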

    hope this helps.

    GAJ

    Gregory A Jackson MBA, CSM

  • Thanks to everyone for all the great responses.

    This turned out to be a very informative thread.

    Cheers,

    GAJ

    Gregory A Jackson MBA, CSM

  • There is of course another dimension to consider after you have created your LUNs and exposed them to your SQL box: the partition alignment and File Allocation Unit (cluster) size to be used in formatting the drive.

    Have a look at the following documentation on SQL Server 2008 for some insight: http://msdn.microsoft.com/en-us/library/dd758814.

    A Cluster size of 64KB is normally fine for use with SQL Server.
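    A quick way to sanity-check the alignment arithmetic: the partition starting offset should divide evenly by the stripe unit (commonly 64 KB or 1 MB). The offsets below are the well-known Windows defaults, used here as examples.

```python
# Alignment sanity-check sketch: a partition starting offset should be
# evenly divisible by the stripe unit. Offsets below are the historical
# Windows defaults, used as examples rather than read from a real disk.

KB = 1024

def is_aligned(offset_bytes, unit_bytes=64 * KB):
    return offset_bytes % unit_bytes == 0

# Windows Server 2008 defaults to a 1 MB (1,048,576-byte) starting
# offset, which is aligned; the older 31.5 KB (32,256-byte) offset
# from earlier Windows versions was not.
print(is_aligned(1_048_576))   # True
print(is_aligned(32_256))      # False
```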

  • Where would you store the tempDB Logs in the described "SQL Storage Recommendations"?

    On the same Raid as the tempDB data or on the Logs Raid?

  • I would store it on the logs RAID.

  • btd (5/1/2012)


    I do agree a well put together document BUT I wish we could get away from the interpretation of RAID. 'inexpensive discs'... The 'I' certainly did not mean inexpensive discs when RAID first came out way back when...'Independent' ...

    When I was working on IBM mainframes in the early 80s, the 3350 and 3380 DASD of the time were rather expensive. Various manufacturers produced arrays of cheaper SCSI disks (from the UNIX environment) presented as IBM DASD, much to the annoyance of IBM sales. RAID, in the sense of redundancy, subsequently came to these arrays.

  • I understand the concept of requiring fast write response for SQL log writes.

    However, what is the performance case for fast data writes? Provided you have enough buffers, does the lazy writer really require fast writes?

    Also, what is the performance case for fast backup writes, provided you can finish a backup before you need to start the next one?
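    The backup part of that question comes down to simple arithmetic: does the backup finish inside its window? A sketch with hypothetical sizes and throughput figures:

```python
# Backup-window sketch for the question above: does a backup finish
# before the next one is due? All figures are hypothetical examples.

def backup_hours(db_size_gb, write_mb_per_s):
    """Hours to write a non-compressed backup at a sustained rate."""
    return (db_size_gb * 1024) / write_mb_per_s / 3600

# 2 TB database, nightly backups (24-hour window):
slow = backup_hours(2048, 50)    # slow RAID 5 backup target
fast = backup_hours(2048, 300)   # faster striped target
print(f"at 50 MB/s:  {slow:.1f} h")
print(f"at 300 MB/s: {fast:.1f} h")
# If even the slower figure fits comfortably inside the window (and does
# not collide with other maintenance), faster backup writes buy little.
```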

Viewing 15 posts - 31 through 45 (of 95 total)
