Best Practice for: SSD Drives

  • I have planned a new server with this configuration:

    1- Drive C: tempdb

    2- Drive D: MDF

    3- Drive E: LDF

    4- Drive F: local backup

    (All drives come from a SAN.)

    ---------------------------------------------

    I can use a limited amount of SSD storage for my servers.

    What is the best practice for using the SSD:

    MDF, LDF, or tempdb?

  • Just wondering, where are the OS and SQL Server installed?

  • The OS and SQL Server (system databases and tempdb) are all on drive C.

  • MotivateMan1394 (6/1/2015)


    What is the best practice for using the SSD:

    MDF, LDF, or tempdb?

    Use it for whatever your system is hitting hardest with random I/O.

    Typically that is tempdb, but a problematic filegroup can be an even better fit for SSDs.
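
    A quick way to see the cumulative picture (a minimal sketch; the numbers reset at each service restart, so read them with that in mind):

    -- Cumulative I/O per database file since the last restart:
    -- files with the most activity and the worst stalls are SSD candidates.
    SELECT DB_NAME(vfs.database_id) AS database_name,
           mf.physical_name,
           vfs.num_of_reads,
           vfs.num_of_writes,
           vfs.io_stall_read_ms,
           vfs.io_stall_write_ms
    FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
    JOIN sys.master_files AS mf
      ON mf.database_id = vfs.database_id
     AND mf.file_id = vfs.file_id
    ORDER BY vfs.io_stall_read_ms + vfs.io_stall_write_ms DESC;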

    -- Gianluca Sartori

  • You mean drive C.

    Another thing: these servers are for an online site (a big online store).

    But is this type of drive reliable?

    And what about their longevity and durability when used for drive C or elsewhere, compared to typical SAN storage disks?

  • Tempdb on C drive is really a bad idea.

    The C drive is best formatted with 4 KB allocation units, while database files are best stored on partitions formatted with 64 KB allocation units.
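
    You can check what a volume is currently formatted with from an elevated command prompt (look at "Bytes Per Cluster"; D: here is just an example):

    fsutil fsinfo ntfsinfo D: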

    As far as reliability is concerned, make sure you use enterprise-grade disks and the appropriate RAID level (typically RAID 1).

    -- Gianluca Sartori

  • spaghettidba (6/1/2015)


    Use it for whatever your system is hitting hardest with random I/O.

    Typically that is tempdb, but a problematic filegroup can be an even better fit for SSDs.

    Yup, this exactly.

    There's no hard rule for where SSD helps the most. See which files are getting the heaviest I/O or the highest waits, and look at moving those to the SSD.
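
    For the waits side, something like this gives a rough instance-wide signal (a sketch; cumulative since the last restart, not tied to individual files):

    -- I/O-related waits: PAGEIOLATCH% = data file reads,
    -- WRITELOG = log flushes, IO_COMPLETION = other I/O.
    SELECT wait_type, waiting_tasks_count, wait_time_ms
    FROM sys.dm_os_wait_stats
    WHERE wait_type LIKE 'PAGEIOLATCH%'
       OR wait_type IN ('WRITELOG', 'IO_COMPLETION')
    ORDER BY wait_time_ms DESC;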

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • So there are actually two problems!

    These two things worried me.

    1-

    Tempdb on C drive is really a bad idea

    That means I should consider a separate drive for tempdb, formatted with 64 KB allocation units (a sketch of the move is at the end of this post).

    2- I should monitor my server in a different way.

    Which is the best way to diagnose the busiest file: MDF, LDF, or tempdb?
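
    A minimal sketch of the move (assuming the default logical file names tempdev and templog, and a hypothetical T: drive; adjust paths and add files as needed; the change takes effect after a service restart):

    -- Point the tempdb files at the new 64 KB-formatted drive.
    ALTER DATABASE tempdb
        MODIFY FILE (NAME = tempdev, FILENAME = 'T:\TempDB\tempdb.mdf');
    ALTER DATABASE tempdb
        MODIFY FILE (NAME = templog, FILENAME = 'T:\TempDB\templog.ldf');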

  • Depending on how you are set up (i.e., the number of databases and files), SQL Activity Monitor gives decent but coarse I/O numbers.

    At the database level, the standard report 'Object Execution Statistics' is useful down to the procedure level and may help you decide which files to put where.

    If you are more interested in the disk level, the Sysinternals DiskMon app is very good.

    hth

    Tom in Sacramento
    For better, quicker answers on T-SQL questions, click on the following: http://www.sqlservercentral.com/articles/Best+Practices/61537/

  • I would also add that you should be careful if you're going to use the SSD for a volume that does a lot of random writes and might be close to capacity, since you could end up running into the so-called "write cliff".

    It's not so common, since vendors typically do a decent job of safeguarding against it with their drives, but I've seen it happen a couple times where tempdb was moved to a couple SSDs, and for budgetary reasons the drives were sized a bit small (and probably were not designed with enough overprovisioning).

    IO performance was usually better than when tempdb had been on the spinning disks, but occasionally latencies would start to be measured in whole seconds, which is obviously undesirable to say the least.

    You can read a bit more about that here and here.

    You'll probably not have to worry about it, but it's good to be aware of the possibility.

    Cheers!

  • I would suggest using more drives for the MDF, depending on database size.

    You've said that it is a big online store... so the database might be bigger than 100 GB or 200 GB.

    So if you split up the data file, you also split up the I/O stream, which might improve performance for the website (see the sketch below).
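
    A minimal sketch (hypothetical database name MyStore and drive G:; adjust names, paths, and sizes):

    -- Add a second data file on another drive; SQL Server spreads new
    -- allocations across the files in the filegroup (proportional fill).
    ALTER DATABASE MyStore
        ADD FILE (
            NAME = MyStore_Data2,
            FILENAME = 'G:\Data\MyStore_Data2.ndf',
            SIZE = 50GB,
            FILEGROWTH = 5GB
        );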

    Please keep sharing advice here, because I might run into a similar server config.

  • Thank you all

    I should read these replies more carefully.

    Thank you again

  • Depending on how you are set up (i.e., the number of databases and files), SQL Activity Monitor gives decent but coarse I/O numbers.

    At the database level, the standard report 'Object Execution Statistics' is useful down to the procedure level and may help you decide which files to put where.

    If you are more interested in the disk level, the Sysinternals DiskMon app is very good.

    1- I use Activity Monitor. It shows all files and their loads. But can I get a report covering, for example, the last 10 days from this tool?

    2- I ran the database-level standard report 'Object Execution Statistics' and found 3 queries with huge read, write, and CPU usage.

    - Can I get a report showing which disk has the most pressure from running these queries? (Which disk or which files: tempdb, MDF, or LDF?)

    3- I ran DiskMon. I want to run it for 2 or 3 hours, but afterwards I couldn't find any report from this app.

    Thank you

  • If you want to get an IO profile of your workload, one thing you could do is set up a job to pull the information from sys.dm_io_virtual_file_stats on a regular interval (I usually default to every 5 minutes and adjust the interval downwards if I need a more granular breakdown).

    The stats there are cumulative, so the basic approach is to grab a snapshot from that view at each interval and then diff consecutive snapshots to get the per-interval numbers.

    Then you'll be able to analyze that data as you see fit. For example, you can tell which files have the highest overall reads, overall writes, the highest peak reads, peak writes, etc. That information can be helpful in determining which files to put on what tiers of storage.
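
    A minimal sketch of the collection step (hypothetical table name dbo.IoStatsSnapshot; schedule the INSERT from an Agent job and keep the delta/analysis queries separate):

    -- One-time setup: a table to hold timestamped snapshots.
    IF OBJECT_ID('dbo.IoStatsSnapshot') IS NULL
        CREATE TABLE dbo.IoStatsSnapshot (
            capture_time      datetime2 NOT NULL DEFAULT SYSDATETIME(),
            database_id       int       NOT NULL,
            file_id           int       NOT NULL,
            num_of_reads      bigint    NOT NULL,
            num_of_writes     bigint    NOT NULL,
            io_stall_read_ms  bigint    NOT NULL,
            io_stall_write_ms bigint    NOT NULL
        );

    -- The job step: append the current cumulative stats.
    INSERT INTO dbo.IoStatsSnapshot
        (database_id, file_id, num_of_reads, num_of_writes,
         io_stall_read_ms, io_stall_write_ms)
    SELECT database_id, file_id, num_of_reads, num_of_writes,
           io_stall_read_ms, io_stall_write_ms
    FROM sys.dm_io_virtual_file_stats(NULL, NULL);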

    Just be careful to collect the information long enough to have a decently representative sample of your workload. If you monitor for a few days, but miss out on a weekly report that hits one file especially hard, that could lead to mistaken distribution of the files.

    A good write-up about doing this sort of thing with virtual file stats is this article: http://sqlperformance.com/2013/10/t-sql-queries/io-latency.

    I hope this helps!

    Cheers!

