Configure tempdb on new drives

  • Still new to the DBA world, I could use some advice on configuring tempdb.

    We will be adding a new array of 14 x 300 GB drives alongside the existing array of 14 x 73 GB drives.

    The plan is (or was) to configure the 14 x 300 GB drives into 3 RAID5 partitions, move all the data and logs there, then configure the 14 x 73 GB drives into 3 RAID10 partitions and move the logs back onto those.

    Ending up with (if I did the math right):

    RAID5 (Data) -- |900 GB|900 GB|1500 GB|

    RAID10 (Logs) -- |146 GB|146 GB|219 GB|
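
    (If you're checking the math: RAID5 gives (n - 1) x drive size and RAID10 gives (n / 2) x drive size, so both arrays break down as 4 + 4 + 6 drives, e.g. 6 x 300 GB in RAID5 = 5 x 300 = 1500 GB, and 6 x 73 GB in RAID10 = 3 x 73 = 219 GB.)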

    This is not set in stone.

    After watching how this server is used (it's a heavy processing box), I would move 4 large databases (about 350 GB total) to the 1500 GB partition, tempdb onto one of the 900 GB partitions, and the rest (about 150 GB) onto the last. The logs would go onto their respective RAID10 drives.

    After researching a little more, I read a best practice of creating one tempdb data file for each processor. This server has 4 processors and 12 GB of RAM. So now I'm wondering whether it would be better to have 3 tempdb data files (2 of which would share space with user databases) or to create 2 RAID5 partitions (more spindles) and just separate the user databases from one tempdb data file. Then, in either case, is just one tempdb log okay?
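
    For reference, adding the extra data files later looks to be just one ALTER DATABASE per file; a rough sketch, with made-up logical names, paths, and sizes:

        -- One data file per additional CPU; the usual advice is to size them
        -- all the same so proportional fill spreads allocations evenly.
        ALTER DATABASE tempdb
        ADD FILE (NAME = tempdev2, FILENAME = 'T:\tempdb\tempdb2.ndf',
                  SIZE = 10GB, FILEGROWTH = 1GB);

        ALTER DATABASE tempdb
        ADD FILE (NAME = tempdev3, FILENAME = 'T:\tempdb\tempdb3.ndf',
                  SIZE = 10GB, FILEGROWTH = 1GB);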

    I haven't shrunk tempdb in a while and it's currently at about 44 GB. This is a production box, so I don't have the luxury of stress tests.
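
    A quick way to see the current file sizes (SQL Server 2005 syntax):

        -- Current tempdb files; size is stored in 8 KB pages, so /128 gives MB.
        SELECT name, physical_name, size / 128 AS size_mb
        FROM tempdb.sys.database_files;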

    Any thoughts and advice are welcome.

    Thank you.

    If there is an option, I would prefer tempdb on RAID 10. See the reasons to create more than one file in the following articles...

    http://support.microsoft.com/kb/328551

    http://www.microsoft.com/technet/prodtechnol/sql/bestpractice/tempdb_capacity_planning_index.mspx

    http://blogs.msdn.com/sqlcat/archive/2005/11/21/495440.aspx

    MohammedU
    Microsoft SQL Server MVP

  • Thanks for the links, Mohammed.

    Would there be more contention if I put tempdb.mdf and templog.ldf on the same RAID 10 partition, or does that just not make sense? I'm still struggling with the idea of having multiple tempdb data files on the same drives as the user databases.

    I guess it's simple enough to change, if the first attempt is less than optimal.
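
    (It's just a MODIFY FILE per file, with the new paths taking effect at the next service restart. tempdev and templog are the default logical names; the paths below are examples:)

        -- tempdb is recreated at the new locations on the next SQL Server restart.
        ALTER DATABASE tempdb
        MODIFY FILE (NAME = tempdev, FILENAME = 'T:\tempdb\tempdb.mdf');

        ALTER DATABASE tempdb
        MODIFY FILE (NAME = templog, FILENAME = 'L:\tempdb\templog.ldf');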

  • Regan;

    I agree with you. I believe, given limited resources, it is better to isolate a single tempdb data file on its own drives than to have multiple tempdb data files sharing spindles with the user data files.

    Also, I would not make any of the drive arrays RAID5. Assuming a normal OLTP database, and even just for data files, write performance is horrible compared to RAID10 or RAID1: each small random write on RAID5 turns into two reads (old data, old parity) and two writes (new data, new parity), where RAID10 simply writes the block to both mirrors. RAID5 is only acceptable for less frequently updated databases or database partitions (archived data, database copies dedicated to reporting, and maybe some less frequently updated OLAP databases).

    Hope this helps



    Mark

  • Yes, that does help.

    Although I hate losing the space, I'm thinking of using 4 of the 300 GB drives for a RAID10 and putting tempdb there. The templog will be on another RAID10 on a different controller.

    Looking at I/O Read Bytes and I/O Write Bytes in Task Manager, I'm seeing a lot more reads than writes. So, given my concerns about space, the user databases will probably go on RAID5.
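
    For per-file numbers rather than the Task Manager totals, I can pull cumulative file I/O from a DMF (SQL Server 2005), something like:

        -- Cumulative reads/writes per database file since the last restart.
        SELECT DB_NAME(database_id) AS database_name, file_id,
               num_of_reads, num_of_bytes_read,
               num_of_writes, num_of_bytes_written
        FROM sys.dm_io_virtual_file_stats(NULL, NULL)
        ORDER BY num_of_bytes_read DESC;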

    Thanks for your suggestions, Mark.

  • While I've rarely had performance issues with RAID 5, there are definitely good arguments to use RAID 10 instead if you can afford the space.

    If possible, try to get an idea of the relative use of the system between databases and then balance out the spread. In other words, if 3 dbs rarely have updates, I might drop their logs onto the same R10 array as the data and leave the other db's logs on its own R10 array. You want to keep the drive heads on the log arrays from seeking back and forth too much.
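
    If you do end up moving a user database's log between arrays later, it's the standard offline move (database name, logical file name, and paths below are placeholders):

        ALTER DATABASE SalesDb SET OFFLINE;
        -- Copy SalesDb_log.ldf to the new array at the OS level, then:
        ALTER DATABASE SalesDb
        MODIFY FILE (NAME = SalesDb_log, FILENAME = 'F:\Logs\SalesDb_log.ldf');
        ALTER DATABASE SalesDb SET ONLINE;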

  • Thanks for the reply, Steve. I think I have a pretty fair idea of how to set this up. Now if only we could get a few minutes downtime to plug them in...
