installation location for mdf, ldf, backup files

  • Bah; many defragmentation products, both OS/filesystem level and SAN level, will defrag database files which are in use, while they're still being used.

    NOTE: the very act of defragging consumes a significant amount of disk I/O; I don't recommend running it during your busy periods, or during windows with tight SLAs.

    Personally, I like Piriform's Defraggler; it does a fairly good job, allows you to select only certain files (the registry hives, or perhaps that one .LDF/.MDF that was shrunk and grown and shrunk and grown until it's in thousands of fragments), or the whole disk.

    More importantly, the "Stop" functionality works very well; start it at night, stop it in the morning, restart the next night, repeat until you're in a good state.

  • Yeah, but that's exactly what makes me wonder...

    If a defrag tool defragments my .mdf file from, let's say, 10 pieces back to 1 piece,

    then I wonder how that affects the internals of my .mdf file.

    Won't my SQL data pages get really fragmented as a result of this "file" defrag action?

  • Sjeef (2/16/2011)


    Won't my SQL data pages get really fragmented as a result of this "file" defrag action?

    Logical defragmentation is not the same thing as physical defragmentation. SQL Server doesn't have a way of telling you if there is physical fragmentation on the disk, but can be adversely affected by physical disk fragmentation. Usually datafile fragmentation is created by Auto Shrink / Auto Grow or continuous manual shrinking of database files in conjunction with the Auto Grow (or manual growth) of a database.

    As far as your data pages go... Pages are logical chunks of information grouped together in extents. These are not physical structures, just virtual representations of your data and how the data is ordered. Logical fragmentation usually involves gaps in the pages from Inserts, Updates, and Deletes. Pages are stored within the physical data file, which cannot be reordered by a disk drive defrag process.

    Hard drives spread files out across physical disk sectors, which makes data files in uncontiguous sectors physically fragmented. But a data file can be on 50 contiguous sectors and the database can still be fragmented (logically) because of the way the indexes have been "scattered" across extents and data pages.

    The reverse, though, usually doesn't hold: chances are, if your hard drive is fragmented physically, then your database is also fragmented logically (this is not a 100% universal truth). The way to check for database fragmentation (both before and after a hard drive defrag) is to run DBCC SHOWCONTIG (a deprecated feature) or sys.dm_db_index_physical_stats (the recommended replacement).
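
    The DMV check mentioned above can be sketched like this (the database name is a placeholder, and the LIMITED scan mode is an assumption; use SAMPLED or DETAILED for more accuracy at a higher I/O cost):

    ```sql
    -- Logical fragmentation per index, via the recommended DMV.
    -- Replace MyDatabase with your own database name.
    SELECT OBJECT_NAME(ips.object_id) AS table_name,
           i.name                     AS index_name,
           ips.index_type_desc,
           ips.avg_fragmentation_in_percent,
           ips.page_count
    FROM sys.dm_db_index_physical_stats(DB_ID(N'MyDatabase'), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i
      ON i.object_id = ips.object_id
     AND i.index_id  = ips.index_id
    ORDER BY ips.avg_fragmentation_in_percent DESC;
    ```

    Run it once before the disk defrag and once after; if avg_fragmentation_in_percent is unchanged, the file-level defrag didn't touch your logical (index) fragmentation.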

    Here's a good article on database fragmentation: http://www.sql-server-performance.com/articles/per/detect_fragmentation_sql2000_sql2005_p1.aspx

    And another article on physical fragmentation from MSSQLTips: http://www.mssqltips.com/tip.asp?tip=1481

    I hope that helps.

    Brandie Tarvin, MCITP Database Administrator
    LiveJournal Blog: http://brandietarvin.livejournal.com/
    On LinkedIn!, Google+, and Twitter.
    Freelance Writer: Shadowrun. Latchkeys: Nevermore, Latchkeys: The Bootleg War, and Latchkeys: Roscoes in the Night are now available on Nook and Kindle.

  • I should point out that some people don't recommend defragging SAN drives (for some very interesting reasons). Check out this link before doing any defragging:

    http://www.las-solanas.com/storage_virtualization/san_volume_defragmentation.php

    EDIT: Sjeef, I just realized you're not the OP on this thread and you've derailed it with a different question. In the future, please post new questions as new threads instead of hijacking someone else's thread.


  • These are rule-of-thumb recommendations I followed from a SQL Server analyst we hired to help with a very large, high-performance database we implemented.

    The OS should be on its own drive. For performance, make sure the OS swap file, pagefile.sys, is at least 1.5x the size of the machine's memory. The system databases master, model, and msdb can live on the OS drive, as they are small and not used much, but they should be backed up regularly or you could lose config info. tempdb should be split out, with the .mdf on one drive and the .ldf on a separate drive. Likewise, for each user database, split the .mdf and .ldf onto separate drives. Backups should be written to another large drive; one large drive can hold the backups for all databases.

    As far as sizing goes, all databases, including tempdb, should have space pre-allocated (set the initial size) to at least the expected 2-year growth size. This pre-allocates space in large contiguous chunks on disk to minimize file fragmentation and autogrow lag. Autogrow should be set to grow in distinct chunks, not by percentage. Growth size should be about 20% of the file, so for a 50GB .mdf file, autogrow should be set to 10GB; likewise for the .ldf file, with a minimum of 1GB.

    tempdb size should be pre-allocated to about 10% of the size of all user databases combined. This one might be a bit extreme; hopefully you keep your transactions smaller than that and use simple recovery, do full backups, or back up your log files often. For all databases, including tempdb, the pre-allocated .ldf log file size should be 10% to 20% of the .mdf data file. Finally, the physical disk should be double the pre-allocated database file size, or at least large enough that you won't have to deal with growth concerns for a couple of years. I also reorg the indexes weekly.
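
    The pre-allocation and fixed-chunk autogrow settings described above can be sketched in T-SQL like this (the database name, logical file names, and the 50GB/10GB figures are illustrative only; check your actual logical file names in sys.database_files first):

    ```sql
    -- Pre-allocate data and log files and set fixed-size (not percentage) autogrow.
    -- MyBigDB, MyBigDB_data, and MyBigDB_log are example names.
    ALTER DATABASE MyBigDB
        MODIFY FILE (NAME = MyBigDB_data, SIZE = 50GB, FILEGROWTH = 10GB);

    ALTER DATABASE MyBigDB
        MODIFY FILE (NAME = MyBigDB_log, SIZE = 10GB, FILEGROWTH = 1GB);
    ```

    Growing the file in one large chunk up front, rather than in many small percentage-based increments, is what keeps the file in a few contiguous allocations and avoids repeated autogrow stalls.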

