Estimation of mdf and ldf files in SQL Server 2005

  • Hi,

    It's an interview question: if my data size is 100 GB, how will I estimate the mdf and ldf file sizes while creating the database? What would the sizes of the mdf and ldf be when creating a database for 100 GB of data?

    Is there any rule of thumb for this question?

    Thanks in advance.

  • 75 GB ldf; 25 GB mdf

    The log file is 3/4 of the total GB.

    @vjy@

  • Hi,

    There are a few things to decide before creating the database. If the log is not that important to you, you can put your DB in the simple recovery model, assign about 5 GB (restricted growth) to the .ldf, and give the .mdf a default size of 90 to 95 GB with 10% autogrowth.

    If you want point-in-time recovery, then you have to think about the recovery model; it should be full recovery. Keep your .ldf at 5 GB (10% autogrowth), then schedule regular log backups (every 15 minutes) plus a daily full backup; the log backups will release log space. You also have to think about a few other cases, like index fragmentation.

    If you create a database with a default size of 100 GB, that 100 GB of space will be available only to your database; other resources will not use the allocated space on the drive. We also have to consider how many transactions are happening and whether there is any heavy tempdb workload on the server from your database. These things also come into the picture.

    Regards

    Raju
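
    As a rough sketch of the simple-recovery option described above (the database name, file paths, and sizes here are illustrative placeholders, not values from the thread), the setup in SQL Server 2005 T-SQL might look like:

    ```sql
    -- Hypothetical sizing sketch: ~95 GB data file, 5 GB restricted-growth log,
    -- simple recovery model. All names and paths are placeholders.
    CREATE DATABASE SalesDB
    ON PRIMARY
    (
        NAME = SalesDB_data,
        FILENAME = 'D:\Data\SalesDB.mdf',
        SIZE = 95GB,
        FILEGROWTH = 10%      -- percentage autogrowth, as suggested above
    )
    LOG ON
    (
        NAME = SalesDB_log,
        FILENAME = 'E:\Log\SalesDB.ldf',
        SIZE = 5GB,
        MAXSIZE = 5GB         -- restricted growth
    );

    ALTER DATABASE SalesDB SET RECOVERY SIMPLE;
    ```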

  • I don't know of any rule of thumb, but the first thing to consider is that if the real data size is 100 GB, then the mdf will need to be able to accommodate 100 GB + overhead + indexes + objects. You will also need to consider the growth of that data.

    The log file size depends primarily on data throughput versus the log backup schedule. In full recovery, frequent log backups will keep your ldf at a manageable size.

    For a large database, a fixed-size autogrowth setting is probably better than a percentage setting. It is also better to grow your database on a schedule instead of letting it happen on its own.

    --

    JimFive
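
    The fixed-size autogrowth and scheduled-growth suggestions above might be sketched as follows (the database and logical file names are hypothetical):

    ```sql
    -- Switch the data file from percentage to fixed-size autogrowth.
    ALTER DATABASE SalesDB
    MODIFY FILE (NAME = SalesDB_data, FILEGROWTH = 512MB);

    -- Grow the file deliberately during a maintenance window,
    -- rather than waiting for autogrowth to fire mid-transaction.
    ALTER DATABASE SalesDB
    MODIFY FILE (NAME = SalesDB_data, SIZE = 120GB);
    ```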

  • With the information provided, the only possible answer is "it depends". For example, if this 100 GB of data hardly ever changes, you will not need a very big log file and the mdf need not be much larger than 100 GB. If, on the other hand, your 100 GB of data increases by 10 GB a day, you will require a lot of transaction log space (depending on your backup frequency, of course) and you would want to size the mdf to accept a lot of data before requiring manual or automatic resizing.

  • James and Paul have given the correct response. You cannot estimate log file size; any rule of thumb is a wild guess based on experience and could be completely inappropriate. Ultimately you have to measure the workload over time to get the total log size needed, but the file size will also depend on the log backup schedule. More frequent log backups mean a smaller log file (though the total log transactions and backup size are consistent across time).

    For data, if the 100 GB includes indexes, then I'd ask about growth. If you know that, I'd size the data file for 3-4 months of growth. Otherwise, you're guessing here as well.
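
    Measuring the log workload, as suggested, could be sketched like this (the database name and backup path are placeholders):

    ```sql
    -- Observe actual log usage per database over a representative period.
    DBCC SQLPERF(LOGSPACE);

    -- In full recovery, frequent log backups keep the ldf small; a statement
    -- like this would typically be scheduled as a job (e.g. every 15 minutes).
    BACKUP LOG SalesDB TO DISK = 'E:\Backup\SalesDB_log.trn';
    ```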

  • Thanks for replies guys.
