Initial database size and database autogrowth

  • Hi,

    We have SQL Server 2005 EE x64 with SP3. We have SharePoint databases and I have no idea of their growth pattern. So to get started, do the initial database size and autogrowth values below make sense?

    Initial data file size = 500 MB

    Initial log file size = 100 MB

    and autogrowth for both the data and log files is set to "In megabytes by 50"
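
    In T-SQL terms the proposed settings would look something like this sketch (the database name and file paths are just placeholders):

    -- Sketch of the proposed settings (database name and paths are placeholders)
    CREATE DATABASE WSS_Content_Demo
    ON PRIMARY
    (
        NAME = WSS_Content_Demo_Data,
        FILENAME = 'D:\Data\WSS_Content_Demo.mdf',
        SIZE = 500MB,       -- initial data file size
        FILEGROWTH = 50MB   -- "in megabytes by 50"
    )
    LOG ON
    (
        NAME = WSS_Content_Demo_Log,
        FILENAME = 'E:\Logs\WSS_Content_Demo.ldf',
        SIZE = 100MB,       -- initial log file size
        FILEGROWTH = 50MB
    );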

    Please advise.

    thanks

    If there is any indication that the database will steadily grow (which I would assume a SharePoint database does, but I have no actual experience), I would set the autogrow size a bit higher just to avoid too much fragmentation.

    It would probably be better if someone familiar with SharePoint behaviour dropped a line too, though 🙂

    A SharePoint database typically holds commonly shared documents. It will grow over time, although the growth rate depends on the size and number of documents deposited into it.

    You can certainly allocate more space to the data file(s) to avoid unnecessary autogrowth, which imposes a performance impact and potential fragmentation. Even with autogrowth enabled, the best practice is for the database management personnel to proactively allocate additional space, and this becomes more important as the database gets bigger.
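
    For example, a minimal sketch of pre-growing a data file ahead of demand (the names are placeholders; check sys.database_files for your real logical file names, and note that the new SIZE must be larger than the current size):

    -- Pre-grow the data file to 2 GB before it fills up (names are placeholders)
    ALTER DATABASE WSS_Content_Demo
    MODIFY FILE (NAME = WSS_Content_Demo_Data, SIZE = 2048MB);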

  • I would set it to a % of the size of the database. But it depends on your environment.

    I've got about 60 servers with over 100 DBs and little old me to keep them company. The SharePoint database we are currently using is around 40 GB and is only a couple of years old. If it grew in only 50 MB increments it would be tripping all over itself. I don't have the time to dedicate to growing it manually.

    50 MB may be good for right now, but if it gets much bigger I would automate the growth by %, provided you have plenty of disk space.

    Twitter: @SQLBalls
    Blog: http://www.SQLBalls.com

  • snu155 (1/14/2010)


    I would set it to a % of the size of the database. But it depends on your environment.

    I'd strongly recommend against that, because when the DB is small, large amounts of fragmentation will occur, and when it is big, it will grow by too much.
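
    To make that concrete, here is a quick sketch of what a percentage setting translates to at the current file size (sys.database_files ships with SQL Server 2005 and later):

    -- With 10% growth, a 100 MB file grows by only 10 MB (many small, fragmenting grows),
    -- while a 100 GB file grows by 10 GB in one long, blocking operation.
    SELECT name,
           size * 8 / 1024 AS size_mb,   -- size is stored in 8 KB pages
           CASE WHEN is_percent_growth = 1
                THEN CAST(growth AS varchar(10)) + ' percent'
                ELSE CAST(growth * 8 / 1024 AS varchar(10)) + ' MB'
           END AS growth_setting
    FROM sys.database_files;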

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.
    "Change is inevitable... change for the better is not".

    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
    Intro to Tally Tables and Functions

    Setting the autogrowth to a % can have a severe impact on your log file size. Imagine the following scenario: your database is in full recovery mode and is heavily used. For whatever reason the database isn't backed up for a while and you don't notice it. Your log file would grow enormously in no time and sooner or later... **BANG**

    These are the principles I apply if I have to set up a new database:

    Define the initial size big enough to deal with the anticipated additional data volume over a defined period without the need for autogrow (OK, I know you don't have a pattern for that). If it does need to autogrow, define the growth in MB and choose a value which doesn't fragment your file system too much. The minimum autogrowth value I use is 64 MB, or a multiple of it (128, 256, etc.) - see the sketch further down.

    Typically I choose to have a bigger log file than the data file, just to be on the safe side (--> backups!!) and also to prevent fragmentation from too many file extensions being allocated. If it proves to be too big you can shrink it at a later stage.

    My motto: "Disk space is cheaper to get and to support than the time I have to invest fixing all the stuff because we wanted to save some bucks."
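
    As a sketch, switching an existing database to a fixed 64 MB growth would look like this (names are placeholders):

    -- Fixed-MB autogrowth instead of percentage growth (names are placeholders)
    ALTER DATABASE WSS_Content_Demo
    MODIFY FILE (NAME = WSS_Content_Demo_Data, FILEGROWTH = 64MB);
    ALTER DATABASE WSS_Content_Demo
    MODIFY FILE (NAME = WSS_Content_Demo_Log, FILEGROWTH = 64MB);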

    Hope this helps.

    René :-D

  • Hey Guys,

    If you are backing up the DB nightly, backing up log files every 15 minutes (except during your backup window), and re-indexing once a week based on fragmentation, would you still say not to set growth to a %?

    I always try to preface my advice by "it depends on your environment."

    If you have a SharePoint DB where you store user files that in theory could be up to 2 GB each (probably not, but that is the varchar(max)/varbinary(max) limit), would you still have your growth limited to 64 MB?

    Not trying to be picky, just seriously curious, as I'm always looking to refine my recommendations based on solid analysis.

    I did the % because of the size of the user base, the range of file sizes, and the fact that I automate as much as possible, because I am understaffed on the DBA front.

    Whenever a file autogrows, fragmentation occurs, but if you limit the number of times it autogrows by using a %, and re-index on a weekly basis, would that not cover you?
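
    For context, this is roughly the weekly fragmentation check I mean - a sketch where the 30% threshold is just my assumption (sys.dm_db_index_physical_stats ships with SQL Server 2005):

    -- List indexes in the current database that are over 30% fragmented
    SELECT OBJECT_NAME(ips.object_id) AS table_name,
           i.name AS index_name,
           ips.avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i
        ON i.object_id = ips.object_id
        AND i.index_id = ips.index_id
    WHERE ips.avg_fragmentation_in_percent > 30
    ORDER BY ips.avg_fragmentation_in_percent DESC;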

    Twitter: @SQLBalls
    Blog: http://www.SQLBalls.com

  • A couple of general points to take into consideration...

    (1) MS recommends that you don't have any SP content database over 100 GB, so if any of your content databases are over 100 GB, it's time to split the sites into multiple content databases.

    (2) If you are using mirroring for DR, MS recommends no more than 10 content DBs be mirrored in high-safety mode due to latency issues (we have Std edition for SP, but that's not an issue for you with Ent Edition).

    Having said that, SP should be creating the content databases, not the DBA, and I would recommend sizing to one year's growth to avoid OS file fragmentation, and making sure you have t-log backups in place to prevent log file bloat. If you don't have any idea of the growth pattern (as you stated) but expect to be in the GB range of data, I would size the mdf to 50 GB with 5 GB growth, and the ldf to 5 GB with 10% growth, then monitor the utilization.
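
    In T-SQL that sizing would look roughly like this (the database and logical file names are placeholders; ALTER DATABASE ... MODIFY FILE changes one property per statement):

    -- Size the data file to 50 GB / 5 GB growth, the log to 5 GB / 10% growth
    ALTER DATABASE WSS_Content_Demo MODIFY FILE (NAME = WSS_Content_Demo_Data, SIZE = 50GB);
    ALTER DATABASE WSS_Content_Demo MODIFY FILE (NAME = WSS_Content_Demo_Data, FILEGROWTH = 5GB);
    ALTER DATABASE WSS_Content_Demo MODIFY FILE (NAME = WSS_Content_Demo_Log, SIZE = 5GB);
    ALTER DATABASE WSS_Content_Demo MODIFY FILE (NAME = WSS_Content_Demo_Log, FILEGROWTH = 10%);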

    snu155 - fragmentation caused by file growth is OS file-level fragmentation, not data fragmentation (which is what re-indexing resolves). SQL Server is susceptible to file-level fragmentation just as any other application is.
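
    One way to see how often your files are actually growing (and how long each grow takes) is to read the autogrow events out of the default trace, which runs by default on SQL Server 2005 - a sketch:

    -- Recent autogrow events from the default trace
    -- (event class 92 = data file auto grow, 93 = log file auto grow)
    DECLARE @trace_path nvarchar(260);
    SELECT @trace_path = path FROM sys.traces WHERE is_default = 1;

    SELECT StartTime,
           DatabaseName,
           [FileName],
           Duration / 1000 AS duration_ms,   -- Duration is reported in microseconds
           CASE EventClass WHEN 92 THEN 'Data file grow'
                           WHEN 93 THEN 'Log file grow' END AS event_type
    FROM fn_trace_gettable(@trace_path, DEFAULT)
    WHERE EventClass IN (92, 93)
    ORDER BY StartTime DESC;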

  • Simon Facer (1/17/2010)


    MS recommends that you dont have any SP content database over 100GB

    I'd really like to see the link for that, please.

    --Jeff Moden


