DB Autogrowth

  • Hi Experts,

    Which option is best: filegrowth by 10%, or by a fixed number of megabytes, for a large database that grows by an average of 40GB a week?

    Thanks

    TY

  • tmmutsetse - Tuesday, August 7, 2018 7:44 AM

    Hi Experts,

    Which option is best: filegrowth by 10%, or by a fixed number of megabytes, for a large database that grows by an average of 40GB a week?

    Thanks

    TY

    Ideally you are managing the growth of the database yourself instead of relying on auto growth to add space when needed.  Auto growth should be reserved for unexpected growth.  When it is used, you should use a fixed size for the growth.  Using 10% will increase the space in ever-increasing increments, as the growth is based on the current size of the database, not its original size.
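
    For example, a minimal sketch of switching from 10% to a fixed growth increment; the database and logical file names here are hypothetical placeholders, so check sys.database_files for your real ones:

        -- Check the current settings first; is_percent_growth = 1 means % growth.
        -- size is in 8KB pages (divide by 128 for MB); growth is pages or a percentage.
        SELECT name, size / 128.0 AS size_mb, growth, is_percent_growth
        FROM sys.database_files;

        -- Switch the (hypothetical) data file to a fixed 4GB growth increment.
        ALTER DATABASE YourDb
        MODIFY FILE (NAME = N'YourDb_Data', FILEGROWTH = 4096MB);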

    Percentage growth is always a terrible idea, and usually leads to a surprise growth coupled with a shortage of disk space and that overwhelming feeling that you really are going to have to run DBCC SHRINKDATABASE now.

    Set it to a sensible growth in MB, not too big but also not too small. And then set up processes to monitor your database sizes so you never have surprising increases anyway.
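
    As a rough sketch of that monitoring, something along these lines (run in the database in question) shows size, used, and free space per file, and could be scheduled and alerted on:

        -- size and FILEPROPERTY(..., 'SpaceUsed') are in 8KB pages; /128 converts to MB.
        SELECT name,
               size / 128.0 AS size_mb,
               CAST(FILEPROPERTY(name, 'SpaceUsed') AS int) / 128.0 AS used_mb,
               (size - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int)) / 128.0 AS free_mb
        FROM sys.database_files;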

    Thank you. If my DB grows by 40GB a week, what is a sensible growth in MB that I should set?

    Thanks in Advance

    Again, as others have said, you want to avoid auto-growing at all costs due to the performance hit.  Without knowing your environment or application, I would set the initial size of your database to account for all of the expected growth, based on the total amount of space you have on the data drive, without filling it or triggering any monitoring alarms.  For example, if you have a 500GB drive which only houses the data file for that database, you could go ahead and set the initial size to 400GB (409600MB), which would give you 10 weeks of growth.
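
    A minimal sketch of that pre-sizing, again with hypothetical database and file names; note that MODIFY FILE with SIZE can only grow a file, never shrink it:

        -- Pre-allocate the data file to 400GB (409600MB) in one operation.
        ALTER DATABASE YourDb
        MODIFY FILE (NAME = N'YourDb_Data', SIZE = 409600MB);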

    In addition to the other advice, also ensure that Instant File Initialization is enabled (unless your environment prevents this).  Google it if you're not sure what it is.  (The service account will need the Perform Volume Maintenance Tasks right.)
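
    On builds that expose it (SQL Server 2012 SP4 / 2016 SP1 and later), you can confirm IFI straight from a DMV rather than digging through the security policy:

        -- 'Y' means data file growth skips zero-initialization for this service.
        SELECT servicename, instant_file_initialization_enabled
        FROM sys.dm_server_services;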

  • tmmutsetse - Wednesday, August 8, 2018 2:24 AM

    Thank you. If my DB grows by 40GB a week, what is a sensible growth in MB that I should set?

    Thanks in Advance

    40GB?

    But seriously, manage the growth yourself instead of relying on the autogrow, as Lynn already stated. Personally, I'd add, let's say, 200GB to the database. That should let it grow for five weeks. Set the autogrow to something reasonable like 20GB, to give yourself a little padding in case something bad goes on. Then monitor the database space, and add another 200GB as needed. Better to do one big chunk occasionally than lots and lots of tiny chunks; that leads to serious problems too.

    40GB/week of growth, every single week, is quite a lot. You'd better be investing in a very big SAN or lots of drives right now. You need a terabyte every six months, so you'll be at 4TB in two years, and that assumes you don't grow faster. That's a big database. You have a number of interesting problems coming up, not just autogrowth: backups, restores, index maintenance, possibly partitioning for data management... I'd start studying how people manage large databases.

    ----------------------------------------------------
    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood..." - Theodore Roosevelt
    The Scary DBA
    Author of: SQL Server 2017 Query Performance Tuning, 5th Edition and SQL Server Execution Plans, 3rd Edition
    Product Evangelist for Red Gate Software

  • Thanks Grant,

    Yes, 40GB is a lot, but I am creating some partitions. I always archive the data, then delete the partitions. I am still new to the organization and I am trying to find a better way to manage this.

    Many thanks

    T.

  • tmmutsetse - Wednesday, August 8, 2018 2:24 AM

    Thank you. If my DB grows by 40GB a week, what is a sensible growth in MB that I should set?

    Thanks in Advance

    Add 2 terabytes to the existing file over the weekend and you will be good for a year.

  • NewBornDBA2017 - Wednesday, August 8, 2018 12:28 PM

    Add 2 terabytes to the existing file over the weekend and you will be good for a year.

    Not a good idea if you ever need to do a restore to a lesser system.

    As for all the worries about auto-growth causing a performance hit: if you have "Instant File Initialization" enabled, auto-growth of data files just doesn't cause a hit large enough to be called a "performance issue".  Heh... and if you rebuild just one large index on the system in the full recovery model, you won't need to manage auto-growth, because the rebuild will cause about 120% of the size of the index to be added to the MDF if that space isn't already available.

    Log file?  Different issue.
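
    To see how big that rebuild hit could be, here's a rough sketch that lists index sizes largest-first, so you know roughly how much free space a full-recovery rebuild might demand in the MDF:

        -- Approximate size of each index in MB; used_page_count is in 8KB pages.
        SELECT OBJECT_NAME(ps.object_id) AS table_name,
               i.name AS index_name,
               SUM(ps.used_page_count) / 128.0 AS used_mb
        FROM sys.dm_db_partition_stats AS ps
        JOIN sys.indexes AS i
          ON i.object_id = ps.object_id AND i.index_id = ps.index_id
        GROUP BY ps.object_id, i.name
        ORDER BY used_mb DESC;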

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
        Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.
    "Change is inevitable... change for the better is not".

    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
    Intro to Tally Tables and Functions

    Is this a warehousing database that grows at the 40GB rate, an OLTP database, or maybe a database strictly for reports?  The way the DB is used should also be part of the equation as to how you grow it.  Ideally you grow the DB as few times as possible to decrease the fragmentation and VLFs.  You can grow it monthly with something like 200GB; that gives you control, you know exactly how much you are growing by (versus a percentage), and it's just 12 growth spurts a year... LOL.  Just my .02 🙂
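
    On SQL Server 2016 SP2 and later you can sanity-check the VLF count with sys.dm_db_log_info (older versions can use DBCC LOGINFO); a quick sketch:

        -- One row per VLF in the current database's log; thousands is a red flag.
        SELECT COUNT(*) AS vlf_count
        FROM sys.dm_db_log_info(DB_ID());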

    Good Luck
    D

    DHeath

    Yeah, can't say I've ever had an issue with auto-growth for data files, as long as you don't set it to something silly like 1MB increments, where every actual growth fires off a whole slew of small file increases. Do make sure things are set up to allow IFI, though, because that removes pretty much all the potential overhead of data file growth. And even if you have auto-growth on, you should still be monitoring your disk usage, both so you know how long you've got before you'll need more capacity and so that you can identify sudden spikes where a change in process has made significant changes in your storage requirements that you may need to get ahead of.
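
    If you want to see whether (and when) auto-growth has actually been firing, the default trace usually still has the recent events; a sketch, assuming the default trace is enabled on your instance:

        -- Recent data/log file auto-grow events captured by the default trace.
        SELECT TOP (50) te.name AS event_name, t.DatabaseName, t.FileName,
               t.StartTime, t.Duration / 1000 AS duration_ms
        FROM sys.traces AS st
        CROSS APPLY fn_trace_gettable(st.[path], DEFAULT) AS t
        JOIN sys.trace_events AS te
          ON te.trace_event_id = t.EventClass
        WHERE st.is_default = 1
          AND te.name IN ('Data File Auto Grow', 'Log File Auto Grow')
        ORDER BY t.StartTime DESC;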

  • The idea, really, is that you should control the growth of the database instead of letting it grow whenever it needs to grow.  And I would stay away from % growth for data and log files.

  • Jeff Moden - Wednesday, August 8, 2018 2:58 PM

    Not a good idea if you ever need to do a restore to a lesser system.

    As for all the worries about auto-growth causing a performance hit: if you have "Instant File Initialization" enabled, auto-growth of data files just doesn't cause a hit large enough to be called a "performance issue".  Heh... and if you rebuild just one large index on the system in the full recovery model, you won't need to manage auto-growth, because the rebuild will cause about 120% of the size of the index to be added to the MDF if that space isn't already available.

    Log file?  Different issue.

    It's not a good idea because of a restore to a lesser system? Yes sir, I agree with you on that, but it also depends on how frequently he refreshes the Dev/Test environment from prod. I guess changing the autogrowth setting to 40GB or 80GB or whatever, instead of using 10%, should help, which everyone has already talked about. I wonder if he has a job which inserts millions upon millions of records into a table; changing the job to run off hours should help, so it shouldn't impact users. Just saying.

  • NewBornDBA2017 - Thursday, August 9, 2018 9:09 AM

    It's not a good idea because of a restore to a lesser system? Yes sir, I agree with you on that, but it also depends on how frequently he refreshes the Dev/Test environment from prod. I guess changing the autogrowth setting to 40GB or 80GB or whatever, instead of using 10%, should help, which everyone has already talked about. I wonder if he has a job which inserts millions upon millions of records into a table; changing the job to run off hours should help, so it shouldn't impact users. Just saying.

    No, the frequency of restores and the autogrowth settings don't matter at all during restores.  If you have, for example, a terabyte of unused or unallocated space in your production database, the restore will build that same amount of free space in the lesser environment.  If there isn't enough room for that, as discovered by the prechecks that SQL Server does, it will simply not allow the restore.  And don't forget that if the poo hits the fan in a DR situation and you have to sacrifice some other lesser machine to act as your production box, it may not fit, and then the fan also turns to poo. 😀
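
    One way to see that problem coming: RESTORE FILELISTONLY reports the size each file in the backup will need on restore (the Size column is in bytes), so you can check the target drive first; a sketch with a hypothetical backup path:

        -- List the files inside the backup and the space each will require on restore.
        RESTORE FILELISTONLY
        FROM DISK = N'X:\Backups\YourDb.bak';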

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
        Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.
    "Change is inevitable... change for the better is not".

    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
    Intro to Tally Tables and Functions
