Basic Question: Do you have to restart the instance if you change database or log file growth

  • Jeff Moden - Thursday, November 22, 2018 1:56 PM

    bdkdavid - Wednesday, November 21, 2018 5:37 PM

    Hi Jeff and Sue,
    Yes, the 13 TB file is the mdf file. I just started working with the database yesterday. The last batch grew the database from 11.5 TB to 13 TB, which was on Monday. I have not had time to go into further analysis yet. I am planning on adding a few ndf files to the system, migrating certain tables to them, and then shrinking the database file. Maybe you guys can evaluate my plan. First I have to talk with the programmers for the application. They will help me pick out certain large tables, and I will create a separate ndf file for each table in question.

    I will move those tables into those selected files and shrink the database file (mdf).
    I am toying with either creating them in the same filegroup (PRIMARY) or separating them into their own individual filegroups.
    I only want those tables in that particular file. I believe only separate filegroups would accomplish that. This is a rough draft. Do you have any particular recommendations?

    Did I mention this is SQL Server 2014 running in 2012 compatibility mode?
    Instant file initialization is not enabled.

    There are several problems:
    1. Approaching max file size on both the NetApp LUN and the database
    2. FULL database backups are taking longer than expected; usually 15 hours, now 24 hours
    3. Batches running while the backup is in process run slowly
    4. The log file backup fails several times throughout the day
    5. The log file maxes out at 2 TB several times, which stops the database
    6. To shrink the log file, I had to place the database in SIMPLE recovery first, run the DBCC command, then return to FULL
    7. It is not ideal, but it was a quick fix.
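
    As a reference for items 6 and 7, that quick fix corresponds roughly to the following (database and logical file names here are placeholders):

    ```sql
    -- Emergency-only: switching to SIMPLE breaks the log backup chain.
    ALTER DATABASE MyBigDB SET RECOVERY SIMPLE;

    -- SHRINKFILE takes the LOGICAL file name (see sys.database_files); target size is in MB.
    DBCC SHRINKFILE (MyBigDB_log, 10240);

    ALTER DATABASE MyBigDB SET RECOVERY FULL;

    -- A new FULL (or differential) backup is required afterward so log backups can succeed again.
    BACKUP DATABASE MyBigDB TO DISK = N'X:\Backup\MyBigDB.bak';
    ```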

    Before you get into all that, let's find out a very important point. 

    You say the last batch caused the file to grow so let's talk about the nature of your batches vs the rest of your file.  Is most of the rest of your file static?  In other words, once you load a batch and maybe make an update or two, do the rows for that batch become totally static after a month or two?

    I'm still looking for answers to my questions above.  The reason is to be able to solve the problems you're currently having including the possibility of over-running a LUN and the insane amount of time you spend backing up.  It'll also help with all the other problems even though you "only" have the Standard Edition running in the 2k12 compatibility mode.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.
    "Change is inevitable... change for the better is not".

    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
    Intro to Tally Tables and Functions

  • Here is the result for the query; I had to type some of it in by hand.

    schemaname tablename type_desc data_compression_desc rowcounts totalspacekb totalspacemb usedspacekb usedspacemb unusedspacekb unusedspacemb
    dbo table1 CLUSTERED NONE 806313384 174494344 170404.63 169368600 165399.02 5125744 21646393.98
    dbo table2 CLUSTERED NONE 467457873 419467224 409635.96 417114624 407338.5 2352600 52026064.5
    dbo table3 CLUSTERED NONE 117229400 486287968 474890.59 484038224 472693.58 2249744 60313302.42
    dbo table4 CLUSTERED NONE 92892280 381154216 372220.91 379228720 370340.55 1925496 47273936.45
    dbo table5 CLUSTERED NONE 113451550 467851336 456886.07 466475024 455542.02 1376312 58025874.98
    dbo table6 CLUSTERED NONE 263745352 235592312 230070.62 226984376 221664.43 8607936 29227374.57
    dbo table7 CLUSTERED NONE 345399168 73970008 72236.34 66000712 64453.82 7969296 9181797.18
    dbo table8 CLUSTERED NONE 93273696 69041712 67423.55 62667056 61198.3 6374656 8569015.7
    dbo table9 CLUSTERED NONE 272109888 57498632 56151.01 51696664 50485.02 5801968 7136843.98
    dbo table10 CLUSTERED NONE 69653344 51912168 50695.48 46588312 45496.4 5323856 6443524.6
    dbo table11 CLUSTERED NONE 138559024 295790048 288857.47 290939128 284120.24 4850920 36689635.76
    dbo table12 CLUSTERED NONE 151850700 315012864 307629.75 310445584 303169.52 4567280 39073438.48
    dbo table13 CLUSTERED NONE 271738808 186076144 181714.98 183063984 178773.42 3012160 23080744.58
    dbo table14 CLUSTERED NONE 806313384 276857848 270368.99 274038024 267615.26 2819824 34339615.74
    dbo table15 CLUSTERED NONE 154264540 316818552 309393.12 314825704 307446.98 1992848 39294872.02
    dbo table16 CLUSTERED NONE 66739904 60127848 58718.6 58288792 56922.65 1839056 7459058.35
    dbo table17 CLUSTERED NONE 31601576 22866056 22330.13 21085584 20591.39 1780472 2837665.61
    dbo table18 CLUSTERED NONE 85710816 18136632 17711.55 16431616 16046.5 1705016 2251032.5
    dbo table19 CLUSTERED NONE 11486022 94453280 92239.53 92789480 90614.73 1663800 11716045.27
    dbo table20 CLUSTERED NONE 70324416 14825672 14478.2 13335216 13022.67 1490456 1840186.33
    dbo table21 CLUSTERED NONE 294836003 14451552 140675.34 142716152 139371.24 1335400 17867072.76
    dbo table22 CLUSTERED NONE 36833820 10946840 10690.27 9629392 9403.7 1317448 1358951.3
    dbo table23 CLUSTERED NONE 119581500 327813272 320130.15 326550264 318896.74 1263008 40657762.26
    dbo table24 CLUSTERED NONE 200789184 43185248 42173.09 42035128 41049.93 1150120 5357106.07
    dbo table25 CLUSTERED NONE 293499848 124153744 121243.89 123033104 120149.52 1120640 15399068.48
    dbo table26 CLUSTERED NONE 263745352 222290112 217080.19 221284992 216098.63 1005120 27570165.38
    dbo table27 CLUSTERED NONE 23365088 16047376 15671.27 15065664 14712.56 981712 1991209.44
    dbo table28 CLUSTERED NONE 26672090 215982392 210920.3 215070712 210029.99 911680 26787769.01
    dbo table29 CLUSTERED NONE 200789184 68811840 67199.06 67927656 66335.6 884184 8535144.4
    dbo table30 CLUSTERED NONE 123613199 999514352 976088.23 998693032 975285.19 822320 123964008.8
    dbo table31 CLUSTERED NONE 23733954 192313432 187806.09 191520504 187031.74 792928 23852147.26
    dbo table32 CLUSTERED NONE 161732900 661402336 645900.72 660742136 645255.99 660200 82030036.01
    dbo table33 CLUSTERED NONE 518025 2830944 2764.59 2187920 2136.64 643024 351731.36
    dbo table34 CLUSTERED NONE 215085558 1737122960 1696409.14 1736501160 1695801.91 621800 215444568.1
    dbo table35 CLUSTERED NONE 173854200 60437528 59021.02 59832032 58429.72 605496 7496261.28
    dbo table36 CLUSTERED NONE 38410769 193428816 188895.33 192829816 188310.37 599000 2399091.63
    dbo table37 CLUSTERED NONE 172434920 92051688 89894.23 91476656 89332.67 575032 11417128.33
    dbo table38 NONCLUSTERED NONE 173427300 50732992 49543.94 50168680 48992.85 564312 6292631.15
    dbo table39 CLUSTERED NONE 110681568 48045920 46919.84 47495144 46381.98 550776 5959358.02
    dbo table40 CLUSTERED NONE 333260000 101636568 99254.46 101101720 98732.15 534848 12605838.85
    dbo table41 NONCLUSTERED NONE 85710816 20773464 20286.59 20288888 19813.37 484576 2576869.63
    dbo table42 NONCLUSTERED NONE 174894300 51135520 49937.03 50658680 49471.37 476840 6342468.63
    dbo table43 NONCLUSTERED NONE 467457873 175229200 171122.27 174753424 170657.64 475776 21732992.36
    dbo table44 NONCLUSTERED NONE 70324416 17039232 16639.88 16582720 16194.06 456512 2113709.94
    dbo table45 NONCLUSTERED NONE 175342200 50943512 49749.52 50496032 49312.53 447480 6318626.47
    dbo table46 NONCLUSTERED NONE 294831339 149404072 145902.41 148961960 145470.66 442112 18530038.34
    dbo table47 CLUSTERED NONE 339216552 91309160 89169.1 90888240 88757.05 420920 11324886.95
    dbo table48 NONCLUSTERED NONE 176607300 51348616 50145.13 50936760 49742.93 411856 6368834.07
    dbo table49 CLUSTERED NONE 173854200 30872424 30148.85 30473824 29759.59 398600 3829293.41
    dbo table50 CLUSTERED NONE 80196900 648392912 633196.2 647999432 632811.95 393480 80416302.05
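
    For reference, numbers like those above typically come from a space-usage query built on the system catalog views. A sketch of the common pattern (not necessarily frederico's exact query):

    ```sql
    -- Space used per table/index, ordered by unused space (8 KB per page).
    SELECT  s.name AS schemaname,
            t.name AS tablename,
            i.type_desc,
            p.data_compression_desc,
            SUM(CASE WHEN a.type = 1 THEN p.rows ELSE 0 END) AS rowcounts,
            SUM(a.total_pages) * 8                           AS totalspacekb,
            SUM(a.used_pages)  * 8                           AS usedspacekb,
            SUM(a.total_pages - a.used_pages) * 8            AS unusedspacekb
    FROM    sys.tables t
    JOIN    sys.schemas s          ON s.schema_id    = t.schema_id
    JOIN    sys.indexes i          ON i.object_id    = t.object_id
    JOIN    sys.partitions p       ON p.object_id    = i.object_id
                                  AND p.index_id     = i.index_id
    JOIN    sys.allocation_units a ON a.container_id = p.partition_id
    GROUP BY s.name, t.name, i.type_desc, p.data_compression_desc
    ORDER BY unusedspacekb DESC;
    ```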
  • Maybe if I ask a third time... 😉
    You say the last batch caused the file to grow so let's talk about the nature of your batches vs the rest of your file. Is most of the rest of your file static? In other words, once you load a batch and maybe make an update or two, do the rows for that batch become totally static after a month or two?

    --Jeff Moden



  • Hi Jeff and frederico,

    This functions as a DW.
    I do not do the batches. I was told they keep them to no more than 75,000 records.
    The batches are run from csv files, mostly numbers, dates, and nvarchar data.
    They load with BULK INSERT statements.
    The database is used for data analysis.
    They load data 3 times per week, between 30 and 350 GB of data.
    Yes, the data is static. They do some kind of ETL. I have not had time to check that yet.
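
    A load like that would typically use BULK INSERT statements along these lines (the file path, table name, and options here are placeholders):

    ```sql
    BULK INSERT dbo.table1
    FROM 'D:\loads\batch_20181119.csv'
    WITH (
        FIELDTERMINATOR = ',',
        ROWTERMINATOR   = '\n',
        FIRSTROW        = 2,      -- skip a header row
        BATCHSIZE       = 75000,  -- matches the stated batch limit
        TABLOCK                   -- minimal logging only under SIMPLE/BULK_LOGGED recovery
    );
    ```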

    What I have done on this server to date:
    I created seven additional tempdb data files, for a total of 8.
    I added two additional data files to the primary filegroup. The database stopped working;
    it ran out of space on the LUN.
    I intended to enable instant file initialization. I have to do that later this week.
    The mdf is still about 13 TB in size.

    There are so many problems here.
    What I am attempting to do is apply some best practices.
    What I would like to do is create several filegroups and port all the data into them,
    spreading it between several files.

    I thought about adding a couple more files to the primary filegroup
    and running DBCC SHRINKFILE ('logical_file_name', EMPTYFILE);
    That should spread the data between the other files. I am not sure whether it will continue to use the mdf or not.
    That is another question!
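
    To answer that question: EMPTYFILE moves user data out of a file, but system objects in the primary (mdf) file cannot be moved, so the mdf can be shrunk afterward but never fully emptied or removed. A sketch of the approach (names and sizes are placeholders):

    ```sql
    -- Pre-size the new file to avoid repeated autogrowth during the move.
    ALTER DATABASE MyBigDB
    ADD FILE (NAME = MyBigDB_data2,
              FILENAME = N'E:\Data\MyBigDB_data2.ndf',
              SIZE = 2048GB, FILEGROWTH = 10GB)
    TO FILEGROUP [PRIMARY];

    -- EMPTYFILE takes the LOGICAL file name, not the database name.
    DBCC SHRINKFILE (MyBigDB_data, EMPTYFILE);
    ```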

    They are backing up all the databases together; it takes about 24 hours plus, where it used to take 15 hours.
    I am attempting to separate this one from all the others.
    The log file needs to be shrunk every week or so.
    The error is that it is waiting for the log file to be backed up.
    I am hoping that making the files smaller will increase the speed of the backup. 🙂
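
    When log backups fail and the log fills, the usual first diagnostic is to check what is holding the log:

    ```sql
    -- What is preventing log truncation for each database?
    SELECT name, recovery_model_desc, log_reuse_wait_desc
    FROM   sys.databases;
    -- LOG_BACKUP means the log will not clear until a log backup succeeds.
    ```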

    The query I used from frederico was ordered by unusedspacekb; it is a little out of order because I typed in the first five rows.
    The others are in that sequence.

    Thank you, guys,
    David

  • You're on the right track with separate files and file groups. 

    If the data is static after being loaded, then base your files and file groups so that...
    1.  You have one month of data per file.
    2.  You have only one file per filegroup.  The filegroup name should include YYYYMM as a part of the name for easy management.
    3.  Set up each table in each file with the proper constraint to support a partitioned view (NOT a partitioned table).
    4.  Rebuild the indexes to 100% FILL FACTOR.  You no longer have to worry about page splits, so 100% will help save space.
         You may have to rebuild the largest index on a temporary filegroup, shrink/rebuild all the other indexes, and then rebuild the largest index
         back onto the original filegroup.  That would keep you from blowing out the mdf file and from setting a whole lot of free space to READ_ONLY.
    5.  Set all filegroups but the current (which is not empty) and next month (which is empty) to READ_ONLY and back them up one final time. 
         You shouldn't have to back them up ever again.
    6.  Build your partitioned views across the tables in the various files so that they can be referenced as a single object whose name will not change.

    All of that has a bunch of different advantages as stated above but the biggest thing is that your backup durations will seriously decrease because you'll 
    only be backing up the current month (not empty) and the next month (because it's not read_only but is empty so not much action).
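
    A hedged sketch of those steps follows (all names, dates, and the table definition here are placeholders, not the actual schema):

    ```sql
    -- Steps 1 & 2: one filegroup with one file per month, named by YYYYMM.
    ALTER DATABASE MyBigDB ADD FILEGROUP FG_201811;
    ALTER DATABASE MyBigDB ADD FILE
          (NAME = MyBigDB_201811, FILENAME = N'E:\Data\MyBigDB_201811.ndf', SIZE = 300GB)
          TO FILEGROUP FG_201811;

    -- Step 3: a monthly table with the CHECK constraint a partitioned view requires.
    CREATE TABLE dbo.Batch_201811
    ( BatchDate date NOT NULL
          CONSTRAINT CK_Batch_201811 CHECK (BatchDate >= '20181101' AND BatchDate < '20181201'),
      SomeValue nvarchar(100) NOT NULL
    ) ON FG_201811;

    -- Step 5: freeze a completed month and take its final filegroup backup.
    ALTER DATABASE MyBigDB MODIFY FILEGROUP FG_201810 READ_ONLY;
    BACKUP DATABASE MyBigDB FILEGROUP = 'FG_201810' TO DISK = N'X:\Backup\FG_201810.bak';
    GO

    -- Step 6: the partitioned view gives one stable name across the monthly tables.
    CREATE VIEW dbo.Batch AS
    SELECT BatchDate, SomeValue FROM dbo.Batch_201810 UNION ALL
    SELECT BatchDate, SomeValue FROM dbo.Batch_201811;
    ```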

    --Jeff Moden



  • After you've done that, automate the creation of a new file/filegroup for a new "next" month and the auto-magic rebuild of the partitioned view.

    Another cool part is that, if sometime in the future they want to start dropping off the old months, it'll be a cinch.
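
    The month-roll could be a scheduled job that builds the next month's file and filegroup with dynamic SQL; a rough sketch (all names are placeholders):

    ```sql
    DECLARE @ym  char(6) = FORMAT(DATEADD(MONTH, 1, GETDATE()), 'yyyyMM'),
            @sql nvarchar(MAX);

    SET @sql = N'ALTER DATABASE MyBigDB ADD FILEGROUP FG_' + @ym + N';'
             + N'ALTER DATABASE MyBigDB ADD FILE (NAME = MyBigDB_' + @ym
             + N', FILENAME = N''E:\Data\MyBigDB_' + @ym + N'.ndf'', SIZE = 300GB)'
             + N' TO FILEGROUP FG_' + @ym + N';';
    EXEC (@sql);

    -- ...followed by CREATE TABLE on the new filegroup and an ALTER VIEW that
    -- regenerates the UNION ALL list from the existing monthly tables.
    ```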

    --Jeff Moden



  • On top of what Jeff said, I would strongly consider upgrading to SQL 2016 SP1 Standard (or 2017 Standard).
    With this upgrade you would get:
    - table and index partitioning
    - table compression
    - use of 24 cores instead of the 16 you can use now
    - columnstore (could be of interest here)

    You could probably check what the final benefit of compression on this db would be by setting up a Developer instance on another machine and copying a subset of data into tables with compression enabled.
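
    On an instance whose edition supports compression (Developer edition does), the likely savings can also be estimated object by object with the built-in procedure:

    ```sql
    -- Samples the object and reports current vs. compressed size.
    EXEC sp_estimate_data_compression_savings
         @schema_name      = 'dbo',
         @object_name      = 'table34',
         @index_id         = NULL,   -- all indexes
         @partition_number = NULL,   -- all partitions
         @data_compression = 'PAGE';
    ```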

    What Jeff said would still apply, except for the partitioned view aspect.

  • To be honest, I actually prefer Partitioned Views over Partitioned Tables.  Try restoring just 3 months of a Partitioned Table to another server sometime and you'll
    see what I mean.

    Also, the nature of a partition can change just because the data becomes static in the older months.  That also means that the index requirements can change. 
    Try to put different indexes on different partitions to support that notion and still be able to do a SWITCH.  😉  The non-aligned indexes that you must use to do
    such a thing aren't going to let you do a SWITCH.  You might be able to use a filtered index, but it will need to be maintained every time you roll an old partition
    into the world of "static" data.

    --Jeff Moden



  • Thanks, guys!

    This is going to take me some time to implement. I will let everyone know what happens.
    Thanks,
    David
