Another Backup Question...

  • Hi

    I am running a maintenance plan which runs at 12.01 am against an MS CRM database.

    This maintenance plan has the following steps, in this order:

    Check DB

    Reorganize indexes

    Update stats

    Full backup

    This runs fine.
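
    For reference, a rough T-SQL sketch of what those four steps boil down to (the database name CRM_MSCRM, the table name, and the backup path are placeholders, not my real ones):

        -- 1. Check DB
        DBCC CHECKDB ('CRM_MSCRM') WITH NO_INFOMSGS;

        -- 2. Reorganize indexes (the plan currently does this for every index)
        ALTER INDEX ALL ON dbo.SomeTable REORGANIZE;   -- repeated for each table

        -- 3. Update stats
        USE CRM_MSCRM;
        EXEC sp_updatestats;

        -- 4. Full backup
        BACKUP DATABASE CRM_MSCRM
        TO DISK = N'D:\Backups\CRM_MSCRM_Full.bak'
        WITH INIT, STATS = 10;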

    I then have a log backup job which runs every two hours and takes a backup at 2 am. This 2 am backup is huge and far bigger than the actual DB size.

    I gather it is so large because of the index reorg, as the other log backups that run for the rest of the day are a lot smaller.

    Are there any ways of reducing the size of the 2 am log backup? I thought that by doing the full backup after the reorg, future log backups would only contain changes made after the full backup.

    Thanks

  • Full and differential backups do not truncate the transaction log; that is only done by a transaction log backup.
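
    To illustrate (the database name and paths are placeholders):

        -- A full backup copies the data pages plus just enough log to make the
        -- backup consistent, but it does NOT clear the log:
        BACKUP DATABASE CRM_MSCRM
        TO DISK = N'D:\Backups\CRM_MSCRM_Full.bak';

        -- Only a log backup marks the inactive part of the log as reusable
        -- (under the FULL or BULK_LOGGED recovery model):
        BACKUP LOG CRM_MSCRM
        TO DISK = N'D:\Backups\CRM_MSCRM_Log.trn';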

  • So is there any way of stopping all of the index reorg activity from being written to the log? That data is not really needed and it just makes the log huge.

    A normal log backup is less than half a gig, while the log backup after the reorg is around 20 gig.

  • Nope. You might be able to switch to the BULK_LOGGED recovery model during the index rebuild/reorg, but that isn't really going to affect the size of the t-log backup file.
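
    If you want to try it anyway, a minimal sketch would be something like this (assuming the database normally runs in FULL recovery; names and paths are placeholders). Bear in mind that only rebuilds are minimally logged under BULK_LOGGED, a reorganize is always fully logged, and the log backup still contains the extents changed by the minimally logged work, so the backup file stays large:

        -- Switch to BULK_LOGGED for the index maintenance window
        ALTER DATABASE CRM_MSCRM SET RECOVERY BULK_LOGGED;

        -- ... run the index rebuild step here ...

        -- Switch back, then take a log backup straight away so
        -- point-in-time restore is possible again from this point on.
        ALTER DATABASE CRM_MSCRM SET RECOVERY FULL;
        BACKUP LOG CRM_MSCRM TO DISK = N'D:\Backups\CRM_MSCRM_Log.trn';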

  • I was thinking of doing the following to combat this:

    normal log backup at 12.00 am

    The maintenance plan runs after this, which is:

    DB check

    Index reorg

    Log backup - to a separate folder

    Full backup

    Then a cleanup task which deletes that log backup

  • nigelc (5/27/2010)


    I was thinking of doing the following to combat this:

    normal log backup at 12.00 am

    The maintenance plan runs after this, which is:

    DB check

    Index reorg

    Log backup - to a separate folder

    Full backup

    Then a cleanup task which deletes that log backup

    If you delete the log backup created between the two full backups, and the second full backup file turns out to be corrupt or damaged in some way, you have broken your t-log chain and won't be able to restore to a point in time using the first full backup file.

    Personally, I wouldn't fret the small stuff, and the first t-log backup being fairly large is actually small stuff in the grand scheme of database recovery!
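
    To see why the chain matters, a point-in-time restore needs every log backup taken since the full backup you start from, applied in sequence (the file names below are placeholders):

        -- Start from the full backup, leaving the database restoring
        RESTORE DATABASE CRM_MSCRM
        FROM DISK = N'D:\Backups\CRM_MSCRM_Full.bak'
        WITH NORECOVERY;

        -- Apply each log backup in order...
        RESTORE LOG CRM_MSCRM
        FROM DISK = N'D:\Backups\CRM_MSCRM_Log_0200.trn'
        WITH NORECOVERY;

        -- ...and stop at the desired point in time on the last one
        RESTORE LOG CRM_MSCRM
        FROM DISK = N'D:\Backups\CRM_MSCRM_Log_0400.trn'
        WITH STOPAT = '2010-05-27 03:30', RECOVERY;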

  • It's more the fact that we only have so much space, and this large log backup is consuming a large amount of it. It's the difference between having the maintenance plan automated rather than having to constantly monitor space.

    thanks

  • Big question: why rebuild/reorg every index in the database every time? If you search SSC you will find several scripts that will allow you to selectively rebuild/reorg indexes as needed.

    This is what I do on our LOB systems.

  • What sizes are we talking about? I'm curious what "big" and "huge" are on your system.

    I second Lynn's suggestion. You don't need to rebuild all indexes every day.

  • They are not huge in relation to other DBs out there, but they are in relation to the DB itself (if you get what I mean).

    The DB is around 14 gig.

    The trans log backup done every 2 hours can be up to a gig, but is usually around 200 MB.

    The trans log backup done after the index reorg was 24 gig the previous night and 17 gig last night.

  • Our SIS database was using the BULK_LOGGED recovery model until a couple of months ago, when I took over responsibility for the database and changed the recovery model to FULL. After that I noticed that the transaction log file would grow to 32 GB in two days. I discovered that the former DBA was rebuilding every index on every table every night; the process ran 75 to 80 minutes daily. I modified the process to only deal with indexes that are more than 10% fragmented: if the fragmentation is > 10% and < 30%, the process does a reorganization, and if it is 30% or more it rebuilds the index. This process now runs in 5 minutes or less.

    If your t-log backup is so large, it is because you are rebuilding or reorganizing every index in your database. If you search SSC you will find several procedures that you can use to do the same thing for your databases.
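
    A minimal sketch of that kind of threshold-driven maintenance, assuming the 10% / 30% cut-offs above (run it inside the target database; the page-count filter is just a common-sense guard, adjust to taste):

        DECLARE @sql nvarchar(max);

        -- Build one ALTER INDEX statement per fragmented index
        DECLARE frag_cur CURSOR FOR
            SELECT N'ALTER INDEX ' + QUOTENAME(i.name)
                 + N' ON ' + QUOTENAME(s.name) + N'.' + QUOTENAME(o.name)
                 + CASE WHEN ps.avg_fragmentation_in_percent >= 30
                        THEN N' REBUILD;'
                        ELSE N' REORGANIZE;'
                   END
            FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ps
            JOIN sys.indexes AS i ON i.object_id = ps.object_id AND i.index_id = ps.index_id
            JOIN sys.objects AS o ON o.object_id = ps.object_id
            JOIN sys.schemas AS s ON s.schema_id = o.schema_id
            WHERE ps.avg_fragmentation_in_percent > 10
              AND ps.index_id > 0        -- skip heaps
              AND ps.page_count > 100;   -- ignore tiny indexes

        OPEN frag_cur;
        FETCH NEXT FROM frag_cur INTO @sql;
        WHILE @@FETCH_STATUS = 0
        BEGIN
            EXEC sp_executesql @sql;
            FETCH NEXT FROM frag_cur INTO @sql;
        END;
        CLOSE frag_cur;
        DEALLOCATE frag_cur;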

  • That's great, thanks for all your help

    🙂

  • That sounds high to me. I would look at building a reindexing procedure that only reindexes those tables that need it. There are lots of scripts on this site that check the level of fragmentation and then only rebuild certain tables.

