Maintenance plan degrades performance

  • I have noticed that after assigning maintenance plans, my overall DB server performance has degraded. I have scheduled all the maintenance plans during the night hours only, but I am still experiencing performance problems.

  • What sort of tasks are you doing in your maintenance plan? If you're updating statistics, what sample rate are you using?

    We had an issue once where the maintenance plan used a different sample rate than would otherwise be the case, which meant that the optimizer decided to use less-than-optimal indexes.
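
    If you want to check what sample was actually used on your system, DBCC SHOW_STATISTICS shows it in the header output; compare "Rows Sampled" against "Rows" (the table and index names below are just placeholders):

        -- Header output includes Rows and Rows Sampled, which together
        -- give the effective sample rate (placeholder object names)
        DBCC SHOW_STATISTICS ('dbo.Orders', 'IX_Orders_CustomerID')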

  • I'm not doing reindexing with the maintenance plan wizard; I have made a separate job for this, which is scheduled on a 15-day basis.

    I am using a sample of 30% for the statistics update.

    I have set "remove unused space from database files" to run when the database grows beyond 5 MB (see the T-SQL sketch at the end of this post).

    The max size of my data file is restricted to 50 MB, and the initial size was 5 MB at the time the database was created.

    This part I have scheduled on a weekly basis, i.e. on Sunday.

    Now please suggest where I am going wrong.
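
    For reference, the settings above correspond roughly to the following T-SQL (table and database names are placeholders; the wizard drives these tasks itself, but the hand-rolled equivalents look something like this):

        -- Update statistics with a 30% sample (placeholder table name)
        UPDATE STATISTICS dbo.Orders WITH SAMPLE 30 PERCENT

        -- Shrink the database, leaving 10% free space; this is roughly
        -- what the "remove unused space" option does in the plan
        DBCC SHRINKDATABASE (MyDb, 10)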

  • I'm still confused about this... please suggest.

  • Hi there, sorry I was offline all weekend. Shocking, isn't it?

    Let's see. First, can you check that auto-update statistics is set for your database? If it is, then you probably don't need to update them as part of a maintenance job (see the bit below before deciding). I tend to be a control freak and want to know exactly when SQL is going to update the stats; historically, SQL has always chosen the busiest time to update them when it was set to automatic (for valid reasons), which in turn made the system busier still. Then again, I'm dealing with databases bigger than 1 TB, so it's not going to be a couple-of-seconds job to update them.
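
    Checking (and enabling) it is quick; MyDb below is a placeholder for your database name:

        -- Returns 1 if auto-update statistics is enabled, 0 if not
        SELECT DATABASEPROPERTYEX('MyDb', 'IsAutoUpdateStatistics')

        -- Turn it on if it isn't already
        ALTER DATABASE MyDb SET AUTO_UPDATE_STATISTICS ON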

    Anyway, back to what you asked about. If the statistics are being updated automatically, then SQL by default is only using a 10% sample; if you're specifying 30% in the plan, this may be where the problem is coming in (sampling a different data range). All may be well for a short while after the plan updates them, but then SQL could be coming along and updating them again with a 10% sample.
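
    One way to catch this happening is STATS_DATE: if the last-updated time is later than your maintenance job ran, the auto-update has kicked in behind it (dbo.Orders is a placeholder):

        -- Last update time for each statistics object on a table
        -- (sysindexes works on SQL 2000; it's a compatibility view later on)
        SELECT name, STATS_DATE(id, indid) AS last_updated
        FROM sysindexes
        WHERE id = OBJECT_ID('dbo.Orders')
          AND indid > 0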

    If your data is fairly static in nature (mostly added to but rarely updated), then you might want to consider disabling the auto-update option and doing a nightly stats update via the maintenance plan. I'd suggest considering increasing the sample rate up to 100%, for no reason other than that's the one I use.
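
    A nightly full-scan pass would look something like this; sp_MSforeachtable is undocumented but has shipped with SQL Server for years, so treat it as use-at-your-own-risk:

        -- Full-scan statistics update for a single table (placeholder name)
        UPDATE STATISTICS dbo.Orders WITH FULLSCAN

        -- Or every user table in one go (undocumented procedure)
        EXEC sp_MSforeachtable 'UPDATE STATISTICS ? WITH FULLSCAN'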

    If, however, your data is dynamic, then I'd suggest sticking with the auto-update stats option and not doing it in the maintenance plan.

    Hope this helped a little (sorry if I went off at a tangent slightly; it's been a busy weekend).

    Mike

  • OK Mike, that's a nice explanation of the stats.

    I am wondering what value I should use for "remove unused space from database files when it grows beyond" so that it fulfils my database's needs. The max size of my data file is restricted to 50 MB and the initial size is 5 MB with a file growth of 10%; the max size of the transaction log is restricted to 100 MB and the initial size was 5 MB with a file growth of 10%.

    How should I make this "unused space" selection?

  • To be honest, I only ever let the MDF file grow by itself under extreme circumstances (never had one yet). I tend to allocate enough storage for the next six months to cut down on file fragmentation; it also helps me with capacity planning (you'll need twice the size of the largest table anyway, due to the reindex tasks). Because of this, I never remove unused space regardless of how much there is, mainly because I set the size at that point in the first place.
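
    Sizing the file up front is a one-liner; the names and size below are placeholders for whatever six months of growth works out to on your system:

        -- Grow the data file once, up front, instead of in 10% increments
        -- at runtime (database and logical file names are placeholders)
        ALTER DATABASE MyDb
        MODIFY FILE (NAME = MyDb_Data, SIZE = 50MB)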

    With regard to the LDF file, I back up every hour, which means that it rarely gets above 2 GB (normally it's around 1 GB, but you need to plan for maximum usage times). However, I've set this at 8 GB (four times larger than the maximum) to allow maintenance tasks to run properly (I still have the autogrow option enabled but don't really want it to ever happen). I have some automated jobs that inform me when the log file gets to 75% full, and if for some reason it ever gets to 90% (I've only really seen this when running a DBCC INDEXDEFRAG during the day), it tells me and does an automatic backup, removing the need for the file to grow further.
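
    The hourly backup and the percentage-full check boil down to something like this; the path is a placeholder, and an alerting job would capture the DBCC output into a table and compare it against the 75% and 90% thresholds:

        -- Hourly transaction log backup (path is a placeholder)
        BACKUP LOG MyDb TO DISK = 'E:\Backups\MyDb_log.bak'

        -- Reports percentage of log space used for every database
        DBCC SQLPERF (LOGSPACE)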

    In my opinion, with a production system you need to be proactive rather than reactive. Letting things happen by themselves should be a last resort rather than a way of life; it's much easier to sit down every couple of months and plan storage requirements, but you also need to have the monitoring working properly in order to trust it.

    Hope this helped
