Dashboard-style monitoring tools

  • Hi,

    I wondered if anyone could recommend a monitoring tool for SQL Server? At the moment I have around 30 servers to monitor, and although I manage this fine through Enterprise Manager for most things, there are a couple of areas that would be a lot quicker if I could have an 'at a glance' kind of view. Those areas are file and filegroup sizes (particularly log files), failed jobs, and the last few hours of the SQL Server error log.

    I was thinking of a dashboard kind of thing. It is mainly because, out of the 200-ish databases we have, there are around 40 whose log files grow inordinately, and we have to shrink them from time to time. As far as I'm aware this is a bit of a known issue in SQL Server and can only be fixed by going into the Simple recovery model, which isn't an option for us. So I would love to see on a daily basis which files are big and need to be dealt with, along with the other things I've mentioned....

    Any ideas would be great.

    Thanks,

    Paula.

  • We use LiteSpeed, Spotlight and PA (Performance Analysis).

    Edit:

    Spotlight for live monitoring.

    PA for analysis of past activity.

    ----------
    Ashish

  • Thanks for this. Spotlight looks pretty good, although I want to be able to see the actual sizes of files next to one another, as in the size of the data file and the size of the log file. That way I could easily identify whether the log was an unreasonable size for its database. Just looking at the amount of physical space used on the server is not really good enough. Although that will give me confidence that the server isn't going to get full without me knowing, I want to catch these giant logs before they affect individual database performance.
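
    That side-by-side view can also be pulled straight out of master with one query. A minimal sketch (not from the thread), assuming the SQL Server 2000-era system tables that Enterprise Manager implies; on 2005 and later, sys.master_files exposes the same information. sysaltfiles stores sizes in 8 KB pages, and groupid 0 marks the log file:

        -- Data vs. log size for every database on the instance (sketch only).
        SELECT  DB_NAME(dbid)                                            AS database_name,
                SUM(CASE WHEN groupid <> 0 THEN size ELSE 0 END) / 128.0 AS data_mb,
                SUM(CASE WHEN groupid = 0  THEN size ELSE 0 END) / 128.0 AS log_mb
        FROM    master.dbo.sysaltfiles
        GROUP BY dbid
        ORDER BY log_mb DESC;

    Run against each server, anything whose log_mb dwarfs its data_mb stands out immediately.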

  • For this you can configure threshold limits for the SQL Server files, and similarly thresholds for the physical drive OS counters, which will generate alerts/messages as soon as a threshold limit is about to be reached or has been crossed.

    ----------
    Ashish

  • There are many excellent monitoring tools out there. Most of them get the job done. Mainly there are differences in approach and output. The thing to do is explore which ones do what you like in a way you like for a price you can afford.

    Personally, I strongly recommend Red Gate SQL Monitor (disclosure: I work for them and my job is to sing their praises). It's a great little tool. It uses a web-based dashboard to do exactly the type of monitoring and alerting that you're describing, plus the price point is pretty darned good. You can even see the tool in action, because it's used to monitor SQL Server Central and we expose that monitor to everyone.

    But if you have some ideas about which monitoring tools you might like, I've tried all the major ones and can tell you which ones are better for certain scenarios. For example, if your primary concern is performance monitoring, especially with a goal of fixing badly behaving procedures, you might want to check out Confio Ignite. They've focused more on that area and less on keeping track of disk space and that sort of thing.

    ----------------------------------------------------
    The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood... Theodore Roosevelt
    The Scary DBA
    Author of: SQL Server 2017 Query Performance Tuning, 5th Edition and SQL Server Execution Plans, 3rd Edition
    Product Evangelist for Red Gate Software

  • Thanks for this. I'm not sure what I'm looking for is a very common need. I don't really care about the amount of space left on the server (we have tools already for that sort of thing), I just want to have something that gives me an easy view of the size of log files and data files. When I use Enterprise Manager, I can right-click on the database name, go to Properties and then Files, and see how big in MB the database files are. If the log file looks bigger than I expect, or far too big for the size of the data file, then I run a script to shrink it. My problem is that I have to connect to 30 servers and 200 databases, and then pick through the ones I think might be problematic, then go in, check the size and shrink. It's just laborious, and I thought if I could see a list of these sizes I could easily pinpoint which ones need shrinking on a daily basis.

    And on top of that, a quick error log check and a list of failed jobs.....
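
    For the failed-job and error-log part of the wish list, a couple of standard msdb/master queries go a long way. A minimal sketch (not from the thread; these system objects exist on SQL Server 2000 and later, and xp_readerrorlog is undocumented but long-standing):

        -- Recent failed SQL Agent jobs (job outcome rows only).
        SELECT  j.name,
                h.run_date,      -- int, YYYYMMDD
                h.run_time,      -- int, HHMMSS
                h.message
        FROM    msdb.dbo.sysjobs j
        JOIN    msdb.dbo.sysjobhistory h ON h.job_id = j.job_id
        WHERE   h.run_status = 0     -- 0 = failed
          AND   h.step_id = 0        -- step 0 = overall job outcome
        ORDER BY h.run_date DESC, h.run_time DESC;

        -- Current SQL Server error log.
        EXEC master.dbo.xp_readerrorlog;

    Both could be wrapped into a small report that runs against each of the 30 servers every morning.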

  • I like the Red-Gate tool as well, due to the alerts that it provides. I also like, and use, SQL Sentry, which in my opinion has the best disk views I have seen (though I have not spent a lot of time with them all). The reports that they offer cover what I think you are asking for as well.

    Note: I don't get paid by either Red-Gate or SQL Sentry. 😛

    David

    @SQLTentmaker

    “He is no fool who gives what he cannot keep to gain that which he cannot lose” - Jim Elliot

  • Thanks, I'll check out SQL Sentry.

  • Paula-196779 (2/15/2011)


    Thanks for this. I'm not sure what I'm looking for is a very common need. I don't really care about the amount of space left on the server (we have tools already for that sort of thing), I just want to have something that gives me an easy view of the size of log files and data files. When I use Enterprise Manager, I can right-click on the database name, go to Properties and then Files, and see how big in MB the database files are. If the log file looks bigger than I expect, or far too big for the size of the data file, then I run a script to shrink it. My problem is that I have to connect to 30 servers and 200 databases, and then pick through the ones I think might be problematic, then go in, check the size and shrink. It's just laborious, and I thought if I could see a list of these sizes I could easily pinpoint which ones need shrinking on a daily basis.

    And on top of that, a quick error log check and a list of failed jobs.....

    Wait, let's stop talking monitoring for a second. Why are you shrinking logs on a regular basis? If the logs are of a certain size, barring some huge mistake, they're that size for a reason. Plus, shrinking over & over leads to major fragmentation on your disk. Fragmentation that you can't easily fix without shutting down your servers. Fragmentation that will absolutely hammer performance.

    ----------------------------------------------------
    The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood... Theodore Roosevelt
    The Scary DBA
    Author of: SQL Server 2017 Query Performance Tuning, 5th Edition and SQL Server Execution Plans, 3rd Edition
    Product Evangelist for Red Gate Software

  • I realise this, but there seems to be a problem with SQL Server (known, it would seem, if you Google around) where sometimes, and as far as I can tell it is entirely random, a database log file grows way beyond where it needs to be. The only thing to do is to shrink it. I have literally created a database in the past for a small system and within a month the log is GBs whilst the data and indexes are still in MBs. We back up the database and logs every day, but some just don't seem to respond. We've had this for 5 years on all versions of SQL Server.

  • Paula-196779 (2/15/2011)


    I realise this, but there seems to be a problem with SQL Server (known, it would seem, if you Google around) where sometimes, and as far as I can tell it is entirely random, a database log file grows way beyond where it needs to be. The only thing to do is to shrink it. I have literally created a database in the past for a small system and within a month the log is GBs whilst the data and indexes are still in MBs. We back up the database and logs every day, but some just don't seem to respond. We've had this for 5 years on all versions of SQL Server.

    Were you taking log backups as well? They won't release the space otherwise.
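
    For context, in the FULL recovery model only a log backup marks the inactive part of the log as reusable; a full database backup alone does not. A minimal sketch, with a placeholder database name, logical file name and path, and with the shrink shown only as a one-off correction rather than the routine shrinking cautioned against above:

        -- Back up the transaction log so the space inside it can be reused.
        -- Placeholders throughout; BACKUP LOG appends to the file by default.
        BACKUP LOG [SalesDB]
        TO DISK = N'D:\Backups\SalesDB_log.trn';

        -- One-off correction if the file itself has already ballooned:
        -- shrink the log file back to a sensible target size (here 512 MB).
        USE [SalesDB];
        DBCC SHRINKFILE (SalesDB_log, 512);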

  • Paula-196779 (2/15/2011)


    I realise this, but there seems to be a problem with SQL Server (known, it would seem, if you Google around) where sometimes, and as far as I can tell it is entirely random, a database log file grows way beyond where it needs to be. The only thing to do is to shrink it. I have literally created a database in the past for a small system and within a month the log is GBs whilst the data and indexes are still in MBs. We back up the database and logs every day, but some just don't seem to respond. We've had this for 5 years on all versions of SQL Server.

    Not sure where on Google you were finding that log growth is random or problematic in the sense of erroneous behavior, but in truth log growth is definitely associated with activity. That being said, if you are seeing periods of log growth then you are seeing higher activity during those times, which necessitates it.

    There are a few solutions here: first would be to take more frequent log backups during those times (which of course requires more log backup space); second would be to consider using the SIMPLE recovery model IF you don't need to recover to a point in time; third is to add more space to the volume that the logs reside on and plan for the growth. If you do need point-in-time recovery then the best option is option 1, taking more frequent log backups.
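
    For the second option, the recovery model switch is a single statement; a minimal sketch with a placeholder database name, and only appropriate if point-in-time recovery genuinely isn't needed:

        -- Switch to SIMPLE recovery: the log truncates on checkpoint,
        -- but point-in-time restore is no longer possible.
        ALTER DATABASE [SalesDB] SET RECOVERY SIMPLE;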

    One last thought would be to write a little script that checks the log file usage and gives yourself some alerts when you are starting to see them get full. Watch that for a while and see how often you need to take a log backup. Check out DBCC SQLPERF(LOGSPACE); I think you will find that helpful.
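
    A minimal sketch of that kind of check (not David's actual script; the thresholds are arbitrary placeholders) captures the DBCC SQLPERF(LOGSPACE) output into a temp table and flags the logs that are large or nearly full:

        -- Capture DBCC SQLPERF(LOGSPACE) output for every database on the instance.
        CREATE TABLE #logspace (
            database_name       sysname,
            log_size_mb         float,
            log_space_used_pct  float,
            status              int
        );

        INSERT INTO #logspace
        EXEC ('DBCC SQLPERF(LOGSPACE)');

        -- Flag anything large or nearly full (placeholder thresholds).
        SELECT  database_name, log_size_mb, log_space_used_pct
        FROM    #logspace
        WHERE   log_size_mb > 1024
           OR   log_space_used_pct > 75
        ORDER BY log_size_mb DESC;

        DROP TABLE #logspace;

    Scheduled as an Agent job, this gives the daily "which logs need attention" list without a third-party dashboard.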

    David

    @SQLTentmaker

    “He is no fool who gives what he cannot keep to gain that which he cannot lose” - Jim Elliot

  • Thanks very much for that; I think we do need to look at taking more frequent log backups. We currently back up daily, but some of our systems are large and probably need it more often, especially those that have large batch schedules.

  • Paula-196779 (2/15/2011)


    Thanks very much for that; I think we do need to look at taking more frequent log backups. We currently back up daily, but some of our systems are large and probably need it more often, especially those that have large batch schedules.

    Logs are directly associated with activity. It's not random. Even when I was talking about something going wrong, I meant someone ran an UPDATE without a WHERE clause, or there was a new insert of millions of rows in a single transaction, or something. You should not be seeing just random increases in the log.

    Once a day with FULL recovery? That's absolutely not frequent enough. Think about it like this: your ability to do a point-in-time recovery of the data without a tail-log backup is completely dependent on the last time you did a log backup. You have to ask the business, how much data are you prepared to completely lose, non-recoverable, it's just gone? I'd be surprised if they were willing to part with more than half an hour. That's your starting point for log backups.

    Now most businesses' first response is going to be zero time lost, of course. So you tell them how much the mirroring and SAN mirroring and all the other mechanisms for a near-zero (because zero is impossible) loss setup will cost, and you start the negotiations from there. Most of the systems I used to manage were on a 15-minute log backup. A few were on 10.
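
    As a rough illustration of what a 15-minute schedule looks like, here is a hypothetical SQL Agent job sketch; the database name, job name and backup path are placeholders, and a real job would normally generate a unique backup file name per run:

        USE msdb;
        GO

        -- Hypothetical job that backs up one database's log every 15 minutes.
        EXEC dbo.sp_add_job         @job_name = N'Log backup - SalesDB';

        EXEC dbo.sp_add_jobstep     @job_name = N'Log backup - SalesDB',
                                    @step_name = N'Backup log',
                                    @subsystem = N'TSQL',
                                    @command = N'BACKUP LOG [SalesDB] TO DISK = N''D:\Backups\SalesDB_log.trn''';

        EXEC dbo.sp_add_jobschedule @job_name = N'Log backup - SalesDB',
                                    @name = N'Every 15 minutes',
                                    @freq_type = 4,             -- daily
                                    @freq_interval = 1,
                                    @freq_subday_type = 4,      -- sub-day unit = minutes
                                    @freq_subday_interval = 15;

        EXEC dbo.sp_add_jobserver   @job_name = N'Log backup - SalesDB';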

    ----------------------------------------------------
    The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood... Theodore Roosevelt
    The Scary DBA
    Author of: SQL Server 2017 Query Performance Tuning, 5th Edition and SQL Server Execution Plans, 3rd Edition
    Product Evangelist for Red Gate Software

  • Have you heard of SQL Stripes?

    It may be the monitoring solution you're looking for.

    (www.sqlstripes.com)
