Mini-Me

  • Yes, typically, I would have jobs in SQL Server Agent on the various servers, and they would normally email (we used SQLAnswersMail on SQL Server 2000 because it has nice HTML formatting and allows a lot of flexibility).

    The jobs would email the appropriate people when they detected problems. Some would run daily, others weekly or monthly. The emails were always designed to go to the people who would fix the problem and to be sent on behalf of the people who could answer any questions (sometimes these are the same person!). Preferably, the bad data appeared in the email as a link to a web-based application's maintenance page where the problem could be corrected.

    I always liked to ensure that the addresses for the recipients were either a mailing list or stored in a table (or sometimes the email address was extracted from the system - like the person who was the rep for the customer), so that the job never needed to be changed for simple assignment changes. In cases where the rep for a customer had to do something, there was a fallback: if the rep's email address was NULL (or the rep was terminated), it would go to the department head (and then, if that wasn't available, to a DBA). So even the jobs were anticipating problems with the integrity of their email configuration, to some extent.
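
    A minimal sketch of that fallback, assuming hypothetical Customer and Employee tables (all names here are illustrative, not from a real system):

        -- Resolve who to notify about a customer's data problem:
        -- the rep, else the department head, else the DBA team.
        DECLARE @CustomerID int, @Recipient varchar(255);
        SET @CustomerID = 42;  -- example value

        SELECT @Recipient = COALESCE(
                   CASE WHEN rep.IsTerminated = 0 THEN rep.Email END,
                   head.Email,
                   'dba-team@example.com')
        FROM dbo.Customer AS c
        LEFT JOIN dbo.Employee AS rep  ON rep.EmployeeID  = c.RepID
        LEFT JOIN dbo.Employee AS head ON head.EmployeeID = rep.ManagerID
        WHERE c.CustomerID = @CustomerID;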

  • And another thing - being proactive, I would never do this stuff twice. If I had to investigate a problem (and couldn't easily prevent it with a CONSTRAINT or something), at the very least I would set up a job to email me about it - even embedding in the email the investigatory SQL I would need to figure it out. The next time that job detected the problem, I would ramp it up into a full-blown system (based on that SQL) to identify who should be fixing these problems and give them as much information as they needed to get it fixed.
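
    On a modern instance that pattern could look like this, using Database Mail (sp_send_dbmail is SQL Server 2005+; the check itself and the table names are hypothetical):

        -- Job step: only send the alert when the problem is detected,
        -- and ship the investigatory SQL's output along with it.
        IF EXISTS (SELECT 1 FROM dbo.Orders WHERE ShipDate < OrderDate)
        BEGIN
            EXEC msdb.dbo.sp_send_dbmail
                @profile_name = 'DBA Mail',          -- assumed mail profile
                @recipients   = 'dba@example.com',
                @subject      = 'Orders shipped before they were ordered',
                @body         = 'Results of the diagnostic query are attached.',
                @query        = 'SELECT OrderID, OrderDate, ShipDate
                                 FROM MyDB.dbo.Orders
                                 WHERE ShipDate < OrderDate;',
                @attach_query_result_as_file = 1;
        END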

  • We maintain a database called the Data Quality Engine (DQE) into which an operator can insert various kinds of tests that target data on either our SQL Server or Oracle systems. Most tests run in a daily job, and results that fall outside certain boundaries are added to an email alert.
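
    One plausible way to model such a test repository (a sketch only - the DQE's actual schema isn't shown here; types assume SQL Server 2005+):

        -- One row per data-quality test; the daily job runs TestSQL
        -- and alerts when the result falls outside the boundaries.
        CREATE TABLE dbo.DataQualityTest (
            TestID       int IDENTITY(1,1) PRIMARY KEY,
            TestName     varchar(100)  NOT NULL,
            TargetSystem varchar(20)   NOT NULL,  -- 'SQLServer' or 'Oracle'
            TestSQL      nvarchar(max) NOT NULL,  -- must return one numeric value
            MinExpected  decimal(18,2) NULL,      -- NULL = no lower bound
            MaxExpected  decimal(18,2) NULL,      -- NULL = no upper bound
            AlertEmail   varchar(255)  NOT NULL
        );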

  • We have a fairly large Access application with a SQL Server 2000 back-end (about 120 users), which is one of our core systems - local government managing properties, rates, and all sorts of applications (building consents, resource consents, etc.).

    It's quite a configurable application, which is great in letting us adapt it to all sorts of processes, but it's not so good at enforcing data integrity or ensuring users follow proper processes.

    We have created a wide variety of audit scripts (I think we're up to about 200 now) that check all sorts of data entry, relationships, etc. Each can be scheduled to run daily, weekly, or monthly on any given day, fire emails back to the users to notify them to fix the data, and escalate the issues if they are left unresolved.

    I'm sure there are plenty of tools around to do this, but we just created a simple DTS package with a couple of control tables and a series of scripts, along the lines sketched below. It saves us heaps of time and helps identify repeat offenders, who might need further training or just a damn good thrashing.
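
    The control tables might look something like this (a sketch with made-up names, not Chris's actual package; modern types used for brevity):

        -- One row per audit script, with its schedule and escalation path.
        CREATE TABLE dbo.AuditScript (
            AuditID           int IDENTITY(1,1) PRIMARY KEY,
            AuditName         varchar(100)  NOT NULL,
            AuditSQL          nvarchar(max) NOT NULL,  -- returns the offending rows
            Frequency         varchar(10)   NOT NULL,  -- 'Daily', 'Weekly', 'Monthly'
            RunDay            tinyint       NULL,      -- day of week/month if relevant
            NotifyEmail       varchar(255)  NOT NULL,  -- user asked to fix the data
            EscalateEmail     varchar(255)  NOT NULL,  -- manager, if left unresolved
            EscalateAfterDays int           NOT NULL DEFAULT 7
        );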

    Chris

  • I regularly write SQL queries to check for certain conditions in my data. We've developed a system to automate the calculation of teacher bonuses based on student assessment scores. The system is basically dependent on the right data being in the right place at the right time, which doesn't always happen. Therefore, we use a number of SQL queries to identify conditions such as mismatched data or missing critical data. Although developing the queries consumed a lot of time, they continue to pay great dividends when used proactively.
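
    The checks themselves are usually plain NOT EXISTS or outer-join queries; for instance (hypothetical table names):

        -- Assessment scores with no matching student enrollment
        SELECT s.StudentID, s.AssessmentDate
        FROM dbo.AssessmentScore AS s
        WHERE NOT EXISTS (SELECT 1 FROM dbo.Enrollment AS e
                          WHERE e.StudentID = s.StudentID);

        -- Teachers missing the assignment data the bonus calculation needs
        SELECT t.TeacherID, t.TeacherName
        FROM dbo.Teacher AS t
        LEFT JOIN dbo.TeacherAssignment AS a ON a.TeacherID = t.TeacherID
        WHERE a.TeacherID IS NULL;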

  • Wow - I am very happy to see that there are others who have jobs to report on the health of the system. I had to set these up years ago because I was the only DBA and the environment was growing faster than I could keep up with the maintenance.

    I have daily e-mails that I check in the morning to verify jobs, data integrity, and db/server space.

    My motto is to be proactive rather than reactive. The only way to do this is to track what is going on and try to fix problems before users know there are any.
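
    The job-verification part of such a morning email can come straight out of msdb; a sketch (the exact checks will vary):

        -- Agent jobs that failed since yesterday
        SELECT j.name AS JobName, h.run_date, h.run_time
        FROM msdb.dbo.sysjobs AS j
        JOIN msdb.dbo.sysjobhistory AS h ON h.job_id = j.job_id
        WHERE h.step_id = 0      -- job-outcome rows, not individual steps
          AND h.run_status = 0   -- 0 = failed
          AND h.run_date >= CAST(CONVERT(char(8), GETDATE() - 1, 112) AS int);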

    Barb

  • Good to see that many people build their own "mini-mes" to help them out.

    And some interesting solutions. I might look to modify some of our systems based on what's in here.

  • I see many of you have far more experience in this domain than I'll ever have.

    I have a new client with a hugely critical server for their system (the central server for the whole world, with thousands of employees connected 24/7, 365 days a year).

    I'd love the experts in this domain to share the scripts they use for prevention and DB maintenance, so that I can implement them without risking the mistakes of the learning curve I'd otherwise have to climb.

    Some instructions would be interesting as well ("this runs 3 times a day because...").

    Thanks in advance.

  • Steve Jones - Editor (11/24/2008)


    Good to see that many people build their own "mini-mes" to help them out.

    And some interesting solutions. I might look to modify some of our systems based on what's in here.

    We have a small database, dubbed "VirtualDBA", which we deploy on critical production servers. It was developed by a previous DBA (for SQL Server 2005; it will potentially work on SQL Server 2008) and is relatively straightforward to use. Once a new server is built, we restore this small database to it and run a setup script that creates jobs that:

    - Audit locks, mirroring, and general security

    - Run daily and weekly maintenance jobs (such as reindexing, shrinking, etc.)

    - Take transaction log backups automatically every half hour

    - Monitor file I/O, indexes, and performance

    and other features. While the previous DBA was very good and we miss the expertise, this tool he left us is invaluable. If we need to turn off a feature, each job that is created (many are made for each database, so a big server ends up with a lot of jobs) can be turned on or off via entries in configuration tables (true/false on a column representing the feature being monitored). Any newly added databases are detected by a specific monitoring job whose sole purpose is to check for them and add them to the configuration tables.

    This could be something useful for anyone to try. Yes, you could deploy scripts as-is on each server, but what about when new databases are added and somehow missed? Having a monitoring database clearly distinguished from the others (it has a small footprint) automates tasks you could easily overlook. Using configuration tables also makes the code more readable, since settings don't have to be hard-coded into the jobs themselves.

    Gaby
    ________________________________________________________________
    "In theory, theory and practice are the same. In practice, they are not."
    - Albert Einstein

  • Hey Gaby, any way you can zip a backup of that DB and post it here in the forums?

  • Ninja's_RGR'us (11/24/2008)


    Hey Gaby, any way you can zip a backup of that DB and post it here in the forums?

    It would be difficult to do so, for a couple of reasons. The first has to do with intellectual property rights: the previous DBA may not want all his work in the public domain. The second is that we have no documentation on it, and while we can work with it, it makes heavy use of quite a few tables and would be hard for anyone else to pick up and use.

    However, here is a general framework of what needs to be done if you were to create your own MiniMeDBA.

    First, in this new database, create a config_DB table with columns such as databasename, monitor (whether to actually monitor this DB - you may want to turn it off sometimes), transaction_log_backup, trans_retention_days, full_backup, full_retention_days, database_shrink, etc.
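
    In T-SQL, that might look like the following (column names from the description above; the types and defaults are my assumptions):

        CREATE TABLE dbo.config_DB (
            databasename           sysname NOT NULL PRIMARY KEY,
            monitor                bit NOT NULL DEFAULT 1,  -- master on/off switch
            transaction_log_backup bit NOT NULL DEFAULT 1,
            trans_retention_days   int NOT NULL DEFAULT 3,
            full_backup            bit NOT NULL DEFAULT 1,
            full_retention_days    int NOT NULL DEFAULT 14,
            database_shrink        bit NOT NULL DEFAULT 0
        );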

    Have a setup script, run as part of a regularly scheduled job (say once an hour or once a day), that scans the databases and, if one is not in this config table, adds it with default values you can set.
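
    The scan itself is one INSERT...SELECT against the catalog (sys.databases assumes SQL Server 2005+):

        -- Register any database the config table doesn't know about yet;
        -- the other columns pick up their declared defaults.
        INSERT INTO dbo.config_DB (databasename)
        SELECT d.name
        FROM sys.databases AS d
        WHERE d.name NOT IN ('master', 'model', 'msdb', 'tempdb')
          AND NOT EXISTS (SELECT 1 FROM dbo.config_DB AS c
                          WHERE c.databasename = d.name);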

    You can have a table for backups, a table for maintenance, a table to audit locks and mirroring, etc., or keep it to just one or two. The key is that they all reference this initial config table. If a column is turned on (or, in our case, set to true), that job's work is performed.

    The job created - let's say transaction log backups for Database1 - will run every 30 minutes no matter what. However, when the job runs, it refers to the transaction log column of the config table, and if that is set to false/off, the step is skipped. This way you don't have to worry about disabling jobs: they always run and are always enabled, but they consult the config tables to see whether the step actually fires or not.
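
    A job step along those lines (a sketch - the backup path and naming convention are made up):

        -- The schedule always fires; the config table decides whether
        -- anything actually happens, so the job is never disabled.
        IF EXISTS (SELECT 1 FROM dbo.config_DB
                   WHERE databasename = 'Database1'
                     AND monitor = 1
                     AND transaction_log_backup = 1)
        BEGIN
            DECLARE @file varchar(260);
            SET @file = 'X:\Backups\Database1_'
                      + CONVERT(char(8), GETDATE(), 112) + '_'
                      + REPLACE(CONVERT(char(8), GETDATE(), 108), ':', '') + '.trn';
            BACKUP LOG Database1 TO DISK = @file;
        END
        -- else: the step simply does nothing this time around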

    There are so many useful scripts on this and other web sites, as well as in the professional magazines, that an internal auditing database is a logical way to tie them all together.

    I am thinking, as an exercise for myself, of creating a simplified version of our VirtualDBA, and hopefully I'll be able to post it for everyone.

    Gaby
    ________________________________________________________________
    "In theory, theory and practice are the same. In practice, they are not."
    - Albert Einstein

  • Maybe we'll get an article out of Gaby sometime 😉

  • Steve Jones - Editor (11/26/2008)


    Maybe we'll get an article out of Gaby sometime 😉

    Working on it, trust me. 😀 Will start with scheduling backups and general maintenance. I hope to have something by mid-December.

    Gaby
    ________________________________________________________________
    "In theory, theory and practice are the same. In practice, they are not."
    - Albert Einstein

  • Never hurts to apply a little more pressure. You've got fans now!

  • I second!
