sqlcmd / MS-DOS admin monitoring script

  • Hi,

I'm trying to write a little monitoring system to ease our admin time.  For example, I want to find all the failed jobs on all our servers and have a daily report of them.  I worked on it for about an hour this morning and have a little test going, with the ultimate plan being this:

    1. Create a table with all of our servers listed

2. Write a script to query that table and generate a batch script that will run several sqlcmd calls and write the output to .txt files in a directory (a sketch follows this list).

3. Create an SSIS package that will read through all the files in that directory, import them, and generate reports.
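
    As a rough sketch of steps 1 and 2 (the table, file, and path names here are hypothetical - adjust to taste), a query like this emits one sqlcmd call per server, and saving the resultset as a .cmd file gives you the batch script:

    -- Hypothetical server list for the monitoring loop
    CREATE TABLE MonitoredServers (
        ServerName sysname NOT NULL PRIMARY KEY
    )

    -- Emit one sqlcmd line per server; save the output as RunChecks.cmd.
    -- -b makes sqlcmd return a nonzero exit code on error, which is what
    -- the error-log detection keys off.
    SELECT 'sqlcmd -S ' + ServerName
         + ' -E -i FailedJobs.sql -b -o C:\monitor\out\' + ServerName + '.txt'
    FROM MonitoredServers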

    My issues:

Right now I've got it detecting the error code from the sqlcmd call and writing an error message out to an error log - that's fine, I can generate a report on that.

I'd like to be able to just write the result of each sqlcmd call to one big file so I could do a single import - but that's not a big deal, thanks to the new looping in SSIS, I believe.

I'd really like to be able to suppress error messages in my output file.  As it stands, if I get an error, the actual SQL error text lands in the output file, and SSIS probably won't like that.  A possible workaround is sketched below.
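
    One possible workaround, building on the sketch above (the -r behavior is worth verifying on your version): sqlcmd's -r1 switch routes error messages to stderr rather than stdout, so they stay out of the -o data file, and a 2>> redirect in the generated line appends them to a separate log:

    -- -r1 sends error messages to stderr so they stay out of the -o file;
    -- 2>> appends whatever lands on stderr to a shared error log
    SELECT 'sqlcmd -S ' + ServerName
         + ' -E -i FailedJobs.sql -b -r1 -o C:\monitor\out\' + ServerName + '.txt'
         + ' 2>> C:\monitor\out\errors.log'
    FROM MonitoredServers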

I'd like to extend this whole system and add another table to hold the SQL scripts I'd like to run, so it would generate the .sql input file for each sqlcmd call (a possible layout is sketched below).  The end result would be: add a server to the table, and it would have any queries we like run against it.
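
    Something like this could hold those queries (again, the names are hypothetical):

    -- One row per query to run against every listed server; ScriptText
    -- gets written out as the -i input file for sqlcmd
    CREATE TABLE MonitorScripts (
        ScriptName varchar(100) NOT NULL PRIMARY KEY,
        ScriptText varchar(8000) NOT NULL
    )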

Anyone have any tips for doing something like this?  Bad idea, good idea?  I'm sick of creating jobs on each server and DTSing everything together.  I suppose we could create a linked server on each server pointing to the monitoring server and create jobs on those that would insert the data across the link...but I'm looking for a centralized solution.

     

I use DTS packages with a connection for each server I want to monitor.  I then have an Execute SQL task for each monitored server that collects whatever I'm looking for and pipes it to my monitor server via OPENROWSET.  Based on what you're looking for, you can have the results go to logging tables that stored procedures query, sending out emails if a flag is raised (a sketch of that piece follows the code below).

     

--Used to insert into the linked/monitor server
INSERT INTO OPENROWSET(
    'SQLoledb',
    'server=servername;uid=youruser;pwd=yourpassword',
    'select blah, blah, blah from yourserver.dbo.yourmonitordatabase'
)
SELECT blah, blah, blah
FROM #tempresults
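
    For that alerting piece, a minimal sketch - the MonitorLog table, its FlagRaised column, and the address are placeholders, and xp_sendmail is the SQL 2000-era call (on 2005 you'd use sp_send_dbmail):

    -- Hypothetical check of the logging table; schedule this as a job step
    CREATE PROC CheckMonitorLog
    AS
    BEGIN
        IF EXISTS (SELECT 1 FROM MonitorLog WHERE FlagRaised = 1)
            EXEC master.dbo.xp_sendmail
                 @recipients = 'dba@yourcompany.com',
                 @subject    = 'Monitor alert',
                 @message    = 'A flag was raised in MonitorLog - investigate.'
    END
    GO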

Try this approach:

1. On your centralized monitor server, create a linked server pointing to each server you want to monitor.

2. Write a simple procedure that accepts a table name and a SQL statement to execute.

e.g.

CREATE PROC GetDataFromLinkedServers @vcSQL varchar(4000), @tabName varchar(100)
AS
BEGIN
    -- Write a cursor to loop on sysservers.
    -- For each server, construct dynamic SQL over the linked server that
    -- executes the @vcSQL passed in and inserts the result into the local
    -- table @tabName on the monitoring server.
    -- You may also include columns such as servername, datacollectiondate,
    -- etc. so that your report can carry historical data.
END
GO
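
    A fuller sketch of that cursor loop (the cursor name and the OPENQUERY approach are one way to do it; srvid > 0 skips the local server entry in sysservers):

    CREATE PROC GetDataFromLinkedServers @vcSQL varchar(4000), @tabName varchar(100)
    AS
    BEGIN
        DECLARE @srv sysname, @sql varchar(8000)

        -- Loop on sysservers; srvid > 0 excludes the local server
        DECLARE curSrv CURSOR FOR
            SELECT srvname FROM master.dbo.sysservers WHERE srvid > 0

        OPEN curSrv
        FETCH NEXT FROM curSrv INTO @srv
        WHILE @@FETCH_STATUS = 0
        BEGIN
            -- Execute the passed SQL on the linked server and land the rows
            -- locally; doubled quotes escape @vcSQL inside the string literal
            SET @sql = 'INSERT INTO ' + @tabName
                     + ' SELECT * FROM OPENQUERY([' + @srv + '], '''
                     + REPLACE(@vcSQL, '''', '''''') + ''')'
            EXEC (@sql)
            FETCH NEXT FROM curSrv INTO @srv
        END
        CLOSE curSrv
        DEALLOCATE curSrv
    END
    GO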

    3. Execute the above procedure for all your report needs.

e.g. GetDataFromLinkedServers 'select @@servername, getdate(), * from sysdatabases', 'AllDatabases'

4. Create your reports.

     


    -- Amit



    "There is no 'patch' for stupidity."



    Download the Updated SQL Server 2005 Books Online.

I have done something similar to what Sam has done, but I'm not using SSIS; instead I'm using BULK INSERT.  I have everything in one proc:

1. Open a cursor over my server list

2. Run sqlcmd (or isql or osql) against that server, with pullJobInfo.sql as the input - in my case it sits in a folder on the C drive, but it could easily be a stored procedure as well:

SET @cmd = 'isql -S' + @Machine + ' -dmsdb -E -w450 -h-1 -n -s"|" -b -i'
         + @SQLFile + ' -oe:\batch_admin\JobAuditBatch\PullJobInfo.dat'

3. Gather the "|"-delimited output written by the -o option of the command above

    4. Bulk insert that output file into some sort of stage table
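
    Roughly like this - the WITH options are my assumptions, so match them to your file:

    -- Load the "|"-delimited isql output from step 3 into the stage table
    BULK INSERT #pullJobInfo
    FROM 'e:\batch_admin\JobAuditBatch\PullJobInfo.dat'
    WITH (FIELDTERMINATOR = '|', ROWTERMINATOR = '\n')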

    5. Delete all the garbage data:

-- Throw away rows that never parsed into a server name
DELETE FROM #pullJobInfo WHERE ServerName IS NULL

-- Surface any connection/login noise before discarding it
IF EXISTS (SELECT 1 FROM #pullJobInfo
           WHERE ServerName LIKE 'Login failed%' OR ServerName LIKE 'DB-Library:%'
              OR ServerName LIKE 'Net-Library%' OR ServerName LIKE 'Msg %'
              OR ServerName LIKE 'connection%')
    SELECT * FROM #pullJobInfo
     WHERE ServerName LIKE 'Login failed%' OR ServerName LIKE 'DB-Library:%'
        OR ServerName LIKE 'Net-Library%' OR ServerName LIKE 'Msg %'
        OR ServerName LIKE 'connection%'

DELETE FROM #pullJobInfo WHERE ServerName LIKE 'Login failed%'
DELETE FROM #pullJobInfo WHERE ServerName LIKE 'DB-Library:%'
DELETE FROM #pullJobInfo WHERE ServerName LIKE 'Net-Library%'
DELETE FROM #pullJobInfo WHERE ServerName LIKE 'Msg 18452%'
DELETE FROM #pullJobInfo WHERE ServerName LIKE 'connection%'
DELETE FROM #pullJobInfo WHERE ServerName LIKE 'Msg %'

6. Insert the resulting data into a final table (I call it a current table)

7. I then transfer the data into a History table as well, for trend analysis.
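
    That hand-off is just an INSERT...SELECT with a collection date stamped on (the table and column names here are made up):

    -- Keep a dated copy of each run for trend analysis
    INSERT INTO JobInfoHistory (ServerName, JobName, RunStatus, CollectedAt)
    SELECT ServerName, JobName, RunStatus, GETDATE()
    FROM JobInfoCurrent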

    Hope this helps.

     

Vipal - this definitely helps; it illuminates some techniques I did not know about (which is what I was hoping for!)

Do you execute your command with xp_cmdshell?

    thanks!

     
