SQL Server Monitoring Tool

  • Hi Guys!

We are working in .NET with a SQL Server database. I want to monitor my SQL Server completely; the following things must be known:

1) Username

2) Name of the SP where a change was made

3) Changes to the schema: tables, columns, etc.

4) Machine name

5) Other activities/parameters as well

PS: Any functionality/tool ...

Looking forward to your acknowledgement.

    Thanks and regards

    Waseem Bukhari

  • waseem.shahzad 45937 (10/2/2011)



    what about creating a DDL trigger which can log all DDL changes to a table?

I couldn't understand your point. Please elaborate a bit.

    Thanks in advance.

Try this link:

http://msdn.microsoft.com/en-us/library/ms175941.aspx

What I wanted to say is that you could create a DDL trigger so that whenever a user performs any structure/schema change, you can either prevent it or log the activity in an activity log table.
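For illustration, a minimal sketch of such a trigger, assuming a log table you'd create first (the names `DDLChangeLog` and `trg_LogDDLChanges` are my own invention), might look like this:

```sql
-- Hypothetical log table for DDL events in the current database
CREATE TABLE dbo.DDLChangeLog (
    EventTime   datetime       NOT NULL DEFAULT GETDATE(),
    LoginName   sysname        NOT NULL,
    HostName    nvarchar(128)  NULL,   -- workstation the change came from
    EventType   nvarchar(100)  NULL,   -- e.g. ALTER_PROCEDURE, CREATE_TABLE
    ObjectName  nvarchar(256)  NULL,
    CommandText nvarchar(max)  NULL    -- the full DDL statement issued
);
GO
-- Database-scoped DDL trigger: fires on any DDL event in this database
CREATE TRIGGER trg_LogDDLChanges
ON DATABASE
FOR DDL_DATABASE_LEVEL_EVENTS
AS
BEGIN
    DECLARE @e xml = EVENTDATA();  -- XML describing the event that fired us
    INSERT INTO dbo.DDLChangeLog (LoginName, HostName, EventType, ObjectName, CommandText)
    VALUES (
        ORIGINAL_LOGIN(),
        HOST_NAME(),
        @e.value('(/EVENT_INSTANCE/EventType)[1]',  'nvarchar(100)'),
        @e.value('(/EVENT_INSTANCE/ObjectName)[1]', 'nvarchar(256)'),
        @e.value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'nvarchar(max)')
    );
END;
```

This captures the username, object, machine name, and the exact command, which covers most of what the original post asked for.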

  • waseem.shahzad 45937 (10/2/2011)



    What about source control/versioning?

    Red-Gate has a nice source control plugin you can find here: http://www.red-gate.com/products/sql-development/sql-source-control/.

    I find tracking changes in dev more sensible than doing it in production.

    Hope this helps

    Gianluca

    -- Gianluca Sartori

Really appreciated.

Red Gate is not freeware 🙁

Any freeware solution for this scenario? A monitoring tool?

  • waseem.shahzad 45937 (10/3/2011)


Red Gate is not freeware 🙁

    Yes, but quite cheap. Give it a try.

    -- Gianluca Sartori

This is an issue of version/change management. What you need is exactly that: a process to manage changes to the database, strictly followed at deployment time. Versioning tools are helpful once the process is defined.

In our organization we are using TFS (Team Foundation Server) as the source control tool for both .NET and SQL Server code. I am not sure about its cost, but you can investigate further.

Since what you're really interested in is monitoring changes to your DB schema, take a look at this one:

    http://nobhillsoft.com/randolph.aspx

It was created to do exactly what you're asking. True, it costs money, but not much, and in a few minutes you'll have it up and running. Problem gone.

  • I can provide you with a free alternative to track:

    - When

    - Server

    - Database

- Command issued (e.g. ALTER/DROP/CREATE)

    - Which Application was used

    - spid

    - Object

    - Login (either NT user or sql login)

    - Workstation the change was made from

    - Client Process ID

    I've created a quick/small SQL Agent job that parses the profiler trace files on each monitored server and stores the results to a table which is easily queried.

    Let me know if you'd like the scripts

______________________________________________________________________________ Never argue with an idiot; they'll drag you down to their level and beat you with experience

  • It could also be done with DDL triggers (I have done this in the past), but this is not the point.

    When it comes to source control, I must admit that people smarter than I am found a perfectly neat solution many years ago.

Why should I re-invent the wheel and come up with my own home-baked solution?

Wouldn't it be better if I stuck to the standards?

    I suggest that you read this (free) e-book from the "Books" section of the site: http://qa.sqlservercentral.com/articles/books/71472/

    I know that this site is owned by Red Gate, but it's definitely worth reading.

    -- Gianluca Sartori

Really, it would take a big article to cover all the monitoring you need. Anyway, in SQL Server 2008 and 2008 R2 you have great new monitoring technologies that let you dispense with SQL Profiler, which was a performance culprit in 2005.

Let us focus first on how to monitor expensive queries:

They can be audited automatically through a variety of options:

Using SQL Server 2008 Activity Monitor, where you can find cached expensive queries.

But this is not sufficient for comprehensive auditing, since expensive queries are recycled out of the cache and you might not be able to track old ones.

Use SQL Profiler: Performance Event > Performance Statistics > select all columns.

But this is also not sufficient, for two reasons:

- Profiler puts extra load on production DB servers and the master DB, which can yield performance degradation ranging from 50% to 100%.

- The results are not easily tracked, particularly for huge OLTP workloads.

DMVs, which are the best practice. To use them easily and operationally to audit a long history of expensive queries, you can follow the scripts below, in order:

    USE [msdb]
    GO
    CREATE TABLE [dbo].[Exp_Table](
        [EXP_Query] [nvarchar](max) NULL,
        [Time] [datetime] NULL,
        [Elapsed_Time] [int] NULL
    ) ON [PRIMARY]
    GO

    -- n = number of days of auditing results to keep
    DELETE FROM dbo.Exp_Table WHERE DATEDIFF(D, [Time], GETDATE()) > n
    GO

    INSERT INTO msdb.dbo.Exp_Table
    SELECT
        (SELECT text FROM sys.dm_exec_sql_text(qs.sql_handle)) AS query_text,
        qs.last_execution_time,
        qs.last_elapsed_time / 1000000
    FROM sys.dm_exec_query_stats AS qs
    WHERE qs.last_elapsed_time > 3000000
      AND qs.last_execution_time NOT IN (SELECT [Time] FROM msdb.dbo.Exp_Table)
    ORDER BY qs.last_elapsed_time DESC

You can then schedule the last two scripts in one job to run at any polling interval, such as every 5 minutes.
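As a rough sketch, that scheduled job could be created with the msdb Agent procedures; the job name, retention value, and schedule parameters below are illustrative, not prescribed:

```sql
USE msdb;
-- Create the job shell
EXEC dbo.sp_add_job @job_name = N'Audit expensive queries';

-- One T-SQL step running the purge and capture scripts from above
EXEC dbo.sp_add_jobstep
    @job_name      = N'Audit expensive queries',
    @step_name     = N'Purge and capture',
    @subsystem     = N'TSQL',
    @database_name = N'msdb',
    @command       = N'DELETE FROM dbo.Exp_Table WHERE DATEDIFF(D, [Time], GETDATE()) > 30;
                       -- followed by the INSERT ... FROM sys.dm_exec_query_stats script';

-- Poll every 5 minutes
EXEC dbo.sp_add_jobschedule
    @job_name            = N'Audit expensive queries',
    @name                = N'Every 5 minutes',
    @freq_type           = 4,   -- daily
    @freq_interval       = 1,
    @freq_subday_type    = 4,   -- unit: minutes
    @freq_subday_interval = 5;

-- Attach the job to the local server so it actually runs
EXEC dbo.sp_add_jobserver @job_name = N'Audit expensive queries';
```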

Advantages of this solution:

- Precise auditing of expensive queries, with timings down to the millisecond.

- Long retention period, which might be months or years.

- Minimal resource consumption: almost no CPU overhead or storage capacity needed.

- The results are first-class objects (rows queryable with SQL commands) that can be filtered easily by any time interval.

Performance Guard

Just ask me to get 0 sec for all queries

  • Performance Guard, I think you misread the question.

    We're not talking about performance monitoring, but (mostly) DDL change tracking.

    -- Gianluca Sartori

SQL Server has a default "black box" trace that's turned on by default and tracks basic things (with incredibly low overhead on the server, unlike creating your own traces from the templates).

If you're just looking to track DDL changes, like who last created/edited a stored procedure, table, or index, you can use this code; it's very simple and straightforward:

    USE [YourDB]

    GO

    CREATE PROCEDURE [dbo].[dba_TrackSchemaChanges] AS

    /*

    ----------------------------------------------------------------------------------------------------------------

    Purpose : Tracks Schema changes across all DB's, logs to a central table in F1Settings on NAS2-DBR

    Department : DBA

    Created For : MyDoggieJessie

    ----------------------------------------------------------------------------------------------------------------

    NOTES :

    ----------------------------------------------------------------------------------------------------------------

    Created On : 09/19/2010

    Create By : Google Search & MyDoggieJessie

    ----------------------------------------------------------------------------------------------------------------

    Modified On :

    Modified By :

    Changes :

    1.

    ----------------------------------------------------------------------------------------------------------------

    EXEC dbo.dba_TrackSchemaChanges

    select * from DBSchemaChangeMonitor

    */

    --DECLARE VARIABLES

    BEGIN

    DECLARE @d1 datetime2

    DECLARE @diff int

    DECLARE @curr_tracefilename varchar(500)

    DECLARE @base_tracefilename varchar(500)

    DECLARE @indx int

    DECLARE @SQL varchar(750)

    DECLARE @Cnt int

    END

    --SET VARIABLES

    BEGIN

    IF @@SERVERNAME = 'CustomServerNameWhere the Install path was different'

    SELECT @curr_tracefilename = 'E:\SQL\Data\MSSQL10.MSSQLSERVER\MSSQL\log\'

    ELSE

    SELECT @curr_tracefilename = PATH FROM sys.traces WHERE is_default = 1

    SET @curr_tracefilename = REVERSE(@curr_tracefilename)

    SELECT @indx = PATINDEX('%\%', @curr_tracefilename)

    SET @curr_tracefilename = REVERSE(@curr_tracefilename)

    SET @base_tracefilename = LEFT(@curr_tracefilename, LEN(@curr_tracefilename) - @indx) + '\log.trc'

    END

    --CREATE TEMP TABLES

    BEGIN

    DECLARE @temp_trace TABLE (

    obj_name nvarchar(256) COLLATE database_default

    , database_name nvarchar(256) COLLATE database_default

    , start_time datetime2

    , event_class int

    , event_subclass int

    , object_type int

    , server_name nvarchar(256) COLLATE database_default

    , login_name nvarchar(256) COLLATE database_default

    , application_name nvarchar(256) COLLATE database_default

    , ddl_operation nvarchar(40) COLLATE database_default

    , spid int

    , clipid int

    , host nvarchar(40) COLLATE database_default

    )

    END

    /* ######################################### START MAIN PROCEDURE HERE ########################################## */

    BEGIN

    INSERT INTO @temp_trace

    SELECT

    ObjectName

    , DatabaseName

    , StartTime

    , EventClass

    , EventSubClass

    , ObjectType

    , ServerName

    , LoginName

    , ApplicationName

    , 'temp',

    spid,

    ClientProcessID,

    HostName

    FROM ::fn_trace_gettable( @base_tracefilename, DEFAULT )

    WHERE

    objecttype not in (21587)

    AND EventClass in (46,47,164)

    AND EventSubclass = 0

    AND LoginName NOT IN ('YOURDOMAIN\ProxyAccount', 'YOURDOMAIN\dba1', 'NT AUTHORITY\NETWORK SERVICE', 'YOURDOMAIN\me')

    AND DatabaseID <> 2

    AND StartTime NOT IN (SELECT Captured FROM F1Settings.dbo.DBSchemaChangeMonitor)

    SET @Cnt = @@ROWCOUNT

    /* Process Records */

    IF @Cnt > 0

    BEGIN

    /* Update events to be understandable */

    UPDATE @temp_trace

    SET ddl_operation = 'CREATE'

    WHERE event_class = 46

    UPDATE @temp_trace

    SET ddl_operation = 'DROP'

    WHERE event_class = 47

    UPDATE @temp_trace

    SET ddl_operation = 'ALTER'

    WHERE event_class = 164

    /* Fetch the Results */

    INSERT INTO F1Settings.dbo.DBSchemaChangeMonitor

    SELECT

    start_time,

    server_name,

    database_name,

    ddl_operation,

    '[' + CAST(object_type as varchar (6)) + '] ' + application_name,

    spid,

    obj_name,

    login_name,

    clipid,

    host, GETDATE()

    FROM @temp_trace

    WHERE --object_type not in (21587) -->> This is Table Statistics (not needed)

    obj_name NOT IN ('abc') -->> This was a table that gets dropped/recreated each time a proc runs

    --AND server_name <> 'LOU2-MSSQL1'

    ORDER BY

    server_name, start_time DESC

    END

    END

/* ########################################## END MAIN PROCEDURE HERE ########################################### */

    /*

    EXEC dbo.dba_TrackSchemaChanges

    */

Then create a job on your server that runs this procedure on a schedule, logging all the changes to a table...

    CREATE TABLE [dbo].[DBSchemaChangeMonitor](

    [Captured] [datetime2](7) NOT NULL,

    [Server] [nchar](25) NULL,

    [DBName] [nchar](50) NULL,

    [Command] [nchar](50) NOT NULL,

    [Application] [nchar](500) NULL,

    [spid] [int] NULL,

    [Object] [nchar](500) NULL,

    [Login] [nchar](50) NULL,

    [ClientProcessID] [int] NULL,

    [WorkStation] [nchar](50) NULL,

    [InsertedOn] [date] NULL,

    CONSTRAINT [PK_DBSchemaChangeMonitor] PRIMARY KEY CLUSTERED

    (

    [Captured] ASC,

    [Command] ASC

    )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 92) ON [PRIMARY]

    ) ON [PRIMARY]

    GO

When you want to see the data, simply query the table. Agreed, it's not the best solution in the world, but it's freakishly easy to implement, and did I mention it's free? 🙂 I have it running across several servers to help isolate those development incidents where everyone says the same thing: "It wasn't me, I didn't change it..."
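For example, a query against the log table defined above (the 24-hour window is just an illustrative filter) could look like:

```sql
-- Who changed what in the last 24 hours, most recent change first
SELECT Captured, [Server], DBName, Command, [Object], [Login], WorkStation
FROM F1Settings.dbo.DBSchemaChangeMonitor
WHERE Captured >= DATEADD(HOUR, -24, GETDATE())
ORDER BY Captured DESC;
```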

______________________________________________________________________________ Never argue with an idiot; they'll drag you down to their level and beat you with experience

Good timing on this article arriving in my inbox this morning; it covers what Gianluca was referring to, is also easy to set up, and is also free.

    http://www.mssqltips.com/sqlservertip/2085/sql-server-ddl-triggers-to-track-all-database-changes/

______________________________________________________________________________ Never argue with an idiot; they'll drag you down to their level and beat you with experience
