Using SQL Profiler to Resolve Deadlocks in SQL Server

  • Jonathan Kehayias (2/1/2009)


    Comments posted to this topic are about the item Using SQL Profiler to Resolve Deadlocks in SQL Server

    well explained and very helpful. Thanks for sharing

    ===========================================
    Better try and fail than not to try at all...

    Database Best Practices

    SQL Server Best Practices

  • Hello everybody.

    Please explain one thing to me. The article says that I have to run a script to get the execution plan, but in BOL I don't see any information about a procedure called "BookMarkLookUpSelect". I have SQL 2005 with SP2 on all of my servers, so I want to understand when and where I have to run this script.

    Thanks.

  • @maxibeck,

    Check the attachments given at the end of the article.

  • Great article Jonathan! I look forward to the next SQL Saturday in Tampa so I can sit in on your presentations.

    Think great, be great!

  • sanokistoka (10/22/2009)


    Have you tried Oracle, which doesn't exhibit, say, SELECT/UPDATE deadlocks because of row versioning? Thankfully, there is now row versioning in MSS. The point is that, yes, an application must be coded to handle the deadlocks it expects *where* it expects them, but MSS is causing an egregious number of unexpected deadlocks, and I wonder if that's due to page locking and/or lock escalation policies. What's your experience with Oracle, and how does it compare with MSS when it comes to unexpected deadlocks?

    I forget the issues with row versioning, but in my experience deadlocks are caused by management issues or poor schema.

    A few years ago our devs changed an app that hit a 200-million-row table, and at the same time we saw a huge increase in data. They went from single-threaded to multi-threaded, and the next thing you know we were seeing different threads of the app block each other. I had an idea to change the clustered index so that it was not in accordance with best practices, the reason being that the main query of the app selected up to 7 million rows in some cases, and with the existing index structure it locked every page of the table. We made this change and it fixed all our blocking issues.

    The management issue is that a lot of the time people will access data with MS Access and lock the entire table. Instead of using replicated data, people think they are special and want to access data that is always being modified.

  • Nice article. Just wanted to point out that you can generate an XDL file from the Profiler trace window just by right-clicking the Deadlock Graph event. Choose the Extract Event Data option from the context menu, and a Save As... dialog box pops up allowing you to save the .xdl file for this single event. I use XML Notepad to crack the XML open. (not that I have lots of experience doing this :-))
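    On SQL Server 2008 and later there's also a way to get at deadlock graphs without running a Profiler trace at all. Below is a sketch (assuming the default, always-running system_health Extended Events session with its ring_buffer target) of pulling xml_deadlock_report events from it:

    ```sql
    -- Sketch: read deadlock graphs captured by the built-in system_health
    -- Extended Events session (SQL Server 2008+, default configuration assumed).
    SELECT XEventData.XEvent.query('(data/value/deadlock)[1]') AS DeadlockGraph
    FROM (
        SELECT CAST(st.target_data AS xml) AS TargetData
        FROM sys.dm_xe_session_targets st
        JOIN sys.dm_xe_sessions s ON s.address = st.event_session_address
        WHERE s.name = 'system_health'
          AND st.target_name = 'ring_buffer'
    ) AS Data
    CROSS APPLY TargetData.nodes(
        '//RingBufferTarget/event[@name="xml_deadlock_report"]'
    ) AS XEventData(XEvent);
    ```

    The ring buffer only holds recent events, so older deadlocks may have aged out by the time you query it.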

  • It's nice to see good articles like this re-published from time to time.

    Jason...AKA CirqueDeSQLeil
    _______________________________________________
    I have given a name to my pain...MCM SQL Server, MVP
    SQL RNNR
    Posting Performance Based Questions - Gail Shaw
    Learn Extended Events

  • This is my first time reading the article and although you do state,

    The deadlock in this type of scenario is completely timing based, and it can be difficult to reproduce without a looping execution as used in the attached demo.

    The example you give is very simple and, I think, a very common scenario. Unfortunately I can't agree with the solution. No, I'm not talking about the application handling the deadlock exception and retrying; that's been argued enough already, and I actually agree that the application should handle the exception (what it wants to do with it is another story).

    I'm questioning the covered index solution. I know with most things computers, the answer is "it depends". So I'd be curious to get some feedback on the following scenario.

    We don't have a very busy database in my opinion, but we do experience a lot of deadlocks. Unfortunately the client application uses NHibernate, which I feel is a major part of the problem. But all that aside, let's say we have the following table and requirements:

    CREATE TABLE Trip (
        id int IDENTITY(1,1) NOT NULL,
        userID int NOT NULL,
        name varchar(200) NOT NULL CONSTRAINT DF_Trip_name DEFAULT(''),
        descriptionText varchar(max) NOT NULL CONSTRAINT DF_Trip_summary DEFAULT(''),
        ulLat float NULL,
        ulLon float NULL,
        brLat float NULL,
        brLon float NULL,
        created datetime NOT NULL CONSTRAINT DF_Trip_created DEFAULT(getdate()),
        lastUpdated datetime NOT NULL,
        startDate datetime NOT NULL,
        viewCount int NOT NULL CONSTRAINT DF_Trip_viewCount DEFAULT((0)),
        sourceDevice varchar(50) NOT NULL CONSTRAINT DF_Trip_sourceDevice DEFAULT(''),
        sourceCarrier varchar(50) NOT NULL CONSTRAINT DF_Trip_sourceCarrier DEFAULT(''),
        sourceManufacturer varchar(50) NOT NULL CONSTRAINT DF_Trip_sourceManufacturer DEFAULT(''),
        poiCount int NOT NULL CONSTRAINT DF_Trip_poiCount DEFAULT((0)),
        mediaCount int NOT NULL CONSTRAINT DF_Trip_mediaCount DEFAULT((0)),
        startLocationLatitude float NULL,
        startLocationLongitude float NULL,
        startLocationQuadKey varchar(50) NOT NULL,
        CONSTRAINT PK_Trip PRIMARY KEY CLUSTERED (id ASC)
            WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    GO

    CREATE NONCLUSTERED INDEX IX_Trip_startDate_userID ON Trip (
        startDate ASC,
        userID ASC
    ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]

    Many inserts happen on this table and although the 'descriptionText' field is defined as varchar(max), the application limits it to 64KB.

    Now, let's say that the only SELECT statement that goes against this table is to return a particular user's most recent trips.

    SELECT * FROM Trip WHERE startDate > getdate() - 1 AND userID = @userid

    The execution plan for that query is, I think, the best it can be. It properly does an Index Seek on the IX_Trip_startDate_userID index to satisfy the WHERE clause, and then does a Key Lookup using the clustered primary key index PK_Trip to get all the data.

    So given your example, all that has to happen for a deadlock is the SELECT starts and grabs a lock on IX_Trip_startDate_userID, but before it can get a lock for the key lookup, an INSERT starts and grabs the lock on PK_Trip.

    It seems to me that the database should be smart enough to analyze the SELECT query, recognize that it cannot acquire all the necessary data using the IX_Trip_startDate_userID index alone, and therefore acquire a lock on PK_Trip at the same time.

    I think the solution of creating a covering index in this scenario isn't a very good one since you'd be duplicating all that data, particularly the 'descriptionText' field.

    Does anyone have any suggestions or comments? Either on my example scenario, or on why the database isn't smart enough to acquire all the locks it needs at the same time. I understand that in order to have a highly performant, highly concurrent system you need to minimize the use of locks, but your article just surprised me with how seemingly simple it is to generate a deadlock in SQL Server by doing something so common, something the database is supposed to be designed to handle: concurrent INSERTs and SELECTs.
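    One middle ground worth considering, as a sketch only: if the application can be changed to select just the columns it actually uses instead of SELECT *, a covering index with INCLUDE columns avoids duplicating the varchar(max) descriptionText column while still eliminating the key lookup. The column list below is illustrative, not taken from the real application:

    ```sql
    -- Sketch: cover a narrower column list instead of SELECT *,
    -- so descriptionText (varchar(max)) is never duplicated in the index.
    CREATE NONCLUSTERED INDEX IX_Trip_startDate_userID_covering
    ON Trip (startDate ASC, userID ASC)
    INCLUDE (name, created, lastUpdated, viewCount);

    -- The matching query is then answered from the index alone
    -- (no key lookup, so no second lock taken on PK_Trip):
    SELECT name, created, lastUpdated, viewCount, startDate, userID
    FROM Trip
    WHERE startDate > getdate() - 1 AND userID = @userid;
    ```

    Whether this is workable obviously depends on being able to change the application's column list, which an ORM may or may not allow.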

  • kevin77 (6/5/2010)

    Does anyone have any suggestions or comments? Either on my example scenario or why the database isn't smart enough to acquire the locks that it needs all at the same time. I understand that in order to have a highly performant, highly concurrent system, you need to minimize the use of locks, but your article just surprised me on how seemingly simple it is to generate deadlock in SQL Server by doing something so common and what the database is supposed to be designed to handle, that is, concurrent INSERTs and SELECTs.

    While I'm not the most experienced when it comes to deadlocks (although reading this article has caused me to become smarter on the subject for some reason :cool:), I do support a database that gets them frequently. Most of them are short-lived deadlocks, but every now and then one occurs that will bring the database to a halt. Some instances (well, most) are hardware related, since it is on a VM and shouldn't be (you can check plenty of articles on Jonathan's blog to support this for me ;-))

    However, this application does 10 times more reads than writes (updates/inserts/etc.), and the major tables have millions of rows of data. So to prevent heavy deadlocking they simply add (NOLOCK) to their SELECT queries, since they are reading from a few tables that are constantly being updated or inserted into.
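    For reference, the hint is written per table in the FROM clause. A sketch against the Trip table from the earlier post (note that NOLOCK is equivalent to READUNCOMMITTED, so dirty reads of in-flight data are possible):

    ```sql
    -- Sketch of the NOLOCK table hint; the query takes no shared locks,
    -- so it can read uncommitted (dirty) rows from concurrent writers.
    SELECT name, startDate
    FROM Trip WITH (NOLOCK)
    WHERE startDate > getdate() - 1 AND userID = @userid;
    ```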

    The software is a BMC product; I leave it to the peanut gallery to guess which one. :hehe:

    Shawn Melton
    Twitter: @wsmelton
    Blog: wsmelton.github.com
    Github: wsmelton

  • MeltonDBA (6/7/2010)


    So to prevent heavy deadlocking they simply add (NOLOCK) to their SELECT queries, since they are reading from a few tables that are constantly being updated or inserted into.

    Cool, I'd be fine with that solution in my scenario. Unfortunately, I don't think you can tell NHibernate to add that hint. (Yet another reason not to use ORM crap.) But some day we'll hopefully get rid of NHibernate.

    Thanks.
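    If the ORM can't emit table hints, the row versioning mentioned earlier in the thread can be enabled at the database level instead, with no query changes at all. A sketch (the database name is a placeholder; the switch needs exclusive access to the database and adds version-store load in tempdb):

    ```sql
    -- Sketch: enable read committed snapshot isolation for the database.
    -- Readers then see the last committed version of a row instead of
    -- blocking on (or deadlocking with) writers' locks; no application
    -- or query changes are required.
    ALTER DATABASE MyTripsDb SET READ_COMMITTED_SNAPSHOT ON;
    -- MyTripsDb is a hypothetical name; no other connections may be
    -- active in the database while the option is switched.
    ```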

  • I remember where I've seen mention of nHibernate now...Grant Fritchey (aka ScaryDBA) talks about it on his blog:

    http://scarydba.wordpress.com/2010/04/05/nhibernate-first-look-at-tsql/

    I believe his post describes exactly what you may be seeing.

    Shawn Melton
    Twitter: @wsmelton
    Blog: wsmelton.github.com
    Github: wsmelton

  • @kevin77:

    It seems to me that the database should be smart enough to analyze the SELECT query, recognize that it cannot acquire all the necessary data using the IX_Trip_startDate_userID index alone, and therefore acquire a lock on PK_Trip at the same time.

    I think the solution of creating a covering index in this scenario isn't a very good one since you'd be duplicating all that data, particularly the 'descriptionText' field.

    Does anyone have any suggestions or comments? Either on my example scenario, or on why the database isn't smart enough to acquire all the locks it needs at the same time. I understand that in order to have a highly performant, highly concurrent system you need to minimize the use of locks, but your article just surprised me with how seemingly simple it is to generate a deadlock in SQL Server by doing something so common, something the database is supposed to be designed to handle: concurrent INSERTs and SELECTs.

    The query still takes row-level locks. The scenario shown by Jonathan applies when one session attempts to insert and another attempts to update the same row (or group of rows), and the timing of the two conflicting operations is perfect down to a few milliseconds; i.e., the update must occur exactly after the query engine takes its locks on the NC index and before it obtains a lock on the clustered index. This is very unlikely in reality, especially if your indexes are correct. It takes an indefinite loop to simulate the example, so I wouldn't categorize it as a 'simple/common' scenario.

    Btw, at the SERIALIZABLE isolation level the same example doesn't get into a deadlock, as SERIALIZABLE is a lot more pessimistic when it comes to taking range locks on indexes.
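    To illustrate the "indefinite loop" point, here is a sketch of the kind of harness that reproduces such timing-based deadlocks: run this in one session while a second session runs INSERTs against the same table in a loop. Names assume the Trip schema posted above:

    ```sql
    -- Sketch: hammer the SELECT in a loop from one session while another
    -- session loops INSERTs; eventually the narrow NC-index / key-lookup
    -- timing window is hit and one side is chosen as the deadlock victim.
    DECLARE @userid int;
    SET @userid = 1;
    WHILE 1 = 1
    BEGIN
        BEGIN TRY
            SELECT * FROM Trip
            WHERE startDate > getdate() - 1 AND userID = @userid;
        END TRY
        BEGIN CATCH
            -- Error 1205 = this session was chosen as the deadlock victim
            IF ERROR_NUMBER() = 1205
                PRINT 'Deadlock caught, retrying...';
        END CATCH
    END
    ```

    Even with both loops running flat out, it can take many iterations before the timing lines up, which is exactly why the scenario is hard to reproduce on demand.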

  • Thanks for the additional clarification, Raj.

  • Hi,

    I captured a deadlock graph in an attempt to resolve a deadlocking issue, but I don't see the resources on the graph; I only see the two oval shapes representing the two SPIDs, with the victim having a big "X" on it. Why is that?

    Thanks

  • Without the graph it's really hard to say. Save the deadlock XML as a text file and attach it to your response, and it will be easier to help you out.

    Jonathan Kehayias | Principal Consultant | MCM: SQL Server 2008
    My Blog | Twitter | MVP Profile
    Training | Consulting | Become a SQLskills Insider
    Troubleshooting SQL Server: A Guide for Accidental DBAs

Viewing 15 posts - 31 through 45 (of 47 total)
