Locking problem

  • We are doing some initial testing of an application on SQL 2K that was originally written against a 7.0 server. During one segment of testing, the app hung during an insert and finally timed out. Under the covers, what was happening was: an insert is done on one table (the parent), then a SELECT MAX() is done to get a value for the child table, and then the insert statement is issued against the child table. Because of the way the app is coded, a second connection is opened (unintentionally, by the way) to do the SELECT MAX(), and that SPID is holding locks on the resources needed by the insert statement that follows. This same code works on SQL 7.0, so I am trying to find out what is different between the two. Why does SQL 2K lock the resources while SQL 7.0 does not? Has anyone run into anything like this?

    Mark Hottinger

    Mark

  • It almost sounds like there are triggers in addition to the routines you describe. Any chance of that?

    Guarddata-

  • Any transactions involved?

    What is the TRANSACTION ISOLATION LEVEL setting?

  • I hope you updated statistics (sp_updatestats) after the restore/load into SQL 2000. Is the SELECT MAX() using an index?
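
    For example, something along these lines (dbo.ParentTable and ParentID are just placeholder names):

    -- refresh statistics on all tables after the restore/load
    EXEC sp_updatestats;

    -- check whether the MAX() can use an index
    SET SHOWPLAN_TEXT ON;
    GO
    SELECT MAX(ParentID) FROM dbo.ParentTable;
    GO
    SET SHOWPLAN_TEXT OFF;
    GO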

    Johan

  • Certainly sounds like you have your transaction isolation level set to repeatable read or serializable, and a transaction isn't being closed - so that the 'select' statement is holding its lock(s) indefinitely.

    You can check this with DBCC USEROPTIONS - which will show you the value if it has been explicitly set. If not, SQL 8 defaults to read committed, like SQL 7. You need to query this from within the application, though - it is a per-connection setting.
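
    For example, from within the app's own connection (just a sketch):

    DBCC USEROPTIONS;   -- lists the SET options in effect for this connection,
                        -- including 'isolation level' when it has been set explicitly

    -- and, if needed, put it back to the default explicitly:
    SET TRANSACTION ISOLATION LEVEL READ COMMITTED;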

    I assume the extra 'select' is now issued from the same connection as the other statements in the transaction - and that that has solved the problem?

    Tim Wilkinson

    "If it doesn't work in practice, you're using the wrong theory"
    - Immanuel Kant

  • Could this be anything to do with implicit transactions? Maybe SQL 7 and 8 behave differently in this respect....

    Tim Wilkinson

    "If it doesn't work in practice, you're using the wrong theory"
    - Immanuel Kant

  • This is all happening inside one explicit transaction: BEGIN TRAN, insert into Table1 (on connection 1), SELECT MAX() from Table1 (on connection 2), then insert into Table2 (on connection 1). And yes, there are triggers involved; triggers on Table2 could actually need to go back and update Table1. The developers tell me they are not explicitly setting the transaction isolation level, so my belief is that it is the default (READ COMMITTED). I have asked the developers to run DBCC USEROPTIONS from inside the app to confirm that, however. But as far as I know, READ COMMITTED was also the default in SQL 7, so it is still unclear why this is failing on SQL 2K.

    I did run sp_updatestats on this database, and the SELECT MAX() is using the primary key for the table (but it is still a scan). For code change options, I think the best would be to change the SELECT MAX() to SELECT @@IDENTITY, but this would take the most work and testing. Would it be bad to explicitly set the transaction isolation level to READ UNCOMMITTED? My understanding is this only affects SELECT statements. But I am also not sure we would want to do that.
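
    The pattern looks roughly like this (the table and column names and values are simplified placeholders):

    -- Connection 1
    BEGIN TRAN;
    INSERT INTO Table1 (Table1ID, Description) VALUES (101, 'parent row');   -- takes exclusive locks on Table1

    -- Connection 2 (the unintentional extra connection)
    SELECT MAX(Table1ID) FROM Table1;   -- under READ COMMITTED this waits behind connection 1's locks

    -- Connection 1 (never reached, because the app is waiting on the SELECT above)
    INSERT INTO Table2 (Table2ID, Table1ID) VALUES (1, 101);
    COMMIT TRAN;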

    Mark Hottinger

    Mark

  • If your SELECT MAX() is on a separate connection, it cannot see the uncommitted rows from your connection 1 transaction.

    Run Profiler to see what isolation level each connection is using! (My guess would be repeatable read if you did not specify it and are using ADO.)

    Johan

  • OK - but the transaction will apply only to one connection - the second connection won't have a transaction. The obvious thing is to issue the SELECT statement on the same connection as the other statements. That way you can't possibly block yourself.
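
    For instance, everything on one connection (same placeholder names as the earlier sketch):

    -- all on connection 1
    BEGIN TRAN;
    INSERT INTO Table1 (Table1ID, Description) VALUES (101, 'parent row');
    SELECT MAX(Table1ID) FROM Table1;   -- sees its own insert, nothing to block behind
    INSERT INTO Table2 (Table2ID, Table1ID) VALUES (1, 101);
    COMMIT TRAN;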

    What connectivity are you using to access SQL server? ADO/SQLOLEDB?

    Tim Wilkinson

    "If it doesn't work in practice, you're using the wrong theory"
    - Immanuel Kant

  • alzdba - how do you see the transaction isolation level via Profiler? That would be very convenient...

    Tim Wilkinson

    "If it doesn't work in practice, you're using the wrong theory"
    - Immanuel Kant

  • Try saving a trace to a table and then look for EventClass = 17 or (EventClass = 12 and TextData LIKE '%isolation%').
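
    Something like this against the saved trace table (dbo.MyTrace is just an example name):

    SELECT SPID, EventClass, TextData, StartTime
    FROM dbo.MyTrace
    WHERE EventClass = 17                                    -- ExistingConnection
       OR (EventClass = 12 AND TextData LIKE '%isolation%')  -- SQL:BatchCompleted
    ORDER BY StartTime;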

    Johan

  • I've found that the NOLOCK query hint helps, especially when selecting against a table that is locked by a transaction.
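
    For example (placeholder table/column names):

    SELECT MAX(Table1ID) FROM Table1 WITH (NOLOCK);   -- reads without waiting for shared locks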

    Signature is NULL

  • I also like NOLOCK (like Calvin). It depends on whether two processes might be performing this at the same time. If so, the routine really needs to run in the same transaction or you will end up with details being created for the wrong header.

    Guarddata-

  • Is there any reason why you wouldn't remove the mistake of creating a new connection? That would solve your problem.

    A couple of subsidiary points:

    Using @@IDENTITY is more efficient than SELECT MAX(), but if any triggers on the table do their own inserts into tables with identity columns, @@IDENTITY will be overwritten. You can avoid this by getting the trigger to write the value into a table (perhaps with a SPID column to permit concurrency), but it's extra hassle.
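
    A rough sketch of that workaround - all of the object names here are made up:

    -- side table keyed by SPID so concurrent connections don't collide
    CREATE TABLE dbo.LastTable1Key (SPID int NOT NULL PRIMARY KEY, Table1ID int NOT NULL);
    GO
    CREATE TRIGGER trg_Table1_ins ON dbo.Table1
    FOR INSERT
    AS
    BEGIN
        -- remember the key just inserted by this connection
        DELETE dbo.LastTable1Key WHERE SPID = @@SPID;
        INSERT dbo.LastTable1Key (SPID, Table1ID)
        SELECT @@SPID, MAX(Table1ID) FROM inserted
        HAVING COUNT(*) > 0;   -- skip empty inserts
    END
    GO
    -- the app then reads it back on the same connection instead of relying on @@IDENTITY:
    -- SELECT Table1ID FROM dbo.LastTable1Key WHERE SPID = @@SPID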

    Using NOLOCK (or read uncommitted) should be done only with great caution. If anyone else might be inserting into or deleting from the table, you could run into problems.

    Tim Wilkinson

    "If it doesn't work in practice, you're using the wrong theory"
    - Immanuel Kant

  • Thanks Stax68 for clarifying. The NOLOCK should not be used if the value is being changed as part of the update. Single connections are the only safe way to handle any transaction.

    Guarddata-
