Could not continue scan with NOLOCK due to data movement

  • We have an asp.net/c# web application running at a client.

    Just recently we rolled out some changes, one of which was a mod to a SQL procedure that pulls the client's Orders from a SQL table and returns them to an aspx page.

    Normally there are no problems, but after I added a Left Outer Join they now often get this:

    Could not continue scan with NOLOCK due to data movement

    We had to roll back my changes, by the way, until I find a better solution.

    Stored proc code:

    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED

    SELECT
        buy_sell =
            CASE WHEN ord.buy_sell = 1 THEN 'Buy'
                 WHEN ord.buy_sell = 2 THEN 'Sell'
            END,
        ord.amount,
        ord.amt_unit,
        <more columns here..>
        s.ticket as tick_strat,
        s.dependent as dep,
        s.sort_string as sort
    INTO #temp
    FROM filter_list flst, users u, order_fx ord
    LEFT OUTER JOIN order_strategy s
        ON s.ticket = ord.ticket
    WHERE
        status not in ('EXECUTED','FILLED')
        AND u.client = @client
        AND ord.client = @client

    The order_strategy table gets a lot of activity. For example, a SQL job regularly deletes the records and rebuilds the table (not the actual table structure, just the rows).

    This must be causing the problem, along with the SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED.

    Any ideas?

    Thank you !!!

    Bob

  • This is expected. It happens under these circumstances:

    From: http://support.microsoft.com/kb/815008

    This behavior may occur when a row in a table is deleted between the time SQL Server reads the location of the row from an index and the time SQL Server fetches the row.

    Short version of a long story: stop deleting rows out from under things that are using them. How often do you reset the lookup table? Another option is to force the query to the clustered index so you don't get this glitch between the nonclustered index seek and the key lookup on the one or two queries you're having problems with (see the sketch below). I don't recommend this solution permanently, but it'll get you up and running while you DO fix it...

    ... which involves redesigning how you're using the lookup table and/or changing your consistency and isolation levels.
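    If you want to see what I mean by forcing the clustered index, it's roughly this. It's only a sketch against your posted query, not your real column list, and it assumes order_strategy actually has a clustered index; WITH (INDEX(1)) just means "use the clustered index":

    -- Stopgap sketch only: force access to order_strategy through its clustered index,
    -- so the read never does the nonclustered seek + key lookup that lands on a row
    -- the rebuild job just deleted. Columns shown are only examples from your post.
    SELECT ord.ticket, ord.amount, s.ticket AS tick_strat
    FROM   order_fx ord
    LEFT OUTER JOIN order_strategy s WITH (INDEX(1))
           ON s.ticket = ord.ticket
    WHERE  ord.client = @client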


    - Craig Farrell

    Never stop learning, even if it hurts. Ego bruises are practically mandatory as you learn unless you've never risked enough to make a mistake.

    For better assistance in answering your questions | Forum Netiquette
    For index/tuning help, follow these directions. | Tally Tables

    Twitter: @AnyWayDBA

  • Hi Craig - Thanks for your response.

    The bad news is that my order_strategy table gets rebuilt by a Sql job every couple of minutes.

    The good news is that I can remove the order_strategy join from my stored proc query quite easily, and attack my problem from a different angle.

    On the client's QC boxes, it appeared that the outer join - LEFT OUTER JOIN order_strategy s - was NOT causing any problems, and that by using Read Uncommitted, SQL Server wouldn't be concerned with the deleted records in the order_strategy table. Boy, was I wrong!

    At the same time, it's odd how the problem only truly revealed itself AFTER the changes went into production, at which time all the users logged on to perform their work. In the QC environment there were only a few users testing.

    In the end I would rather NOT go with the quick-fix, as you also suggested.

    If there was a way to keep my order_strategy table without deleting it every few minutes, perhaps that would be a better solution...

    Regards,

    Bob

  • I personally am not sure why you're completely re-building order_strategy every few minutes, but if you'd like to expand on the logic there, I'm sure one of us will be able to help you come up with some alternate solutions.

    Another option would be to fast-load what you need into a #tmp table before trying to use it in your existing procs (something like the sketch below), but again, my personal opinion is that it's a workaround while you get it fixed.
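    Rough idea of the fast-load workaround (the #ord_strat_copy name is just a placeholder; the READCOMMITTED table hint overrides the proc's READ UNCOMMITTED setting for that one table):

    -- Workaround sketch: copy the volatile lookup once into a temp table, then have the
    -- main query join to the copy instead of the table that's being truncated and reloaded.
    SELECT ticket, dependent, sort_string
    INTO   #ord_strat_copy
    FROM   order_strategy WITH (READCOMMITTED)   -- don't dirty-read the rows being rebuilt

    -- ...and in the main SELECT, join the copy instead:
    -- LEFT OUTER JOIN #ord_strat_copy s ON s.ticket = ord.ticket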


    - Craig Farrell

    Never stop learning, even if it hurts. Ego bruises are practically mandatory as you learn unless you've never risked enough to make a mistake.

    For better assistance in answering your questions | Forum Netiquette
    For index/tuning help, follow these directions. | Tally Tables

    Twitter: @AnyWayDBA

    Yes, good point Craig. I'm not crazy about this idea of rebuilding order_strategy each time either. I just found that if I don't clear out those records (and there are usually fewer than 40 or 50), I may have some stragglers hanging around in there, and it throws the display off a bit.

    i.e. order_strategy forces certain types of parent/child orders to "stick" together on the main order board. The parent is the "00000" record, and the sort_string allows child orders to be grouped with the parent (regardless of its ticket sort).

    Ticket   Dependent   Sort_String
    844986   000000      C 0.0999 WEB844985844986
    844987   844986      C 0.0999 WEB844985844986844987844987
    844988   844986      C 0.0999 WEB844985844986844987844988

    And yes, smarter logic would probably solve this.

    I am in fact using a #temp table, but I'm also using a TRUNCATE prior to my final insert. I think the TRUNCATE is causing some issues, since my other procedure has the READ UNCOMMITTED setting.

    These are the final three lines of my proc:

    truncate table order_strategy
    insert into order_strategy select * from #tmp_ord_strat
    drop table #tmp_ord_strat

    I also tried:

    delete from order_strategy where ticket not in (select b.ticket from #tmp_ord_strat b)

    but it's commented out with a note that it may be causing problems.

    thanks for your time again...

    Bob

  • Bob,

    You may want to check out using a synonym in your data retrieval queries and having 2 tables for order strategy. I've used it successfully on a couple of occasions for this sort of thing, but not in a real rapid-fire environment.

    You can't ALTER a synonym; you have to drop and re-create it, which could cause a concurrency problem if some data retrieval occurs on another SPID between the DROP and CREATE statements. What I did to prevent this was create a little semaphore table that procedures would read before issuing the read query. If I was about to switch the synonym (I would set the semaphore to true before doing the switch), the procedure would wait a very short time before using the synonym in a query, which gave me time to DROP and CREATE it against the other table (there's a rough sketch after the steps below).

    The sequence was:

    1. Build the alternate table.

    2. Set the semaphore to true.

    3. Drop and re-create the synonym.

    4. Clear the semaphore
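    Roughly what the switch looks like in T-SQL. All the names here - the two tables, the synonym, and the semaphore table - are just placeholders for the idea:

    -- Sketch of steps 2 through 4. Readers always query through the synonym
    -- dbo.order_strategy_live; the job alternates it between the _A and _B tables.
    UPDATE dbo.synonym_switch SET switching = 1;              -- step 2: set the semaphore

    IF EXISTS (SELECT 1 FROM sys.synonyms WHERE name = 'order_strategy_live')
        DROP SYNONYM dbo.order_strategy_live;                 -- step 3: repoint the synonym
    CREATE SYNONYM dbo.order_strategy_live FOR dbo.order_strategy_B;

    UPDATE dbo.synonym_switch SET switching = 0;              -- step 4: clear the semaphore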

    Todd Fifield

  • Is the OrderStrategy table session-specific?

    Kind of like a temp table, but multiple processes access it from multiple connections?

    If so, there are easy solutions for that, which don't involve truncate or complex delete processes, and which allow for MUCH better concurrency and performance with MUCH fewer errors and glitches.

    - Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
    Property of The Thread

    "Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon

  • @tfifield - I'm not familiar with this approach, but I will certainly have a further look at synonyms, etc.

    Thanks for your response !

    Bob

  • tfifield (3/16/2011)


    Bob,

    You may want to check out using a synonym in your data retrieval queries and having 2 tables for order strategy. I've used it successfully on a couple of occasions for this sort of thing, but not in a real rapid-fire environment.

    You can't ALTER a synonym; you have to drop and re-create it, which could cause a concurrency problem if some data retrieval occurs on another SPID between the DROP and CREATE statements. What I did to prevent this was create a little semaphore table that procedures would read before issuing the read query. If I was about to switch the synonym (I would set the semaphore to true before doing the switch), the procedure would wait a very short time before using the synonym in a query, which gave me time to DROP and CREATE it against the other table.

    The sequence was:

    1. Build the alternate table.

    2. Set the semaphore to true.

    3. Drop and re-create the synonym.

    4. Clear the semaphore

    Todd Fifield

    What you're describing is essentially a "mutex" (mutual exclusion). If you're going to use one of those, you can skip the whole synonym thing entirely and use it to control access to the table.

    There's almost certainly a simpler process than trying to build thread safety in T-SQL. It can be done, but it's a serious concurrency killer, and that can have major bad effects on user experience.
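    That said, if you do end up needing a mutex in T-SQL, sp_getapplock is the built-in way to get one rather than a hand-rolled semaphore table. Rough sketch only - the resource name is made up, and readers would take a Shared lock on the same resource before querying:

    -- Sketch: serialize the rebuild against readers with an application lock.
    BEGIN TRAN;
    EXEC sp_getapplock @Resource    = 'order_strategy_rebuild',
                       @LockMode    = 'Exclusive',
                       @LockTimeout = 5000;    -- give up after 5 seconds
    -- (in real code, check sp_getapplock's return value; < 0 means the lock wasn't granted)

    -- ... truncate/reload order_strategy here ...

    COMMIT;   -- a transaction-owned applock is released automatically at commit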

    - Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
    Property of The Thread

    "Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon

  • Am I missing something here?

    The fundamental problem is running at READ UNCOMMITTED. No READ UNCOMMITTED, no problem.

    If you're getting this error, you are reading uncommitted transactions.

    Also, a thought:

    FROM filter_list flst, users u, order_fx ord
    LEFT OUTER JOIN order_strategy s
        ON s.ticket = ord.ticket

    You are mixing old and new join syntax. That isn't a good idea. In this case, it looks like (unless you have eliminated some of the WHERE clause when you posted) you are performing a cross join on filter_list, users, and order_fx, then OUTER JOINing that to order_strategy. This seems likely to produce many more rows in your result set than you intend.
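    Written out with explicit joins, and without guessing at whatever was trimmed from the WHERE clause, the query is effectively doing this (I've assumed the unqualified status column belongs to order_fx):

    -- Sketch only: the same FROM written with explicit joins. No join conditions have been
    -- invented for filter_list or users - with nothing in the posted WHERE clause to relate
    -- them, they really are cross joins.
    SELECT ord.amount, ord.amt_unit, s.ticket AS tick_strat
    INTO   #temp
    FROM   order_fx ord
    CROSS JOIN users u
    CROSS JOIN filter_list flst
    LEFT OUTER JOIN order_strategy s
           ON s.ticket = ord.ticket
    WHERE  ord.status NOT IN ('EXECUTED', 'FILLED')
      AND  u.client   = @client
      AND  ord.client = @client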

  • Thank you for your response.

    I have been considering changing the READ UNCOMMITTED to READ COMMITTED (which I understand to be the SQL Server default).

    I only put the UNCOMMITTED in this proc below with the assumption that it would never get locked up in the event some other process was locking/updating one or more records, i.e. I'm not concerned if a dirty read occurs.

    I can confirm that I am NOT getting additional records that I do not expect. I can call this proc manually via a SQL query window and the record set looks fine.

    Here's the full query from my proc, including the WHERE clause:

    SELECT DISTINCT
        buy_sell =
            CASE WHEN ord.buy_sell = 1 THEN 'Buy'
                 WHEN ord.buy_sell = 2 THEN 'Sell'
            END,
        ord.amount,
        ord.amt_unit,
        ord_stat = dbo.Calc_Ord_Color2('sort', ...,
            dbo.get_mkt_rate(ord.cur_1, ord.cur_2, ord.order_type, ord.buy_cur, @fxo_bidask) ),
        <more columns here..>
    INTO #temp
    FROM order_fx ord, filter_list flst, users u
    WHERE
        (ord.status = @status
         or (@status = 'Accepted' and   -- these are active orders
             (ord.status = 'Modified' or
              ord.status like '%- Request' or
              ord.status like '%- Pending' or
              ord.status like '%- Rejected'))
         OR (@status = 'Pending' and
             (ord.status like '%- Pending%' or
              ord.status like '%- Request') and
             ord.status <> 'Modify - Request')
        )
        AND (flst.lcCurrency = 'ALL'
             OR (flst.lcCurrency = 'BYCURR'
                 AND (ord.buy_cur = flst.Bycurr OR ord.sel_cur = flst.ByCurr))
             OR (flst.lcCurrency = 'BYPAIR' and ord.cur_2 = flst.ByPair2 and ord.cur_1 = flst.Bypair1))
        AND (flst.lctrader = 'ALL'
             OR (flst.lctrader = 'BYTRAD' AND ord.user_id = flst.ByTrad)
             OR ord.user_id = 'WEB')
        AND flst.filter_id = @filter_id
        AND (ord.user_id = u.user_id OR ord.user_id = 'WEB')
        AND u.client = @client
        AND ord.client = @client

  • :blink:

    That is some complex join logic you've got going there.

    Let's get back to the original design flaw, a table that's getting dumped and rebuilt every few minutes. Under what circumstances does this table rebuild itself?


    - Craig Farrell

    Never stop learning, even if it hurts. Ego bruises are practically mandatory as you learn unless you've never risked enough to make a mistake.

    For better assistance in answering your questions | Forum Netiquette
    For index/tuning help, follow these directions. | Tally Tables

    Twitter: @AnyWayDBA

  • GSquared (3/16/2011)


    What you're describing is essentially a "mutex" (mutual exclusion). If you're going to use one of those, you can skip the whole synonym thing entirely and use it to control access to the table.

    There's almost certainly a simpler process than trying to build thread safety in T-SQL. It can be done, but it's a serious concurrency-killer, and that can have major bad effects on user-experience.

    Gus,

    As I said, I never tried this in a really rapid-fire type of environment. I probably wouldn't use it if there were a possibility of another process finding the mutex cleared just before I set it and issuing the read statement just after I dropped the synonym but before I re-created it. (This is the only place it could fail, I think.)

    I am familiar with mutexes. I used to write device drivers in PDP-11 assembly language.

    Todd Fifield

  • bob mazzo (3/17/2011)


    Thank you for your response.

    I have been considering changing the READ UNCOMMITTED to READ COMMITTED (which I understand to be the SQL Server default).

    I only put the UNCOMMITTED in this proc below with the assumption that it would never get locked up in the event some other process was locking/updating one or more records, i.e. I'm not concerned if a dirty read occurs.

    The problem is that you ARE getting a dirty read, and that's what is causing the error. You are correct: READ COMMITTED is the default isolation level, and this error does not occur at that level.

    If you must read and you are having trouble with concurrency (i.e. you get excessive blocking), you might want to consider READPAST. READPAST will, as it says, read past locked rows; however, if other transactions are holding large numbers of locks, this may not work for you.
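    For example, something like this - just a sketch against the posted query, and note that READPAST is only allowed under READ COMMITTED or REPEATABLE READ, so the READ UNCOMMITTED setting would have to go first:

    -- Sketch: skip rows the rebuild job currently has locked instead of dirty-reading them.
    -- Skipped rows are silently missing from the result, which may or may not be acceptable.
    SELECT ord.ticket, ord.amount, s.sort_string
    FROM   order_fx ord
    LEFT OUTER JOIN order_strategy s WITH (READPAST)
           ON s.ticket = ord.ticket
    WHERE  ord.client = @client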

  • Hi Craig,

    It's a SQL job that schedules the order_strategy table rebuild.

    i.e. order_strategy forces certain types of parent/child orders to "stick" together on the main order board. The parent is the "00000" record, and the sort_string allows child orders to be grouped with the parent (regardless of its ticket sort).

    Ticket   Dependent   Sort_String
    844986   000000      C 0.0999 WEB844985844986
    844987   844986      C 0.0999 WEB844985844986844987844987
    844988   844986      C 0.0999 WEB844985844986844987844988

    I'm looking at immediately changing the TRUNCATE to a DELETE FROM order_strategy, and also wrapping the DELETE in a transaction. Then I'll change my main stored proc to use Read Committed (i.e. the stored proc that reads from Orders and sends them back to an aspx page).
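    In other words, the end of the rebuild job would become something like this (a sketch only, using the same #tmp_ord_strat staging table as my current job):

    -- Planned change (sketch): swap the TRUNCATE for a DELETE + INSERT inside one
    -- transaction, so readers at READ COMMITTED never see the table half-rebuilt.
    BEGIN TRAN;
        DELETE FROM order_strategy;
        INSERT INTO order_strategy
        SELECT * FROM #tmp_ord_strat;
    COMMIT;

    DROP TABLE #tmp_ord_strat;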

    And I also have this small CASE embedded in the main SQL statement which looks at order_strategy fields (see the s alias, where I append the '(d)' for a dependent order):

    TICKET =
        CASE
            WHEN ord.tick_lnk2 != 0 THEN LTRIM(STR(ord.ticket)) + '(o)'
            WHEN ord.sub_type = 'LOOP' THEN LTRIM(STR(ord.ticket)) + '(l)'
            WHEN ord.tick_lnk1 != 0 and ord.sub_type != 'LOOP' THEN LTRIM(STR(ord.ticket)) + '(d)'
            WHEN ord.tick_lnk1 = 0 and s.ticket is not NULL THEN LTRIM(STR(ord.ticket)) + '(d)'
            ELSE LTRIM(STR(ord.ticket))
        END,

    thanks again.

    Bob
