An interesting thing about isnull

  • Paul White NZ (3/31/2010)


    CirquedeSQLeil (3/31/2010)


    Adam Machanic summarizes his testing here.

    In the comments, Adam notes that there seems to be no difference in SQL Server 2008...

    My testing shows that, in any application where I would use them, the difference between the two would be negligible in SQL2k5 and 2000.

    I guess the end result being - it depends.

    And since 2008 is where people should be moving - moot point?

    Jason...AKA CirqueDeSQLeil
    _______________________________________________
    I have given a name to my pain...MCM SQL Server, MVP
    SQL RNNR
    Posting Performance Based Questions - Gail Shaw
    Learn Extended Events

  • Paul White NZ (3/31/2010)


    Hugo Kornelis (3/31/2010)


    I've got to correct you AND myself now.

    Did you read the comments in Alex's blog entry (including mine) or the previous article?

    My point there, as here, is that page splits can cause rows to be read twice or not at all.

    I felt sure that you and I had discussed this effect before.

    Alex's previous article demonstrates COUNT(*) totals going awry, and IIRC Tony R's blog entry is about roughly the same thing, but I am not a fan 🙂

    Please do read the comments on Alex K's blog, because I don't want you to think I am making this up as I go along...;-)

    I'm sorry that I keep arguing, but I just can't let this rest...

    I must admit that I did not read the comments on Alex's blog entry. I looked at the blog post, recognised it as one I had seen when it was published, recalled what it was about, then came back here to respond. I have now read the comments as well. And just for the record, I have never thought you were making things up, though I still think you don't understand exactly what is happening in the various cases.

    Even though, after my initial "never", I have now found that there are indeed cases where, even under read committed, a row can be skipped or read twice, I have to maintain that this can NOT be caused by page splits. Allow me to explain (almost everyone except Paul should probably skip the next part; it's pretty deep...)

    SQL Server has two ways of scanning a table or index - using the pointers to process all leaf pages in the "logical" order, or using the IAM (Index Allocation Map) to process the pages assigned to the table or index in the order in which they happen to be laid out on disk. The latter version is indeed susceptible to reading rows twice or skipping them if page splits happen during the read. That's because a split can move data from a page that has already been read to a page that has not yet been read, or vice versa. When data is read by following the pointer chain, a page split doesn't change anything - as part of the page split process, the next-page/prev-page pointer chain is updated so that the "old" page is replaced by two "new" pages in the original "logical" location.

    However, SQL Server will only use an IAM scan to read data if two requirements are both met. First, the query execution plan must ask for an *unordered* scan (which means the data doesn't have to be ordered, but it's no problem if it happens to be) - an ordered scan can only read data by following the pointers in logical order. The second requirement is that the query must take either a table lock or no locks at all. Now, with a table lock you can get no page splits, because splits can only be caused by modifications, which the table lock precludes. Which leaves only the no-locks option for this scenario - that is, either a NOLOCK hint or the READ UNCOMMITTED isolation level.
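
    To make the two requirements concrete, here is a minimal sketch (the table name is hypothetical, just for illustration):

    -- Plan asks for an unordered scan and no locks are taken:
    -- both conditions for an allocation order (IAM) scan can be met.
    SELECT COUNT(*)
    FROM dbo.SomeTable WITH (NOLOCK);

    -- A table lock also permits an IAM scan - but then no concurrent
    -- modifications, and hence no page splits, are possible.
    SELECT COUNT(*)
    FROM dbo.SomeTable WITH (TABLOCK);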

    The issues that Tony points out in his blog, and the one pointed out by Alex in the blog with all the C# code, are not caused by page splits but by a different issue. Taking Alex's post as an example: the COUNT(*) gets the fastest performance by reading the smallest index, which is the index on the AssignedTo column. And the concurrent connection happens to be updating exactly that column. So what happens here is that a row is read for the COUNT(*), and then the update to the indexed column causes it to be relocated to a different page in the same index (which might or might not cause a page split - that is irrelevant here) which has at that time not yet been read. Once the scan for the COUNT(*) gets there, it waits for the locks to be released, then reads the page where that row has been added in the meantime, so the row has now been read twice. (And of course, a similar scenario where the row is moved in the other direction would cause the row to be skipped.) It's a bit like trying to count children at a toddlers' school - you put them in a line, then start counting heads from left to right, but since they can't stay still long enough, you are bound to count some twice and skip others.
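
    To sketch that scenario in T-SQL (table and column names are hypothetical, loosely mirroring Alex's setup):

    -- Connection 1: the scan reads the smallest index, the one on AssignedTo
    SELECT COUNT(*)
    FROM dbo.Tickets;

    -- Connection 2, concurrently: updating the indexed column relocates the
    -- row within that same index, possibly to a page ahead of the scan point
    UPDATE dbo.Tickets
    SET AssignedTo = 'SomeoneElse'
    WHERE TicketId = 42;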


    Hugo Kornelis, SQL Server/Data Platform MVP (2006-2016)
    Visit my SQL Server blog: https://sqlserverfast.com/blog/
    SQL Server Execution Plan Reference: https://sqlserverfast.com/epr/

  • Hugo Kornelis (3/31/2010)


    my gut feeling is that, if the same data is used multiple times in a query, it should use the same value, regardless of concurrent modifications.

    (And, of course, if the reference is only actually used once in the query and some internal process of SQL Server duplicates it somewhere on the road from query to execution plan, having the result changed between the first and second execution is only worse).

    Which is why I submitted a bug report: https://connect.microsoft.com/SQLServer/feedback/details/546437/coalesce-subquery-1-may-return-null. The bug reported is a case I constructed, based on this discussion, where COALESCE(..., 1) may erroneously return NULL (which should of course never happen).
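
    For reference: COALESCE(x, 1) is defined as shorthand for a CASE expression that evaluates x twice, which is exactly where the race lives. Sketched for the subquery used in the repro (this is the documented CASE rewrite, not the literal execution plan):

    SELECT CASE
             WHEN (SELECT Nullable FROM Demo WHERE SomeCol = 1) IS NOT NULL
             THEN (SELECT Nullable FROM Demo WHERE SomeCol = 1) -- evaluated again
             ELSE 1
           END;
    -- A concurrent update can flip Nullable from 1 to NULL between the two
    -- evaluations, so the CASE - and hence the COALESCE - returns NULL.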

    For those who are unwilling to go to the Connect site, here is a copy of the repro steps.

    First, set up the required table as follows:

    CREATE TABLE Demo
    (PrimKey int NOT NULL,
     SomeCol int NOT NULL DEFAULT (0),
     Nullable int NULL,
     Filler char(200) NOT NULL DEFAULT ('Rubbish'),
     PRIMARY KEY (PrimKey)
    );
    go

    -- Quick and dirty way to create 10,000 rows of test data
    WITH ManyRows AS
    (SELECT ROW_NUMBER() OVER (ORDER BY a.object_id, b.object_id) AS rn
     FROM sys.objects AS a
     CROSS JOIN sys.objects AS b)
    INSERT INTO Demo (PrimKey)
    SELECT rn
    FROM ManyRows
    WHERE rn BETWEEN 1 AND 10000;
    go

    -- Change SomeCol value in one row
    UPDATE Demo
    SET SomeCol = 1
    WHERE PrimKey = 1234;
    go

    Next, open two connections. In one, paste the code below and run - it will run forever, flipping the Nullable column between 1 and NULL as fast as it can:

    WHILE 1 = 1
    BEGIN;
        UPDATE Demo
        SET Nullable = NULLIF(1, Nullable)
        WHERE PrimKey = 1234;
    END;

    In the second connection, paste and execute the code below:

    DECLARE @Good int, @Bad int;
    SET @Good = 0;
    SET @Bad = 0;
    WHILE @Good + @Bad < 1000
    BEGIN;
        IF COALESCE((SELECT Nullable
                     FROM Demo
                     WHERE SomeCol = 1), 1) IS NULL
        BEGIN;
            SET @Bad = @Bad + 1;
        END;
        ELSE
        BEGIN;
            SET @Good = @Good + 1;
        END;
    END;
    SELECT @Good AS Good, @Bad AS Bad;

    On my system, the Bad count actually exceeded the Good count...

    Votes and repro confirmations of the bug are, of course, welcome.


    Hugo Kornelis, SQL Server/Data Platform MVP (2006-2016)
    Visit my SQL Server blog: https://sqlserverfast.com/blog/
    SQL Server Execution Plan Reference: https://sqlserverfast.com/epr/

  • Thanks Hugo for providing that test case to demonstrate the bug.

    Jason...AKA CirqueDeSQLeil
    _______________________________________________
    I have given a name to my pain...MCM SQL Server, MVP
    SQL RNNR
    Posting Performance Based Questions - Gail Shaw
    Learn Extended Events

  • Interesting discussion. I suspect there are more than a few questions to be built from it. 🙂

  • Hugo Kornelis (3/31/2010)


    I'm sorry that I keep arguing, but I just can't let this rest...

    Hey, no worries...I do not see this as arguing - it is a fascinating debate! It has also been a good 'refresher' for me, since I haven't looked at this issue in depth for over a year now. Late reply due to time zone issues.

    I have to maintain that this can NOT be caused by page splits.

    I agree. I was mis-remembering the cause here. Page splitting is one mechanism that can cause problems at READ UNCOMMITTED, as you rightly point out. Please forgive my memory lapse there - page splits were not relevant to my point. It turns out that the whole issue is partly described in Books Online: Concurrency Effects. The relevant extracts are:

    Page Splits

    "When you are using READ UNCOMMITTED, if your query reads rows using an allocation order scan (using IAM pages), you might miss rows if another transaction is causing a page split. This cannot occur when you are using read committed because a table lock is held during a page split and does not happen if the table does not have a clustered index, because updates do not cause page splits."

    Missed/duplicate reads at higher isolation levels

    "Transactions that are running at the READ COMMITTED level do issue shared locks, but the row or page locks are released after the row is read. In either case, when you are scanning an index, if another user changes the index key column of the row during your read, the row might appear again if the key change moved the row to a position ahead of your scan. Similarly, the row might not appear if the key change moved the row to a position in the index that you had already read. To avoid this, use the SERIALIZABLE or HOLDLOCK hint, or row versioning."

    Craig Freedman has an excellent explanation of the latter effect here: http://blogs.msdn.com/craigfr/archive/2007/04/25/read-committed-isolation-level.aspx for anyone reading along that is interested.

    What neither of those states explicitly is that the same effect occurs at REPEATABLE READ - it is easy to verify that, though, using a test based on Craig's reproduction script.
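
    For completeness, a quick sketch of the two avoidance options that BOL mentions (using Hugo's Demo table; the database name is hypothetical):

    -- Option 1: SERIALIZABLE semantics for the scan
    SELECT COUNT(*)
    FROM dbo.Demo WITH (HOLDLOCK);

    -- Option 2: row versioning - the scan reads a consistent snapshot
    ALTER DATABASE MyDatabase SET ALLOW_SNAPSHOT_ISOLATION ON;
    SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
    SELECT COUNT(*)
    FROM dbo.Demo;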

    On the subject of IAM-driven scans. You are, of course, absolutely right about the conditions for an IAM scan on a table with a clustered index (ordered:false, plus NOLOCK or TABLOCK), but it occurs to me that heaps introduce an extra possibility: A row that has been read by an IAM scan can be updated by a concurrent process, such that the row moves (leaving behind a forwarding pointer) to a later page that the original scan has not reached yet. I imagine it is possible to miss a row via the same mechanism, in reverse, but I haven't tested either of these. I just mention it as a point of interest, really.

    The issues that Tony points out in his blog, and the one pointed out by Alex in the blog with all the C# code, are not caused by page splits but by a different issue.

    I knew I had seen the point proven by Alex at some stage - it is a pity I originally referenced the wrong article, and mis-remembered the cause (page splits). I have also just realised that this covers my points above, but I am leaving them in, because it seems a shame to delete all that nice formatting and references 😉

    Anyhow...my flawed powers of recall aside, can we now agree that rows can be read twice or not at all under READ COMMITTED and REPEATABLE READ - regardless of heaps/ordered scans? I do apologise that it took so many posts to get the causes nailed down, but the discussion seems to have been of interest to some, so it's not all bad news 🙂

    One final thing. We started off talking about common sub-expressions being evaluated more than once by CASE. That man Craig Freedman has another entry that describes the issue, and presents a work-around: http://blogs.msdn.com/craigfr/archive/2008/06/27/subqueries-in-between-and-case-statements.aspx. I don't think it helps with COALESCE directly, but it is very interesting nonetheless.

    Thanks for a most enjoyable discussion!

    Paul

  • Paul White NZ (4/1/2010)


    On the subject of IAM-driven scans. You are, of course, absolutely right about the conditions for an IAM scan on a table with a clustered index (ordered:false, plus NOLOCK or TABLOCK), but it occurs to me that heaps introduce an extra possibility: A row that has been read by an IAM scan can be updated by a concurrent process, such that the row moves (leaving behind a forwarding pointer) to a later page that the original scan has not reached yet. I imagine it is possible to miss a row via the same mechanism, in reverse, but I haven't tested either of these. I just mention it as a point of interest, really.

    No, that won't happen. This is because of how a heap scan treats forwarding pointers. The rules are simple:

    1. when a forwarding pointer is encountered, the pointer is followed IMMEDIATELY and the forwarded row is then read

    2. when a forwarded row is encountered, it is skipped (because it either has already been read via the forwarding pointer, or will be read when the page holding the forwarding pointer is scanned).

    So if a row that has already been read is forwarded to a page that has not yet been read, it won't be read again - it's marked as a forwarded row and will be skipped. And in the reverse case, if a row that has not yet been read is forwarded to a page that has already been processed, it will be read once the forwarding pointer is encountered.
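
    (If you want to see those forwarding pointers for yourself, they can be counted with sys.dm_db_index_physical_stats - heap name hypothetical; forwarded_record_count is only populated for a heap, and only in SAMPLED or DETAILED mode.)

    SELECT forwarded_record_count
    FROM sys.dm_db_index_physical_stats(
         DB_ID(), OBJECT_ID(N'dbo.SomeHeap'), 0, NULL, 'DETAILED');
    -- index_id 0 = the heap itself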

    And so, I finally get an answer to a question that I have asked myself since 2006. I finally know why Microsoft chose to implement "the table scan from hell".

    Anyhow...my flawed powers of recall aside, can we now agree that rows can be read twice or not at all under READ COMMITTED and REPEATABLE READ - regardless of heaps/ordered scans?

    Yes, we can. Didn't I already say so in one of my previous posts?

    One final thing. We started off talking about common sub-expressions being evaluated more than once by CASE. That man Craig Freedman has another entry that describes the issue, and presents a work-around: http://blogs.msdn.com/craigfr/archive/2008/06/27/subqueries-in-between-and-case-statements.aspx. I don't think it helps with COALESCE directly, but it is very interesting nonetheless.

    Thanks for the link - anything Craig Freedman writes is always worth reading!


    Hugo Kornelis, SQL Server/Data Platform MVP (2006-2016)
    Visit my SQL Server blog: https://sqlserverfast.com/blog/
    SQL Server Execution Plan Reference: https://sqlserverfast.com/epr/

  • Hugo Kornelis (3/31/2010)


    ...I submitted a bug report... a case I constructed, based on this discussion, where COALESCE(..., 1) may erroneously return NULL (which should of course never happen).

    Voted, repro'd, and a sort-of 'workaround' added.

    The optimizer seems to break its own rules here - using the two expressions as if they were interchangeable in the final Compute Scalar, and in the Pass Through expression on the second left join too.

    Correct behaviour can be restored by ensuring column references are used instead of expressions:

    IF COALESCE((SELECT TOP (1) Nullable
                 FROM Demo
                 WHERE SomeCol = 1), 1) IS NULL

  • Hugo Kornelis (4/1/2010)


    No, that won't happen. This is because of how a heap scan treats forwarding pointers.

    D'oh! Of course, of course, of course ...thanks!

    Yes, we can. Didn't I already say so in one of my previous posts?

    Probably, yes. I just wanted to be sure. Summarizing, I guess.

    Paul

  • Paul White NZ (4/1/2010)


    Hugo Kornelis (3/31/2010)


    ...I submitted a bug report... a case I constructed, based on this discussion, where COALESCE(..., 1) may erroneously return NULL (which should of course never happen).

    Voted, repro'd, and a sort-of 'workaround' added.

    The optimizer seems to break its own rules here - using the two expressions as if they were interchangeable in the final Compute Scalar, and in the Pass Through expression on the second left join too.

    Correct behaviour can be restored by ensuring column references are used instead of expressions:

    IF COALESCE((SELECT TOP (1) Nullable
                 FROM Demo
                 WHERE SomeCol = 1), 1) IS NULL

    Wow! That is really intriguing....

    Thanks, Paul!


    Hugo Kornelis, SQL Server/Data Platform MVP (2006-2016)
    Visit my SQL Server blog: https://sqlserverfast.com/blog/
    SQL Server Execution Plan Reference: https://sqlserverfast.com/epr/

  • Hugo Kornelis (4/1/2010)


    Wow! That is really intriguing....

    Exactly my reaction when I first looked at your Connect item! I have added it to my watch list.

    Thanks, Hugo.

  • Paul White NZ (3/31/2010)


    CirquedeSQLeil (3/31/2010)


    Adam Machanic summarizes his testing here.

    In the comments, Adam notes that there seems to be no difference in SQL Server 2008...

    I think that's him expressing himself badly - the figures he gives for SQL Server 2008 indicate that coalesce is now measurably faster than isnull.

    Tom

  • Paul White NZ (4/1/2010)


    Hugo Kornelis (3/31/2010)


    ...I submitted a bug report... a case I constructed, based on this discussion, where COALESCE(..., 1) may erroneously return NULL (which should of course never happen).

    Voted, repro'd, and a sort-of 'workaround' added.

    The interesting result here is that the MS reply recognises that the optimiser should recognise the duplicated expression as just that, and evaluate it only once. So maybe it will get fixed sometime.

    It would be good if that were to happen for case statements too, but given that the ANSI definition says the simple version is a shorthand for the version duplicating the expression, I guess it won't. The workaround in the reply works for case too, of course.

    Tom

  • Tom.Thomson (4/7/2010)


    Paul White NZ (3/31/2010)


    CirquedeSQLeil (3/31/2010)


    Adam Machanic summarizes his testing here.

    In the comments, Adam notes that there seems to be no difference in SQL Server 2008...

    I think that's him expressing himself badly - the figures he gives for SQL Server 2008 indicate that coalesce is now measurably faster than isnull.

    Did it? My memory was of mixed results. Can't actually be bothered to check again though 😀

  • Tom.Thomson (4/7/2010)


    The interesting result here is that the MS reply recognises that the optimiser should recognise the duplicated expression as just that, and evaluate it only once. So maybe it will get fixed sometime.

    I think it is important to say that this only applies to the specific expansion of COALESCE.

    I hope it does get fixed, but I don't expect for one moment that they will change the way COALESCE is implemented (with CASE). Will be an interesting one to watch.

    It would be good if that were to happen for case statements too, but given that the ANSI definition says the simple version is a shorthand for the version duplicating the expression, I guess it won't. The workaround in the reply works for case too, of course.

    I agree - they won't change the ANSI-esque expansion.
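
    To spell out the shorthand Tom refers to (using the subquery from Hugo's repro as the example expression): per the ANSI definition, the simple CASE

    SELECT CASE (SELECT Nullable FROM Demo WHERE SomeCol = 1)
             WHEN 1 THEN 'one'
             ELSE 'other'
           END;

    is just an abbreviation of the searched form, which duplicates the expression - so it is subject to the same double-evaluation issue, and the same TOP (1) workaround applies:

    SELECT CASE
             WHEN (SELECT Nullable FROM Demo WHERE SomeCol = 1) = 1 THEN 'one'
             ELSE 'other'
           END;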
