Transactions 3

  • John Mitchell-245523 (5/30/2012)


    Hugo Kornelis (5/30/2012)


    John Mitchell-245523 (5/30/2012)


    I must admit I'm surprised that having SET XACT_ABORT OFF is the default.

    I am not. I prefer to be able to handle errors myself; SET XACT_ABORT ON does not allow that, because it forces a single standard way of handling errors on all code.

    But then why wrap statements in a transaction if you don't want them to succeed as a unit or fail as a unit?

    Because you want them to commit or rollback as a unit. In other words, you want people viewing the data to either see all or none of the modifications. Transactions are not an error-handling mechanism; they're a data consistency mechanism.

    My point is that the default setting means that Atomicity (A transaction must be an atomic unit of work; either all of its data modifications are performed, or none of them is performed) is not honoured.

    I disagree. Atomicity is honored in this case -- because there are only two valid data modifications in the transaction. The invalid INSERT in this case does not make it to the level of a data modification because it errors out before data is modified.
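
    A minimal sketch of the behaviour being debated (not from the thread; the table name is illustrative and the comments describe what I would expect under default session settings):

    CREATE TABLE dbo.Demo (col1 integer PRIMARY KEY);
    GO
    -- Default: XACT_ABORT OFF. The duplicate-key error terminates only the
    -- failing statement; the transaction stays open and the COMMIT succeeds.
    SET XACT_ABORT OFF;
    BEGIN TRANSACTION;
    INSERT dbo.Demo (col1) VALUES (1);
    INSERT dbo.Demo (col1) VALUES (1); -- fails, but only this statement
    INSERT dbo.Demo (col1) VALUES (2); -- still executes
    COMMIT TRANSACTION;
    SELECT col1 FROM dbo.Demo; -- expected: 1 and 2
    GO
    TRUNCATE TABLE dbo.Demo;
    GO
    -- XACT_ABORT ON: the same error rolls back the whole transaction and
    -- aborts the batch, so neither the third INSERT nor the COMMIT runs.
    SET XACT_ABORT ON;
    BEGIN TRANSACTION;
    INSERT dbo.Demo (col1) VALUES (1);
    INSERT dbo.Demo (col1) VALUES (1); -- transaction rolled back here
    INSERT dbo.Demo (col1) VALUES (2); -- never reached
    COMMIT TRANSACTION;
    GO
    SELECT col1 FROM dbo.Demo; -- expected: no rows
    GO
    DROP TABLE dbo.Demo;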

  • Transactions are not an error-handling mechanism; they're a data consistency mechanism.

    I think that pretty much summarises this interesting discussion.

  • OK, thanks for all the replies. Plenty to think about here.

    Hugo Kornelis (5/30/2012)


    The keywords in this statement are "all of its data modifications". I interpret this as "everything that is specified, after catering for the effect of possible errors and the specified error handling", and in that case the code snippet in the QotD conforms. But you seem to interpret this as "everything that is specified, period - any error invalidates the transaction". In my opinion, the latter is just one of several ways to handle errors, and the developer should explicitly specify if that is the required handling; your mileage (obviously) varies.

    You're right - that's exactly how I was interpreting it. Not being a developer, I hadn't considered the other options, so thank you for clarifying that.

    (BTW, please don't feel I am picking on you. You are representing a notion that many people have, as I saw from previous discussions on this subject, and I am addressing everyone through you.)

    Yes, it is easy to feel flamed when you find yourself proffering views that the wider community doesn't agree with. I'm reassured by your comment.

    L' Eomot Inversé (5/30/2012)


    Of course you can look inside transactions and violate the atomicity principle when your isolation level is neither snapshot nor serializable, but that's nothing to do with XACT_ABORT or error handling. It's to do with deliberately violating atomicity by using non-isolating isolation levels, playing with fire to gain some performance; just the sort of thing you could expect when (some decades ago) everyone building a relational database was determined to show vast throughput on benchmarks, which they didn't know how to achieve if they had to provide proper isolation. ANSI could produce a standard that every RDBMS manufacturer would ignore (an utterly pointless exercise), or it could put the compromise into the standard (which is what it did). So yes, SQL does violate the transaction principles - but with its isolation levels and not with its error handling (which, with all its faults, is completely consistent with the transaction principles).

    I understand and accept that. The reason I broadened the debate to include isolation was that I took your earlier comment (rightly or wrongly) as meaning "SQL Server doesn't honour the Isolation principle by default, and therefore we shouldn't necessarily expect it to honour the Atomicity principle". I agree that READ COMMITTED is, for many or most purposes, a good compromise between isolation and concurrency.

    sknox (5/30/2012)


    Because you want them to commit or rollback as a unit. In other words, you want people viewing the data to either see all or none of the modifications.

    Yes indeed. It all turns on whether you consider the success of all statements as being a necessary condition for a transaction to be committed. Like Hugo said, that's only one option.

    John
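
    As a hedged sketch of the "other options" Hugo alludes to (nothing here is from the thread; the temporary table is purely illustrative), TRY...CATCH lets the developer state explicitly whether every statement must succeed for the unit to commit:

    CREATE TABLE #Demo (col1 integer PRIMARY KEY);
    SET XACT_ABORT OFF;
    BEGIN TRY
    BEGIN TRANSACTION;
    INSERT #Demo (col1) VALUES (1);
    INSERT #Demo (col1) VALUES (1); -- duplicate key raises an error
    INSERT #Demo (col1) VALUES (2);
    COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
    -- The policy is now an explicit choice: roll the whole unit back,
    -- or log the error and commit what did succeed.
    IF XACT_STATE() <> 0 ROLLBACK TRANSACTION;
    PRINT 'Rolled back: ' + ERROR_MESSAGE();
    END CATCH;
    SELECT col1 FROM #Demo; -- expected: no rows, because all-or-nothing was chosen here
    DROP TABLE #Demo;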

  • I understand this behaviour, really I do, but it will never be fully intuitive; especially since SQL Server has such bizarre rules concerning which errors do what (terminate the statement, abort the batch, abort the transaction, abort the scope, doom the transaction...blah blah blah).

    To pursue the atomicity thing a bit more...

    DECLARE @T AS TABLE
    (
    col1 integer PRIMARY KEY
    );

    BEGIN TRANSACTION;
    INSERT @T (col1) VALUES (1), (3);
    INSERT @T (col1) VALUES (1), (2), (4);
    COMMIT TRANSACTION;

    SELECT * FROM @T AS t;

    DELETE @T;

    BEGIN TRANSACTION;
    INSERT @T (col1) VALUES (1);
    INSERT @T (col1) VALUES (3);
    INSERT @T (col1) VALUES (1);
    INSERT @T (col1) VALUES (2);
    INSERT @T (col1) VALUES (4);
    COMMIT TRANSACTION;

    SELECT * FROM @T AS t;
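
    For reference, the results I would expect from the two SELECTs above on a default setup (XACT_ABORT OFF):

    -- First batch: the multi-row INSERT of (1), (2), (4) fails as a single
    -- statement because of the duplicate 1, so the SELECT returns 1 and 3.
    -- Second batch: only the single-row INSERT of the duplicate 1 fails;
    -- the remaining statements succeed, so the SELECT returns 1, 2, 3, 4.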

  • Wow! Took a while to get my head round that, but when I did, it made sense. An INSERT statement has to succeed or fail as a whole, even when it attempts to insert more than one row. But I imagine the point you're making is that it's a potential banana skin for developers to beware of.

    John

  • SQL Kiwi (5/31/2012)


    I understand this behaviour, really I do, but it will never be fully intuitive; especially since SQL Server has such bizarre rules concerning which errors do what (terminate the statement, abort the batch, abort the transaction, abort the scope, doom the transaction...blah blah blah).

    To pursue the atomicity thing a bit more...

    And here is yet another variation (using an option that I would not mind seeing removed from the product)

    DECLARE @T AS TABLE
    (
    col1 integer PRIMARY KEY WITH (IGNORE_DUP_KEY = ON)
    );

    BEGIN TRANSACTION;
    INSERT @T (col1) VALUES (1), (3);
    INSERT @T (col1) VALUES (1), (2), (4);
    COMMIT TRANSACTION;

    SELECT * FROM @T AS t;

    DELETE @T;

    BEGIN TRANSACTION;
    INSERT @T (col1) VALUES (1);
    INSERT @T (col1) VALUES (3);
    INSERT @T (col1) VALUES (1);
    INSERT @T (col1) VALUES (2);
    INSERT @T (col1) VALUES (4);
    COMMIT TRANSACTION;

    SELECT * FROM @T AS t;
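
    (Hedging slightly, since this depends on the IGNORE_DUP_KEY option being accepted inline on the table variable as written: with IGNORE_DUP_KEY = ON, an attempt to insert a duplicate key is discarded with a "Duplicate key was ignored." warning rather than raising an error, so I would expect both SELECTs in this variation to return 1, 2, 3 and 4.)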


    Hugo Kornelis, SQL Server/Data Platform MVP (2006-2016)
    Visit my SQL Server blog: https://sqlserverfast.com/blog/
    SQL Server Execution Plan Reference: https://sqlserverfast.com/epr/

  • John Mitchell-245523 (5/31/2012)


    Wow! Took a while to get my head round that, but when I did, it made sense. An INSERT statement has to succeed or fail as a whole, even when it attempts to insert more than one row. But I imagine the point you're making is that it's a potential banana skin for developers to beware of.

    Yep, making the point that I understand why these things seem confusing and/or counter-intuitive. I have heard people say that the reason a statement has to succeed or fail as a whole is that every statement runs inside an implicit transaction...:-D

  • Hugo Kornelis (5/31/2012)


    And here is yet another variation (using an option that I would not mind seeing removed from the product)

    Ah, IGNORE_DUP_KEY, a rich vein for QotD questions 🙂

    CREATE TABLE #T
    (
    col1 integer NOT NULL
    );

    INSERT #T (col1) VALUES (1), (1), (2), (3);

    ALTER TABLE #T
    ADD PRIMARY KEY (col1)
    WITH (IGNORE_DUP_KEY = ON);

    Seems like déjà vu to me.


  • Good back-to-basics question!

