What happens with uncommitted transactions?

  • So last night, I accidentally started a query containing a transaction on a VERY active database, but forgot to commit it (although I could've sworn I did). I woke up this morning, tried running a simple SELECT statement & noticed it hung. I quickly realized my uncommitted transaction was the culprit, so I went in and committed it.

    Since the SELECT statement I ran in the morning had hung due to this mistake, I became worried that other automated processes my company runs overnight may have hung as well. To my surprise, though, everything ran smoothly. Nothing else hung.

    So my question is: when I don't commit a transaction, what & who exactly does it affect? The SELECT query I ran (just from a blank query window in Management Studio) hung on me, yet our automated processes didn't. Is it only the user account of the one who made the mistake who gets affected?

    Thanks.

  • It depends upon whether the data that you had locked was required by any overnight processes and what isolation levels were in use by those processes.

    The data you hadn't committed was probably locked (I don't know what your transaction was doing), but if the other transactions use (heaven forbid) NOLOCK or suchlike then they wouldn't be blocked. Snapshot isolation could be another reason, if it's in use there.
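
    To illustrate the snapshot option, here's a rough sketch (YourDatabase is a placeholder name; assume you have the necessary permissions, and note that changing these settings usually needs no other active connections in the database):

      -- Let readers see the last committed version of a row instead of
      -- blocking on writers' locks.
      ALTER DATABASE YourDatabase SET READ_COMMITTED_SNAPSHOT ON;

      -- Or allow sessions to opt in to full snapshot isolation explicitly:
      ALTER DATABASE YourDatabase SET ALLOW_SNAPSHOT_ISOLATION ON;
      SET TRANSACTION ISOLATION LEVEL SNAPSHOT;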

    Steve Hall
    Linkedin
    Blog Site

  • First, let's clear this up:

    Is it only the user account of the one who made the [transaction], who gets [blocked]?

    Absolutely not. SQL doesn't care at all about who did it, only about what data needs to be protected.

    As background, SQL guarantees that data in a transaction will be applied all-or-none and that transactional data cannot be "lost". In order to do this, SQL must lock data that has been modified. Locking data means that it cannot be used or seen by anyone else until the lock is released.

    Since that can block out other people trying to get data -- as you discovered yourself -- SQL is careful to lock only what it has to for only as long as it has to.

    OK, with that background, you can consider what the transaction you started but didn't complete did. Let's say, for example, that it updated a single column of a single table. Then SQL would lock that row of data. It doesn't lock the whole table, just that row.

    But, if the next day you come along and write "SELECT * FROM that_table", when SQL gets to the row that was updated, it won't be able to read it, since that row is still locked. So it would wait. Eventually the timeouts built into SQL to prevent tasks from waiting forever would kick in and SQL would just cancel your request to read the table.

    Note, though, that if you updated id #3, then a request such as "SELECT * FROM that_table WHERE id = 4" would generally work just fine (although it might not, depending on details of how the table is indexed, or not). Note, too, that reads or even updates to other tables would not be blocked by locks on a different table.
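
    To make that concrete, here is a minimal sketch of the two sessions (that_table, some_other_table, some_column and the id values are just made-up names for illustration):

      -- Session 1: the transaction started the night before and never committed.
      BEGIN TRANSACTION;
      UPDATE that_table
         SET some_column = 'new value'
       WHERE id = 3;        -- SQL now holds an exclusive lock on this row
      -- ...no COMMIT or ROLLBACK, so the lock is held until one is issued.

      -- Session 2: queries run the next morning.
      SELECT * FROM that_table;               -- blocked: must read the locked row
      SELECT * FROM that_table WHERE id = 4;  -- usually fine: the locked row isn't needed
      SELECT * FROM some_other_table;         -- unaffected: locks are on a different table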

    Thus, any processing that did not require the row(s) you updated to be read could very well complete normally. Yet when you try to list that data the next day, you get blocked.

    Inserts and deletes also place locks, with the necessary differences for what they're doing versus just changing existing rows.

    In conclusion, naturally that is a simplified overview. And the reader can tell SQL "show me data even if it's locked, and if that causes bad data, I don't care" (typically done using a "WITH (NOLOCK)" hint). But, by default, SQL doesn't work that way, and always has to wait for locks to resolve before accessing modified data.
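
    For completeness, the dirty-read hint mentioned above looks like this (same made-up table name as before); it reads uncommitted data, so use it with care:

      -- Returns the uncommitted ("dirty") value of the locked row instead of waiting.
      SELECT * FROM that_table WITH (NOLOCK);

      -- The session-wide equivalent:
      SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;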

    SQL DBA, SQL Server MVP (07, 08, 09) "Money can't buy you happiness." Maybe so, but it can make your unhappiness a LOT more comfortable!

  • ScottPletcher (11/20/2015)


    Eventually the timeouts built into SQL to prevent tasks from waiting forever would kick in and SQL would just cancel your request to read the table.

    Not unless the client has set a timeout. SQL doesn't have query timeouts built in. Management Studio has an infinite timeout. If I run the setup you described in Management Studio, I could leave those sessions for a year and the one waiting will still be waiting.

    Deadlocks are what SQL rolls back automatically. Blocks it's quite happy to wait on forever.
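
    If you want to see the wait rather than guess at it, here's a quick sketch against the DMVs (the waiting request will sit there indefinitely unless the client cancels or times out):

      -- Requests currently blocked by another session's locks.
      SELECT session_id,
             blocking_session_id,
             wait_type,
             wait_time,       -- milliseconds spent waiting so far
             wait_resource
      FROM   sys.dm_exec_requests
      WHERE  blocking_session_id <> 0;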

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass

  • Scott, that is an awesome explanation.

    Brandie Tarvin, MCITP Database Administrator
    LiveJournal Blog: http://brandietarvin.livejournal.com/
    On LinkedIn!, Google+, and Twitter.
    Freelance Writer: Shadowrun
    Latchkeys: Nevermore, Latchkeys: The Bootleg War, and Latchkeys: Roscoes in the Night are now available on Nook and Kindle.

  • One thing not mentioned yet is that an uncommitted (open) transaction will also keep the transaction log from being truncated (marked for re-use), so the transaction log will continue to grow as new transactions are written to it until the uncommitted transaction is committed or rolled back, the transaction log runs out of space, or the SQL Server service is restarted.
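
    Two quick ways to spot that situation, as a sketch:

      -- Oldest active (open) transaction in the current database, if any.
      DBCC OPENTRAN;

      -- Why each database's log can't currently be truncated;
      -- ACTIVE_TRANSACTION points to an open transaction holding it up.
      SELECT name, log_reuse_wait_desc
      FROM   sys.databases;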

    Jack Corbett
    Consultant - Straight Path Solutions
    Check out these links on how to get faster and more accurate answers:
    Forum Etiquette: How to post data/code on a forum to get the best help
    Need an Answer? Actually, No ... You Need a Question
