Too much time to commit the transaction

  • Hmmm. I like Jeff's idea. I also like Emmitt's idea. Is this database starting fresh? Are the triggers truly needed for this data load?

    If the triggers are not needed, I would disable them until done with the data load and then enable them after (see the sketch after this post).

    Jason...AKA CirqueDeSQLeil
    _______________________________________________
    I have given a name to my pain...MCM SQL Server, MVP
    SQL RNNR
    Posting Performance Based Questions - Gail Shaw
    Learn Extended Events
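
    A minimal sketch of the disable/re-enable approach, assuming a hypothetical destination table dbo.Destination with a trigger named trg_Destination (substitute your own names):

        -- Disable the trigger for the duration of the load (hypothetical names).
        DISABLE TRIGGER trg_Destination ON dbo.Destination;

        -- ... run the data load here ...

        -- Re-enable the trigger once the load is finished.
        ENABLE TRIGGER trg_Destination ON dbo.Destination;

        -- Or, to toggle every trigger on the table at once:
        -- ALTER TABLE dbo.Destination DISABLE TRIGGER ALL;
        -- ALTER TABLE dbo.Destination ENABLE TRIGGER ALL;

    Note that rows inserted while the trigger is disabled are never touched by it, so any work the trigger would have done must be handled separately after the load.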

  • One thing you can do is remove the indexes before you start the operation and, once done, apply them back. It will be much faster (a sketch follows below).
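
    A minimal sketch of this, assuming a hypothetical nonclustered index IX_Destination_Col1 on dbo.Destination:

        -- Drop the nonclustered index before the load (hypothetical names).
        DROP INDEX IX_Destination_Col1 ON dbo.Destination;

        -- ... bulk load the data here ...

        -- Recreate the index in one pass once the load is finished.
        CREATE NONCLUSTERED INDEX IX_Destination_Col1 ON dbo.Destination (Col1);

    The usual advice is to leave the clustered index in place and drop/recreate only the nonclustered ones; rebuilding them in a single pass after the load is generally much cheaper than maintaining them row by row during it.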

  • CirquedeSQLeil (2/17/2010)


    Hmmm. I like Jeff's idea. I also like Emmitt's idea. Is this database starting fresh? Are the triggers truly needed for this data load?

    If the triggers are not needed, I would disable them until done with the data load and then enable them after.

    Even if the triggers aren't necessary, I want to see them because so many folks write RBAR triggers by mistake.

    This would appear to be a simple transformation from one table to another. Total time to do 10 million rows should be less than 2 minutes, and probably closer to under a minute, but I can't tell for sure... I (we) need to see all the code, the table definitions, indexes, triggers, and maybe even some sample data. Without all that, we're only guessing (except for the idea of applying indexes only after we're done). A sketch of the set-based approach follows this post.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.
    "Change is inevitable... change for the better is not".

    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
    Intro to Tally Tables and Functions
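
    To make the set-based point concrete, here is a minimal sketch with made-up table and column names: the same transform-and-insert that a row-by-row loop performs 10 million times, expressed as a single INSERT ... SELECT:

        -- Set-based version of "fetch one row, transform it, insert it":
        -- one statement, one commit, and the transformation is written
        -- against columns, not rows. All names here are hypothetical.
        INSERT INTO dbo.Destination (CustomerID, FullName, LoadDate)
        SELECT  s.CustomerID,
                LTRIM(RTRIM(s.FirstName)) + ' ' + LTRIM(RTRIM(s.LastName)),
                GETDATE()
        FROM    dbo.Source AS s;

    Whatever the actual transformation is, if it can be written as expressions in the SELECT list (or a few joins), the engine can optimize it as one operation instead of 10 million separate ones.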

  • Anam Verma (2/16/2010)


    Hi Jason

    I am using T-SQL for the data transformation. I think I have confused everyone on the forum.

    Let me rephrase what I am doing. I have a source table with 10 million records. I wrote a T-SQL script to transform, translate, and modify the source data, which is then finally inserted into the destination table. This is all done in a loop incremented by one, yes, one, where a single record is fetched, the data modification is done, and the record is then inserted and finally committed. This way the loop runs 10 million times to insert all the records into the destination table.

    The reason behind running it in a loop incremented by a single record is that there is a trigger defined on the destination table. The approach of running a loop was considered the best one after readings on this website and keeping in mind the existing business requirements. Let's not get off track from why I am inserting record by record.

    My only concern at the moment is: is it normal for SQL Server to take 12 milliseconds to commit a single record at the time of insertion into a table (either empty or with 10 million rows)?

    Steve:

    Your response about indexes did strike me, and that's what I had put in my initial post, but I was not confident and wanted to confirm with the masters of databases.

    The reason you keep seeing everyone harp on doing it more than one row at a time is that it essentially takes the same amount of time to insert 1 record as it does 500, or even 5,000, records in many cases. The pattern is not linear, so forcing the server to run row by row is essentially a good way to guarantee the worst possible performance from SQL Server (see the batching sketch after this post).

    Like Jeff said - take the extra time to post what you're planning to do, and see what kinds of recommendations you might get. I think you'll be amazed at what you get out of it.

    ----------------------------------------------------------------------------------
    Your lack of planning does not constitute an emergency on my part...unless you're my manager...or a director and above...or a really loud-spoken end-user... All right - what was my emergency again?
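
    If one giant insert is not an option (for example, to keep the transaction log and lock footprint manageable), committing in batches still beats one row at a time. A hedged sketch, again with hypothetical table, column, and key names, loading 5,000 rows per pass:

        -- Batch the load 5,000 rows at a time; each pass auto-commits.
        -- dbo.Source, dbo.Destination, and CustomerID are made-up names,
        -- and CustomerID is assumed to be a unique key in both tables.
        DECLARE @BatchSize int;
        DECLARE @Rows int;
        SET @BatchSize = 5000;
        SET @Rows = 1;

        WHILE @Rows > 0
        BEGIN
            INSERT INTO dbo.Destination (CustomerID, FullName)
            SELECT TOP (@BatchSize)
                    s.CustomerID,
                    LTRIM(RTRIM(s.FirstName)) + ' ' + LTRIM(RTRIM(s.LastName))
            FROM    dbo.Source AS s
            WHERE   NOT EXISTS (SELECT 1
                                FROM dbo.Destination AS d
                                WHERE d.CustomerID = s.CustomerID);

            SET @Rows = @@ROWCOUNT;  -- 0 once every source row has been copied
        END;

    Tuning the batch size (5,000 vs. 50,000, say) is a matter of measuring; the point is that the cost per row falls dramatically as soon as you stop committing one row at a time.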

  • Jeff Moden (2/17/2010)


    ...

    Even if the triggers aren't necessary, I want to see them because so many folks write RBAR triggers by mistake.

    This would appear to be a simple transformation from one table to another. Total time to do 10 million rows should be less than 2 minutes, and probably closer to under a minute, but I can't tell for sure... I (we) need to see all the code, the table definitions, indexes, triggers, and maybe even some sample data. Without all that, we're only guessing (except for the idea of applying indexes only after we're done).

    You're right. This is essential information and will have an impact on the outcome or proposed solution.

    Jason...AKA CirqueDeSQLeil
    _______________________________________________
    I have given a name to my pain...MCM SQL Server, MVP
    SQL RNNR
    Posting Performance Based Questions - Gail Shaw
    Learn Extended Events
