Too much time to commit the transaction

  • I am facing a performance issue with one of the scripts I am running. The script is a stored procedure where a record is inserted into a table inside an explicit BEGIN/COMMIT TRANSACTION block.

    For your reference, the insert statement is below:

    BEGIN TRANSACTION

    INSERT INTO [DBName].[dbo].[TblName]
        ([ColID]            -- FK
        , [Col1ID]          -- FK
        , [Col2ID]          -- FK
        , [ATTranUsername]
        , [ATTranOnDate]
        , [ATTranOnTypeID]
        , [RECORDID])
    VALUES
        (@ColID
        , @Col1ID
        , @Col2ID
        , @ATTranUsername
        , @ATTranOnDate
        , @ATTranOnTypeID
        , @RECORDID)

    COMMIT TRANSACTION

    In SQL Profiler, I noticed that SQL Server is taking 12 microseconds to commit the transaction.

    The only reason I can think of at the moment for the commit to take this much time is that there are 5 indexes on this table: 3 non-clustered on the foreign keys, 1 primary key on an identity column, and 1 non-clustered index on (TblID, ColID, Col1ID, Col2ID). Once fully loaded, the table I am inserting into will have almost 10 million records.

    Please advise: is it normal for SQL Server to take this much time?

    On a side note: there is no increase in log file size.

  • You do realize how fast that really is?

    A microsecond is an SI unit of time equal to one millionth (10⁻⁶) of a second.

    A microsecond is equal to 1000 nanoseconds or 1/1000 millisecond.

    That is pretty fast.

    Jason...AKA CirqueDeSQLeil
    _______________________________________________
    I have given a name to my pain...MCM SQL Server, MVP
    SQL RNNR
    Posting Performance Based Questions - Gail Shaw
    Learn Extended Events

  • I agree, this is perfectly acceptable.

    CEWII

  • Sorry, it is 12 milliseconds instead of 12 microseconds; I was thrown off by the default duration unit setting in SQL Profiler.

  • That isn't bad either. If it were 12 seconds, or maybe even 1 second, then we could start looking for a performance problem. IMO.

    Jason...AKA CirqueDeSQLeil
    _______________________________________________
    I have given a name to my pain...MCM SQL Server, MVP
    SQL RNNR
    Posting Performance Based Questions - Gail Shaw
    Learn Extended Events

  • Still not a problem.

    If you are doing it 10 million times I might start to worry.

    CEWII

  • Elliott W (2/16/2010)


    Still not a problem.

    If you are doing it 10 million times I might start to worry.

    CEWII

    Exactly, this is the point to worry about: it is taking 12 milliseconds to insert ONE record and I have to insert 10 million records. At 12 ms per row, 10 million rows works out to roughly 120,000 seconds, which is over 33 hours.

  • How many indexes do you have? How many are out of order compared to the order of the inserts? And why are you trying to insert 10 million rows one at a time?

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.
    "Change is inevitable... change for the better is not".

    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
    Intro to Tally Tables and Functions

  • How many indexes do you have?

    There are a total of 5 indexes on this table:

    - 1 clustered, as the primary key

    - 3 non-clustered, created on the foreign keys

    - 1 non-clustered, created on 3 columns

    How many are out of order compared to the order of the inserts?

    Initially the table is empty, and I am inserting records one by one; they will add up to 10 million once the script has run successfully.

    And why are you trying to insert 10 million rows one at a time?

    I am inserting record by record in a loop, committing after every insertion. The reason for inserting record by record is that there is data transformation from the source to the destination table, and per the requirements there is also per-record error handling (see the sketch at the end of this post).

    Hopefully this is the information you are after.
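
    For clarity, the pattern looks roughly like this. This is only a minimal sketch: the cursor, dbo.SourceTbl, and dbo.TblName_Errors are illustrative placeholders, and the real script also performs the transformation before the insert.

    DECLARE @RECORDID INT, @ColID INT, @Col1ID INT, @Col2ID INT;

    DECLARE cur CURSOR LOCAL FAST_FORWARD FOR
        SELECT RECORDID, ColID, Col1ID, Col2ID FROM dbo.SourceTbl;

    OPEN cur;
    FETCH NEXT FROM cur INTO @RECORDID, @ColID, @Col1ID, @Col2ID;

    WHILE @@FETCH_STATUS = 0
    BEGIN
        BEGIN TRY
            BEGIN TRANSACTION;
            -- the real transformation of the fetched values happens here
            INSERT INTO dbo.TblName (ColID, Col1ID, Col2ID, RECORDID)
            VALUES (@ColID, @Col1ID, @Col2ID, @RECORDID);
            COMMIT TRANSACTION;
        END TRY
        BEGIN CATCH
            IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
            -- per-record error handling: log the failing row and carry on
            INSERT INTO dbo.TblName_Errors (RECORDID, ErrorMessage)
            VALUES (@RECORDID, ERROR_MESSAGE());
        END CATCH;

        FETCH NEXT FROM cur INTO @RECORDID, @ColID, @Col1ID, @Col2ID;
    END;

    CLOSE cur;
    DEALLOCATE cur;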

  • As Jeff mentions, indexes are a valid concern: they will have an impact on how long it takes to insert records.

    If I were inserting these records one at a time, then I would worry about the total time to insert them. It is generally better to batch a big insert like this. There are also other methods to load 10 million records; one such is a bulk copy (see the sketch at the end of this post).

    Now if you are inserting 1 record into a table that already has 10 million records, that is again a different story. I would not worry about 12 ms for a single-record insert.

    I think you should use more than 1 record as your baseline test for the 10 million-record load. 1,000 records, or even 1 million, would scale to 10 million far better than a single-record insert does.
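
    For example, if the transformed data can be staged to a flat file first, a bulk load commits in large batches instead of per row. This is only a hypothetical sketch; the file path, format, and options are placeholders:

    BULK INSERT dbo.TblName
    FROM 'C:\load\TblName.dat'   -- placeholder path
    WITH (
        FIELDTERMINATOR = ',',
        ROWTERMINATOR = '\n',
        BATCHSIZE = 100000,   -- commit in chunks rather than per row
        TABLOCK               -- allows minimal logging under the right recovery model
    );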

    Jason...AKA CirqueDeSQLeil
    _______________________________________________
    I have given a name to my pain...MCM SQL Server, MVP
    SQL RNNR
    Posting Performance Based Questions - Gail Shaw
    Learn Extended Events

  • Anam Verma (2/16/2010)


    How many indexes do you have?

    There are a total of 5 indexes on this table:

    - 1 clustered, as the primary key

    - 3 non-clustered, created on the foreign keys

    - 1 non-clustered, created on 3 columns

    How many are out of order compared to the order of the inserts?

    Initially the table is empty, and I am inserting records one by one; they will add up to 10 million once the script has run successfully.

    And why are you trying to insert 10 million rows one at a time?

    I am inserting record by record in a loop, committing after every insertion. The reason for inserting record by record is that there is data transformation from the source to the destination table, and per the requirements there is also per-record error handling.

    Hopefully this is the information you are after.

    Are you using T-SQL or SSIS for the data transformation? Just curious. Even with data transformation, T-SQL is more efficient when run in batches.

    SSIS offers the ability to handle errors in batches, with information for each row that errors.

    Jason...AKA CirqueDeSQLeil
    _______________________________________________
    I have given a name to my pain...MCM SQL Server, MVP
    SQL RNNR
    Posting Performance Based Questions - Gail Shaw
    Learn Extended Events

  • Hi Jason,

    I am using T-SQL for the data transformation. I think I have confused everyone on the forum.

    Let me rephrase what I am doing. I have a source table with 10 million records. I wrote a T-SQL script to transform, translate, and modify the source data, which is then inserted into the destination table. This is all done in a loop incremented by one (yes, one) where a single record is fetched, the data modification is done, and the record is then inserted and finally committed. This way the loop runs 10 million times to insert all the records into the destination table.

    The reason for running it in a loop one record at a time is that there is a trigger defined on the destination table. The loop approach was considered the best one after reading on this website and keeping the existing business requirements in mind. Let's not get off track on why I am inserting record by record.

    My only concern at the moment is: is it normal for SQL Server to take 12 milliseconds to commit a single record inserted into a table (either empty or with 10 million rows)?

    Steve:

    Your point about indexes did strike me, and that is what I put in my initial post, but I was not confident and wanted to confirm it with the masters of databases.

  • I still do not consider that a long time to insert 1 record.

    That said, I would still find a way to break up your data into batches. Due to the requirements you have for this import job, the import will take a while to complete. Process 100,000 records or so through your loop, then run another batch after that batch completes; something like the sketch below.
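
    This is a rough sketch only. It assumes RECORDID is unique and ascending in the source; dbo.SourceTbl is a placeholder, and the real transformation logic would go inside the loop:

    DECLARE @BatchSize INT, @LastID INT, @Rows INT;
    SET @BatchSize = 100000;
    SET @LastID = 0;
    SET @Rows = 1;

    WHILE @Rows > 0
    BEGIN
        BEGIN TRANSACTION;

        -- one commit per 100,000 rows instead of one commit per row
        INSERT INTO dbo.TblName (ColID, Col1ID, Col2ID, RECORDID)
        SELECT TOP (@BatchSize) s.ColID, s.Col1ID, s.Col2ID, s.RECORDID
        FROM dbo.SourceTbl AS s
        WHERE s.RECORDID > @LastID
        ORDER BY s.RECORDID;

        SET @Rows = @@ROWCOUNT;

        -- advance the high-water mark to the last key loaded
        IF @Rows > 0
            SELECT @LastID = MAX(RECORDID) FROM dbo.TblName;

        COMMIT TRANSACTION;
    END;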

    Jason...AKA CirqueDeSQLeil
    _______________________________________________
    I have given a name to my pain...MCM SQL Server, MVP
    SQL RNNR
    Posting Performance Based Questions - Gail Shaw
    Learn Extended Events

  • My recommendation would be for you to post the transform script and the trigger. Let's see what we can do to convert this to a set-based process and still cover the individual-record error handling.
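
    Just to show the shape such a process can take, here is a rough sketch. The validation rule, dbo.SourceTbl, and dbo.TblName_Errors are hypothetical placeholders; the real rules would come from your transform script and trigger:

    BEGIN TRY
        BEGIN TRANSACTION;

        -- route rows that fail validation to an error table in one set-based pass
        INSERT INTO dbo.TblName_Errors (RECORDID, ErrorMessage)
        SELECT s.RECORDID, 'ColID is missing'
        FROM dbo.SourceTbl AS s
        WHERE s.ColID IS NULL;

        -- insert everything that passes validation in a single statement
        INSERT INTO dbo.TblName (ColID, Col1ID, Col2ID, RECORDID)
        SELECT s.ColID, s.Col1ID, s.Col2ID, s.RECORDID
        FROM dbo.SourceTbl AS s
        WHERE s.ColID IS NOT NULL;

        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;

        -- re-raise so the caller still sees the failure
        DECLARE @msg NVARCHAR(4000);
        SET @msg = ERROR_MESSAGE();
        RAISERROR(@msg, 16, 1);
    END CATCH;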

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.
    "Change is inevitable... change for the better is not".

    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
    Intro to Tally Tables and Functions

  • I was kind of stewing on the indexes myself...

    It seems he is starting with an empty table and then filling it up. Would it not make more sense to simply drop the indexes, fill the table, and then build them again? This is in addition to the other batching ideas already presented. You would have to re-index the table when you were done anyway for the stats to be even close, so why pay for the index updates along the way? Something like the sketch below.
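
    Roughly like this; the index names are made up, and the clustered primary key stays in place so the table still loads in key order:

    -- hypothetical index names
    DROP INDEX IX_TblName_ColID  ON dbo.TblName;
    DROP INDEX IX_TblName_Col1ID ON dbo.TblName;
    DROP INDEX IX_TblName_Col2ID ON dbo.TblName;
    DROP INDEX IX_TblName_Wide   ON dbo.TblName;

    -- ... run the load here ...

    CREATE NONCLUSTERED INDEX IX_TblName_ColID  ON dbo.TblName (ColID);
    CREATE NONCLUSTERED INDEX IX_TblName_Col1ID ON dbo.TblName (Col1ID);
    CREATE NONCLUSTERED INDEX IX_TblName_Col2ID ON dbo.TblName (Col2ID);
    CREATE NONCLUSTERED INDEX IX_TblName_Wide   ON dbo.TblName (TblID, ColID, Col1ID, Col2ID);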

    CEWII
