Table insert performance ... please help!

  • I have the following table:

    CREATE TABLE [dbo].[Journal](
        [uuid] [varchar](50) NOT NULL,
        [message_id] [varchar](50) NOT NULL,
        [message_version] [varchar](20) NOT NULL,
        [message_type] [varchar](20) NOT NULL,
        [direction] [varchar](3) NOT NULL,
        [processing_code] [char](1) NOT NULL,
        [event_time] [datetime] NOT NULL,
        [sending_app] [varchar](45) NULL,
        [sending_facility] [varchar](45) NULL,
        [receiving_app] [varchar](45) NULL,
        [receiving_facility] [varchar](45) NULL,
        [person_id] [varchar](45) NULL,
        [person_last_name] [varchar](45) NULL,
        [person_first_name] [varchar](45) NULL,
        [person_dob] [varchar](45) NULL,
        [jcd_path] [varchar](255) NOT NULL,
        [jcd_service_name] [varchar](45) NOT NULL,
        [ack] [nvarchar](max) NULL,
        [message] [nvarchar](max) NOT NULL
    ) ON [PRIMARY]
    GO
    SET ANSI_PADDING OFF

    After the table gets rather large (over 100 GB and 12 million rows), insert performance becomes incredibly slow: several minutes to insert a single record. This seems to happen quite suddenly. After I delete records to reduce its size (which is also really slow), insert performance improves drastically. There are a few indexes on some of the fields; they are not fragmented, and there is a job to rebuild/reorg them as necessary.

    This table is used to archive large text data (in the message field) as well as to store information about the message.

    Any ideas on why this is? Thanks for looking!

  • Is the database set to autogrow? Does the drive have enough space to grow the data file?

  • Yes, it does autogrow, and there is plenty of free space on the drive to grow into.

  • What is the autogrow rate? Maybe it is too small to accommodate the volume of writes, so the file has to autogrow very often?

    Maybe you could assign a much bigger size to your data file, or add more data files... or consider partitioning your huge table, if your SQL edition allows (a rough sketch of that follows).
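
    For example, a partitioning sketch might look something like this; the function/scheme names and boundary dates are purely illustrative, and the table's clustered index would have to be built on the partition scheme instead of ON [PRIMARY]:

    -- Hypothetical sketch: monthly range partitioning on event_time.
    -- Names and boundary dates below are made up for illustration only.
    CREATE PARTITION FUNCTION pf_JournalByMonth (datetime)
        AS RANGE RIGHT FOR VALUES ('20100101', '20100201', '20100301');
    GO
    CREATE PARTITION SCHEME ps_JournalByMonth
        AS PARTITION pf_JournalByMonth ALL TO ([PRIMARY]);
    GO
    -- The Journal table (or its clustered index) would then be created
    -- ON ps_JournalByMonth(event_time), and old months could be switched
    -- out to an archive table instead of being deleted row by row.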

  • I have presized the database, so autogrowth shouldn't be the issue. I'm running out of ideas. There is a stored proc that does the insert row by row; a third-party process calls the sp to insert into the large table. I can't figure out the root cause of the issue, but I'm thinking about alternate solutions, like staging the data in a separate table and then moving it every hour or so to the "big" table (roughly as sketched below). Any other thoughts?
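
    Something along these lines is what I have in mind; the staging table name is just a placeholder for a table mirroring the Journal schema:

    -- Hypothetical hourly sweep from a small staging table into the big table.
    -- dbo.Journal_Staging is a made-up name for a table with the same columns as dbo.Journal.
    BEGIN TRAN;

    INSERT INTO dbo.Journal (uuid, message_id, message_version, message_type, direction,
        processing_code, event_time, sending_app, sending_facility, receiving_app,
        receiving_facility, person_id, person_last_name, person_first_name, person_dob,
        jcd_path, jcd_service_name, ack, [message])
    SELECT uuid, message_id, message_version, message_type, direction,
        processing_code, event_time, sending_app, sending_facility, receiving_app,
        receiving_facility, person_id, person_last_name, person_first_name, person_dob,
        jcd_path, jcd_service_name, ack, [message]
    FROM dbo.Journal_Staging WITH (TABLOCKX);   -- lock out new inserts during the sweep

    TRUNCATE TABLE dbo.Journal_Staging;

    COMMIT TRAN;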

  • Is it just the actual INSERT statement that is slow, or is it the overall execution of the stored procedure you are using that is slow?

    If it's the INSERT statement, are there any triggers on the table?

    If it's the overall stored procedure, you need to do a bit of troubleshooting to find out which part of the procedure is taking all the time.

    Your best bet will be to run SQL Profiler while running the stored procedure, profiling at the SP:Statement level, to try and find out which statement(s) are slowing everything down.

    There are no triggers on the table. I think the issue is size-related: everything ran fine until a few weeks ago, when the process that calls the sp (which only does an insert) could not insert into the table fast enough to keep up with demand. I deleted the oldest records from the table to reduce its size and things worked fine again. This has since happened again; again I deleted older records to reduce the size, and it's working again. The issue is that, moving forward, I will not be able to just delete the oldest records. Thanks for all the feedback!

  • Do you have a clustered index on the table? If so, on which column... and what is the distribution of values vs. the order of insertion?

    If the clustered index is on a column that is not generally ever-increasing, every insert can land in the middle of the table, forcing the table and index(es) to be constantly reshuffled. Needless to say, this is very bad for performance, as it causes page splits and a lot of I/O.

    Beyond that, it is a very wide table, with a lot of BIG FAT VARCHAR columns in it. General advice: use ints and avoid varchars wherever possible. If you can't do that maybe you can vertically partition the table in some way... even across two file groups if possible.
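
    For instance, if the rows arrive roughly in event_time order, a clustered index along these lines (the index name here is made up) would keep new rows appending to the end of the table instead of splitting pages in the middle:

    -- Illustrative only: cluster on an ever-increasing column so inserts append.
    CREATE CLUSTERED INDEX CIX_Journal_event_time
        ON dbo.Journal (event_time);

    Bear in mind that building a clustered index on a table this size is a heavy operation and would likely need a maintenance window.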

    The probability of survival is inversely proportional to the angle of arrival.

  • Size shouldn't really matter here (even if it does with a lot of other things :-D).

    Is the stored procedure just inserting a single row, or many rows?

    Are you sure the insert isn't being blocked by another process? (A quick way to check for that is below.)

    Can you post the code for the stored procedure... that would help to see if there's anything obvious.
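
    On the blocking question, something like this run while one of the slow inserts is executing will show whether it is waiting on another session (SQL 2005 and later):

    -- Lists any requests that are currently blocked and which session is blocking them.
    SELECT r.session_id,
           r.blocking_session_id,
           r.wait_type,
           r.wait_time,
           r.command
    FROM   sys.dm_exec_requests AS r
    WHERE  r.blocking_session_id <> 0;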

  • One other thought... if this table is just used for archiving stuff I would batch this and use bulk insert instead of inserting one row at a time. Even better if you can drop the indexes during the bulk insert, then rebuild them after.
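
    A rough sketch of that, assuming the rows can be written to a flat file first (the file path and delimiters are placeholders):

    -- Hypothetical batch load; path, terminators, and file layout are illustrative only.
    -- Note: the delimiters must not appear inside the message text, or use a format file.
    BULK INSERT dbo.Journal
    FROM 'D:\archive\journal_batch.dat'
    WITH (FIELDTERMINATOR = '|', ROWTERMINATOR = '\n', TABLOCK);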

    The probability of survival is inversely proportional to the angle of arrival.

  • Credits to Navy beans says that the clustered index is on something non-temporal and you're killing the system with page splits. Page splits stop when you delete some of the data, because that makes room on the pages so they don't need to split.

    Run DBCC SHOWCONTIG on the table to see what the fragmentation has become... if you have a much larger number of extent switches than extents, page splitting is likely the problem... there are exceptions, of course.
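
    For example (object name per the DDL above):

    -- Reports extents, extent switches, scan density, etc. for the table's indexes.
    DBCC SHOWCONTIG ('dbo.Journal') WITH ALL_INDEXES;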

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.
    "Change is inevitable... change for the better is not".

    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
    Intro to Tally Tables and Functions

  • The sp is just a single simple insert. The values to be inserted are passed as parameters. Just insert into table values ...
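
    Roughly this shape (not the actual proc; the procedure and parameter names here are just illustrative, the columns are per the DDL above):

    -- Illustrative shape only.
    CREATE PROCEDURE dbo.usp_Journal_Insert   -- hypothetical name
        @uuid               varchar(50),
        @message_id         varchar(50),
        @message_version    varchar(20),
        @message_type       varchar(20),
        @direction          varchar(3),
        @processing_code    char(1),
        @event_time         datetime,
        @sending_app        varchar(45),
        @sending_facility   varchar(45),
        @receiving_app      varchar(45),
        @receiving_facility varchar(45),
        @person_id          varchar(45),
        @person_last_name   varchar(45),
        @person_first_name  varchar(45),
        @person_dob         varchar(45),
        @jcd_path           varchar(255),
        @jcd_service_name   varchar(45),
        @ack                nvarchar(max),
        @message            nvarchar(max)
    AS
    BEGIN
        SET NOCOUNT ON;
        -- Single-row insert, one call per message.
        INSERT INTO dbo.Journal (uuid, message_id, message_version, message_type, direction,
            processing_code, event_time, sending_app, sending_facility, receiving_app,
            receiving_facility, person_id, person_last_name, person_first_name, person_dob,
            jcd_path, jcd_service_name, ack, [message])
        VALUES (@uuid, @message_id, @message_version, @message_type, @direction,
            @processing_code, @event_time, @sending_app, @sending_facility, @receiving_app,
            @receiving_facility, @person_id, @person_last_name, @person_first_name, @person_dob,
            @jcd_path, @jcd_service_name, @ack, @message);
    END;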

    Thanks to everyone for their posts and feedback. I have some good ideas to work with.
