SQL 2014: how is the performance if you keep calling a natively compiled SP?

  • Hi all,

    I've tested a simple insert statement using a natively compiled stored procedure. It was fast compared to a normal SP, but when I call the same natively compiled SP continuously a few times, the performance is exactly the same as the normal SP. Does anyone know why?

  • Without seeing the query plan, the structures, the query... nope, not really.

    My tests show pretty consistently faster behavior for the natively compiled procedures.

  • May I know what your test script looks like? Have you tried a WHILE loop vs. a normal insert statement for 10k rows?

  • A WHILE loop? No, I haven't done that sort of testing at all. It's not something I'd write within T-SQL or expect to perform well, natively compiled or not.

  • I'm comparing natively compiled procedures vs. normal procedures. My script is below.

    --- Normal SP
    CREATE PROCEDURE xx_insert_Normal
    AS
    BEGIN
        BEGIN TRAN;

        DECLARE @i INT = 1;

        WHILE @i <= 100000
        BEGIN
            INSERT INTO [dbo].[memorytable] VALUES (@i, 'data');
            SET @i += 1;
        END;

        COMMIT;
    END;

    --- Natively compiled SP
    CREATE PROCEDURE xx_insert_NC
    WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
    AS
    BEGIN ATOMIC
    WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
        DECLARE @i INT = 1;

        WHILE @i <= 100000
        BEGIN
            INSERT INTO [dbo].[memorytable] VALUES (@i, 'data');
            SET @i += 1;
        END;
    END;
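
    For reference, one rough way to time the two as posted (a sketch; it assumes [dbo].[memorytable] can simply be cleared between runs):

    -- Clear the table so both procedures insert the same key range
    DELETE FROM [dbo].[memorytable];

    DECLARE @t0 DATETIME2 = SYSDATETIME();
    EXEC xx_insert_Normal;
    SELECT DATEDIFF(MILLISECOND, @t0, SYSDATETIME()) AS normal_ms;

    DELETE FROM [dbo].[memorytable];

    DECLARE @t1 DATETIME2 = SYSDATETIME();
    EXEC xx_insert_NC;
    SELECT DATEDIFF(MILLISECOND, @t1, SYSDATETIME()) AS native_ms;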

  • I've done all the looping outside the procedure.
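
    For comparison, a minimal sketch of that approach, with the loop moved out of the procedure into interpreted T-SQL (the single-row procedure and its parameter types here are made up for illustration):

    -- Hypothetical single-row natively compiled insert; each call is its
    -- own atomic block, so this measures per-call rather than per-loop cost
    CREATE PROCEDURE dbo.xx_insert_single
        @id INT, @val VARCHAR(100)
    WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
    AS
    BEGIN ATOMIC
    WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
        INSERT INTO [dbo].[memorytable] VALUES (@id, @val);
    END;
    GO

    -- Driver loop kept outside the procedure (in a real test this would
    -- come from many client connections rather than one serial loop)
    DECLARE @i INT = 1;
    WHILE @i <= 100000
    BEGIN
        EXEC dbo.xx_insert_single @i, 'data';
        SET @i += 1;
    END;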

  • What's the DDL for memorytable?

    If you're making it durable, then you may be bottlenecking on I/O - either for transaction logging, or for the sequential write out to data files. Transaction logging is virtually the same regardless of whether you're using durable memory-optimised tables or disk-resident tables.
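
    For example, a sketch of what that DDL might look like (the column names, types, and bucket count are guesses; DURABILITY is the setting that matters):

    -- Guessed definition for [dbo].[memorytable]. SCHEMA_AND_DATA logs and
    -- checkpoints the rows; SCHEMA_ONLY skips both, removing the I/O cost
    CREATE TABLE [dbo].[memorytable]
    (
        id  INT          NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 131072),
        val VARCHAR(100) NOT NULL
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);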

    The WHILE loop itself carries quite a large overhead and is obviously serial - you may also bottleneck here, long before you reach your I/O or interpretation bottleneck.
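
    For the interpreted side at least, a set-based insert removes that loop overhead entirely, e.g. (a sketch; the column list assumes the guessed DDL above, and this form isn't valid inside a 2014 natively compiled procedure):

    -- Set-based equivalent of the 100k-row WHILE loop
    INSERT INTO [dbo].[memorytable] (id, val)
    SELECT TOP (100000)
           ROW_NUMBER() OVER (ORDER BY (SELECT NULL)), 'data'
    FROM sys.all_columns AS a
    CROSS JOIN sys.all_columns AS b;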

    Hekaton is all about parallel concurrency. A more realistic test would be to drive an external parallel application to execute inserts and ramp them up over time, measuring the throughput differences between the two. If you're not bottlenecking on transaction logging, and you have spinning disks, in-memory + native compilation will undoubtedly win out on a pure singleton insert test as it only performs sequential writes, but maybe not by as much as you think. If you're on SSDs, the difference may be negligible as random write performance can be nearly the same as sequential.
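
    One quick way to see whether you're bottlenecking on transaction logging (a sketch; snapshot this before and after a run and diff the numbers):

    -- Cumulative write stats for the current database's log file
    SELECT DB_NAME(vfs.database_id) AS db,
           vfs.num_of_writes,
           vfs.num_of_bytes_written,
           vfs.io_stall_write_ms
    FROM sys.dm_io_virtual_file_stats(DB_ID(), NULL) AS vfs
    JOIN sys.database_files AS df
        ON df.file_id = vfs.file_id
    WHERE df.type_desc = 'LOG';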
