How to reduce the CPU cost used by a SORT

  • We are using a query, and in its execution plan a sort operation accounts for 182% of the CPU cost. Please advise on how to reduce it.

    query structure:

    SELECT DISTINCT
        'Dupility' AS Rery_Cls,
        'LINE' AS Rery_Tpe,
        b.clpe,
        b.Check_Ref_ID,
        MIN(a.claim_id) AS oriim,
        CONVERT(varchar, GETDATE(), 101) AS Load_Date,
        c.year_month_num AS load_month,
        0 AS include_flag,
        0 AS isreviewed
    FROM dbo.PR_PaieDups a WITH (NOLOCK, INDEX(PR_PaidCls1))
    JOIN dbo.PR_Paps b WITH (NOLOCK, INDEX(PR_Paups1))
        ON a.subscriber_id = b.subscriber_id                    --Subscriber
    JOIN dw.dbo.date c WITH (NOLOCK)
        ON CONVERT(varchar, GETDATE(), 101) = CONVERT(varchar, c.date_id, 101)
    LEFT OUTER JOIN CIes z
        ON b.claim_id = z.claim_nbr AND z.recovery_class = 'Dupllity'
    LEFT OUTER JOIN tbl_ble1 y
        ON b.claid = y.claim_nbr AND y.re_css = 'Dupllity'
    WHERE a.from_date = b.from_date                 --From Date (DOS)
        AND a.se_id = b.seer_id                     --Provider ID
        AND a.billed_amount = b.billed_amount       --Billed Amount
        AND a.procode = b.procde                    --Proc Code
        AND a.tos = b.tos                           --TOS
        AND a.rede = b.revode                       --Rev Code
        AND a.pos = b.pos                           --POS
        AND a.mod_1 = '' AND b.mod_1 = ''           --null modifiers
        AND a.mod_2 = '' AND b.mod_2 = ''
        AND a.mod_3 = '' AND b.mod_3 = ''
        AND a.mod_4 = '' AND b.mod_4 = ''
        AND a.root_claim_id < b.root_claim_id       --original claim ID is less than the duplicate claim ID
        AND b.claicd = 'H'                          --Facility Claims
        AND b.pait > 0
        AND a.claidate <= b.cl_date                 --original received before duplicate received
        AND z.clabr IS NULL
        AND y.clabr IS NULL
    GROUP BY
        b.clape,
        b.suid,

  • Some readings for you:

    How to post performance problems

    Consistency problems with NOLOCK

    TIMEBOMB - THE CONSISTENCY PROBLEM WITH NOLOCK / READ UNCOMMITTED

    -- Gianluca Sartori


  • The following two items in your code above...

    and a.root_claim_id < b.root_claim_id --original claim ID is less than the duplicate claim ID

    and a.claidate <= b.cl_date --original received before duplicate received

    ... probably constitute a "Triangular Join". Please see the following article for why that's such a bad thing (a quick illustration follows the link below)...

    http://qa.sqlservercentral.com/articles/T-SQL/61539/
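    As a quick, hypothetical illustration of why that matters (the temp table and row counts below are made up, not part of the original query), an inequality join such as a.root_claim_id < b.root_claim_id compares every qualifying row against every other row in the same group, so the intermediate result grows roughly as N*(N-1)/2:

    -- Hypothetical illustration only: 1,000 claim rows for a single subscriber.
    IF OBJECT_ID('tempdb..#Claims') IS NOT NULL DROP TABLE #Claims;

    CREATE TABLE #Claims (root_claim_id INT NOT NULL, subscriber_id INT NOT NULL);

    INSERT INTO #Claims (root_claim_id, subscriber_id)
    SELECT TOP (1000)
           ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS root_claim_id,
           1 AS subscriber_id
    FROM   sys.all_columns;

    -- The "triangular" part: 1,000 rows in, 499,500 row combinations out (N*(N-1)/2).
    SELECT COUNT(*) AS row_combinations
    FROM   #Claims a
    JOIN   #Claims b
      ON   a.subscriber_id = b.subscriber_id
     AND   a.root_claim_id < b.root_claim_id;

    That kind of intermediate blow-up is often what feeds the expensive sort and aggregate operators.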

    Also, any time you see a "SELECT DISTINCT" in code, it may simply mean the ON and WHERE clauses don't have the proper criteria, which can cause a huge number of rows to be "touched" behind the scenes.

    My recommendation is to reevaluate the code and divide it up: use either a CTE or (sometimes better yet) a Temp Table to resolve the smallest number of rows necessary first, and then join the other tables to that (a rough sketch of the pattern follows at the end of this post). I've changed many a 45-minute query into one that resolves in just a couple of seconds using the "Divide'n'Conquer" method I've just described.

    As a side bar, trying to do everything in a single query doesn't necessarily mean the code is "Set Based". In fact, it can mean quite the opposite and can cause huge amounts of "Hidden RBAR".
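    To make that concrete, here is a rough, hedged sketch of the Divide'n'Conquer shape. It reuses the table and column names from the post but drops most of the original matching predicates, so treat it as a pattern to adapt rather than a drop-in replacement:

    -- Step 1: resolve the smallest number of rows first, using only the matching criteria.
    IF OBJECT_ID('tempdb..#Candidates') IS NOT NULL DROP TABLE #Candidates;

    SELECT b.claim_id,
           b.Check_Ref_ID,
           MIN(a.claim_id) AS original_claim_id
    INTO   #Candidates
    FROM   dbo.PR_PaieDups a
    JOIN   dbo.PR_Paps b
      ON   a.subscriber_id = b.subscriber_id
     AND   a.from_date     = b.from_date
     AND   a.billed_amount = b.billed_amount
     AND   a.root_claim_id < b.root_claim_id   -- ...plus the rest of the original matching predicates
    WHERE  b.claicd = 'H'
      AND  b.pait > 0
    GROUP BY b.claim_id, b.Check_Ref_ID;

    -- Step 2: apply the "not already recovered" exclusions to the much smaller temp table.
    SELECT c.*
    FROM   #Candidates c
    LEFT JOIN CIes z
      ON   c.claim_id = z.claim_nbr AND z.recovery_class = 'Dupllity'
    LEFT JOIN tbl_ble1 y
      ON   c.claim_id = y.claim_nbr AND y.re_css = 'Dupllity'
    WHERE  z.claim_nbr IS NULL
      AND  y.claim_nbr IS NULL;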

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.
    "Change is inevitable... change for the better is not".

    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
    Intro to Tally Tables and Functions

  • And if you really want someone to begin to evaluate the effects of your code choices on the structure, then at bare minimum you need to include the actual execution plan. Even better, provide scripts for the structure and some sample data.
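    For example, one simple way to capture the actual plan so it can be attached to a post (a minimal sketch; wrap it around the real query in SSMS):

    -- Returns the actual execution plan as XML alongside the results;
    -- the XML can be saved as a .sqlplan file and attached to the post.
    SET STATISTICS XML ON;

    /* ...run the problem query here... */

    SET STATISTICS XML OFF;

    Table structure can be scripted from SSMS (right-click the table, Script Table as > CREATE To), and a handful of representative rows posted as INSERT statements.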

    ----------------------------------------------------
    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood..." Theodore Roosevelt
    The Scary DBA
    Author of: SQL Server 2017 Query Performance Tuning, 5th Edition and SQL Server Execution Plans, 3rd Edition
    Product Evangelist for Red Gate Software
