CXPACKET too high

  • Dear friends,

    I am running the following script to get the wait stats:

    WITH Waits AS
    (
        SELECT
            wait_type,
            wait_time_ms / 1000. AS wait_time_s,
            100. * wait_time_ms / SUM(wait_time_ms) OVER() AS pct,
            ROW_NUMBER() OVER(ORDER BY wait_time_ms DESC) AS rn,
            100. * signal_wait_time_ms / wait_time_ms AS signal_pct
        FROM sys.dm_os_wait_stats
        WHERE wait_time_ms > 0
          AND wait_type NOT LIKE N'%SLEEP%'
          AND wait_type NOT LIKE N'%IDLE%'
          AND wait_type NOT LIKE N'%QUEUE%'
          AND wait_type NOT IN ( N'CLR_AUTO_EVENT'
                               , N'REQUEST_FOR_DEADLOCK_SEARCH'
                               , N'SQLTRACE_BUFFER_FLUSH'
                               /* filter out additional irrelevant waits */ )
    )
    SELECT
        W1.wait_type,
        CAST(W1.wait_time_s AS NUMERIC(12, 2)) AS wait_time_s,
        CAST(W1.pct AS NUMERIC(5, 2)) AS pct,
        CAST(SUM(W2.pct) AS NUMERIC(5, 2)) AS running_pct,
        CAST(W1.signal_pct AS NUMERIC(5, 2)) AS signal_pct
    FROM Waits AS W1
    JOIN Waits AS W2
        ON W2.rn <= W1.rn
    GROUP BY W1.rn, W1.wait_type, W1.wait_time_s, W1.pct, W1.signal_pct
    HAVING SUM(W2.pct) - W1.pct < 80 -- percentage threshold
        OR W1.rn <= 5
    ORDER BY W1.rn;
    GO

    I get the following result:

    wait_type            wait_time_s   pct     running_pct   signal_pct
    -------------------  ------------  ------  ------------  ----------
    CXPACKET             1107177.19    95.65   95.65         5.94
    SOS_SCHEDULER_YIELD  15060.96      1.30    96.95         100.00
    PAGEIOLATCH_SH       10618.83      0.92    97.86         0.13
    ASYNC_NETWORK_IO     7497.67       0.65    98.51         0.99
    WRITELOG             6786.71       0.59    99.10         0.19

    The server has 16 CPUs. I have changed MAXDOP to 12, and I have tried different values of 'cost threshold for parallelism', but I could not get the pct down for CXPACKET.
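
    For reference, here is how I am changing the two settings with sp_configure (the specific values are just examples of what I tried):

    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    -- Instance-wide cap on parallel workers per query
    EXEC sp_configure 'max degree of parallelism', 12;
    -- Minimum estimated cost before the optimizer considers a parallel plan
    EXEC sp_configure 'cost threshold for parallelism', 25;  -- example value
    RECONFIGURE;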

    Could you please help?

  • I suspect you need to drop your MAXDOP significantly lower than 12 to see any real difference, but be aware that you could adversely affect some of your queries by doing that. A good number to start with might be 4.
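
    Before changing the server-wide setting, you could also test the effect on a single troublesome query with a hint (a sketch; the table and query here are hypothetical):

    -- Hypothetical query: cap parallelism for this statement only
    SELECT o.CustomerID, SUM(o.TotalDue) AS total_due
    FROM dbo.Orders AS o
    GROUP BY o.CustomerID
    OPTION (MAXDOP 4);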

  • Yeah I would agree with that, as a starting point.

    --------------------------------------------------------------------------------------
    Recommended articles on how to help us help you and solve commonly asked questions:

    Forum Etiquette: How to post data/code on a forum to get the best help, by Jeff Moden
    Managing Transaction Logs, by Gail Shaw
    How to post Performance problems, by Gail Shaw
    Help, my database is corrupt. Now what? by Gail Shaw

  • I have tried 2, 4, 8, and then 12, but the number/pct is not going down for CXPACKET.

  • CXPACKET waits only occur with parallelism, so you should see fewer of them at a lower setting. If you want to reduce the waits, reduce MAXDOP in steps until they are acceptable.

    Are you clearing the wait stats between tests? They are incremental and are only reset on request or on server startup.

    Run DBCC SQLPERF('sys.dm_os_wait_stats', CLEAR) to clear them.
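
    A minimal measuring cycle looks something like this (assuming you can rerun a representative workload in between):

    -- Reset the cumulative wait counters
    DBCC SQLPERF('sys.dm_os_wait_stats', CLEAR);

    -- ... rerun the workload under the MAXDOP setting being tested ...

    -- Then re-check just the CXPACKET line
    SELECT wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
    FROM sys.dm_os_wait_stats
    WHERE wait_type = N'CXPACKET';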

  • Changing MAXDOP only makes the CXPACKET waits go away; it doesn't cure the problem. You'll get some cycles back from not having to assemble the streams, but the bottom line is to find the queries that are accumulating high CXPACKET waits, and fix them.

    The likely candidates are table scans. Forcing those down to a single CPU hides the CXPACKET waits, but you're still chewing up lots of CPU and I/O.
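
    One way to catch them in the act is to query the waiting-tasks DMV while the workload is running (a sketch, not the only approach):

    -- Sessions currently waiting on CXPACKET, with the statement text.
    -- The same session can appear once per waiting worker thread.
    SELECT wt.session_id,
           wt.wait_duration_ms,
           st.text AS query_text
    FROM sys.dm_os_waiting_tasks AS wt
    JOIN sys.dm_exec_requests AS er
        ON er.session_id = wt.session_id
    CROSS APPLY sys.dm_exec_sql_text(er.sql_handle) AS st
    WHERE wt.wait_type = N'CXPACKET';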

    -Eddie

    Eddie Wuerch
    MCM: SQL

  • Per the article I referenced, you may want to check your statistics too.
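
    A quick way to see how stale they are (a sketch; dbo.YourTable is a placeholder for one of your large tables):

    -- When each statistic on the table was last updated
    SELECT s.name AS stats_name,
           STATS_DATE(s.object_id, s.stats_id) AS last_updated
    FROM sys.stats AS s
    WHERE s.object_id = OBJECT_ID(N'dbo.YourTable');

    -- Refresh statistics database-wide if they turn out to be stale
    EXEC sp_updatestats;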

  • Hi

    The problem looks like a parallelism problem. Please try the @nalytics Performance Free Data Collector for Microsoft SQL Server & Windows Server; this tool can help you solve your performance problems.

    Regards

    support.sql@gmail.com

    @nalytics Performance - Microsoft SQL Server & Windows Server Data Collector

    http://www.analyticsperformance.com/

