Stop query if it is taking too long

  • mah_j (10/23/2016)


    I created the filtered covering index on InvoiceUID as you said (and ran DBCC FREEPROCCACHE and DBCC DROPCLEANBUFFERS), but it was not used. It still uses the IX_invoice index and the plan does not change; the elapsed time also rises to over 1 minute.

    Another thing is that the parameters are dynamic and they change each time. (The queries are generated by LINQ and run with sp_executesql.)

    I agree with Eirikur... more than 900 thousand seeks to return anything is going to be slow and very logical-read intensive. The existing indexes are the wrong indexes for this because the plan does all those seeks instead of a single seek followed by a nice range scan.

    No recommendation, yet... still looking.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.
    "Change is inevitable... change for the better is not".

    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
    Intro to Tally Tables and Functions

  • mah_j (10/22/2016)


    I am so sorry, I asked my question in the wrong way. Yes, I know that we have a problem in one of our queries, and I am involved. (The logical IO is too high: 3,939,737 reads for 43,300 records, and after updating statistics the estimated number of rows is still 1. The total number of extents for that table is 1,284,925.)

    So because of this problem we get timeouts on the insert statement. I want to know if there is a way to kill these queries until I find a solution to the query performance problem.

    At the moment the developers have set the app's timeout to 40 seconds, but the query keeps running on the server even after the timeout error is shown to the users.

    Thank you for the clarification. THAT helps a whole lot. I really have a problem with "Ivory Tower" DBAs and this certainly looked like that. Glad to see that's not true and I appreciate you taking the time to show that you're not "one of those".

    --Jeff Moden


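The request above, stopping the runaway queries server-side until the query itself is fixed, can be sketched with the standard DMVs. This is an illustrative, untested sketch (the 30-second threshold is an arbitrary assumption, not a recommendation from the thread); review carefully, and remember that KILL rolls the work back, which can itself take a long time for a large transaction.

```sql
-- Find user requests that have been running longer than 30 seconds
-- (an example threshold) and build KILL statements for them.
DECLARE @kill nvarchar(max) = N'';

SELECT @kill = @kill + N'KILL ' + CAST(r.session_id AS nvarchar(10)) + N'; '
FROM sys.dm_exec_requests AS r
JOIN sys.dm_exec_sessions AS s
  ON s.session_id = r.session_id
WHERE s.is_user_process = 1
  AND r.session_id <> @@SPID               -- never kill our own session
  AND r.total_elapsed_time > 30 * 1000;    -- elapsed time is in milliseconds

PRINT @kill;        -- inspect the generated statements first...
-- EXEC (@kill);    -- ...then uncomment to actually terminate the sessions
```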

  • As a bit of a sidebar, I'm concerned for you good folks. One of the indexes you posted appears to have "CardNumber" as a clear text value. I hope I'm misreading that. PCI specs and most specs on PII say that such things must be encrypted.

    Could you attach a copy of what the Clustered Indexes look like (I see you posted 1 already but a single attachment would be convenient) and all the Non Clustered Indexes for the two tables in question? It'll help with analysis.

    --Jeff Moden



  • Jeff Moden (10/23/2016)


    As a bit of a sidebar, I'm concerned for you good folks. One of the indexes you posted appears to have "CardNumber" as a clear text value. I hope I'm misreading that. PCI specs and most specs on PII say that such things must be encrypted.

    Could you attach a copy of what the Clustered Indexes look like (I see you posted 1 already but a single attachment would be convenient) and all the Non Clustered Indexes for the two tables in question? It'll help with analysis.

    LINQ, EF, and other such things... makes you wonder, doesn't it?

    😎

    I was taken aback by the CC info and other potentially sensitive data; I hope that is encrypted!

  • Eirikur Eiriksson (10/23/2016)


    Jeff Moden (10/23/2016)


    As a bit of a sidebar, I'm concerned for you good folks. One of the indexes you posted appears to have "CardNumber" as a clear text value. I hope I'm misreading that. PCI specs and most specs on PII say that such things must be encrypted.

    Could you attach a copy of what the Clustered Indexes look like (I see you posted 1 already but a single attachment would be convenient) and all the Non Clustered Indexes for the two tables in question? It'll help with analysis.

    LINQ, EF, and other such things... makes you wonder, doesn't it?

    😎

    I was taken aback by the CC info and other potentially sensitive data; I hope that is encrypted!

    You said a mouthful. About 2 years ago, I did some part-time work for a small company with a large footprint. They used SSNs in the clear for primary keys. Their compliance officer said that it's not a problem because they were behind a firewall and, besides, the Social Security Administration web site said that (although it was encouraged and recommended on this site) it's not required (and, it's not. Go figure). When I told her that she should demonstrate her faith in the system by adding her own PII, including SSN, to the system, she flat out refused. After that, I recommended that the company should find a new compliance officer. Of course, that didn't happen because it would cost some money to change the systems to encrypt SSNs.

    If their system is ever compromised, they'll find out just how much the damages will cost them. Just the fees for follow-up credit and misuse monitoring will kill the company, not to mention the legal fees for anyone who decides to sue the company.

    --Jeff Moden



    Thank you very much for your attention and your replies.

    But the cost of the index seek for IX_Invoice is 1%, and the logical reads for the Invoice table are just 14,286.

    Table 'Transaction'. Scan count 919840, logical reads 3939737, physical reads 146, read-ahead reads 444, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

    Table 'Invoice'. Scan count 1, logical reads 14286, physical reads 63, read-ahead reads 21938, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

    Invoice table: Data space 15,467.375 MB; Rowcount 32,561,069

    Transaction table: Data space 70,352.797 MB; Rowcount 30,423,982

    Total number of extents for Transaction is 1,284,925.

    Total number of extents for Invoice is 327,445.

    For the IX_Transaction_InvUID index, Extent Scan Fragmentation was 90%. After dropping and recreating the index and then rebuilding it, I managed to decrease it to 69%, and the elapsed time for the query dropped to 25 seconds.

    DBCC SHOWCONTIG scanning 'Transaction' table...
    Table: 'Transaction' (1397580017); index ID: 8, database ID: 7
    LEAF level scan performed.
    - Pages Scanned................................: 121696
    - Extents Scanned..............................: 15212
    - Extent Switches..............................: 15211
    - Avg. Pages per Extent........................: 8.0
    - Scan Density [Best Count:Actual Count].......: 100.00% [15212:15212]
    - Logical Scan Fragmentation ..................: 0.01%
    - Extent Scan Fragmentation ...................: 69.18%
    - Avg. Bytes Free per Page.....................: 1596.0
    - Avg. Page Density (full).....................: 80.28%

    For PK_Transaction:

    DBCC SHOWCONTIG scanning 'Transaction' table...
    Table: 'Transaction' (1397580017); index ID: 1, database ID: 7
    TABLE level scan performed.
    - Pages Scanned................................: 9005158
    - Extents Scanned..............................: 1132244
    - Extent Switches..............................: 1470743
    - Avg. Pages per Extent........................: 8.0
    - Scan Density [Best Count:Actual Count].......: 76.54% [1125645:1470744]
    - Logical Scan Fragmentation ..................: 33.81%
    - Extent Scan Fragmentation ...................: 47.58%
    - Avg. Bytes Free per Page.....................: 2388.4
    - Avg. Page Density (full).....................: 70.49%

    For IX_Invoice:

    DBCC SHOWCONTIG scanning 'Invoice' table...
    Table: 'Invoice' (178099675); index ID: 15, database ID: 7
    LEAF level scan performed.
    - Pages Scanned................................: 479444
    - Extents Scanned..............................: 60298
    - Extent Switches..............................: 180199
    - Avg. Pages per Extent........................: 8.0
    - Scan Density [Best Count:Actual Count].......: 33.26% [59931:180200]
    - Logical Scan Fragmentation ..................: 30.89%
    - Extent Scan Fragmentation ...................: 96.19%
    - Avg. Bytes Free per Page.....................: 1235.7
    - Avg. Page Density (full).....................: 84.73%

    For PK_Invoice:

    DBCC SHOWCONTIG scanning 'Invoice' table...
    Table: 'Invoice' (178099675); index ID: 1, database ID: 7
    TABLE level scan performed.
    - Pages Scanned................................: 1979824
    - Extents Scanned..............................: 248682
    - Extent Switches..............................: 251615
    - Avg. Pages per Extent........................: 8.0
    - Scan Density [Best Count:Actual Count].......: 98.36% [247478:251616]
    - Logical Scan Fragmentation ..................: 4.11%
    - Extent Scan Fragmentation ...................: 91.68%
    - Avg. Bytes Free per Page.....................: 301.1
    - Avg. Page Density (full).....................: 96.28%

    All of the Extent Scan Fragmentation values are over 90%, and I cannot decrease them to 0.
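As a side note on the numbers above: DBCC SHOWCONTIG is deprecated, and the same figures can be read from sys.dm_db_index_physical_stats; a rebuild addresses logical fragmentation and page density (extent scan fragmentation in a heavily interleaved data file often cannot be driven toward 0 without rebuilding into fresh, contiguous space). A hedged, untested sketch; the table and index names are taken from this thread, and the FILLFACTOR value is an illustrative assumption:

```sql
-- Fragmentation overview for the Transaction table
-- (modern replacement for DBCC SHOWCONTIG)
SELECT i.name,
       ps.index_type_desc,
       ps.avg_fragmentation_in_percent,    -- logical scan fragmentation
       ps.avg_page_space_used_in_percent,  -- page density
       ps.page_count
FROM sys.dm_db_index_physical_stats(
         DB_ID(), OBJECT_ID(N'dbo.[Transaction]'), NULL, NULL, 'SAMPLED') AS ps
JOIN sys.indexes AS i
  ON i.object_id = ps.object_id
 AND i.index_id  = ps.index_id;

-- Rebuild one index; FILLFACTOR 90 is an example value, not a recommendation
ALTER INDEX [IX_Transaction_InvUID] ON [dbo].[Transaction]
REBUILD WITH (FILLFACTOR = 90);
```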

  • Quick suggestion, not tested so review carefully before applying to any production database.

    😎

    To satisfy this query

    SELECT
          [t0].[CardNumber]
        , [t0].[ReferenceNumber]
        , [t0].[TransactionDate]
        , [t0].[SeqCounter]
        , [t0].[IsSuccess]
        , [t0].[TransKind]
        , [t1].[Amount]
        , [t1].[InvoiceUID]
        , [t1].[InvoiceNumber]
        , [t1].[InvoiceDate]
        , [t1].[VerificationState]
        , [t1].[TermID]
        , [t0].[UnqReferenceNumber]
    FROM [dbo].[Transaction] [t0]
    INNER JOIN [dbo].[Invoice] [t1]
        ON [t0].[InvoiceUID] = ([t1].[InvoiceUID])
    WHERE ([t1].[Amount] <= @p0)
        AND ([t1].[Amount] >= @p1)
        AND ([t0].[IsSuccess] = @p2)
        AND ([t1].[VerificationState] <> @p3)
        AND ([t1].[TermID] = @p4)
        AND ([t0].[TransKind] = @p5)
        AND ([t0].[TransactionDate] >= @p6)
        AND ([t0].[TransactionDate] <= @p7)
        AND ([t1].[MerchID] = @p8)
    ORDER BY [t0].[TransactionDate] DESC

    you will need this index on the dbo.Transaction

    CREATE UNIQUE NONCLUSTERED INDEX [INDEX_NAME] ON [dbo].[Transaction]
    (
        [InvoiceUID] ASC,
        [IsSuccess] ASC,
        [TransKind] ASC,
        [TransactionDate] ASC
    )
    INCLUDE ([CardNumber], [ReferenceNumber], [SeqCounter], [UnqReferenceNumber]);

    and this index for the dbo.Invoice

    CREATE UNIQUE NONCLUSTERED INDEX [INDEX_NAME] ON [dbo].[Invoice]
    (
        [InvoiceUID] ASC,
        [Amount] ASC,
        [VerificationState] ASC,
        [TermID] ASC,
        [MerchID] ASC
    )
    INCLUDE ([InvoiceNumber], [InvoiceDate]);

    Question: can you change the order in which the parameters are applied in the generated query?

  • Eirikur Eiriksson (10/24/2016)


    you will need this index on the dbo.Transaction

    CREATE UNIQUE NONCLUSTERED INDEX [INDEX_NAME] ON [dbo].[Transaction]
    (
        [InvoiceUID] ASC,
        [IsSuccess] ASC,
        [TransKind] ASC,
        [TransactionDate] ASC
    )
    INCLUDE ([CardNumber], [ReferenceNumber], [SeqCounter], [UnqReferenceNumber]);

    I came up with something similar but was really torn and didn't post it... I can't test it because I don't have the time to set up the test data, but it seems to me that it may have the same multi-seek problem as the original index. I was thinking that TransactionDate, TransKind, IsSuccess with InvoiceUID as part of the INCLUDE would allow for an initial seek followed by a high-performance range scan.

    Using TransKind as the initial column would likely prove even faster, but that would be pretty tough on inserts in the form of index page splits unless a sweet-spot FILLFACTOR was also found.

    --Jeff Moden


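Jeff's alternative above, TransactionDate leading with InvoiceUID pushed into the INCLUDE list, plus a "sweet-spot" fill factor to absorb inserts, might look like this. Untested sketch; the index name is a placeholder, the FILLFACTOR of 85 is an illustrative assumption that would need tuning against the actual page-split rate, and the INCLUDE list is carried over from the earlier suggestion:

```sql
-- Keyed for an initial seek plus range scan on TransactionDate,
-- as described in the post above.
CREATE NONCLUSTERED INDEX [IX_Transaction_Date_Kind_Success]  -- hypothetical name
ON [dbo].[Transaction]
(
    [TransactionDate] ASC,
    [TransKind]       ASC,
    [IsSuccess]       ASC
)
INCLUDE ([InvoiceUID], [CardNumber], [ReferenceNumber],
         [SeqCounter], [UnqReferenceNumber])
WITH (FILLFACTOR = 85);  -- placeholder "sweet spot", tune before relying on it
```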

  • This should be enough:

    ---------------------[dbo].[Transaction]-----------------

    ALTER TABLE [dbo].[Transaction] ADD CONSTRAINT [PK_Transaction] PRIMARY KEY CLUSTERED
    (
        [TransactionDate] ASC,
        [ReferenceNumber] ASC
    )

    The rest of the parameters, as I understand it, are optional, and adding them to the index may just cause the optimizer to choose suboptimal plans for different parameter combinations.

    _____________
    Code for TallyGenerator

    you will need this index on the dbo.Transaction

    CREATE UNIQUE NONCLUSTERED INDEX [INDEX_NAME] ON [dbo].[Transaction]
    (
        [InvoiceUID] ASC,
        [IsSuccess] ASC,
        [TransKind] ASC,
        [TransactionDate] ASC
    )
    INCLUDE ([CardNumber], [ReferenceNumber], [SeqCounter], [UnqReferenceNumber]);

    and this index for the dbo.Invoice

    CREATE UNIQUE NONCLUSTERED INDEX [INDEX_NAME] ON [dbo].[Invoice]
    (
        [InvoiceUID] ASC,
        [Amount] ASC,
        [VerificationState] ASC,
        [TermID] ASC,
        [MerchID] ASC
    )
    INCLUDE ([InvoiceNumber], [InvoiceDate]);

    I created both indexes, but when I run the query, it uses none of them.

  • mah_j (10/29/2016)


    you will need this index on the dbo.Transaction

    CREATE UNIQUE NONCLUSTERED INDEX [INDEX_NAME] ON [dbo].[Transaction]
    (
        [InvoiceUID] ASC,
        [IsSuccess] ASC,
        [TransKind] ASC,
        [TransactionDate] ASC
    )
    INCLUDE ([CardNumber], [ReferenceNumber], [SeqCounter], [UnqReferenceNumber]);

    and this index for the dbo.Invoice

    CREATE UNIQUE NONCLUSTERED INDEX [INDEX_NAME] ON [dbo].[Invoice]
    (
        [InvoiceUID] ASC,
        [Amount] ASC,
        [VerificationState] ASC,
        [TermID] ASC,
        [MerchID] ASC
    )
    INCLUDE ([InvoiceNumber], [InvoiceDate]);

    I created both indexes, but when I run the query, it uses none of them.

    Question: can you change the order in which the parameters are applied in the generated query?

    😎

  • mah_j (10/29/2016)


    you will need this index on the dbo.Transaction

    CREATE UNIQUE NONCLUSTERED INDEX [INDEX_NAME] ON [dbo].[Transaction]
    (
        [InvoiceUID] ASC,
        [IsSuccess] ASC,
        [TransKind] ASC,
        [TransactionDate] ASC
    )
    INCLUDE ([CardNumber], [ReferenceNumber], [SeqCounter], [UnqReferenceNumber]);

    and this index for the dbo.Invoice

    CREATE UNIQUE NONCLUSTERED INDEX [INDEX_NAME] ON [dbo].[Invoice]
    (
        [InvoiceUID] ASC,
        [Amount] ASC,
        [VerificationState] ASC,
        [TermID] ASC,
        [MerchID] ASC
    )
    INCLUDE ([InvoiceNumber], [InvoiceDate]);

    I created both indexes, but when I run the query, it uses none of them.

    That would be expected.

    You have two range selections: on Amount in the Invoice table and on TransactionDate in the Transaction table.

    Useful indexes must have those columns in front:

    For dbo.Invoice:
        [Amount] ASC,
        [TermID] ASC,
        [MerchID] ASC

    For dbo.Transaction:
        [TransactionDate] ASC,
        [TransKind] ASC,
        [IsSuccess] ASC

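Sergiy's column order above, expressed as index definitions, might look like the following. Untested sketch; the index names are placeholders, and whether to add INCLUDE columns (as in the earlier suggestion) is left open since the post does not specify them:

```sql
-- dbo.Invoice: Amount (the range predicate) leading, per the post above
CREATE NONCLUSTERED INDEX [IX_Invoice_Amount]  -- placeholder name
ON [dbo].[Invoice]
(
    [Amount]  ASC,
    [TermID]  ASC,
    [MerchID] ASC
);

-- dbo.Transaction: TransactionDate (the range predicate) leading
CREATE NONCLUSTERED INDEX [IX_Transaction_Date]  -- placeholder name
ON [dbo].[Transaction]
(
    [TransactionDate] ASC,
    [TransKind]       ASC,
    [IsSuccess]       ASC
);
```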

  • mah_j (10/29/2016)


    I created both indexes, but when I run the query, it uses none of them.

    No matter which indexes you use, you have to check that the join and selection criteria are both SARGable, or it won't use even the most perfect index.

    --Jeff Moden


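A quick illustration of the SARGability point above: wrapping an indexed column in a function (or an implicit conversion from a mismatched parameter type, a common side effect of LINQ-generated sp_executesql calls) prevents an index seek. This is a hypothetical example, not the thread's actual query:

```sql
DECLARE @day date = '2016-10-01';  -- example parameter

-- Non-SARGable: the function applied to the column hides it from the index,
-- forcing a scan of every row
SELECT t.ReferenceNumber
FROM dbo.[Transaction] AS t
WHERE CONVERT(date, t.TransactionDate) = @day;

-- SARGable rewrite: the bare column can be matched to an index seek
SELECT t.ReferenceNumber
FROM dbo.[Transaction] AS t
WHERE t.TransactionDate >= @day
  AND t.TransactionDate <  DATEADD(DAY, 1, @day);
```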

  • Jeff Moden (10/24/2016)


    Eirikur Eiriksson (10/24/2016)


    you will need this index on the dbo.Transaction

    CREATE UNIQUE NONCLUSTERED INDEX [INDEX_NAME] ON [dbo].[Transaction]
    (
        [InvoiceUID] ASC,
        [IsSuccess] ASC,
        [TransKind] ASC,
        [TransactionDate] ASC
    )
    INCLUDE ([CardNumber], [ReferenceNumber], [SeqCounter], [UnqReferenceNumber]);

    I came up with something similar but was really torn and didn't post it... I can't test it because I don't have the time to set up the test data, but it seems to me that it may have the same multi-seek problem as the original index. I was thinking that TransactionDate, TransKind, IsSuccess with InvoiceUID as part of the INCLUDE would allow for an initial seek followed by a high-performance range scan.

    Using TransKind as the initial column would likely prove even faster, but that would be pretty tough on inserts in the form of index page splits unless a sweet-spot FILLFACTOR was also found.

    Yes, I pondered it and decided to post it with the question of whether the applied order of predicates is within the OP's control; obviously no answer yet on that one.

    😎

  • Eirikur Eiriksson (10/29/2016)


    Jeff Moden (10/24/2016)


    Eirikur Eiriksson (10/24/2016)


    you will need this index on the dbo.Transaction

    CREATE UNIQUE NONCLUSTERED INDEX [INDEX_NAME] ON [dbo].[Transaction]
    (
        [InvoiceUID] ASC,
        [IsSuccess] ASC,
        [TransKind] ASC,
        [TransactionDate] ASC
    )
    INCLUDE ([CardNumber], [ReferenceNumber], [SeqCounter], [UnqReferenceNumber]);

    I came up with something similar but was really torn and didn't post it... I can't test it because I don't have the time to set up the test data, but it seems to me that it may have the same multi-seek problem as the original index. I was thinking that TransactionDate, TransKind, IsSuccess with InvoiceUID as part of the INCLUDE would allow for an initial seek followed by a high-performance range scan.

    Using TransKind as the initial column would likely prove even faster, but that would be pretty tough on inserts in the form of index page splits unless a sweet-spot FILLFACTOR was also found.

    Yes, I pondered it and decided to post it with the question of whether the applied order of predicates is within the OP's control; obviously no answer yet on that one.

    😎

    An index with a GUID column in the first position will be useless for almost all kinds of queries, except the ones having ColumnUID = @ParamUID.

    And such indexes definitely have no use for "range selection" queries, like the one in the opening post.


