Identify cause of high logical reads

  • Hello,

I have a query that takes 3 minutes to run on a database (let's call it DB1). The database is also acting as the publisher in a replication setup.

    The same query takes less than a second to run on one of the subscription databases (call it DB2).

    The query execution plans appear to be identical. The index fragmentation is less than 2% on the tables involved.

When running STATISTICS IO, the only difference appears to be the logical reads: DB2 indicates around 3,400 logical reads while DB1 is almost 7 million.

Could it be fragmentation in the physical database file?

At this point I'm not sure what else to look at to narrow down the cause; any help will be greatly appreciated.

    G

  • It won't be fragmentation of any form. Logical reads = reads from memory.

    Post the two stats IO outputs and the two execution plans (as .sqlplan files) please.
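For anyone following along, the STATISTICS IO output Gail is asking for can be captured like this (the query shown is a placeholder; substitute your own):

```sql
-- Per-table I/O statistics are printed on the Messages tab in SSMS
SET STATISTICS IO ON;

-- Run the slow query here, e.g.:
-- SELECT ... FROM dbo.pTransaction ...;

SET STATISTICS IO OFF;
```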

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Gotta be a difference in the stats. Should be evident by having different execution plans.

----------------------------------------------------
The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood... Theodore Roosevelt
The Scary DBA
Author of: SQL Server 2017 Query Performance Tuning, 5th Edition and SQL Server Execution Plans, 3rd Edition
Product Evangelist for Red Gate Software

  • Grant Fritchey (1/14/2011)


    Gotta be a difference in the stats. Should be evident by having different execution plans.

    Or there's a row-estimation error somewhere, the plans are the same but the actual row counts are not.
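One way to surface such a row-estimation error (a sketch; the query is a placeholder) is to capture the actual execution plan and compare the EstimateRows and ActualRows attributes on each operator:

```sql
-- Returns the actual plan as XML alongside the results;
-- compare EstimateRows vs. ActualRows per operator in the XML
SET STATISTICS XML ON;

-- Run the query here, e.g.:
-- SELECT ... FROM dbo.pTransaction ...;

SET STATISTICS XML OFF;
```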

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Hi thank you both for your feedback!

It seems that Gila's assumption is correct. At first glance the execution plans for DB1 and DB2 looked the same in terms of operations and cost %, but there is a large discrepancy between the estimated and actual row counts for the table with the high logical reads.

    Actual - 479,997,630

    Estimated - 146,118

I've updated the statistics with FULLSCAN on the table, but this did not seem to have an effect. I'm pretty green when it comes to these things, so I'll need to do some reading up.

    Thank you again for your help!

    G

  • Yeah, that'll do it.

    Post the queries and plans?

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Hi and apologies for my splintered responses.

Below are the stats IO results and the execution plans. I believe the SQL query can be obtained from the execution plan (right-click -> Edit Query Text).

    DB1 is the problem db. DB2 is a development db that I've been using for comparison (data is around 2 months older than db1).

The row count for the pTransaction table in question is around 146,000 on DB1 and 134,711 on DB2.

    StatsIO DB1

    Table 'pRole'. Scan count 0, logical reads 10437, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

    Table 'pCore'. Scan count 0, logical reads 20877, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

    Table 'pTransaction'. Scan count 1, logical reads 7139364, physical reads 0, read-ahead reads 2013, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

    Table 'pTransactionGroup'. Scan count 1, logical reads 996, physical reads 0, read-ahead reads 167, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

    StatsIO DB2

    Table 'pRole'. Scan count 0, logical reads 6, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

    Table 'pCore'. Scan count 0, logical reads 12, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

    Table 'pTransaction'. Scan count 1, logical reads 2814, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

    Table 'pTransactionGroup'. Scan count 1, logical reads 2634, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

  • Your problem/difference is pTransaction. It's doing a clustered index scan in both cases, but the data volume is hugely different.

    ~134 thousand rows in Db2 in the base table.

    ~480 million rows in DB1 in the base table.

    This will, of course, make a bit of a difference.

Of specific interest is the XML, which states you've got a missing index on pTransaction.pTransactionID and wants to include pEntryId, pTransactionGroupID, Amount, CreatedDate, CreatedByEntityID, LastModifiedDate, and LastModifiedByEntityID.

You need to restrict the volume of data from pTransaction getting into the execution plan at all. This means some kind of WHERE clause that restricts data directly from that table, or a directly seekable result from something else's WHERE clauses.
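If the missing-index hint were followed, it would look roughly like this. The index name is made up, and a missing-index suggestion is only a starting point; verify the key and included columns against the actual workload before creating it:

```sql
-- Hypothetical index based on the missing-index hint in the plan XML;
-- test on a non-production copy before applying to the publisher
CREATE NONCLUSTERED INDEX IX_pTransaction_pTransactionID
ON dbo.pTransaction (pTransactionID)
INCLUDE (pEntryId, pTransactionGroupID, Amount, CreatedDate,
         CreatedByEntityID, LastModifiedDate, LastModifiedByEntityID);
```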


    - Craig Farrell

    Never stop learning, even if it hurts. Ego bruises are practically mandatory as you learn unless you've never risked enough to make a mistake.

For better assistance in answering your questions | Forum Netiquette
For index/tuning help, follow these directions. | Tally Tables

    Twitter: @AnyWayDBA

  • Hi everyone.

    Thanks for taking time to share your knowledge.

    The solution I've taken is to update the statistics with full scan on the two tables pTransaction and pTransactionGroup.

Initially I only updated the stats on pTransaction (the one with the high reads), but that did not help; updating pTransactionGroup did the trick. The execution plan has also changed as a result.
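For reference, the statistics update described above would have been along these lines (schema name assumed to be dbo):

```sql
-- Rebuild statistics from a full scan of each table rather than a sample
UPDATE STATISTICS dbo.pTransaction WITH FULLSCAN;
UPDATE STATISTICS dbo.pTransactionGroup WITH FULLSCAN;
```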

    Regards,

    G
