Have to use force recompile to get the SP to use indexes

  • Just an aside, are you really sure that you want to be using nolock around financial transactions? I've worked in banks before; they're not usually happy with the possibility of inconsistent results.

    See - http://sqlblog.com/blogs/andrew_kelly/archive/2009/04/10/how-dirty-are-your-reads.aspx

    Back to the original problem, how many columns are there in the select clause? What indexes do you have on the table t?

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • ALZDBA (12/16/2009)


    How well organized is your T2.Id index?

    How well organized is your clustering index? (cluster ratio)

    - Has your index (table) been rebuilt lately?

    - Did you load the data in sequence?

    - Did someone add data that would cause inserts in between existing data (and not just add data to the "end" of the file)?

    Yes, it is rebuilt periodically. The date column is Created, so yes, it has been loaded in sequence and is well organized. 🙂 There are no inserts in existing data, but there are updates.
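
    For anyone wanting to check the same thing, here is a rough sketch of how the organization can be verified; dbo.Transactions is a placeholder for the real table name:

        -- Fragmentation and page fullness per index (SAMPLED mode populates
        -- avg_page_space_used_in_percent).
        SELECT i.name AS index_name,
               ps.index_type_desc,
               ps.avg_fragmentation_in_percent,
               ps.avg_page_space_used_in_percent
        FROM   sys.dm_db_index_physical_stats(
                   DB_ID(), OBJECT_ID('dbo.Transactions'), NULL, NULL, 'SAMPLED') AS ps
        JOIN   sys.indexes AS i
               ON  i.object_id = ps.object_id
               AND i.index_id  = ps.index_id;

        -- The periodic rebuild mentioned above, in its simplest form:
        ALTER INDEX ALL ON dbo.Transactions REBUILD;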

    If this gives anyone any useful information:

    Selecting one row (top 1) from today (based on Created > '2009-12-16') consumes 4 logical reads and 0 physical reads using a clustered index seek.

    Selecting all rows from the last hour (approx. 6000 rows) consumes 617 logical reads and 0 physical reads using a clustered index seek.

    The total number of rows in the table is 17 million.
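
    For completeness, those figures can be reproduced with something along these lines (dbo.Transactions again being a placeholder for the real table):

        SET STATISTICS IO ON;

        -- One row from today: a clustered index seek, ~4 logical reads.
        SELECT TOP (1) *
        FROM   dbo.Transactions
        WHERE  Created > '2009-12-16';

        -- All rows from the last hour (approx. 6000 rows): ~617 logical reads.
        SELECT *
        FROM   dbo.Transactions
        WHERE  Created > DATEADD(HOUR, -1, GETDATE());

        SET STATISTICS IO OFF;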

    And to Gail:

    Yes, we are confident using nolock in the given scenario.

    The table consists of 37 columns, and they are all returned from this stored procedure.

    There are 10 indexes on the table:

    One clustered index on Created.

    7 non-unique indexes and 2 unique indexes.

    No covering indexes.
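
    (The inventory above comes straight from the catalog views; the table name below is a placeholder.)

        -- List every index on the table with its type and uniqueness.
        SELECT i.name, i.type_desc, i.is_unique
        FROM   sys.indexes AS i
        WHERE  i.object_id = OBJECT_ID('dbo.Transactions')
        AND    i.index_id > 0;  -- 0 would be the heap entry; this table is clustered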

    Thorbjorn Kvam,
    Project manager and DBA (design) at Payex (http://www.payex.com)

  • And just to clarify: in my opinion we have satisfactory performance on this table at the moment. The only issue was this particular SP that suddenly wasn't using the index as it was supposed to, and it seems to be suffering from parameter sniffing.

    I have not requested hardcore optimizing of the SP or the table (any feedback is of course appreciated anyway). 🙂

    So far it seems that the RECOMPILE hint is probably the best solution to this issue. The stored procedure is not part of the online financial / transactional flow; it is a list feature from our admin system that is used occasionally (maybe a few times per hour), so the CPU overhead shouldn't be a significant issue.
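
    For the record, here is roughly what the two recompile variants look like; procedure, parameter and table names are placeholders, not our actual code:

        -- Variant 1: recompile the whole procedure on every call.
        ALTER PROCEDURE dbo.GetTransactionList
            @From DATETIME,
            @To   DATETIME
        WITH RECOMPILE
        AS
        BEGIN
            SELECT *
            FROM   dbo.Transactions
            WHERE  Created BETWEEN @From AND @To;
        END;
        GO

        -- Variant 2 (SQL Server 2005+): recompile only the sniffing-sensitive
        -- statement, which leaves the rest of the plan cached.
        ALTER PROCEDURE dbo.GetTransactionList
            @From DATETIME,
            @To   DATETIME
        AS
        BEGIN
            SELECT *
            FROM   dbo.Transactions
            WHERE  Created BETWEEN @From AND @To
            OPTION (RECOMPILE);
        END;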

    Thorbjorn Kvam,
    Project manager and DBA (design) at Payex (http://www.payex.com)

  • one-1016367 (12/16/2009)


    Yes, we are confident using nolock in the given scenario.

    You are aware that there is a chance it can cause rows to be returned more than once, or rows to be completely skipped?

    The table consists of 37 columns, and they are all returned from this stored procedure.

    Then a covering index isn't really an option. That's what I was getting at.

    The optimiser generally won't pick an index if it's not covering and the query will return a significant portion of the table (and by 'significant' I actually mean > 1%). Hence what's likely happening here is that when the parameters are such that there aren't many rows returned, the optimiser picks the seek with key lookup, and when there are lots of rows, a scan.

    Option recompile on the query or, if one extreme is more common than the other, option optimise for with a parameter value that produces a plan that's OK for both extremes.
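
    A sketch of the OPTIMIZE FOR variant (table, parameter names and the pinned literals are made up; pick values representative of whichever extreme matters):

        -- Inside the procedure body: compile the plan as if the typical
        -- narrow date range had been passed in.
        SELECT *
        FROM   dbo.Transactions
        WHERE  Created BETWEEN @From AND @To
        OPTION (OPTIMIZE FOR (@From = '2009-12-16 00:00', @To = '2009-12-16 01:00'));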

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • GilaMonster (12/16/2009)


    one-1016367 (12/16/2009)


    Yes, we are confident using nolock in the given scenario.

    You are aware that there is a chance it can cause rows to be returned more than once, or rows to be completely skipped?

    I wasn't aware that I could get the same row more than once; in which scenario could that happen?

    It wouldn't be critical in this scenario, but it should of course be avoided. Still, we've been running like this for several years and have never heard of it happening.

    EDIT: Found some information here http://blogs.msdn.com/davidlean/archive/2009/04/06/sql-server-nolock-hint-other-poor-ideas.aspx. I'll take this into consideration.

    Thorbjorn Kvam,
    Project manager and DBA (design) at Payex (http://www.payex.com)

  • one-1016367 (12/16/2009)


    I wasn't aware that I could get the same row more than once; in which scenario could that happen?

    The link I gave you when I first asked about it discusses all the possible effects. That's why I posted the link.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • GilaMonster (12/16/2009)


    one-1016367 (12/16/2009)


    I wasn't aware that I could get the same row more than once; in which scenario could that happen?

    The link I gave you when I first asked about it discusses all the possible effects. That's why I posted the link.

    I apologize, I'm obviously multitasking beyond the capability of my brain at the moment and reading these posts way too fast. I missed the link altogether. I'll read it. Thank you for your comments on this issue.

    And I assure you that we do not use nolock in connection with the actual financial transactions, this is for listing reports in our admin (which is not used for reconciliation). But I see that we may need to reconsider this strategy as well.

    Thorbjorn Kvam,
    Project manager and DBA (design) at Payex (http://www.payex.com)

  • After doing some reading on the issue, it seems that both missing rows and counting rows twice are connected to page splits.

    As we have a clustered index on a column where GETDATE() is inserted, a page split due to inserts is very unlikely.

    Updates of existing rows that increase the row size (and can therefore split a page) could also cause this issue. We do not do this.

    A less likely issue may be that a row has actually decreased in size and rows are moved to another page while an index is being defragmented. I guess that might be a possible scenario in our database, even though defragmenting is done during off-peak hours at night, when people typically aren't sitting around retrieving reports from the system.

    I must conclude that missing rows or counting them twice isn't a very likely scenario, but the fact that it can happen at all is very interesting. 🙂

    I also note that SQL Server 2005 supports READ_COMMITTED_SNAPSHOT at the database level, which may be worth looking into. I'm not sure how much performance degradation it may cause.
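
    Enabling it appears to be a single database-level switch (database name below is hypothetical); the main cost is the row version store it keeps in tempdb:

        -- Readers see the last committed row version instead of blocking;
        -- versions are maintained in tempdb.
        ALTER DATABASE PayexAdmin   -- hypothetical database name
        SET READ_COMMITTED_SNAPSHOT ON
        WITH ROLLBACK IMMEDIATE;    -- closes other connections so the switch can take effect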

    Thorbjorn Kvam,
    Project manager and DBA (design) at Payex (http://www.payex.com)

  • If you are considering READ_COMMITTED_SNAPSHOT, first check out the impact of its usage (the version store lives in tempdb)!

    If you have a single insert/update/delete window, putting the db in read_only mode for reporting purposes (i.e. for the rest of the day) may be a valid option to avoid any locking-related issues.

    Or actually implement snapshot database(s) (space / IO impact).
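
    A database snapshot sketch, for comparison (logical and file names are placeholders; one sparse file is needed per data file of the source db):

        -- Point-in-time, read-only copy for reporting (Enterprise Edition).
        CREATE DATABASE PayexAdmin_Reporting
        ON ( NAME = PayexAdmin_Data,   -- logical name of the source data file
             FILENAME = 'D:\Snapshots\PayexAdmin_Reporting.ss' )
        AS SNAPSHOT OF PayexAdmin;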

    Johan

    Learn to play, play to learn !

    Don't drive faster than your guardian angel can fly ...
    but keeping both feet on the ground won't get you anywhere :w00t:

    - How to post Performance Problems
    - How to post data/code to get the best help

    - How to prevent a sore throat after hours of presenting ppt

    press F1 for solution, press shift+F1 for urgent solution 😀

  • Yep.

    Anyway, we have not used NOLOCK randomly; it is explicitly used where the system has been designed in a way where dirty reads are either okay or will not happen in practice.

    We had not considered the impact of internal changes to pages (like defragmentation) causing page splits / reorganisation, which in turn may corrupt the reads, but I don't think this is a large issue in our case/system.

    But this is interesting information that we will of course consider in the future. 🙂

    Thorbjorn Kvam,
    Project manager and DBA (design) at Payex (http://www.payex.com)

  • one-1016367 (12/16/2009)


    I must conclude that missing rows or counting them twice isn't a very likely scenario, but the fact that it can happen at all is very interesting. 🙂

    That was mostly my point. I find lots of people use nolock with abandon, without knowing what it does.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
