Optimize for ad hoc workloads option and measuring results

  • I've been reading up on enabling the "Optimize for ad hoc workloads" option and have some questions. I have 144 GB of memory installed on the SQL box, so I'm not sure how much effect this will have.

    - After enabling it, should I compare the before-and-after memory occupied by ad hoc plans used only once (e.g., with a query like the one sketched below)?

    - What else should I be measuring?

    Total number of ad hoc plans: 70461

    Current memory occupied by ad hoc plans used only once (MB): 4098.80

    Total plan cache size (MB): 8299.30

    % of total plan cache occupied by ad hoc plans used only once: 49.39%
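    For reference, the setting itself is an advanced sp_configure option, and figures like the ones above can be pulled from sys.dm_exec_cached_plans. The script below is only a sketch of one way to enable the option and capture before/after numbers; it is not the exact script behind the figures above.

    -- Enable the option (advanced server setting, takes effect after RECONFIGURE):
    EXEC sys.sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sys.sp_configure 'optimize for ad hoc workloads', 1;
    RECONFIGURE;

    -- Measure the single-use ad hoc footprint (run before and after the change).
    -- size_in_bytes is cast to bigint so the SUM doesn't overflow on a large cache.
    SELECT  COUNT(*)                                       AS adhoc_plan_count,
            SUM(CAST(size_in_bytes AS bigint)) / 1048576.0 AS adhoc_cache_mb,
            SUM(CASE WHEN usecounts = 1
                     THEN CAST(size_in_bytes AS bigint)
                     ELSE 0 END) / 1048576.0               AS single_use_adhoc_mb
    FROM    sys.dm_exec_cached_plans
    WHERE   objtype = 'Adhoc';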

  • Why do you think you have a problem?

  • Ninja's_RGR'us (9/20/2011)


    Why do you think you have a problem?

    Just trying to squeeze every ounce of performance out of my system. Looks like I'm wasting about 4 GB of memory and 49% of the total plan cache.

  • That's not where I'd start, but I guess you're way down the rabbit hole?

    Any proof of memory pressure or I/O read issues? If not, I probably wouldn't waste much time on this.

    Maybe someone has more experience with this and can safely say that there's no possible drawback in doing this, but that's not me.

  • Ninja's_RGR'us (9/20/2011)


    That's not where I'd start, but I guess you're way down the rabbit hole?

    Any proof of memory pressure or I/O read issues? If not, I probably wouldn't waste much time on this.

    Maybe someone has more experience with this and can safely say that there's no possible drawback in doing this, but that's not me.

    You're right, I am further down the rabbit hole. I have a long checklist of performance enhancements/best practices that I am gradually working through.

    From what I've read, there are significant SQL performance gains to be had from this setting if you have a high count of ad hoc queries.

    There's really no memory pressure, other than the fact that SQL uses as much memory as I allocate to it. I just need a way to measure the gains, if there are any at all.

  • The link below is a good read on the topic:

    http://sqlskills.com/BLOGS/KIMBERLY/post/Procedure-cache-and-optimizing-for-adhoc-workloads.aspx

  • I agree there can be significant gains. Imagine a system with only 3 GB available to SQL wasting 1 GB on that cache (I doubt it's possible, but let's assume). That can make a tremendous impact on that server. But with 144 GB???? Less than 3% allocated to that resource?!?!

    And the server is smart enough to wipe out the useless/old plans. I really don't see the point in fixing something that's not broken.

    More to the point, how are you going to measure the improvement (or drop) in performance? I would expect that generalizing the statements might introduce <more> cases of BAD parameter sniffing, and maybe extra CPU cycles? I didn't read all 18+ articles in KT's blog post, but that would be my #1 concern. One bad plan can seriously degrade the server's performance (I've seen a single bad plan take out an entire server that was not under any pressure at all).

    I also wouldn't expect to see any real gain here unless there were some indication of memory or I/O pressure.

    At the very least I would save a copy of the plan cache before making the change, in case something goes wrong (it gives you a way to trace and prove it); a rough sketch of that is at the end of this post.

    Also keep in mind that when the plan is perfectly tailored to the parameters sent with the query, you're <almost> guaranteed to get the optimal plan every time (assuming the optimization doesn't time out and stats are up to date).
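    A minimal sketch of that plan cache snapshot, assuming a scratch table name (dbo.PlanCacheSnapshot is made up for illustration):

    -- Capture the current plan cache contents into a work table so the
    -- before/after state can be compared (and kept as evidence).
    SELECT  GETDATE()        AS captured_at,
            cp.plan_handle,
            cp.objtype,
            cp.cacheobjtype,
            cp.usecounts,
            cp.size_in_bytes,
            st.text          AS sql_text
    INTO    dbo.PlanCacheSnapshot          -- hypothetical table name
    FROM    sys.dm_exec_cached_plans AS cp
    CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st;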

  • Ninja's_RGR'us (9/20/2011)


    I agree there can be significant gains. Imagine a system with only 3 GB available to SQL wasting 1 GB on that cache (I doubt it's possible, but let's assume). That can make a tremendous impact on that server. But with 144 GB???? Less than 3% allocated to that resource?!?!

    And the server is smart enough to wipe out the useless/old plans. I really don't see the point in fixing something that's not broken.

    Valid point - that's why I prefer to consult with this forum/experts before making changes. It really depends on usage, setup, hardware, etc.

    Appreciate the input!

  • If you have any input into how the application is coded, you should also look into why there is such a high percentage of single-use plans and whether you can do anything to reduce it (a query like the one sketched at the end of this post can help identify the worst offenders).

    You might also want to look at what I posted on this thread:

    http://qa.sqlservercentral.com/Forums/Topic1127559-1550-1.aspx
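    One way to see which single-use ad hoc statements are taking the most cache space (my own sketch against the plan cache DMVs, not necessarily what the linked thread describes):

    -- List the largest single-use ad hoc plans along with their statement text;
    -- the results usually point straight at the code worth parameterizing.
    SELECT TOP (20)
            cp.usecounts,
            cp.size_in_bytes / 1024 AS size_kb,
            st.text                 AS sql_text
    FROM    sys.dm_exec_cached_plans AS cp
    CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
    WHERE   cp.objtype = 'Adhoc'
      AND   cp.usecounts = 1
    ORDER BY cp.size_in_bytes DESC;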

  • I do not think enabling the option will cause issues. I have read an article by Glen on SSC stating the same; I don't have the link handy.

    However, if there are many single-use plans, it is probably due to ad hoc queries. Why not parameterize the SQL statements using sp_executesql wherever possible, and then look at the plan counts and sizes again? (A small example is below.)
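    A hedged example of that parameterization (dbo.Orders and CustomerID are made-up names used purely for illustration):

    -- Ad hoc form: every distinct literal compiles its own single-use plan, e.g.
    --   SELECT OrderID, OrderDate FROM dbo.Orders WHERE CustomerID = 42;
    --
    -- Parameterized with sp_executesql: one plan is cached and reused for
    -- every CustomerID value.
    DECLARE @sql nvarchar(200);
    SET @sql = N'SELECT OrderID, OrderDate FROM dbo.Orders WHERE CustomerID = @CustomerID;';

    EXEC sys.sp_executesql
         @sql,
         N'@CustomerID int',
         @CustomerID = 42;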
