Max Degree of Parallelism

  • Hi everyone,

    We recently experienced performance degradation with our MS CRM application.

    We monitored the DB server (which is SQL Server 2005 SP3) and noticed that the CPUs (% Processor time) were very high and that there were a lot of locking issues (blocking and deadlocking) in the CRM database.

    We finally concluded that we must change the value of Max degree of Parallelism to set it to 1 (it was set to 0 before).

    This solved most of our performance problems with the CRM application.

    After that we noticed (we tried it several times) that setting Max degree of Parallelism back to 0 still led to a lot of locking issues and pushed the CPU % Processor time very high.

    Yesterday, however, in order to provide Microsoft with some traces, we again changed the value of Max degree of Parallelism back to 0. But everything went fine: no locking and just a small increase in CPU, nothing major. We set it back to 1.

    Now, today :

    [font="Courier New"]

    | MaxDOP | Effect                                                                             |
    |--------|------------------------------------------------------------------------------------|
    | 1      | Started with 1. Everything was fine (no locking and CPU is normal)                 |
    | 0      | Little increase in CPU (around 5%), nothing major                                  |
    | 1      | CPU shot up and stayed there. Locking issues returned                              |
    | 0      | CPU went back to the same level as earlier when MaxDOP was at 0                    |
    | 1      | CPU spiked (for a short while) but levels gradually returned close to this morning's baseline |

    [/font]
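    For reference, toggling this setting is done with sp_configure; the session below is a sketch of what we ran, not an exact transcript:

    ```sql
    -- 'max degree of parallelism' is an advanced option, so expose it first
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;

    -- 1 = single-threaded plans only; 0 = let SQL Server use all available CPUs
    EXEC sp_configure 'max degree of parallelism', 1;
    RECONFIGURE;
    ```

    RECONFIGURE applies the change immediately, without a service restart.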

    We really don't understand what happened.

    Can someone shed some light on this behaviour?

    Regards.

    Carl

  • As I understand it, with MSCRM the performance white paper recommends you use MAXDOP = 1 because of how MSCRM works and performs.

    I'd advise you to look up these documents, as there are other improvements you can make for other known performance issues. They will also show you their recommended setups and considerations regarding workflow etc.

    As an extra, make sure you are updating your stats regularly, as in my experience MSCRM fragments quickly, so keep on top of your index maintenance as well. Also check that the built-in maintenance routines aren't what's causing your performance hits in the first place.
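    A minimal sketch of that maintenance, assuming nothing about the CRM schema (the 30% cutoff is a common rule of thumb, not a number from this thread):

    ```sql
    -- Refresh out-of-date statistics across the current database
    EXEC sp_updatestats;

    -- List indexes whose fragmentation suggests a reorganize or rebuild
    SELECT OBJECT_NAME(ips.object_id)          AS table_name,
           i.name                              AS index_name,
           ips.avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i
      ON i.object_id = ips.object_id
     AND i.index_id  = ips.index_id
    WHERE ips.avg_fragmentation_in_percent > 30
    ORDER BY ips.avg_fragmentation_in_percent DESC;
    ```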

    Oh, and don't forget: when you make that config change, all caches are cleared, so expect a temporary hit as they repopulate.

  • Knight (5/6/2010)


    Oh, and don't forget: when you make that config change, all caches are cleared, so expect a temporary hit as they repopulate.

    Thanks for this last tip... It may explain some of the behaviour, but not the fact that sometimes modifying it has a drastic impact on performance and sometimes no impact at all.

    We've also implemented most of the recommendations regarding the CRM.

    Regards.

    Carl

  • Frequently, but not always, it's not that the system can't benefit from parallelism; it's that the cost threshold for parallelism is set too low. The default, 5 I believe, is way too low for most systems. You might try leaving MAXDOP at 0 and setting the threshold to a higher value, 30 maybe. See how the system behaves then.
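    As a sketch of that suggestion (the values are the ones discussed here, not a universal recommendation):

    ```sql
    -- Keep parallelism available to the optimizer...
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max degree of parallelism', 0;
    RECONFIGURE;

    -- ...but reserve it for genuinely expensive queries (default threshold is 5)
    EXEC sp_configure 'cost threshold for parallelism', 30;
    RECONFIGURE;
    ```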

    I'm going to check with our local CRM MVP to see what she has to say about this & I'll post again.

    ----------------------------------------------------
    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood..." - Theodore Roosevelt
    The Scary DBA
    Author of: SQL Server 2017 Query Performance Tuning, 5th Edition and SQL Server Execution Plans, 3rd Edition
    Product Evangelist for Red Gate Software

  • Yep. They told me that leaving parallelism turned on (0 or some number greater than 1) and adjusting the cost threshold up is the way to go. I was told that the documentation out there recommending turning it off was written when there were still some serious issues with the product: it was running into deadlocks, blah, blah... Anyway, that's the suggestion. They said a cost of 25 would be their starting point. I said 30. I'm sure it could go either way. If you're feeling a bit paranoid, you could make it 35, but then you might not see parallelism's benefits at all.


  • Either way, to really find the best fit for this setting you'd have to compare performance over a defined time period. And after all that analysis, would you really see any great difference in CRM performance? If anyone's done this already, let us know.

    And as I'm about to start a new job next week where they primarily use MSCRM on SQL 2008, I'd be interested in hearing any other comments on best practices etc. regarding this product.

  • We opened a case with Microsoft Dynamics CRM support in the past weeks, and the engineer told me that, with SQL Server 2008, leaving Max Degree of Parallelism at 0 is optimal.

    I don't know if it's true, but it would make sense that leaving it at 0 (its default value) would deliver the best performance in most cases.

  • Thanks a lot, Grant.

    We will investigate this threshold under our workload.

    Best regards.

    Carl

  • Hi Grant,

    This morning we set the threshold to 25. Right after that, locking and performance problems arose. We waited, but the situation did not return to "normal".

    We then changed the threshold to 30. We had some processor peaks, but now everything seems to run well (no locking or performance issues). :-)

    We will closely monitor everything and let you know of any important change.

    Regards.

    Carl

  • Carl B. (5/7/2010)


    Hi Grant,

    This morning we set the threshold to 25. Right after that, locking and performance problems arose. We waited, but the situation did not return to "normal".

    We then changed the threshold to 30. We had some processor peaks, but now everything seems to run well (no locking or performance issues). :-)

    We will closely monitor everything and let you know of any important change.

    Regards.

    Carl

    Excellent. It sucks that I have to tell my friend that their number worked better, but that's excellent. Thanks for reporting back.

