Prep for large increase in usage

  • Hi all,

We have a big ol' increase in users coming over the next few months. So far our systems handle the current numbers fine, but I'm thinking of taking some metrics (i.e. CPU/memory/disk), doubling them, and extrapolating.

However, I know capacity planning is rarely linear. Any gotchas I ought to know about?

    Peace out guys!!

  • If you can, I'd suggest capturing the current system behavior and then using Distributed Replay to simulate your load. It'll take some work, but you should be able to at least fake out the increase you expect to see. You'll have a much better idea on just how the additional users are going to affect the system.

----------------------------------------------------
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood..." - Theodore Roosevelt
The Scary DBA
Author of: SQL Server 2017 Query Performance Tuning, 5th Edition and SQL Server Execution Plans, 3rd Edition
Product Evangelist for Red Gate Software

  • Or... get 3 envelopes. 😀

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.
    "Change is inevitable... change for the better is not".

    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
    Intro to Tally Tables and Functions

Distributed Replay isn't available on 2K8. I'm advocating a move to 2012 or possibly 2014, but I want to capacity-plan this BEFORE we spend our budgets (which are big, but of course we want value) on new hardware.

Meanwhile, I was thinking more of taking the usual stats (transactions/sec, memory pages/sec, etc., even RAM used) and multiplying by the increase in users/connections. The trouble is I'm getting the feeling it's not going to be a linear progression; hopefully someone here has done a big upscale and can advise on what to look out for.
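For anyone searching later, that naive first-order projection can be sketched straight from the DMVs. This is only a hedged illustration: the counter list is arbitrary, and the */sec counters in sys.dm_os_performance_counters are cumulative since instance startup, so in practice you'd sample twice and diff the values before scaling.

```sql
-- Illustrative sketch only: first-order (linear) extrapolation of a few
-- perf counters. @multiplier is the expected growth in users/connections.
DECLARE @multiplier decimal(5,2) = 2.0;

SELECT object_name,
       counter_name,
       cntr_value               AS current_value,
       cntr_value * @multiplier AS naive_linear_projection
FROM sys.dm_os_performance_counters
WHERE counter_name IN (N'Batch Requests/sec',
                       N'Transactions/sec',
                       N'User Connections');
```

As the rest of the thread points out, real systems usually stop scaling linearly well before a projection like this says they will, so treat it as a floor, not a forecast.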

It's not usually a linear progression. I did a scale-up test for a client last year. As load increased, CPU increased linearly up to about 2.5x base load; at that point throughput dropped, response time rose, and CPU flattened out.

    You can do the replay with Profiler, it's just a bit more of a pain. Take a workload, replay it against a test server. Replay it from two machines against the test server, etc. It's not perfect, but it'll be better than guessing.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • JaybeeSQL (8/5/2015)


Distributed Replay isn't available on 2K8. I'm advocating a move to 2012 or possibly 2014, but I want to capacity-plan this BEFORE we spend our budgets (which are big, but of course we want value) on new hardware.

Meanwhile, I was thinking more of taking the usual stats (transactions/sec, memory pages/sec, etc., even RAM used) and multiplying by the increase in users/connections. The trouble is I'm getting the feeling it's not going to be a linear progression; hopefully someone here has done a big upscale and can advise on what to look out for.

You can set up 2012 as the Replay manager using Developer Edition, but point it at 2008 systems. It'll work.


  • GilaMonster (8/5/2015)


It's not usually a linear progression. I did a scale-up test for a client last year. As load increased, CPU increased linearly up to about 2.5x base load; at that point throughput dropped, response time rose, and CPU flattened out.

    You can do the replay with Profiler, it's just a bit more of a pain. Take a workload, replay it against a test server. Replay it from two machines against the test server, etc. It's not perfect, but it'll be better than guessing.

    Hi Gail,

    If we can't get hold of a Dev edition, I may just have to resort to using Profiler. Hells-bells, I may not even get a test SERVER (don't ask, politics), so worst case I'll have to guesstimate capacity.

    Thanks for the explanation (run X traces simultaneously, increasing X until fall-over). Does Distributed Replay work similarly?

  • JaybeeSQL (8/6/2015)


    Does Distributed Replay work similarly?

Yes, you can use multiple replay clients (on multiple machines, coordinated by the controller) to scale the workload up. You can use the Stress option of Distributed Replay as well, but the problem with that is you have no idea what scale it corresponds to.

  • GilaMonster (8/6/2015)


    JaybeeSQL (8/6/2015)


    Does Distributed Replay work similarly?

Yes, you can use multiple replay clients (on multiple machines, coordinated by the controller) to scale the workload up. You can use the Stress option of Distributed Replay as well, but the problem with that is you have no idea what scale it corresponds to.

Just had a thought... we use Redgate tools, and we're licensed for SQL Monitor. Is there anything there better than Dist Replay?

  • Grant, who works for Redgate, recommends distributed replay. 🙂

Replay isn't to monitor the effects of the load; it's to generate that load against a test server. You could use SQL Monitor to watch the effect of the load test.

  • GilaMonster (8/6/2015)


    Grant, who works for Redgate, recommends distributed replay. 🙂

    Hee-hee!!

Replay isn't to monitor the effects of the load; it's to generate that load against a test server. You could use SQL Monitor to watch the effect of the load test.

    That was understood about Replay, but yeah, I'll certainly use Monitor to...monitor!

  • GilaMonster (8/6/2015)


    Grant, who works for Redgate, recommends distributed replay. 🙂

Replay isn't to monitor the effects of the load; it's to generate that load against a test server. You could use SQL Monitor to watch the effect of the load test.

    Ha! Yeah, I wish I was getting money for recommending Replay, but I don't. Too bad.

    The one tool we do have that might help some is Data Generator. You can use that to add additional data to your tables to simulate increased size. But that won't tell you much about how the increased number of users will affect CPU, memory or I/O.


  • GilaMonster (8/6/2015)


    Grant, who works for Redgate, recommends distributed replay. 🙂

Replay isn't to monitor the effects of the load; it's to generate that load against a test server. You could use SQL Monitor to watch the effect of the load test.

Gail, in your experience, which method of capturing the prod-server workload has the least overhead on the prod server? Bear in mind we're on 2008. We can use SQL Monitor to capture Extended Events during testing/replay, so we're clear on that side, but what about lightweight capture?

  • JaybeeSQL (8/7/2015)


    GilaMonster (8/6/2015)


    Grant, who works for Redgate, recommends distributed replay. 🙂

Replay isn't to monitor the effects of the load; it's to generate that load against a test server. You could use SQL Monitor to watch the effect of the load test.

Gail, in your experience, which method of capturing the prod-server workload has the least overhead on the prod server? Bear in mind we're on 2008. We can use SQL Monitor to capture Extended Events during testing/replay, so we're clear on that side, but what about lightweight capture?

On 2008, if you're going to use Profiler to do the replay, then the most lightweight capture method is to use T-SQL to define the trace events that capture the query metrics. You can use Profiler to create the scripts; just don't run the Profiler GUI against your production system to do the data capture itself. It has a very serious performance impact.
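A minimal sketch of what that looks like, with an illustrative output path. The event and column IDs are the documented SQL Trace ones (event 10 = RPC:Completed, event 12 = SQL:BatchCompleted); for a trace you actually intend to replay, script out Profiler's TSQL_Replay template instead, since replay needs a much larger event set.

```sql
-- Hedged sketch of a lightweight server-side trace (SQL Server 2008).
-- Output path below is an assumption; adjust for your environment.
DECLARE @traceid int,
        @maxsize bigint = 1024,  -- rollover at 1 GB per file
        @on      bit    = 1;

-- 2 = TRACE_FILE_ROLLOVER; .trc extension is appended automatically
EXEC sp_trace_create @traceid OUTPUT, 2, N'D:\Traces\prod_capture', @maxsize;

-- Columns: 1 = TextData, 12 = SPID, 13 = Duration, 14 = StartTime,
-- 16 = Reads, 18 = CPU
EXEC sp_trace_setevent @traceid, 10, 1,  @on;
EXEC sp_trace_setevent @traceid, 10, 12, @on;
EXEC sp_trace_setevent @traceid, 10, 13, @on;
EXEC sp_trace_setevent @traceid, 12, 1,  @on;
EXEC sp_trace_setevent @traceid, 12, 12, @on;
EXEC sp_trace_setevent @traceid, 12, 13, @on;

EXEC sp_trace_setstatus @traceid, 1;  -- start the trace
-- Later: sp_trace_setstatus @traceid, 0 (stop), then @traceid, 2 (close)
```

This runs entirely server-side, which is the point: none of the row-by-row marshalling to the Profiler GUI that causes the overhead Grant warns about.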

