database is slow

  • I have a SQL Server 2008 R2 database. Users report it has been very slow for reads and writes. I first rebuilt all indexes, but that didn't reduce the fragmentation level. It is a small database though, 8 GB. What else should I look into? Thanks in advance.

  • Well, I'd say go with rebuilding all your statistics as well first, and then start looking at specific slow-performing queries, and whether you can either get indexes in place to support them, or get the code rewritten to address things like SARGability or poorly written queries.

    If you've installed the scripts from http://ola.hallengren.com/, then you could use this as a quick shotgun approach:

    --Update modified statistics on all user databases
    EXECUTE dbo.IndexOptimize
        @databases = 'USER_DATABASES',
        @FragmentationLow = NULL,
        @FragmentationMedium = NULL,
        @FragmentationHigh = NULL,
        @UpdateStatistics = 'ALL',
        @OnlyModifiedStatistics = 'Y';

    After that, it's specific-query time; here's an example of the top 40 longest-running queries that exist in cache:

    SELECT TOP 40 *
    FROM (SELECT
        @@SERVERNAME AS ServerName,
        SUBSTRING(qt.TEXT, (qs.statement_start_offset/2)+1,
            ((CASE qs.statement_end_offset
                  WHEN -1 THEN DATALENGTH(qt.TEXT)
                  ELSE qs.statement_end_offset
              END - qs.statement_start_offset)/2)+1) AS cmd,
        qp.query_plan,
        qs.last_elapsed_time/1000000 AS last_elapsed_time_in_Seconds,
        qs.execution_count,
        qs.total_logical_reads, qs.last_logical_reads,
        qs.total_logical_writes, qs.last_logical_writes,
        qs.total_worker_time,
        qs.last_worker_time,
        qs.total_elapsed_time/1000000 AS total_elapsed_time_in_S,
        qs.last_elapsed_time/1000000 AS last_elapsed_time_in_S,
        qs.last_execution_time
    FROM sys.dm_exec_query_stats qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) qt
    CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp
    WHERE (qs.last_elapsed_time/1000000) > 1 -- longer than one second
    ) x --WHERE cmd LIKE '%[t0].[EDLogID]%'
    --ORDER BY last_execution_time DESC -- most recent executions
    --ORDER BY total_logical_reads DESC -- logical reads
    --ORDER BY total_logical_writes DESC -- logical writes
    ORDER BY last_elapsed_time_in_S DESC -- elapsed time

    Lowell


    --help us help you! If you post a question, make sure you include a CREATE TABLE... statement and an INSERT INTO... statement into that table to give the volunteers here representative data. With your description of the problem, we can provide a tested, verifiable solution to your question! Asking the question the right way gets you a tested answer the fastest way possible!

  • Or just run sp_updatestats to update the stats.

    And check for any Blocking, Memory, CPU, Disk utilization on the server. Also check for any waits on the server.
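    For the waits check, a minimal sketch against the wait-stats DMV (the exclusion list below is illustrative, not exhaustive):

    ```sql
    -- Top cumulative waits since the last restart (or since wait stats were cleared)
    SELECT TOP 10
        wait_type,
        wait_time_ms,
        waiting_tasks_count
    FROM sys.dm_os_wait_stats
    WHERE wait_type NOT IN ('SLEEP_TASK', 'LAZYWRITER_SLEEP', 'WAITFOR',
                            'SQLTRACE_BUFFER_FLUSH', 'BROKER_TASK_STOP')
    ORDER BY wait_time_ms DESC;
    ```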

    --

    SQLBuddy

  • sqlbuddy123 (3/6/2014)


    Or just run sp_updatestats to update the stats.

    And check for any Blocking, Memory, CPU, Disk utilization on the server. Also check for any waits on the server.

    --

    SQLBuddy

    But... If they just rebuilt the indexes, the statistics are from a full scan. Running sp_updatestats uses sampling which means that the statistics it generates are likely to be less accurate than the ones there after the rebuild.

    ----------------------------------------------------
    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood..." Theodore Roosevelt
    The Scary DBA
    Author of: SQL Server 2017 Query Performance Tuning, 5th Edition and SQL Server Execution Plans, 3rd Edition
    Product Evangelist for Red Gate Software

  • For a quick hit, I agree with Lowell, pull information out of the cache. You can see what's going on there. For longer term though, you need to look into learning how to gather query metrics to identify the slow running queries. I'd also suggest learning how to read execution plans so you know what's causing the queries to run slow. It's actually a lot of work. My books below are specifically on this topic.


  • Grant Fritchey (3/6/2014)


    sqlbuddy123 (3/6/2014)


    Or just run sp_updatestats to update the stats.

    And check for any Blocking, Memory, CPU, Disk utilization on the server. Also check for any waits on the server.

    --

    SQLBuddy

    But... If they just rebuilt the indexes, the statistics are from a full scan. Running sp_updatestats uses sampling which means that the statistics it generates are likely to be less accurate than the ones there after the rebuild.

    Rebuilding indexes doesn't update all the statistics. It updates only index statistics; column statistics that are not part of an index are not updated.
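    To cover those, a minimal sketch (the table name is a placeholder):

    ```sql
    -- Update only the column statistics (not the index statistics) on one table
    UPDATE STATISTICS dbo.SomeTable WITH FULLSCAN, COLUMNS;
    ```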

    Since we don't know the environment, I just wanted to run sp_updatestats to quickly see if it could improve performance, and to have minimal impact/run time (in case it's a prod server and a real-time issue).

    Query tuning is good, but it takes time. The intention was just that if we can identify the bottleneck, fixing it would be simpler.

    --

    SQLBuddy

  • sqlbuddy123 (3/6/2014)


    Query tuning is good, but it takes time. The intention was just that if we can identify the bottleneck, fixing it would be simpler.

    The quick and dirty way to find the bottleneck is to sit down and poll sys.sysprocesses filtered on the database, while watching PerfMon and Task Manager - Performance, and watch for the blocks while your user(s) are on the phone. When they hit a 'slow spot', find out what they were doing and what the blocks are, and go from there.
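    A sketch of that poll (the database name is a placeholder):

    ```sql
    -- Blocked sessions in one database; run repeatedly while users report slowness
    SELECT spid, blocked, lastwaittype, waittime, cmd, hostname, program_name
    FROM sys.sysprocesses
    WHERE dbid = DB_ID('YourDatabase')
      AND blocked <> 0;
    ```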

    It almost always comes down to index and query tuning however. Slow drives = too much data being pulled (usually). Suddenly slow = tipping point on an index. Intermittent = tuning to handle concurrency.

    Fragmentation and statistics can play a part, but if he's not familiar with basic troubleshooting techniques on a smaller database, I would highly doubt he's put column statistics into play that aren't index based. You go with the most likely causes until you rule them out, and running that before an index rebuild is fine. After hurts more than it helps.


    - Craig Farrell

    Never stop learning, even if it hurts. Ego bruises are practically mandatory as you learn unless you've never risked enough to make a mistake.

    For better assistance in answering your questions | Forum Netiquette
    For index/tuning help, follow these directions. | Tally Tables

    Twitter: @AnyWayDBA

  • sqlbuddy123 (3/6/2014)


    Rebuilding indexes doesn't update all the statistics. It updates only index statistics; column statistics that are not part of an index are not updated.

    True, but we know they have indexes (good or not, used or not, different discussion) and indexes that they recently rebuilt, so we'd be going after those statistics.

    Since we don't know the environment, I just wanted to run sp_updatestats to quickly see if it could improve performance, and to have minimal impact/run time (in case it's a prod server and a real-time issue).

    As you say, we don't know the environment, so better to take small, targeted steps rather than shotgun the entire thing with sp_updatestats which will update every statistic on the database that has a rowmodctr <> 0.
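    Those candidates can be inspected through the (deprecated, but still present in 2008 R2) sysindexes compatibility view; a rough sketch, keeping in mind rowmodctr is only approximate on 2005 and later:

    ```sql
    -- Statistics objects with outstanding row modifications
    SELECT OBJECT_NAME(id) AS object_name, name, rowmodctr
    FROM sys.sysindexes
    WHERE rowmodctr <> 0
      AND OBJECT_NAME(id) IS NOT NULL
    ORDER BY rowmodctr DESC;
    ```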

    Query tuning is good, but it takes time. The intention was just that if we can identify the bottleneck, fixing it would be simpler.

    --

    SQLBuddy

    But updating every statistic in the system doesn't identify the bottleneck. I mean, I wouldn't be at all shocked if they don't manually update statistics; few people do, and they should. But since the user is going after some relatively targeted issues (assuming this since they're talking about rebuilding an index, as opposed to all indexes), I'd prefer a more targeted approach from a "do no harm" standpoint. That's why I think Lowell's on the right track in the short term.


  • The quick and dirty way to find the bottleneck is sit down and pulse sys.sysprocesses filtered on the database while watching PerfMon and Task Manager - Performance and watch for the blocks while your user(s) are on the phone. When they hit a 'slow spot', find out what they were doing, what the blocks are, and go from there.

    It almost always comes down to index and query tuning however. Slow drives = too much data being pulled (usually). Suddenly slow = tipping point on an index. Intermittent = tuning to handle concurrency.

    Fragmentation and statistics can play a part, but if he's not familiar with basic troubleshooting techniques on a smaller database, I would highly doubt he's put column statistics into play that aren't index based. You go with the most likely causes until you rule them out, and running that before an index rebuild is fine. After hurts more than it helps.

    By the statement "the intention was just that if we can identify the bottleneck, fixing it would be simpler," I meant this:

    And check for any Blocking, Memory, CPU, Disk utilization on the server. Also check for any waits on the server.

    When someone comes and says the DB is slow, I would start with the above-mentioned checks (at a high level), ruling things out, and then get to individual queries. I would not start with query tuning.

    On the other hand, if someone complains about a long-running query or queries, then I start tuning right away.

    --

    SQLBuddy

    But updating every statistic in the system doesn't identify the bottleneck. I mean, I wouldn't be at all shocked if they don't manually update statistics; few people do, and they should. But since the user is going after some relatively targeted issues (assuming this since they're talking about rebuilding an index, as opposed to all indexes), I'd prefer a more targeted approach from a "do no harm" standpoint. That's why I think Lowell's on the right track in the short term.

    I think he is referring to all indexes.

    I first rebuilt all indexes, but that didn't reduce the fragmentation level. It is a small database though, 8 GB. What else should I look into? Thanks in advance.

    I know Update Stats is not a permanent fix. It's just trying to see if it can alleviate the problem at hand, because we can't run index rebuilds during the day on the prod server, and sometimes not even during weekdays. And adding indexes also has to go through the approval process and testing through the SDLC environments before it goes into production.

    I was just presenting my point of view. I know you are all Great Experts and I respect all your views.

    --

    SQLBuddy

  • I have index rebuilds twice a week and statistics updates four times a week. It looks like high CPU usage slows down database access, but how can I find out what is causing the CPU usage? Thanks. This is a SQL Server 2008 R2 server and it hosts many databases.

  • It goes back to what has already been said. Look to query the cache to see which queries in cache are using the most CPU and/or taking the longest. Or, for longer term stuff, start gathering metrics on query performance using extended events or trace events. Those can also be grouped by CPU so you can tell which ones are using the most and/or taking the longest.
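    As a sketch of querying the cache by CPU (worker time is reported in microseconds):

    ```sql
    -- Top cached statements by total CPU time
    SELECT TOP 10
        qs.total_worker_time/1000 AS total_cpu_ms,
        qs.execution_count,
        (qs.total_worker_time/qs.execution_count)/1000 AS avg_cpu_ms,
        SUBSTRING(qt.text, (qs.statement_start_offset/2)+1,
            ((CASE qs.statement_end_offset
                  WHEN -1 THEN DATALENGTH(qt.text)
                  ELSE qs.statement_end_offset
              END - qs.statement_start_offset)/2)+1) AS statement_text
    FROM sys.dm_exec_query_stats qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) qt
    ORDER BY qs.total_worker_time DESC;
    ```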

    After you identify which query or queries are using the resources, you need to figure out why that query is doing that. That involves looking at the T-SQL code and the execution plan to understand the choices being made by the optimizer so you can know what to do from there.


  • Also check whether parallelism is happening on the server. What are your MAXDOP settings?

    Best Regards,

    SQLBuddy

  • How should I check if Parallelism is happening? Thanks.

  • Grace09 (3/11/2014)


    How should I check if Parallelism is happening? Thanks.

    Check if parallelism is enabled: right-click on the SQL Server instance --> Properties --> Advanced --> Max Degree of Parallelism.

    Use this to check for requests currently running in parallel:

    SELECT
        r.session_id,
        r.request_id,
        MAX(ISNULL(t.exec_context_id, 0)) AS number_of_workers,
        r.sql_handle,
        r.statement_start_offset,
        r.statement_end_offset,
        r.plan_handle
    FROM sys.dm_exec_requests r
    INNER JOIN sys.dm_os_tasks t ON r.session_id = t.session_id
    INNER JOIN sys.dm_exec_sessions s ON r.session_id = s.session_id
    WHERE s.is_user_process = 0x1
    GROUP BY
        r.session_id, r.request_id,
        r.sql_handle, r.plan_handle,
        r.statement_start_offset, r.statement_end_offset
    HAVING MAX(ISNULL(t.exec_context_id, 0)) > 0
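    The MAXDOP setting itself can also be read with T-SQL instead of the GUI; a minimal sketch:

    ```sql
    -- 0 means use all available schedulers
    SELECT name, value_in_use
    FROM sys.configurations
    WHERE name = 'max degree of parallelism';
    ```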

    --

    SQLBuddy
