slow performance

  • Hello All,

    I have a 14 GB SQL Server 2005 database with some of the tables holding around 6-10 million records, and accessing these tables has become very slow. What are the ways to find out what is slowing down the performance? I have indexes, they have been rebuilt, and the statistics are updated. One option that is turned on, and I don't know if it is affecting the database, is row-level snapshot (row versioning).

    Can someone help?

    Thanks

    Shri

    Did it get slow overnight, or has it been slow ever since it was created?

    There are many things that can cause slowness.

    Is DML slow, or are basic read operations slow?

    Fragmentation on the disk holding the data and log files can also be the issue, but it is ignored most of the time.

    Can you post the server specs in detail, and the type of operations that are slow?

    Thanks!!

    The_SQL_DBA
    MCTS

    "Quality is never an accident; it is always the result of high intention, sincere effort, intelligent direction and skillful execution; it represents the wise choice of many alternatives."

  • You can create a new filegroup and move the tables whose access is slower onto it.

    Are all the data and log files located on the same disk?

  • On top of the above questions, can you post the queries you are using to access your tables? Any filters? Are you accessing your tables through stored procedures or ad hoc queries?

    Can you include the execution plans and analyze them?
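
    If it helps, one quick way to capture the actual plan in text form on SQL 2005 is SET STATISTICS PROFILE; the procedure name and parameter values below are just placeholders for whatever statement is slow:

    SET STATISTICS PROFILE ON;
    EXEC dbo.your_slow_proc @param1 = '2009-02-01', @param2 = 1;  -- placeholder call
    SET STATISTICS PROFILE OFF;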

  • Yes, all the data and log files are on the same disk. How do I create a filegroup and move just a few tables to it? Also, will I have to change the code that accesses these tables? (All the code is coming from a Java application.)

    Here is the query (SP):

    ALTER PROCEDURE [dbo].[testqueryproc]
        -- Add the parameters for the stored procedure here
        @p0 as datetime,
        @p1 as datetime,
        @p2 as datetime,
        @childid as bigint,
        @type as int,
        @syear as int,
        @period as int,
        @catseq as int,
        @itemno as varchar
    AS
    BEGIN
        select coritems2_.INSTID as INSTID27_, coritems2_.CRETS as CRETS27_, coritems2_.MODTS as MODTS27_,
               coritems2_.MODINSTID as MODINSTID27_, coritems2_.ITEMNO as ITEMNO27_, coritems2_.ITEMVALUE as ITEMVALUE27_,
               coritems2_.CORASSESSMENTINSTID as CORASSES7_27_, coritems2_.CATEGORYSEQUENCE as CATEGORY8_27_,
               coritems2_.ITEMSEQUENCE as ITEMSEQU9_27_
        from COR_ASSESSMENTS corassessm0_
        inner join ASSESSMENTS assessment1_ on corassessm0_.ASSESSMENTINSTID = assessment1_.INSTID
        inner join COR_ITEMS coritems2_ on corassessm0_.INSTID = coritems2_.CORASSESSMENTINSTID
        where corassessm0_.MODTS = @p0
          and assessment1_.MODTS = @p1
          and coritems2_.MODTS = @p2
          and assessment1_.CHILDINSTID = @childid
          and assessment1_.TYPE = @type
          and assessment1_.SCHOOLYEAR = @syear
          and corassessm0_.PERIOD = @period
          and coritems2_.CATEGORYSEQUENCE = @catseq
          and coritems2_.ITEMNO = @itemno
    END

    I am sending the execution plan as a zip file.

    Any help is appreciated.

    Thank You

    Shri

  • You do not have to change the code.

    1. If you have multiple disks, you can put the log file on one disk and the data on another:

    2. Detach the database.

    3. Move the log file to another disk.

    4. Attach the database.

    Another test:

    1. Create a new file for the database.

    2. Copy the tables (structure and data) to the new file.

    3. Delete the old tables.

    This process is long. A T-SQL sketch of both approaches follows below.
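
    A rough sketch of both approaches (the database name, file names, paths, and the clustered index name are all made up for illustration; adjust to your environment):

    -- Approach 1: move the log file to another disk
    EXEC sp_detach_db 'MyDB';
    -- ...move MyDB_log.ldf to the other disk in Windows, then:
    EXEC sp_attach_db 'MyDB',
        'D:\Data\MyDB.mdf',
        'E:\Log\MyDB_log.ldf';

    -- Approach 2: add a filegroup and file, then move a table onto it by
    -- rebuilding its clustered index there; no application code changes
    ALTER DATABASE MyDB ADD FILEGROUP FG_BIG;
    ALTER DATABASE MyDB ADD FILE
        (NAME = 'MyDB_big', FILENAME = 'E:\Data\MyDB_big.ndf', SIZE = 2GB)
        TO FILEGROUP FG_BIG;
    CREATE UNIQUE CLUSTERED INDEX PK_COR_ITEMS  -- hypothetical index name
        ON dbo.COR_ITEMS (INSTID)
        WITH (DROP_EXISTING = ON)
        ON FG_BIG;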

  • Is it only the posted query that is slow, or all queries?

    Are you able to post the Actual Execution plan rather than the estimated one?

  • Yes, it is just this one query. I am attaching the actual execution plan. When I run it in SSMS it is not so bad, but when I run it through the application and watch it in Profiler, it shows as very slow. I will attach the Profiler output too, in case it is helpful.

    Thanks

    Shri

  • If the indexes are defragmented and statistics are up to date, then try passing the Profiler workload file to the Database Engine Tuning Advisor (DTA) and see what the advisor says. Is this happening at any specific time, or do you find this code slow whenever you execute it, regardless of time?
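
    To double-check the defrag and statistics claims first, the SQL 2005 DMVs can show both; the table name below is just an example:

    -- Index fragmentation for one table
    SELECT i.name, ps.avg_fragmentation_in_percent, ps.page_count
    FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.COR_ITEMS'), NULL, NULL, 'LIMITED') ps
    JOIN sys.indexes i ON i.object_id = ps.object_id AND i.index_id = ps.index_id;

    -- When each statistic on the table was last updated
    SELECT name, STATS_DATE(object_id, stats_id) AS last_updated
    FROM sys.stats
    WHERE object_id = OBJECT_ID('dbo.COR_ITEMS');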

    MJ

  • A specific procedure is slow, but it runs fast when you run the statement in SSMS?

    Sounds like parameter sniffing to me.

    When the code is compiled into a procedure, the compiler makes a best guess at the parameter values when it builds an execution plan. When that execution plan is way off, you get the performance whack you are seeing.

    As I remember, there are two ways to fix this: force the procedure to recompile every time,

    i.e. CREATE PROCEDURE whatever (...) WITH RECOMPILE,

    or reassign the passed-in parameters to local variables inside the proc, and have the proc use just the local variables.

    lame example:

    ALTER PROCEDURE [dbo].[testqueryproc]
        -- Add the parameters for the stored procedure here
        @pp0 as datetime,
        @pp1 as datetime,
        @pp2 as datetime,
        @pchildid as bigint,
        @ptype as int,
        @psyear as int,
        @pperiod as int,
        @pcatseq as int,
        @pitemno as varchar
    AS
    BEGIN
        DECLARE @p0 as datetime,
                @p1 as datetime,
                @p2 as datetime,
                @childid as bigint,
                @type as int,
                @syear as int,
                @period as int,
                @catseq as int,
                @itemno as varchar

        -- assign to local variables to prevent the parameter sniffing
        SET @p0 = @pp0
        SET @p1 = @pp1
        SET @p2 = @pp2
        SET @childid = @pchildid
        SET @type = @ptype
        SET @syear = @psyear
        SET @period = @pperiod
        SET @catseq = @pcatseq
        SET @itemno = @pitemno

        -- ... (the original select, now using the local variables)
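
    And the first fix (forcing a recompile on every execution) applied to the posted proc is just a one-line change in the header; a sketch, with the body elided:

    ALTER PROCEDURE [dbo].[testqueryproc]
        @p0 as datetime,
        @p1 as datetime,
        @p2 as datetime,
        @childid as bigint,
        @type as int,
        @syear as int,
        @period as int,
        @catseq as int,
        @itemno as varchar
    WITH RECOMPILE  -- plan is rebuilt for the actual parameters on each call
    AS
    BEGIN
        -- ... original select, unchanged
    END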

    Lowell


    --Help us help you! If you post a question, make sure you include a CREATE TABLE... statement and an INSERT INTO... statement to give the volunteers here representative data. With your description of the problem, we can provide a tested, verifiable solution to your question! Asking the question the right way gets you a tested answer the fastest way possible!

  • What does parameter sniffing mean?

    Could it really drag performance down that much, and does your method prevent that fall-off?

    Then shouldn't every stored procedure be written that way?

    I have lately created one, with the help of Chris M., that uses a CASE statement in the WHERE clause, and he already warned me that performance "could" fall back, but he did not explain why that could happen.

    Are we talking about the same issue here?

    Thanks in advance for any feedback,

    Wkr

    Eddy

  • Did you check the execution plans to see whether the data access is efficient? Make sure you are not doing scans or bookmark/key lookups on the table; a covering index is the usual fix for lookups (see the sketch below).

    It also depends on how much of the SQL coding guidelines you followed while writing the code.

    Thanks,

    Amol

    Amol Naik
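
    For illustration, a covering index that could remove key lookups for the posted query's COR_ITEMS predicates; the name is hypothetical, and the key/include columns are simply read off the posted WHERE clause and SELECT list, so verify against the actual plan first:

    CREATE NONCLUSTERED INDEX IX_COR_ITEMS_covering
        ON dbo.COR_ITEMS (CORASSESSMENTINSTID, MODTS, CATEGORYSEQUENCE, ITEMNO)
        INCLUDE (INSTID, CRETS, MODINSTID, ITEMVALUE, ITEMSEQUENCE);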

  • Here's a detailed explanation I saved from an MS blog a while back:

    quote:

    --------------------------------------------------------------------------------

    The reason for the performance difference stems from a feature called "parameter sniffing". Consider a stored proc defined as follows:

        CREATE PROC proc1 @p1 int AS
        SELECT * FROM table1 WHERE c1 = @p1
        GO

    Keep in mind that the server has to compile a complete execution plan for the proc before the proc begins to execute. In 6.5, at compile time SQL didn't know what the value of @p1 was, so it had to make a lot of guesses when compiling a plan. Suppose all of the actual parameter values for "@p1 int" that a user ever passed into this stored proc were unique integers that were greater than 0, but suppose 40% of the [c1] values in [table1] were, in fact, 0. SQL would use the average density of the column to estimate the number of rows that this predicate would return; this would be an overestimate, and SQL might choose a table scan over an index seek based on the rowcount estimates. A table scan would be the best plan if the parameter value was 0, but unfortunately it happens that users will never or rarely pass @p1 = 0, so performance of the stored proc for more typical parameters suffers.

    In SQL 7.0 or 2000, suppose you executed this proc for the first time (when the sp plan is not in cache) with the command "EXEC proc1 @p1 = 10". Parameter sniffing allows SQL to insert the known value of parameter @p1 into the query at compile time before a plan for the query is generated. Because SQL knows that the value of @p1 is not 0, it can compile a plan that is tailored to the class of parameters that is actually passed into the proc, so for example it might select an index seek instead of a table scan based on the smaller estimated rowcount -- this is a good thing if most of the time 0 is not the value passed as @p1. Generally speaking, this feature allows more efficient stored proc execution plans, but a key requirement for everything to work as expected is that the parameter values used for compilation be "typical".

    In your case, the problem is that you have default NULL values for your parameters ("@Today DATETIME = NULL, ...") that are not typical, because the parameter values are changed inside the stored proc before they are used -- as a result, NULL will never actually be used to search the column. If the first execution of this stored proc doesn't pass in an explicit value for the @Today parameter, SQL believes that its value will be NULL. When SQL compiles the plan for this sp it substitutes NULL for each occurrence of @Today that is embedded within a query. Unfortunately, after execution begins, the first thing the stored proc does is change @Today to a non-NULL value if it is found to be NULL, but SQL doesn't know about this at compile time. Because NULL is a very atypical parameter value, the plan that SQL generates may not be a good one for the new value of the parameter that is assigned at execution time.

    So, the bottom line is that if you assign defaults to your sp parameters and later use those same parameters in a query, the defaults should be "typical" because they will be used during plan generation. If you must use defaults and business logic dictates that they be atypical (as may be the case here if app modifications are not an option), there are two possible solutions if you determine that the substitution of atypical parameter values is causing bad plans:

    1. "Disable" parameter sniffing by using local DECLARE'd variables that you SET equal to the parameters inside the stored proc, and use the local variables instead of the offending parameters in the queries. This is the solution that you found yourself. SQL can't use parameter sniffing in this case, so it must make some guesses, but in this case the guess based on average column density is better than the plan based on a specific but "wrong" parameter value (NULL).

    2. Nest the affected queries somehow so that they run within a different context that will require a distinct execution plan. There are several possibilities here, for example:

    a. Put the affected queries in a different "child" stored proc. If you execute that stored proc within this one *after* the parameter @Today has been changed to its final value, parameter sniffing will suddenly become your friend, because the value SQL uses to compile the queries inside the child stored proc is the actual value that will be used in the query.

    b. Use sp_executesql to execute the affected queries. The plan won't be generated until the sp_executesql stmt actually runs, which is of course after the parameter values have been changed.

    c. Use dynamic SQL ("EXEC (@sql)") to execute the affected queries. An equivalent approach would be to put the query in a child stored proc just like 2.a, but execute it within the parent proc with EXEC WITH RECOMPILE.

    Option #1 seems to have worked well for you in this case, although sometimes one of the options in #2 is a preferable choice. Here are some guidelines, although when you're dealing with something as complicated as the query optimizer, experimentation is often the best approach:

    - If you have only one "class" (defined as values that have similar density in the table) of actual parameter value that is used within a query (even if there are other classes of data in the base table that are never or rarely searched on), 2.a or 2.b is probably the best option. This is because these options permit the actual parameter values to be used during compilation, which should result in the most efficient query plan for that class of parameter.

    - If you have multiple "classes" of parameter value (for example, for the column being searched, half the table data is NULL, the other half are unique integers, and you may do searches on either class), 2.c can be effective. The downside is that a new plan for the query must be compiled on each execution, but the upside is that the plan will always be tailored to the parameter value being used for that particular execution. This is best when there is no single execution plan that provides acceptable execution time for all classes of parameters.

    HTH -

    Bart

    ------------

    Bart Duncan

    Microsoft SQL Server Support

    --------------------------------------------------------------------------------

    Lowell


    --Help us help you! If you post a question, make sure you include a CREATE TABLE... statement and an INSERT INTO... statement to give the volunteers here representative data. With your description of the problem, we can provide a tested, verifiable solution to your question! Asking the question the right way gets you a tested answer the fastest way possible!
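
    To make option 2.b from the quote concrete, here's a minimal sketch (the proc, table, and column names are invented for illustration): the inner statement is compiled only when sp_executesql actually runs, i.e. after @Today has been given its real, typical value.

    CREATE PROC dbo.proc_with_default @Today datetime = NULL
    AS
    BEGIN
        -- replace the atypical default before any query plan sees it
        IF @Today IS NULL
            SET @Today = GETDATE();

        -- the plan for this statement is built using the current value of @d
        EXEC sp_executesql
            N'SELECT * FROM dbo.table1 WHERE c1 = @d',
            N'@d datetime',
            @d = @Today;
    END
    GO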

  • shri_sastry (2/5/2009)


    Yes, it is just this one query. I am attaching the actual execution plan. When I run it in SSMS it is not so bad, but when I run it through the application and watch it in Profiler, it shows as very slow. I will attach the Profiler output too, in case it is helpful.

    Thanks

    Shri

    If a large number of records is returned by the SQL statement, the "Index Seek" (as shown in the execution plan) can be very slow. You might want to add hints to force a hash join or merge join, or sometimes even a full table scan. From my own experience, if a SQL statement returns more than 10% of the records in a table, a full table scan is faster than using the indexes.

    A SQL statement might run faster if you run it outside of the procedure. Very likely, it is due to different execution plans being used in the two cases.
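
    For example, forcing the join strategy on the posted query would look like this (a sketch only; the sample @childid value is made up, and hints can easily make things worse, so compare plans before and after):

    select coritems2_.INSTID as INSTID27_, coritems2_.ITEMVALUE as ITEMVALUE27_
    from COR_ASSESSMENTS corassessm0_
    inner join ASSESSMENTS assessment1_ on corassessm0_.ASSESSMENTINSTID = assessment1_.INSTID
    inner join COR_ITEMS coritems2_ on corassessm0_.INSTID = coritems2_.CORASSESSMENTINSTID
    where assessment1_.CHILDINSTID = 12345  -- sample value
    OPTION (HASH JOIN);  -- forces hash joins; WITH (INDEX(0)) on a table reference would force a scan of that table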

    http://www.speedydb.com

    Charles Zhang

  • If parameter sniffing is the real problem, then go through the following link and make sure you understand it...

    It will help you get out of this...

    http://omnibuzz-sql.blogspot.com/2006/11/parameter-sniffing-stored-procedures.html
