Performance issue with SQL Server 2000

  • I have a 3-4 GB database, and within that database are 3 very large tables, each containing several million records and around 50+ columns. (I didn't design it, and I can't change it.)

    There are a number of applications that access the database, including a few external devices which connect to the database through applications set up as services. When there is a time-out in communication between devices and the services, they sound an audible alarm.

    The problem I am having is that when these services attempt to write to one of these large tables and it takes too long, they stop communicating with the devices, and when it gets really bad, the services stop completely and have to be manually restarted.

    This usually happens when another application is accessing a sizeable amount of data from one of those 3 tables. That happens several times per day: we get the audible alarm, and occasionally have to restart the services for the devices.

    I took the database server down and re-indexed the tables, which made things even worse for about a month and a half, and then things steadily got better and better.

    However, we also clean out old data from the tables; currently there is only 6 months' worth of data in them, and I would like to be able to keep more. Just as a precaution, I deleted down to 4 months, and this has made the problem worse.

    After re-indexing, once things started getting better, we only very seldom heard the audible alarm, and even then only for a second before communication was re-established. We were back up to 7 months of data by the time it began happening once or twice a week. Since I cut the data down to 4 months' worth, the audible alarm is back to happening several times daily, although the services are luckily not stopping yet.

    Any idea why lowering the amount of data in those tables would cause applications to take longer to access the data? And is there a solution? Or a way in which I can avoid deleting old data out of the database? I'd really like to keep a year.

  • After you cut down the data, did you update statistics?

    It might be that the query optimiser isn't picking the best plan for you due to out-of-date stats.

    I'd also suggest some of the following to cut down on readers blocking writers:

    consider using read uncommitted isolation level

    use WITH (NOLOCK) hints on reads where possible
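
    For illustration, a rough sketch of both options — the table and column names here (dbo.AlarmEvents, EventTime, etc.) are made-up placeholders, not the actual schema:

        -- either set the isolation level for the whole connection...
        SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED

        -- ...or hint individual reads (NOLOCK means dirty reads, so only use it
        -- where seeing uncommitted or in-flight rows is acceptable)
        SELECT EventID, PanelID, EventTime
        FROM dbo.AlarmEvents WITH (NOLOCK)
        WHERE EventTime >= '20080101'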

    MVDBA

  • consider using read uncommitted isolation level

    use WITH (NOLOCK) hints on reads where possible

    Can't do that if it would require modifying the applications; no stored procedures are used, and the applications were developed by someone else.

    Also, when you go from 6 million records to 5 million records, I wouldn't think the query optimization would actually change; it's still a huge table. The whole thing was poorly designed. I'm no MS SQL expert myself, and I can't say I could do much better, but I wouldn't want to try either.

    And one last question: would updating statistics on a table cause any issues with availability? This is a VERY high availability server, with a backup (using Double Take) and a second backup set up for manual failover. But I don't take down the primary server unless it's absolutely 100% necessary, because even having the system down for the 2 minutes it takes to get the failover up and running and everyone logged back in is a bad thing.

  • If you don't have auto update statistics on, then it's advisable to update statistics on a scheduled basis. My experience has been that UPDATE STATISTICS runs faster than DBCC DBREINDEX. Also, are you partitioning the data on a regular basis (moving old data to an archive table)? It is recommended to run UPDATE STATISTICS whenever a large amount of data in an indexed column has been added, changed, or removed. Hope this helps.
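
    If it helps, an archive job along these lines is one way to do the "move old data out" part — all the object and column names (dbo.AlarmEvents, dbo.AlarmEvents_Archive, EventTime) are placeholders for whatever the real schema uses:

        DECLARE @cutoff datetime
        SET @cutoff = DATEADD(month, -6, GETDATE())

        BEGIN TRAN

        -- copy the old rows into the archive table first...
        INSERT INTO dbo.AlarmEvents_Archive (EventID, PanelID, EventTime)
        SELECT EventID, PanelID, EventTime
        FROM dbo.AlarmEvents
        WHERE EventTime < @cutoff

        -- ...then remove them from the live table
        DELETE FROM dbo.AlarmEvents
        WHERE EventTime < @cutoff

        COMMIT TRAN

        -- refresh statistics after a large delete
        UPDATE STATISTICS dbo.AlarmEvents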

    Thanks

    Sreejith

  • Well, I just checked and auto-update is ON for every index in the relevant tables.

    At one point I did use the Index Tuning Wizard, and it didn't suggest using a different index than what is already being used.

    I do have to state that there are so many queries going off, all under the same username, that tracing to determine exactly what query is used in a lot of the applications is difficult if not impossible. So I can't test and tune all of the relevant queries, unless someone can suggest a better way to determine the most time-consuming and problematic queries. I'm not familiar enough with traces to do more than a basic trace; I can filter some things out, but I'm not sure what all of the filter options are for, etc.

    I do know that the most problematic queries are SELECTS from the 3 problem tables, so I know what type of Query and what table they are aimed at. But that's the extent of my knowledge about the individual queries.

  • See if you can put the 3 tables in different filegroups across different drives. And like the other posters mentioned, do an UPDATE STATISTICS (without a full scan) and see if it helps.
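
    Roughly what that looks like on SQL 2000 — treat AlarmDB, FG_Alarms, the file path, and the index/table names as placeholders:

        -- add a filegroup with a file on another drive
        ALTER DATABASE AlarmDB ADD FILEGROUP FG_Alarms
        ALTER DATABASE AlarmDB ADD FILE
            (NAME = 'AlarmData2', FILENAME = 'E:\SQLData\AlarmData2.ndf', SIZE = 1024MB)
            TO FILEGROUP FG_Alarms

        -- rebuilding the clustered index with DROP_EXISTING moves the table onto the new filegroup
        CREATE CLUSTERED INDEX IX_AlarmEvents_EventTime
            ON dbo.AlarmEvents (EventTime)
            WITH DROP_EXISTING
            ON FG_Alarms

        -- sampled (not FULLSCAN) statistics update
        UPDATE STATISTICS dbo.AlarmEvents WITH SAMPLE 20 PERCENT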

    ******************
    Dinakar Nethi
    Life is short. Enjoy it.
    ******************

  • To find the most time-consuming queries you can set up a trace, save it to a table, and then do a SELECT from the trace table ORDER BY Duration DESC.
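
    Something along these lines — if Profiler saved straight to a table you can skip the first step, and the table/file names below are placeholders:

        -- load a saved trace file into a table (SQL 2000 needs the :: prefix on the function)
        SELECT * INTO dbo.SlowQueries
        FROM ::fn_trace_gettable('C:\Traces\slow.trc', DEFAULT)

        -- heaviest statements first
        SELECT TOP 20 TextData, Duration, Reads, Writes, StartTime
        FROM dbo.SlowQueries
        ORDER BY Duration DESC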

    Do you have any clustered index on the big tables? Are they partitioned?

    Can you modify any of the queries that are currently being executed in the DB?

  • There's often a lag in stats update with large tables, even if autoupdate is on. I've seen it a few times on my 50+ million row tables.

    Do an update stats, check your index fragmentation.
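
    On SQL 2000 that check would look something like this (the table name is a placeholder):

        -- fragmentation report for all indexes on the table
        DBCC SHOWCONTIG ('AlarmEvents') WITH TABLERESULTS, ALL_INDEXES

        -- and a straight stats refresh
        UPDATE STATISTICS dbo.AlarmEvents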

    If you still have problems, post your tables and index structure and queries and we can take a look.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass

  • Just an idea:

    If you have a large table with a Clustered index on anything but an identity column, performance can really suffer. This is especially true when INSERT statements are involved.

    If you have an identity column, make that your primary key, and make it Clustered. 

    If you don't have an identity column to work with, then make sure none of the table's indices are Clustered and see if that helps.
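
    (For reference only — and note this advice is disputed a couple of posts down — if the table did have an identity column, say AlarmID on a hypothetical dbo.AlarmEvents, making it the clustered primary key would look roughly like this:)

        -- hypothetical: AlarmID is assumed to be an existing IDENTITY column
        ALTER TABLE dbo.AlarmEvents
            ADD CONSTRAINT PK_AlarmEvents PRIMARY KEY CLUSTERED (AlarmID)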

  • The index and stats suggestions are all good ideas, but your original post says the applications are writing data. Removing data hasn't helped much either, so I'm guessing reading data isn't the problem.

    Have you looked for blocking or to see if you are I/O bound?
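
    A quick way to spot blocking on SQL 2000 is sp_who2, or something like:

        -- sessions that are currently blocked, and which spid is blocking them
        SELECT spid, blocked, waittime, lastwaittype, cmd
        FROM master.dbo.sysprocesses
        WHERE blocked <> 0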

    Just a thought.

  • I'd love to know where the amazing advice about only having a clustered index on an identity comes from!!! God help us!!

    I would hope that each table has a clustered index and that you don't have any silly fill factors set?

    Do you actually rebuild your indexes? Updating stats is good, but there is also a need to physically rebuild indexes; without a clustered index you can't remove leaf-level fragmentation from the tables. Next, hopefully you don't have auto shrink or auto close on, and you're not issuing shrink commands against your mdf file? So, after major data changes you should rebuild indexes, update stats, and for good measure issue a DBCC UPDATEUSAGE on the database. Make sure you don't have a bad case of NTFS fragmentation either.
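
    As a rough sketch of that post-purge maintenance pass (the database and table names are placeholders):

        -- rebuild every index on the table (the empty string means all indexes), 90% fill factor
        DBCC DBREINDEX ('dbo.AlarmEvents', '', 90)

        -- full-scan stats refresh
        UPDATE STATISTICS dbo.AlarmEvents WITH FULLSCAN

        -- correct any row/page count inaccuracies in the system tables
        DBCC UPDATEUSAGE ('AlarmDB', 'AlarmEvents')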

    You can use Profiler to capture the SQL statements hitting the database, then save the trace to a table and sort/group by SQL statement to find the most often run commands. Or get a demo of Terratrax or Diagnostic Manager and let the tool do the work for you (and then maybe buy one of them). If you're stuck, I'd really suggest getting a tool eval and seeing what it can show you.


    The GrumpyOldDBA
    www.grumpyolddba.co.uk
    http://sqlblogcasts.com/blogs/grumpyolddba/

  • If the writes are causing page splits, then recreate the clustered index with a 60-70% fill factor. This way you can reduce page splits.
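
    On SQL 2000 that could be done with DBCC DBREINDEX — the table and index names below are placeholders:

        -- rebuild just the clustered index, leaving ~30% free space per page for new rows
        DBCC DBREINDEX ('dbo.AlarmEvents', 'IX_AlarmEvents_Clustered', 70)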

    thks,

    Sats

  • Well, the devices have stopped buzzing at me now after several days...

    I rebuilt the indexes one time; it took several hours, and even with DoubleTake, failing over and back is a huge pain, so I'll avoid that if at all possible. It also took a few weeks for the database to really start acting like it had a performance gain. My guess is that it had to rebuild statistics, because I did not do that manually (auto-update is on).

    I believe auto-shrink IS in fact on. No automated fill factor is being used.

    Now personally, I would rather not have to delete ANY data. Archiving is also a pain, but I manage. Our server is 2 GHz with 2 GB of RAM and a 160 GB hard drive. The only things running on it are our primary DB application, MS SQL Server, and Symantec Corporate Edition.

    All of the large tables have clustered indexes, and I've also added a couple of indexes myself on what seemed to be some problematic columns that were constantly showing up in WHERE clauses (date/time mostly). It at least seemed to speed things up at the time.

    The 3 tables in question are constantly the target of Insert/Select statements, updates are very rare, and deletes only happen when I do a delete to cut down on the size of the table to help performance.

    If it's any help, this is an alarm monitoring central station, and the 3 tables causing issues are the ones that collect all the incoming alarm data from alarm panels. The alarm receivers are constantly writing data to the tables, the alarm processing application is constantly pulling the same information out of the tables and displaying it on the screen, and then inserting new data regarding how the alarm was handled. (Insert because it's inserted into a different one of the 3 tables). And then there is reporting, which has a ton of auto-generating reports that hit the tables off and on for several hours every morning.

    I would think our server would/should be able to handle a table with 10 million records containing data in 50+ columns without causing an issue.

  • If auto-shrink is on, switch it off. Auto-shrink should never be on for a production database.

    Other than the issue that you don't know when it's going to shrink, and it may choose a very busy time, there's also the issue that a shrink operation may badly fragment indexes.
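
    Switching it off is a one-liner (AlarmDB being a placeholder for the real database name):

        -- check the current setting (1 = on), then turn it off
        SELECT DATABASEPROPERTYEX('AlarmDB', 'IsAutoShrink')

        ALTER DATABASE AlarmDB SET AUTO_SHRINK OFF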

    Can you post the schema of the tables and the slow queries?

    I'd suggest running perfmon for a day to see if you have any hardware bottlenecks. Check for high Pages/sec (memory bottleneck), high CPU usage or a high processor queue length (CPU bottleneck), and a high disk queue or high sec/read or sec/write (disk bottleneck). You can also check the SQL counters Buffer Cache Hit Ratio and Page Life Expectancy; low values there also indicate insufficient memory.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass

  • I'd suggest your hardware is insufficient for your needs, and if you're actually using a single disk you will be I/O bound without doubt! You might want to consider turning off auto update and auto create stats and handling it yourself; believe me, an auto update can seriously degrade performance if it occurs at the wrong time.
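
    If you go that route, the settings and a manual refresh look roughly like this (AlarmDB is a placeholder):

        ALTER DATABASE AlarmDB SET AUTO_UPDATE_STATISTICS OFF
        ALTER DATABASE AlarmDB SET AUTO_CREATE_STATISTICS OFF

        -- then refresh stats yourself from a scheduled job, e.g. nightly
        EXEC sp_updatestats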

    As GilaMonster says, check your perfmon counters on your disk i/o - I'd figure that was your problem.

    The GrumpyOldDBA
    www.grumpyolddba.co.uk
    http://sqlblogcasts.com/blogs/grumpyolddba/
