Does log size have any impact on the performance of a database?

  • Hi,

    I am using SQL Server 2005 and working on ASP.NET with C#.

    We insert nearly 10 lakh (1 million) records per day. Of course, we transfer the last two days' records to suitable tables every day.

    Hence we have 20 lakh (2 million) records in hand per day.

    So retrieving, inserting, or modifying records takes a long time.

    But if we create a fresh table and start inserting into it, it works fine; then, over a period of time, performance degrades again.

    Presumably because it is a new table and a new log.

    Kindly let me know what should be done to increase the performance of our SQL Server.

    We want to optimise the performance of our database.

    Regards

    Hema

  • BULK INSERT or bcp is the fastest way to get 1 million (10 lakh) records into the database with minimal logging.
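
    For example, a BULK INSERT along these lines (the table name, file path and options are only illustrative):

        -- Load a flat file into a staging table; the table lock
        -- helps the load qualify for minimal logging
        BULK INSERT dbo.emp_staging
        FROM 'D:\loads\emp.dat'
        WITH (
            FIELDTERMINATOR = ',',
            ROWTERMINATOR   = '\n',
            TABLOCK,
            BATCHSIZE       = 100000
        );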


    N 56°04'39.16"
    E 12°55'05.25"

  • I think I have not framed my question properly.

    We do bulk insert only, but now when checking reports from our web application the time taken is high; even a simple query such as SELECT TOP 100 * FROM emp takes a long time.

    What we did was rename the table and create a new table with the same name, emp.

    After that, all selects and updates were tremendously fast.

    But we can't do this forever. Any other solutions?

    Regards

    Hema

  • Did you check the condition of your indexes? Are they fragmented?
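
    For example, a query like this against sys.dm_db_index_physical_stats should show the fragmentation of each index (the database name is just a placeholder):

        SELECT OBJECT_NAME(ips.object_id)        AS table_name,
               i.name                            AS index_name,
               ips.avg_fragmentation_in_percent
        FROM sys.dm_db_index_physical_stats(DB_ID('YourDb'), NULL, NULL, NULL, 'LIMITED') AS ips
        JOIN sys.indexes AS i
          ON i.object_id = ips.object_id
         AND i.index_id  = ips.index_id
        WHERE ips.avg_fragmentation_in_percent > 30;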

    "Keep Trying"

  • If you insert 20 lakh (2 million) records, it can be faster to drop existing indexes before the insert and recreate them after the insert.
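
    Roughly along these lines (the table, column and index names are only examples):

        -- Drop the nonclustered index before the bulk load
        DROP INDEX IX_emp_loaddate ON dbo.emp;

        -- ... run the bulk insert of the 20 lakh rows here ...

        -- Recreate the index once the load has finished
        CREATE NONCLUSTERED INDEX IX_emp_loaddate ON dbo.emp (load_date);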


    N 56°04'39.16"
    E 12°55'05.25"

  • I would like to clarify my question once again.

    We have plenty (around 20 lakh) of records getting inserted into the table, and we also move records from the database to a backup database every night (using SSIS). We are inserting through bulk copy with a batch size of 100. After some days (roughly 10-15) the processing becomes slow even though we move the records. We thought it could be a problem with the log, so we shrank the log file, but the issue remains the same.

    Any help please?

    Regards

    Hema

  • Check your maintenance plans: have they run successfully? Does your maintenance plan include rebuilding of indexes? When you do big inserts or deletes you need to reorganise the data within the table, which means rebuilding your indexes; this will make querying your tables much faster.

    You should shrink your LDF (log) file!

    Is your log file limited in growth, or is it unlimited? It should be unlimited to be on the safer side; however, your maintenance should include shrinking of your LDF and MDF files. Maintenance should generally be carried out on a weekly basis.
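
    As a rough sketch of that kind of weekly maintenance (the database, table and logical file names are only examples):

        -- Rebuild all indexes on a table
        ALTER INDEX ALL ON dbo.emp REBUILD;

        -- Shrink the transaction log file to roughly 512 MB
        USE YourDb;
        DBCC SHRINKFILE (YourDb_log, 512);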

    Are your LDF and MDF files on separate drives? If not, you should change that to make I/O faster.

    Your tempdb should also be located on a drive separate from other databases, maybe your C: drive.

  • Thanks all for the reply.

    In fact we don't have any indexes at all. Secondly, both the LDF and MDF files are on the same drive.

    In such circumstances what should we do?

  • No indexes :w00t: Well, there's your problem right there. Firstly, create indexes on all your tables.

    Do you know how to?
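
    If not, the basic shape is something like this (the table and column names are just examples); a heap with no clustered index is usually the first thing to fix:

        -- Clustered index to give the table an ordered structure
        CREATE CLUSTERED INDEX CIX_emp_id ON dbo.emp (emp_id);

        -- Nonclustered index to support the reporting queries
        CREATE NONCLUSTERED INDEX IX_emp_loaddate ON dbo.emp (load_date);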

    How many drives does your server have?

  • Neel, thanks for the reply, but I can't create indexes; otherwise it would be too slow.

    Moreover, I will have to delete and recreate the indexes whenever I have to do modifications.

    There is, however, a point I would like to make.

    I did delete the log file, but even then the performance did not improve.

    So I created a similar table, and then it worked properly.

    Also, if I restart the SQL Server service, it works.

    Any other suggestions besides creating indexes?

  • What version of SQL Server are you using? Also, is the data you are importing the same/fixed, i.e. does the structure of the data change? If not, have you thought about using SSIS, as you can then automate the load at a time when your DB isn't being used as heavily?

  • How long does creating an index on 2 million rows take you? It would seem that trying to query from a 2 million row unorganized heap would almost always be slow, but I agree with the others here that you should at least consider adding a non-clustered index after your data load.

    If you're looking to speed up the data load itself, you might want to consider having the transaction log file on a separate disk from the data file.
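
    If the drives are available, moving the log is a one-off change along these lines (the database, logical file name and path are just examples; the physical file has to be moved while the database is offline):

        ALTER DATABASE YourDb
        MODIFY FILE (NAME = YourDb_log, FILENAME = 'E:\SQLLogs\YourDb_log.ldf');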

  • First of all, many thanks for the reply.

    I am inserting bulk records every minute, in fact even more frequently than that.

    I won't be inserting a million records per minute, but I am inserting thousands of records.

    So creating indexes every time, disabling them, and enabling them again will itself create a huge load on the server.

    Are there any other options besides that?

  • OK, so we have new information; I didn't realize before that you were doing bulk loads every minute. How are your disks set up, RAID level etc.? How is your I/O performing? There could be an I/O problem preventing the system from performing well. Do you ever delete records from this database? You mentioned before that you only keep the last 2 days of data. Deletes could become very slow on this kind of setup.

  • Have you considered partitioned tables? Depending on what you're inserting it may be possible for you to create a new table, insert into that and then switch that table with an empty partition. That switch is a metadata operation, so it's fast.

    It means that your inserts won't affect the queries that are occurring; the table that users are querying can be indexed; and, because you'd always be inserting into an empty table, it will qualify as a bulk operation, hence minimal logging if you're in bulk-logged recovery.

    If you could give a bit more info on what kind of data you're loading every few minutes, and the structure of the table in question, we could see if this is appropriate for partitioning.

    Partitioning requires the Enterprise Edition of SQL Server.
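
    A rough sketch of the switch step described above, assuming a partition function and scheme already exist (all object names and the partition number are only examples):

        -- 1. Bulk load the new rows into an empty staging table that has the same
        --    structure and filegroup as the target partition, plus a CHECK constraint
        --    matching the partition boundary.

        -- 2. Switch the staging table into the empty partition of the main table;
        --    this is a metadata-only change, so it is effectively instant.
        ALTER TABLE dbo.emp_staging
        SWITCH TO dbo.emp PARTITION 5;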

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
