Does a large transaction log affect performance?

  • Our database is about 100 MB, and the transaction log was 9 GB. After complaints that the server was slow, I realized that the transaction log was huge.

    Does the transaction log affect performance? The front end is ASP, and users complained of busy pages or "The page cannot be displayed" errors.

    thanks in advance

  • By "large" I take it you mean a log full of transactions, and not a log consisting mainly of empty or unused space.

    A large t-log in itself does not cause performance degradation. However, a full transaction log may be the cause of actions that can affect performance. Sooner or later the transactions are committed, and that will cause activity. Or there may be a replication process running or scheduled which will cause activity, and so on.

    However, a DB of 100 MB and a log of 9 GB would surely draw my attention and cause me to investigate why that is.
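
    To tell the two cases apart, one option (assuming SQL Server 2000, which this thread appears to concern) is DBCC SQLPERF(LOGSPACE), which reports the size of each database's log and the percentage actually in use:

    ```sql
    -- Report log file size and percent used for every database.
    -- A 9 GB log that is only a few percent used is mostly free space;
    -- a log near 100% used really does hold months of transactions.
    DBCC SQLPERF(LOGSPACE);
    ```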

    Hth.

    Greetz,
    Hans Brouwer

  • Actually, the transaction log was never truncated, so it contained months of transactions. I figured that having a large transaction log would slow the system down because it had to constantly append to it while it was so large.
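
    As a hedged sketch (SQL Server 2000 syntax; the database name MyDb and the logical file name MyDb_Log are placeholders), the inactive part of the log can be cleared and the file shrunk back down:

    ```sql
    -- Back up the log, or -- if you do not need point-in-time recovery --
    -- discard it. TRUNCATE_ONLY throws the log records away and breaks
    -- the log backup chain, so take a full backup afterwards.
    BACKUP LOG MyDb WITH TRUNCATE_ONLY;

    -- Then shrink the physical log file back to a sensible size (in MB).
    USE MyDb;
    DBCC SHRINKFILE (MyDb_Log, 500);
    ```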

  • Check your recovery model. I think it will be the Full recovery model. If you change it to Simple, the t-log will not fill up much, because it will be truncated regularly. Be aware of the consequences.

    Having said that, if you have not done so, think about a backup and recovery plan. Check BOL on this topic and decide what recovery model best fits your needs.
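
    A minimal sketch of checking and changing the recovery model (the database name MyDb is a placeholder; DATABASEPROPERTYEX works on SQL Server 2000 and later):

    ```sql
    -- Check the current recovery model: FULL, SIMPLE or BULK_LOGGED.
    SELECT DATABASEPROPERTYEX('MyDb', 'Recovery');

    -- Switch to Simple recovery: the log is truncated automatically at
    -- checkpoints, but point-in-time restores are no longer possible.
    ALTER DATABASE MyDb SET RECOVERY SIMPLE;
    ```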

    Greetz,
    Hans Brouwer

  • My understanding is that, as a transaction log is a sequential file, writing into it doesn't cause performance issues.

    That said, I can see performance issues arising in the following areas.

    • Disk space.  Windows seems to like a certain amount of free space on a drive.  If this gets low, performance begins to creep.
    • Automatic expansion.  If your log file is set to expand automatically at the default percentage (10%), then all of a sudden you are going to hit a demand to create an extra 900 MB of file.
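
    One way to avoid that sudden percentage-based growth is to set a fixed growth increment instead (a sketch; MyDb and the logical log file name MyDb_Log are placeholders):

    ```sql
    -- Grow the log in fixed 100 MB steps rather than 10% of current size.
    ALTER DATABASE MyDb
    MODIFY FILE (NAME = MyDb_Log, FILEGROWTH = 100MB);
    ```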

    Over and above that, I am not sure whether expanding a 9 GB transaction log involves creating a 9.9 GB copy of the file and then copying the existing log data into it. If it does, things are going to get VERY slow.

    In addition, if you have an ever-expanding file, what would be the effect on performance if that file cannot be allocated as a contiguous block on disk?
