Getting rid of the transaction Log

  • Hi all, I have been answering a few questions here; this is my first question in return:

    Our database(s) have no need for a transaction log; our system is based on processing existing, historical data. Each database is backed up nightly, so the worst that can happen is that we lose a day's worth of processing and have to re-process it, which is acceptable.

    As such the transaction logs merely consume disk space and use up our processor resources.

    I cannot find any way of completely removing the transaction logs (I am not surprised by this; I imagine a log is required for most operations, even if a contiguous log is not).

    I have set the recovery model to 'Simple' (which is the closest I can find to 'no log') and turned off 'automatically grow log file' - but I cannot find any advice on what size I can then safely fix the log file at.

    Anyone got any suggestions?

    Thanks.


    "Don`t try to engage my enthusiasm, I don`t have one."

    (Marvin)

  • You don't want to "fix" the size of the transaction log unless you make it a reasonable size. Even though you're in simple recovery mode, SQL Server still needs the transaction log to write information to disk while transactions are in progress; once a transaction is complete, the information in the log is no longer required. To keep the log size under control, just implement a file shrink process after backups are complete.
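    For example, a minimal sketch of such a nightly step, assuming a hypothetical database named ClientDB whose log file has the logical name ClientDB_log (both names and the backup path are placeholders):

    -- Nightly full backup (database name and backup path are assumptions for illustration)
    BACKUP DATABASE ClientDB TO DISK = 'D:\Backups\ClientDB.bak' WITH INIT
    GO
    -- Once the backup has completed, shrink the log file back down
    USE ClientDB
    GO
    DBCC SHRINKFILE ('ClientDB_log', 100)   -- target size in MB
    GO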

     

  • Thanks, but sadly that is exactly the sort of thing I am trying to avoid; our app is very CPU and disk intensive.

    I have tested a fixed log file DB compared to one set to 'grow automatically' and I got a 15% improvement.

    Is there some way of estimating a log file's maximum size in the simple recovery model, say from the size of the tables being accessed?

     


    "Don`t try to engage my enthusiasm, I don`t have one."

    (Marvin)

  • There are a couple of things you can do:

    1. If your database recovery model is set to FULL, your log will keep growing. Change it to SIMPLE (it does the same as the 'truncate log on checkpoint' option we had in previous versions - see the sketch below the job example for the exact command).

    2. You can create a job to keep your log under control by using something like this:

    backup log TempTestFINLIB WITH TRUNCATE_ONLY

    go

    DBCC SHRINKFILE ('databasename_log', 20)

    This will keep your log at around 20 MB, but you can set it to whatever you need. Good luck and hope this helps.
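    As a rough sketch of both points (the database name here is just the example one above, and 'databasename_log' is still a placeholder for your log file's logical name):

    -- Point 1: switch the database to the SIMPLE recovery model
    ALTER DATABASE TempTestFINLIB SET RECOVERY SIMPLE
    GO
    -- Point 2: DBCC SHRINKFILE takes the log file's logical name and runs
    -- against the current database, so switch context and look the name up first
    USE TempTestFINLIB
    GO
    EXEC sp_helpfile   -- lists the logical file names; use the log file's name in DBCC SHRINKFILE
    GO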

    Fernando

  • Sorry for the typos in that last post, sometimes my fingers go faster than my brain, especially on Fridays.

  • Thanks, I think the amount of data any one statement in our process has to handle would make it necessary to shrink the log regularly, i.e. several times during each cycle. This would add considerably to the time each cycle takes, and ultimately to the overall time for the entire process.

    If I could estimate the maximum log size generated by each transaction, then I could fix the log size at that and not have to worry about losing time shrinking or expanding the log.

    Ta.


    "Don`t try to engage my enthusiasm, I don`t have one."

    (Marvin)

  • You have no real way of knowing what the maximum required size is. What if I come in one day, start a transaction and then just continue working all day long? As long as there is an active transaction, the log cannot reuse any space from that transaction's first LSN onwards. I would set the log to as large a size as can be afforded (buy a new set of disks for it, they don't really cost a lot you know) and then run in Simple recovery model. As long as there are no very long-running transactions you should be good.

  • Thanks, but I think you are assuming a type of usage that simply does not occur in our setup.

    Once we have the data uploaded (let's say it's monthly, but it varies from client to client) we know how many rows of data we need to process, the number of output rows can (probably) be estimated, and the process run to produce the output varies little. Ideally I would like to be able to automatically set a reasonable log size before processing starts, something like the sketch below.
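    A minimal sketch of what I mean (ClientDB and ClientDB_log are placeholder names, and the size would come from whatever estimate we settle on):

    -- Pre-size the log before the processing run starts,
    -- so it never has to grow (or be shrunk) mid-cycle
    ALTER DATABASE ClientDB
    MODIFY FILE (NAME = ClientDB_log, SIZE = 2048MB)
    GO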

    Incidentally, we create a DB per client; each DB currently comes in at between 5 GB and 50 GB (and the size only ever increases). We also have about 20 clients now and are continuing to expand on that. The databases are currently stored on a SCSI RAID 10 array (4 disks). Given this, storage space is at a premium and simply throwing more disks at it is not a reasonable solution.

    Is there some way of monitoring what happens to the Log during a complete process cycle? If so then I could monitor a small and a large DB during processing and see if there is some correlation between our Data-in size and the log required, then use the old builder's estimating formula (add a bit, then double it) and use that as a log setting.


    "Don`t try to engage my enthusiasm, I don`t have one."

    (Marvin)

  • " ......... Is there some way of monitoring what happens to the Log during a complete process cycle? If so then I could monitor a small and a large DB during processing and see if there is some correlation between our Data-in size and the log required ......."

    There is no direct correlation between the size of a DB and the size of a log.  The log is generated by activity in the DB.  So, if I have a 20 meg database, but do 50,000,000 updates against it, I will still have a small DB, but my log file will get huge, because it logs all that activity.  On the other hand, I currently have a 150 Gig testing database used by a few people for testing changes before they go into production. Not much happens there, so the log file never gets very big.

    As suggested earlier, just set up a recurring job to shrink the log. It should only take a few seconds to run. You could run it once an hour, once a night... whatever you like. I have a few jobs that are known to create large logs in one of our "simple" databases. I added a shrink-log step to run as the last step in the job, so that the log gets shrunk right away. In another DB, I just run it every hour. If there are a lot of open transactions when the shrink runs, it may not have any effect, but it will work on the next hourly run after the job finishes.
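    For instance, a minimal sketch of such a job step that only shrinks when the log has grown past a threshold (ClientDB, ClientDB_log and the 500 MB threshold are assumptions for illustration):

    USE ClientDB
    GO
    -- sysfiles.size is in 8 KB pages, so 500 MB = 500 * 128 pages;
    -- only bother shrinking if the log file has grown beyond that
    IF (SELECT size FROM sysfiles WHERE name = 'ClientDB_log') > 500 * 128
        DBCC SHRINKFILE ('ClientDB_log', 100)   -- target size in MB
    GO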

  • Try Log Explorer from  Lumigent Technologies.

    Jagan 

    This command will give information about log space:

    dbcc sqlperf(logspace)

    You will get the log size (the total amount of space currently allocated to the log) and the percentage of log space used. It may be helpful to you.
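    A rough sketch of how that could be used to watch the log through a complete process cycle, capturing a snapshot into a table every few minutes from a job (the table name and the approach are just one way of doing it):

    -- Table to collect snapshots of log usage
    CREATE TABLE LogSpaceHistory (
        CapturedAt      datetime DEFAULT GETDATE(),
        DatabaseName    sysname,
        LogSizeMB       float,
        LogSpaceUsedPct float,
        Status          int
    )
    GO
    -- Run this every few minutes while the process cycle runs; the peak of
    -- LogSizeMB * LogSpaceUsedPct / 100 gives a floor for the fixed log size
    INSERT INTO LogSpaceHistory (DatabaseName, LogSizeMB, LogSpaceUsedPct, Status)
    EXEC ('DBCC SQLPERF(LOGSPACE)')
    GO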

    ..hema

