Databases with no transaction logs

  • Hello. This is my question: is this feature possible in SQL 2000? Of course, I'm talking about a development or testing environment, or a DB used for massive data loads.

    Thanks a lot for any answer or comment.

    Jorge

  • All changes (inserts/updates/deletes) are logged in SQL 2K.  However, you can change the recovery model to keep the maximum size of your transaction log down.

    Books Online says the following about the Simple recovery model:

    • All data modifications made since last backup are expendable, or can be redone. Lowest logging overhead, but cannot recover past the end of the last backup

    What this really means is that each transaction is still logged, but once the transaction has committed and a checkpoint has run, the log entries for that transaction are no longer retained in the log file, so the space can be reused.
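
    For example, switching a work database to the Simple recovery model is a one-line change. A minimal sketch, assuming a hypothetical database name "LegacyLoad":

        -- Put a staging/load database into SIMPLE recovery so the inactive
        -- part of the log can be reused after each checkpoint.
        ALTER DATABASE LegacyLoad SET RECOVERY SIMPLE
        GO
        -- Confirm the current recovery model
        SELECT DATABASEPROPERTYEX('LegacyLoad', 'Recovery')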


  • If you did not have a logging mechanism, you would not have a rollback mechanism.

    If your logging mechanism were limited to the memory on your box, you would be limited in the number of uncommitted updates you could have before things started crashing.

    The transaction log provides the mechanism that allows unwanted updates to be rolled back, regardless of the size and complexity of your workload.

    Do you really want to do without all of that?  If so, simply write to a sequential file.
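
    To make that concrete, here is a minimal sketch (with a hypothetical table name) of what the log buys you:

        -- The transaction log is what makes this ROLLBACK possible,
        -- however many rows the DELETE touched.
        BEGIN TRAN
            DELETE FROM dbo.StagingRows WHERE LoadBatch = 42   -- hypothetical table
        ROLLBACK TRAN   -- every deleted row is put back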

    Original author: https://github.com/SQL-FineBuild/Common/wiki/ 1-click install and best practice configuration of SQL Server 2019, 2017, 2016, 2014, 2012, 2008 R2, 2008 and 2005.

    When I give food to the poor they call me a saint. When I ask why they are poor they call me a communist - Archbishop Hélder Câmara

  • We have a lot of bulk-loaded databases. We set most of them to SIMPLE recovery mode. SQL Server still writes some information to the transaction log, because each transaction still needs to be committed or rolled back as a set. But once the transaction is completed, the log information is marked as OK to overwrite, so the log won't grow as much.

    You can get bcp to do the insert in batches, which will also keep the transaction log down.
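
    If you want to confirm the log really is being reused between batches, a quick check (nothing workload-specific about it) is:

        -- Reports each database's log size and the percentage currently in
        -- use; run it between load batches to see the log being reused
        -- rather than growing.
        DBCC SQLPERF(LOGSPACE)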


    Julian Kuiters
    juliankuiters.id.au

  • Thanks for your comments. Let's talk a little about our work: we are converting and loading data from a legacy system into SAP. We are talking (in some tables) about millions and millions of data records. I need the query-processing capacity, programming support and relational model that an RDBMS gives us. But remember that SQL Server uses write-ahead logging. For our purposes we are looking for performance rather than integrity in these processes, so log processing (in this special case) is a big problem that affects data-loading performance. I have changed our recovery model to SIMPLE, but we think it is not exactly what we need (only for INSERTs and SELECT INTO)....
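
    For reference, the minimally logged pattern I mean looks like this (a sketch with hypothetical table names):

        -- Minimally logged under SIMPLE (and BULK_LOGGED) recovery in SQL 2000:
        -- the target table is created and populated in a single statement.
        SELECT CustomerID, Name, Balance
        INTO dbo.ConvertedCustomers      -- hypothetical target table
        FROM dbo.LegacyCustomers         -- hypothetical source table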

    Regards,

    Jorge

  • I don't know if this might help, but make sure you're setting your batch size to... something. How this is done varies by whether you're using bcp or DTS or BULK INSERT. But they all got it.

    I suspect that you will get better performance loading 250,000 rows and then flushing the log, rather than 10 million rows and flushing the log. But that's just a guess.
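
    For example, with BULK INSERT the batch size is set directly in the statement (a sketch with hypothetical file and table names; bcp has the equivalent -b switch):

        -- Each 250,000-row batch commits as its own transaction, so under
        -- SIMPLE recovery the log space can be reused between batches.
        BULK INSERT dbo.LegacyStaging
        FROM 'D:\loads\legacy_extract.dat'
        WITH (BATCHSIZE = 250000, TABLOCK)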

    Then for a truly perverse idea, try building a RAM disk and set the transaction log to live on that. I don't know if there's something to keep SQL Server from falling for that trick, or maybe NT doesn't allow RAM disks. They were really cool back in '92, but I haven't tried to make one since then. Hey, if you could use the DOS nul: device as the location of your log... but I'm pretty sure SQL Server won't fall for that.

    - Chris


    Chris Hofland

  • mmm... RESTORE DATABASE MyDatabase FROM DISK = 'null:'

    Jorge, it sounds to me like your system is going to start off at a decent size, so you should probably read up on hardware and OS optimisations for Very Large Databases (VLDB): http://www.microsoft.com/sql/techinfo/administration/2000/scalability.asp

    Even if your server is not going to be dealing with gigabytes of memory or terabytes of data, these articles are a great foundation for understanding what supports SQL Server, and how to get the best out of everything.



    Julian Kuiters
    juliankuiters.id.au
