• Ok, this post probably won't be as good as the one that just got lost in cyberspace, but I'll try to sum up the main points:

    I typically attack this issue in one of the following ways:

    1.  The "archive table" approach, as you mentioned.  Basically, you create insert,update, and delete triggers on your inventory table and then populate your "archive" table as all inserts.  Then the "archive table" includes includes a column to determine whether the action was insert,update, or delete.  This is all pretty standard stuff.  However, I would also recommend that you at least consider adding the following columns to the table as well:

    *  APP_NAME():  This will tell you whether the action originated from SQL Query Analyzer, some web form, a custom application, etc.  Of course, the application must set the application name in its connection string, but that has become fairly common these days, and I've found that many applications do it.

    * NT/SQL user name:  Depending on how you've set up security in SQL Server, this can be a very handy column for telling you who did it.  If you don't have that granularity in SQL Server, try capturing your application's user name instead (independent of SQL Server).

    * DATETIME:  obviously
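
    Just so we're on the same page, here's a rough sketch of what I mean (table and column names like Inventory, ItemID, and Quantity are made up -- substitute your real schema):

        CREATE TABLE InventoryArchive (
            ArchiveID   INT IDENTITY(1,1) NOT NULL,
            ItemID      INT           NOT NULL,
            Quantity    INT           NULL,
            ActionType  CHAR(1)       NOT NULL,   -- 'I', 'U', or 'D'
            AppName     NVARCHAR(128) NOT NULL DEFAULT (APP_NAME()),
            UserName    NVARCHAR(128) NOT NULL DEFAULT (SUSER_SNAME()),
            ActionDate  DATETIME      NOT NULL DEFAULT (GETDATE())
        )
        GO

        CREATE TRIGGER trg_Inventory_Audit
        ON Inventory
        FOR INSERT, UPDATE, DELETE
        AS
        BEGIN
            SET NOCOUNT ON

            -- Inserts and the "after" image of updates come through the inserted table
            INSERT INTO InventoryArchive (ItemID, Quantity, ActionType)
            SELECT i.ItemID, i.Quantity,
                   CASE WHEN EXISTS (SELECT * FROM deleted) THEN 'U' ELSE 'I' END
            FROM inserted i

            -- Pure deletes have rows in deleted but nothing in inserted
            INSERT INTO InventoryArchive (ItemID, Quantity, ActionType)
            SELECT d.ItemID, d.Quantity, 'D'
            FROM deleted d
            WHERE NOT EXISTS (SELECT * FROM inserted)
        END
        GO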

    If you go with this solution, consider the architecture very carefully.  First, be very careful how you design the indexing strategy.  The table will be 100% inserts, you will also be running some intense select operations against it, and it will get very big, so you will run into a catch-22 with respect to performance in the database.
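
    One common compromise (assuming the sketch above) is to keep the clustered index on a monotonically increasing column so the inserts stay append-only, and add one narrow nonclustered index to cover the selects -- something like:

        -- Append-friendly clustered index: new rows always land at the end
        CREATE CLUSTERED INDEX IX_InventoryArchive_ArchiveID
            ON InventoryArchive (ArchiveID)

        -- Narrow nonclustered index for the typical "history of one item" query;
        -- adjust the key columns to whatever your real SELECTs look like
        CREATE NONCLUSTERED INDEX IX_InventoryArchive_Item_Date
            ON InventoryArchive (ItemID, ActionDate)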

    I would also carefully consider where this table needs to live.  It might be worth storing it in another database or on another server (for various reasons), though that can require some dynamic SQL, which is less desirable.  Or you could use an intermediate table and DTS the results to another server.  Just things to consider in your design...
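
    For what it's worth, keeping the archive in a separate database on the same server is the cheap version -- three-part naming works straight from the trigger (AuditDB is a made-up name here), whereas a different server means four-part names, linked servers, and distributed transactions, which is where the intermediate-table-plus-DTS idea starts to look better:

        -- Separate database, same server: plain three-part naming from the trigger
        INSERT INTO AuditDB.dbo.InventoryArchive (ItemID, Quantity, ActionType)
        SELECT i.ItemID, i.Quantity, 'I'
        FROM inserted i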

    2.  You may also consider using one-way transactional replication.  There are lots of benefits to this approach, including reduced contention from the aforementioned indexing catch-22 and probably improved performance if your subscriber is a different server than your publisher.  However, the downsides include:  a) Learning curve.  If you haven't used transactional replication before, there's a fair bit to understand before you can just "turn it on".  b) Since you will be doing a little more work with the data as it moves from source to destination, you might need to use stored procedure replication, or stored procedure invocation through replication, to do all your custom stuff on the subscriber end.
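
    If you do go the replication route, the "custom stuff on the subscriber" piece is typically wired up when you add the article.  This is just a sketch -- the publication and procedure names below are placeholders, and a real setup also needs sp_addpublication, sp_addsubscription, and the agents configured:

        EXEC sp_addarticle
            @publication   = N'InventoryPub',
            @article       = N'Inventory',
            @source_owner  = N'dbo',
            @source_object = N'Inventory',
            @type          = N'logbased',
            -- Deliver changes through custom stored procedures on the subscriber,
            -- which is where your audit/archive logic can live:
            @ins_cmd       = N'CALL sp_MSins_Inventory',
            @upd_cmd       = N'CALL sp_MSupd_Inventory',
            @del_cmd       = N'CALL sp_MSdel_Inventory'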

    Just a couple of things to consider...