Need help implementing FILO ordering on fixed size database

  • Hello, I am new here and very new to SQL, so please bear with me.  I just passed the MS 70-229 exam last week.

    I want to create a fixed-size database containing 5 tables.  If an attempt to insert a new row into any table fails, I want to delete the last (oldest) row in that table, and insert the new data again.

    This way I have a First In, First Out ordering.  (The title is wrong; it should read FIFO.)

    What is the best way to do this?  I thought about a stored procedure that would try the insert, check for the error condition, then delete the oldest row and re-insert, roughly as sketched below.
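    Roughly what I had in mind, as an untested sketch (EventLog, its columns, and the identity column EventID are just made-up examples):

    CREATE PROCEDURE InsertEvent @Col1 int, @Col2 varchar(50)
    AS
    BEGIN
        INSERT INTO EventLog (Col1, Col2) VALUES (@Col1, @Col2)

        -- if the insert failed (and the error did not abort the batch),
        -- purge the oldest row and try once more
        IF @@ERROR <> 0
        BEGIN
            -- EventID is an identity column, so MIN() picks the oldest row
            DELETE FROM EventLog
            WHERE EventID = (SELECT MIN(EventID) FROM EventLog)

            INSERT INTO EventLog (Col1, Col2) VALUES (@Col1, @Col2)
        END
    END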

    I also thought of using a trigger on insert, but I am not sure if the trigger runs after the insert, or before.

    I suspect a stored procedure is my best bet, but I'd have to write one for each table since their columns and required input values would be very different.

    OR, maybe there's a really easy way through some configuration on the database?  Doubtful, but who knows.

    Anyway, all advice you can give me would be great!


    Thanks!!

    Barry

  • I don't quite understand your question. SQL Server will automatically roll back the failed insert transaction if the insert fails. Could you post some samples here?

  • If the insert fails I want to check the error code, and if the insert failed because the fixed size database is FULL, I want to DELETE the oldest row from the table in question, and re-INSERT the new data into the table.

    Do you understand First In, First Out? 


    Data that went in FIRST is the FIRST data to be removed when the fixed size DB is full.

  • I understand your question, but I can't figure out why you want to do it this way.

    Rather than reacting to an out-of-space error, consider that the overhead of doing what you are describing would be considerable.  And because of keys, page allocation, and so on, there is no guarantee that deleting one row will free up space usable by your next insert.

    How about capping the table at a maximum number of rows instead?

    DECLARE @Cnt int

    SELECT @Cnt = COUNT(*) FROM [Table]

    IF @Cnt > ???
       DELETE FROM [Table] WHERE <oldest-record criteria>

    INSERT ....


    KlK

  • I need to do this because our system works 24 x 7 and must maintain new data in the database even though the file system is of limited size.  So we must purge old data and be able to add new data forever.  The database filling up is simply not an option.

    Your idea is a good one.  Capping at a fixed number of rows sounds more robust.


    Any other ideas or input are appreciated.

  • Try maintaining a count in a statistics table rather than running a select count(*) for every insert.

    After all, once the DB fills up almost every insert will require a delete anyway.
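    Something along these lines, perhaps (untested; the table, the column names, and the 100000-row cap are only illustrations):

    -- one row per monitored table, holding the current row count
    CREATE TABLE TableStats (TableName sysname PRIMARY KEY, RowCnt int NOT NULL)

    -- inside the insert proc, instead of SELECT COUNT(*)
    -- (@Col1, @Col2 are the proc's parameters):
    DECLARE @Cnt int
    SELECT @Cnt = RowCnt FROM TableStats WHERE TableName = 'EventLog'

    IF @Cnt >= 100000
    BEGIN
        DELETE FROM EventLog WHERE EventID = (SELECT MIN(EventID) FROM EventLog)
        UPDATE TableStats SET RowCnt = RowCnt - 1 WHERE TableName = 'EventLog'
    END

    INSERT INTO EventLog (Col1, Col2) VALUES (@Col1, @Col2)
    UPDATE TableStats SET RowCnt = RowCnt + 1 WHERE TableName = 'EventLog'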

    Steve

  • You might consider adding constantly, then purging once a day (or more often, if needed); try to identify a quiet time and come up with a purge routine.

    Without trying it, I would wager that one mass delete, rather than lots of little ones, would save a lot of overhead overall.

    And it would put less stress on the system when inserts are heavy.
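    An untested sketch of such a purge, which could be scheduled as a job in the quiet period (the table name and the 100000-row figure are just placeholders):

    -- delete everything except the newest 100000 rows, in one pass
    DELETE FROM EventLog
    WHERE EventID < (SELECT MIN(EventID)
                     FROM (SELECT TOP 100000 EventID
                           FROM EventLog
                           ORDER BY EventID DESC) AS Newest)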



    KlK

  • If there is no primary key on the table (though in an ideal schema there probably should be one), you can join the table to itself in the trigger and delete all the older records:

    DELETE t1
    FROM Table1 t1
    JOIN Table1 t2
      ON t1.CommonKey = t2.CommonKey
     AND t1.Date < t2.Date

    If there is a primary key on the table, the only way to handle this is by doing the insert through a proc instead of directly against the table.  In the proc, have logic to delete the record if it exists (easy to do), then insert the new record.  This would be a clustered index seek (assuming your PK is clustered), a clustered index delete, and a clustered index insert, giving you optimal scalability.
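    A rough, untested sketch of that proc (the key and column names are just placeholders):

    CREATE PROCEDURE UpsertRow @Key int, @Value varchar(50)
    AS
    BEGIN
        -- delete the existing row for this key, if any (clustered index seek + delete)
        DELETE FROM Table1 WHERE CommonKey = @Key

        -- insert the new row (clustered index insert)
        INSERT INTO Table1 (CommonKey, Value) VALUES (@Key, @Value)
    END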

    cl

    Signature is NULL

  • For our 24x7 website we have a number of tables that record session information, including each page a user visited.  This table averages around 4-5 million records.

    To keep the size manageable, we run a job every hour that moves records older than 60 days into an archive database.  We used to run it once a day, but found that the large delete operation caused blocking and timeouts.  Running it every hour means a lot of small deletes instead.
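    The general shape of the job, in case it helps (the table and column names here are only illustrative):

    DECLARE @Cutoff datetime
    SET @Cutoff = DATEADD(day, -60, GETDATE())

    BEGIN TRAN

    -- copy rows older than 60 days into the archive database
    INSERT INTO ArchiveDB.dbo.SessionPage
    SELECT * FROM SessionPage WHERE VisitDate < @Cutoff

    -- then remove them from the live table
    DELETE FROM SessionPage WHERE VisitDate < @Cutoff

    COMMIT TRAN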

    --------------------
    Colt 45 - the original point and click interface

  • Thanks for the great feedback everyone.

