database refresh

  • We need to make an exact copy of one existing database into another existing database on the same box. Is the best way to go about this to drop the target database, copy the files, and attach? Any advice would be appreciated because I'm very new to this. Thanks

  • Backup and restore is one option. Detach/attach is another, but it will disconnect your original database. If the DB is not huge, backup/restore will do it.
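
    A minimal sketch of the backup/restore route, assuming a source named SourceDB and a target named TargetDB (names, paths, and logical file names are placeholders; check the real logical names with RESTORE FILELISTONLY):

    BACKUP DATABASE SourceDB
        TO DISK = 'C:\Backups\SourceDB.bak'
        WITH INIT;

    -- WITH MOVE relocates the files so they don't collide with
    -- SourceDB's own .mdf/.ldf on the same box
    RESTORE DATABASE TargetDB
        FROM DISK = 'C:\Backups\SourceDB.bak'
        WITH REPLACE,
             MOVE 'SourceDB'     TO 'C:\Data\TargetDB.mdf',
             MOVE 'SourceDB_log' TO 'C:\Data\TargetDB_log.ldf';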

  • thank you for the recommendation

  • A full database backup together with a log backup is a good approach to do it.

    Sometimes you cannot stop the production server to run a detach/attach on the database.
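
    A rough sketch of that approach, with placeholder names and paths; the source stays online the whole time:

    BACKUP DATABASE ProdDB TO DISK = 'C:\Backups\ProdDB_full.bak' WITH INIT;
    BACKUP LOG ProdDB TO DISK = 'C:\Backups\ProdDB_log.trn' WITH INIT;

    -- restore the full backup WITH NORECOVERY so the log backup
    -- can be applied on top, then recover the copy
    RESTORE DATABASE CopyDB FROM DISK = 'C:\Backups\ProdDB_full.bak'
        WITH NORECOVERY, REPLACE,
             MOVE 'ProdDB'     TO 'C:\Data\CopyDB.mdf',
             MOVE 'ProdDB_log' TO 'C:\Data\CopyDB_log.ldf';
    RESTORE LOG CopyDB FROM DISK = 'C:\Backups\ProdDB_log.trn' WITH RECOVERY;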

    Hildevan O Bezerra



  • Will detach/attach keep the original database? Will the following steps work?

    1. Detach the database

    2. Make a copy of the detached database files

    3. Attach both sets of files: one as the original DB, one as the new DB

  • Yes, it will work, as long as you can afford to take the original DB offline for a few moments.
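
    Roughly, with hypothetical names and paths, the sequence would look like this:

    -- 1) detach the original (no active connections allowed)
    EXEC sp_detach_db @dbname = 'OriginalDB';

    -- 2) copy the files at the OS level, e.g. from a command prompt:
    --      copy C:\Data\OriginalDB.mdf C:\Data\CopyDB.mdf
    --      copy C:\Data\OriginalDB_log.ldf C:\Data\CopyDB_log.ldf

    -- 3) reattach the original, then attach the copy under a new name
    EXEC sp_attach_db @dbname = 'OriginalDB',
        @filename1 = 'C:\Data\OriginalDB.mdf',
        @filename2 = 'C:\Data\OriginalDB_log.ldf';

    EXEC sp_attach_db @dbname = 'CopyDB',
        @filename1 = 'C:\Data\CopyDB.mdf',
        @filename2 = 'C:\Data\CopyDB_log.ldf';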

  • Obviously, detach/attach (with file copy) requires less downtime than backup/restore. Any pros and cons with either approach?

  • We have a largish (130 GB) database we need to refresh. We also need to build it without conflicts with online user queries. We found this to be the best bet:

    - One copy is the "build" copy, no user access.

    - At night, we take it offline and copy it to a "stage" copy on the same drives as the one the users need. We actually copy it to two separate machines. Downtime is not too relevant here as the users do not "see" either of these.

    - In the wee hours, we detach the two user copies people are hitting (though usually nobody is hitting them then), rename these copies (near instant), and reattach. So the 'swap' takes maybe a minute or less of downtime.

    All this takes a lot of extra disk space, but it allows us to refresh a largish database with near-zero downtime and zero locking conflicts during updates.

    Log shipping has not worked for us for this, as the log recovery fails if someone is in the database when you try to restore it. We keep some warm spare copies updated this way, but it is not useful for keeping an online copy updated, unless we are missing something.
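
    To illustrate: the log restore needs exclusive access, so the standby has to be forced into single-user mode first, kicking out exactly the users we want connected. A sketch, with placeholder names:

    -- drop any connections so the log restore can get exclusive access
    ALTER DATABASE StandbyDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

    -- WITH STANDBY leaves the database readable between restores
    RESTORE LOG StandbyDB
        FROM DISK = 'C:\LogShip\StandbyDB_log.trn'
        WITH STANDBY = 'C:\LogShip\StandbyDB_undo.dat';

    ALTER DATABASE StandbyDB SET MULTI_USER;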

  • I am confused about your scenario. Are you refreshing a production copy with a development copy? Why do you copy it to two separate machines?

    To make a warm standby of a production server, you can't take production offline for your file copy. I did not get the point. Please clarify. Thank you.

  • Obviously your needs may vary, but here is our need and approach.

    We want a copy that people can query without being impacted by updates, i.e. lock-free.

    We need a copy to build against many hours of the day, doing updates. This cannot be the same copy as the above (build = locks).

    We also want a backup server (we are not clustering). So the copy people can query needs to be on two separate servers. This also distributes the load; it's not simply a backup copy.

    So here's how it goes:

    On build machine:

    1) One copy for building

    2) Room for one copy which is a 'stage' copy, normally not there

    3) One copy for queries

    On backup machine:

    1) One copy for queries

    2) Room for one copy which is a "stage" copy, normally not there.

    Sometime in the middle of the night we:

    1) Let the builds finish, then detach the build copy (query copy still up)

    2) Make a copy of it on the same machine to the "stage" area. Each filegroup is stored on the same volume, in a separate directory from the still-up query copy.

    3) Kick users out of the query copy (it's the middle of the night), detach, and delete.

    4) Rename/move the "stage" copy to the same directory as the query copy, then reattach the query copy. This effectively updates it with the new version, and does so in about 30 seconds (see the sketch after this list).

    5) The build copy is still offline, so we now copy it to the other machine in the "stage" area.

    6) When the copy finishes, we do the same thing on the other machine: detach the query copy, delete it, and rename/move the stage copy over. Again, total downtime for the query copy is about 30 seconds.
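
    A sketch of the swap in steps 3 and 4, with hypothetical names and paths; the outage is roughly the time these statements take to run:

    -- kick remaining users out, then detach the live query copy
    ALTER DATABASE QueryDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
    EXEC sp_detach_db @dbname = 'QueryDB';

    -- OS-level rename/move of the staged files over the old ones, e.g.:
    --   del  D:\Data\QueryDB.mdf
    --   move D:\Stage\QueryDB.mdf D:\Data\QueryDB.mdf
    --   (repeat for the log file and each filegroup file)

    -- reattach; the query copy is now the freshly built version
    EXEC sp_attach_db @dbname = 'QueryDB',
        @filename1 = 'D:\Data\QueryDB.mdf',
        @filename2 = 'D:\Data\QueryDB_log.ldf';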

    The effect of all this is that we can build and update for hours, doing a LOT of lock-intensive processing, but doing so with a database absolutely no users are connected to. Meanwhile the users have a copy they can query which is completely free of locks. And the time to swap them (in terms of user interruption) is seconds. Admittedly the time the build database is down is much longer; it takes several hours to copy them around.

    The fact we use two machines is unrelated to the approach we are using to swap the build and query copies.

    It's all production databases, not development. It's a way to manage a heavy build/update schedule with a heavy query schedule.

  • Thank you for the clarification. Now I understand that your query copy is the 'production' database, and attach worked better for you than restore. I think you could still save a lot of file-copying time by log shipping the transactions to a staging database and then performing your detach/attach trick...

  • We use log shipping for some databases. We have three basic ways we do this whole process - transactional replication, the one described above, and log shipping.

    We use transactional replication where the copies need to be live and more or less current and updatable.

    We use log shipping where replication has too many problems (e.g. code not amenable to it), we want a current standby copy, and we can't take the main system offline even for a few minutes.

    And we copy the whole database when we want a simple and reasonably fast copy and can take the original offline for a while. It's extremely simple and understandable in comparison to the other two.

    All have their drawbacks. Log shipping has a LOT of overhead in terms of processor and disk, as well as a lot of setup complexity and things to break. We like it, we use it, but not everywhere.
