Migrate 2K to 2K5 - with minimum downtime

  • Hi All

    I support an application suite that is widely deployed (200+ clients) on SQL 2000 SP4. Each installation includes up to 16 databases that range in size from a few hundred MB to around 100 GB.

    I am trying to devise a strategy to move the entire application to SQL 2K5 SP2 in 9.0 compatibility mode with the absolute minimum of downtime. It is not possible to treat this as 16 separate migrations: each of the applications generally accesses multiple databases, and there is so much overlap that I have not found a way to partition the databases into independent groups for moving.

    The migration will also involve some feature additions and defect resolution, so the upgrade will likely modify some of the tables and involve at least some schema changes.

    Some clients will choose to upgrade by installing a new instance of 2K5 on their existing server and others will choose to upgrade to a new server at the same time.

    The two obvious options, Detach/Attach and Backup/Restore, are not attractive in the case of a new server because of the time required just to copy the files between systems.

    One team member suggested setting up a sort of log shipping/replication environment during the upgrade/migration process, then at some point separating the systems and disabling the original 2K side. I am not sure about log shipping from 2K to 2K5, particularly with the likely schema changes.

    Any Ideas?

  • Hi Ray

    If the client is going for a new server, or if you have the extra space, you can use log shipping; it's the easy way to migrate from 2K to 2005, and I have done it myself. When you have a new server, you can also check the logins and other schemas on the new server.

    Please note: when you set up log shipping, configure the destination databases in read-only (standby) mode so you can test most things on the new server. Once the migration is complete, write a script that updates all the database and server names.
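    A minimal sketch of the standby-style restore described above; the database name, share, and undo-file paths are placeholders, not from this thread:

    ```sql
    -- On the SQL 2000 source: back up the log to a share the new server can read.
    -- (AppDB and all paths below are hypothetical.)
    BACKUP LOG AppDB TO DISK = N'\\newserver\migrate\AppDB_log1.trn';

    -- On the SQL 2005 destination: the database was initially restored from a
    -- full backup WITH STANDBY; apply each subsequent log the same way so it
    -- stays read-only and can be inspected between restores.
    RESTORE LOG AppDB
        FROM DISK = N'\\newserver\migrate\AppDB_log1.trn'
        WITH STANDBY = N'D:\Standby\AppDB_undo.dat';
    ```

    Once the final log is applied WITH RECOVERY instead of STANDBY, the database becomes writable and the upgrade of its internal structures to the 9.0 format completes.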

    Replication from 2000 to 2005 involves much more space; SQL usually adds a new column (named MS_repl-something) to each replicated table.

    Cheers

    🙂

  • Thanks.

  • Ray Herring (9/23/2008)


    The two obvious options Detach/Attach and Backup/Restore are not attractive in the case of a new server because of the time required just to copy the files between systems.

    For detach/attach, you could take the database offline or detach as normal, copy the files to a local disk (which should be quicker), then copy them to the remote server at your leisure.
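    If that local-copy step is acceptable, the detach/attach route might look roughly like this; the database name and file paths are hypothetical:

    ```sql
    -- On the SQL 2000 source:
    EXEC sp_detach_db @dbname = N'AppDB';
    -- ...copy AppDB.mdf / AppDB_log.ldf to local disk, then to the new server...

    -- On the SQL 2005 destination (FOR ATTACH upgrades the database to 9.0):
    CREATE DATABASE AppDB
        ON (FILENAME = N'E:\Data\AppDB.mdf'),
           (FILENAME = N'F:\Logs\AppDB_log.ldf')
        FOR ATTACH;
    ```

    Note that the database is unavailable from the moment it is detached until the attach completes, so this only helps if the local copy is fast and the remote copy can wait.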

    For backup and restore, a product such as Red Gate SQL Backup will compress the files by up to 90%, and they'll be more manageable when moving them around.

    -----------------------------------------------------------------------------------------------------------

    "Ya can't make an omelette without breaking just a few eggs" 😉

  • I haven't worked with Red Gate. We use LiteSpeed; it is also pretty good at compressed backups. If you have a new server, I second the idea of log shipping.

    Replication will create a column only if the table being replicated does not have a primary key. If a primary key is defined and transactional replication is the chosen method, that should not be an issue.

    -Roy

  • Thanks for the replies. I understand that Red Gate or Idera tools can compress the files, etc., which will reduce the copy time, but I don't have the "leisure" time to move and restore them.

    During the hour(s?) required for the compressed backup, copy, restore of each of the databases, the application will still be processing events. When we cut to the new system I need a way to capture those events and get them into the new live system. Since each "event" involves potentially dozens of database transactions across multiple databases I am concerned about my ability to provide an automated, reliable, repeatable method for 200+ sites, each of which has a unique configuration (disks, servers, etc).

    I have had two log-related suggestions. One is actual log shipping; the other is manual log shipping. In the latter, we would bring up the new system (read-only), then at the last minute take a log backup of the old system, copy the logs, and restore them on the new system. Turn off the old, turn on the new. Total downtime: 10 to 15 minutes (that's the rumor, anyway). My concern is that the schemas of several of the key tables will be changing as part of the upgrade (2 entire databases may be merged into 1), and I just don't see how any type of log shipping can handle that level of change.

  • You can implement your own version of log shipping without the overhead from SQL.

    Run a full backup and move it; while it's moving, run a differential and move that; then, while things are copying/restoring, take log backups and move them. When you get down to a small interval, cut off the main database, get the last logs over, restore them, and go.

    Shouldn't be too hard to automate, though I suspect you'd want to cut these one at a time to be sure they're working.
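    The full/diff/log sequence above could be sketched roughly as follows for one database; all names and paths are placeholders:

    ```sql
    -- On the SQL 2000 source: full, then differential, then log backups.
    BACKUP DATABASE AppDB TO DISK = N'D:\Mig\AppDB_full.bak';
    BACKUP DATABASE AppDB TO DISK = N'D:\Mig\AppDB_diff.bak' WITH DIFFERENTIAL;
    BACKUP LOG AppDB TO DISK = N'D:\Mig\AppDB_log1.trn';  -- repeat on a short interval

    -- On the SQL 2005 destination: restore in the same order, leaving the
    -- database in NORECOVERY until the final (tail) log at cutover.
    RESTORE DATABASE AppDB FROM DISK = N'E:\Mig\AppDB_full.bak' WITH NORECOVERY;
    RESTORE DATABASE AppDB FROM DISK = N'E:\Mig\AppDB_diff.bak' WITH NORECOVERY;
    RESTORE LOG AppDB FROM DISK = N'E:\Mig\AppDB_log1.trn' WITH RECOVERY;
    ```

    One caveat relevant to the schema-change concern raised earlier: log restores require the destination to match the source exactly, so any table changes or database merges would have to be scripted and applied after the final WITH RECOVERY step, not during the shipping phase.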

