Which is the best method to upgrade a 650GB database?

  • Hi,

    Old version

    OS - Win 2003 Enterprise Edition

    DB - SQL Server 2000 Enterprise Edition with SP4

    DB Size - 650GB

    New version

    OS - Win 2008 R2 Enterprise Edition

    DB - SQL Server 2008 R2 Enterprise Edition with SP4

    Upgrade method chosen - side by side

    The new version is installed and a spare server is ready; only the database is pending upgrade.

    Please suggest which of the methods below is best for upgrading a 650GB database while keeping downtime to a minimum:

    1. Backup and restore - this will take a long time for the restore.

    2. Detach and Attach

    3. Export and Import

    4. Log shipping

    5. Copy Database Wizard

    Thanks

    ananda

    First of all, this is a significant change, considering that a whole lot changed between SQL 2000 and SQL 2008.

    Therefore the first thing you would want to do is confirm that the application is compatible with SQL Server 2008.

    If that has indeed been done, then given that your focus is primarily on reducing the downtime window, and weighing in the fact that the database is large, one option you may want to consider is first configuring log shipping from your source server (SQL 2000) to your destination server (SQL 2008) and eventually making the SQL 2008 server the primary during the downtime window.

    Alternatively, you could keep the database restored on the secondary (in NORECOVERY mode) and almost in sync by applying differential backups, and then apply the last differential and log backup during the downtime window.
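    A minimal sketch of that second approach, where the database name and all file paths are only placeholders:

    -- on the SQL 2008 server: restore the last full backup, leaving the database restoring
    RESTORE DATABASE YourDB
    FROM DISK = N'D:\Backups\YourDB_Full.bak'
    WITH NORECOVERY,
        MOVE 'YourDB_Data' TO N'E:\Data\YourDB.mdf',
        MOVE 'YourDB_Log' TO N'F:\Log\YourDB_log.ldf';

    -- keep applying differential backups as they are taken, still WITH NORECOVERY
    RESTORE DATABASE YourDB FROM DISK = N'D:\Backups\YourDB_Diff.bak' WITH NORECOVERY;

    -- during the downtime window: apply the final differential, then the tail log backup and recover
    RESTORE LOG YourDB FROM DISK = N'D:\Backups\YourDB_Tail.trn' WITH RECOVERY;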

    But do keep in mind that you will also have to factor in the migration of logins, DTS packages, and other aspects like linked servers (if any) as well.

    Using the Copy Database Wizard or import/export would be relatively more time consuming.

    The detach\attach option may be a bit risky as well. Furthermore, you will have to copy the large database files over the network, which may take a significant amount of time too. All in all, detach\attach may not be a good option. To put it in perspective, I would be very nervous with the detach\attach option, but then I could be thinking too pessimistically here.

    Backing up the entire database, copying over the backup files and then restoring them may take a significant amount of time as well, unless of course you could back up the database to a SAN drive on the source server, then detach that SAN drive and reattach it to the new server to save both network bandwidth and time.

    I would like to mention explicitly, though, that I am still learning SQL Server and am only trying to throw out ideas, based on my current understanding, that may be of help to you.

    And I would not be surprised if I am proven wrong in my reasoning (though I hope that will not be the case).

    Therefore please wait for additional replies on this thread from more experienced and knowledgeable folks before you plan anything.

    In any case it would be imperative that you perform extensive checks in your test environment before you even remotely think of attempting anything in a production environment.

    Most of the common issues faced after upgrades revolve around application compatibility and database objects or data types that the application(s) may have been using in SQL 2000 but which no longer exist in SQL 2008.

    If you ask me, staying with SQL 2000 is quite risky as well; the sooner you move to a higher version, the better.

    As vikrantspatil has pointed out, you need to ensure beforehand that the application is compatible with SQL Server 2008 R2.

    The other action I would perform first is to take a copy of the database to a test 2008 server and run DBCC UPDATEUSAGE and then DBCC CHECKDB WITH DATA_PURITY, and check that the output of these commands comes back totally clean; if not, you'll have some work to do cleaning up the catalogs\data.

    It is important that you execute these two commands when upgrading from SQL Server 2000.
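    Something like the following on the test server, with the database name as a placeholder:

    USE YourDB;
    GO
    DBCC UPDATEUSAGE (0);           -- 0 = current database; corrects page and row count inaccuracies
    GO
    DBCC CHECKDB WITH DATA_PURITY;  -- full integrity check plus column-value purity checks
    GO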

    Once you're ready to migrate, log shipping would be the best option to minimise downtime.

    vikrantspatil (7/14/2012)


    The detach\attach option may be a bit risky as well. Furthermore, you will have to copy the large database files over the network, which may take a significant amount of time too. All in all, detach\attach may not be a good option. To put it in perspective, I would be very nervous with the detach\attach option, but then I could be thinking too pessimistically here.

    It's not risky at all; in fact I generally prefer detach\copy\attach over backup\copy\restore. If you can arrange the downtime you may well find detach\attach quicker than backup\restore.

    vikrantspatil (7/14/2012)


    Backing up the entire database, copying over the backup files and then restoring them may take a significant amount of time as well, unless

    The restore will take the same time as the backup; on a large database, couple this with copying the backup file and the timeframe can be quite large.

    Detach\attach is normally fairly quick; the only cost is copying the files across the network. If you have a fast network, or can provide a temporary private network between the nodes, you can negate this cost.
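    A sketch of the detach\copy\attach sequence (all names and paths here are placeholders):

    -- on the old SQL 2000 server
    USE master;
    GO
    EXEC sp_detach_db @dbname = N'YourDB';
    GO

    -- copy the .mdf and .ldf files across, then on the new SQL 2008 R2 server:
    CREATE DATABASE YourDB
    ON (FILENAME = N'E:\Data\YourDB.mdf'),
       (FILENAME = N'F:\Log\YourDB_log.ldf')
    FOR ATTACH;
    GO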

    -----------------------------------------------------------------------------------------------------------

    "Ya can't make an omelette without breaking just a few eggs" 😉

  • ananda.murugesan (7/14/2012)


    Hi,

    Old version

    OS - Win 2003 Enterprise Edition

    DB - SQL Server 2000 Enterprise Edition with SP4

    DB Size - 650GB

    New version

    OS - Win 2008 R2 Enterprise Edition

    DB - SQL Server 2008 R2 Enterprise Edition with SP4

    ??

    -----------------------------------------------------------------------------------------------------------

    "Ya can't make an omelette without breaking just a few eggs" 😉

  • Hi

    Thanks for the reply.

    New version - SQL 2K8 R2 Enterprise Edition with SP2.

    We have already tested the application against a SQL 2K8 R2 test setup, and it is compatible with SQL 2K8 R2. The data files are not attached to any SAN devices.

    I would prefer to go with the backup and restore method, without incurring too much downtime while upgrading this server. The current schedule is a weekly full backup and a T-log backup every half hour.

    When restoring the backup files to the new server, first apply the last full backup with the NORECOVERY option, then apply every T-log backup, restoring the last T-log WITH RECOVERY.
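    For example (file and database names here are only placeholders):

    RESTORE DATABASE YourDB FROM DISK = N'D:\Backup\YourDB_Full.bak' WITH NORECOVERY, STATS = 5;
    GO
    RESTORE LOG YourDB FROM DISK = N'D:\Backup\YourDB_Log_0830.trn' WITH NORECOVERY;
    GO
    -- ...apply each remaining T-log backup WITH NORECOVERY, then the last one:
    RESTORE LOG YourDB FROM DISK = N'D:\Backup\YourDB_Log_Final.trn' WITH RECOVERY;
    GO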

    As of now, the SQL 2000 database has a lot of interfaces connecting to many database servers to get the data required by these applications.

    After upgrading to the new server, I would need to change every interface's IP address manually to the new server's IP address.

    To avoid this, after restoring all the backups to the new server, I will remove the old server from the network. The old server's IP address will then be assigned to the new server (and changed on the SQL side as well); in that case there is no need to modify the interfaces, or even the applications.

    Mr. Perry suggests the log shipping method for minimum downtime, with SQL 2000 as the source server and SQL 2008 R2 as the destination. But I don't have experience configuring log shipping between different versions. Can you share any documentation on this? I will test it.

    Thanks

    ananda

    Do you understand this?

    Perry Whittle (7/16/2012)


    run DBCC UPDATEUSAGE and then DBCC CHECKDB WITH DATA_PURITY, and check that the output of these commands comes back totally clean; if not, you'll have some work to do cleaning up the catalogs\data.

    It is important that you execute these two commands when upgrading from SQL Server 2000.

    -----------------------------------------------------------------------------------------------------------

    "Ya can't make an omelette without breaking just a few eggs" 😉

    Yes, I understand these two commands.

    DBCC UPDATEUSAGE corrects the rows, used pages, reserved pages, leaf pages and data page counts for each partition in a table or index.

    DBCC CHECKDB WITH DATA_PURITY ensures database integrity.

    These processes check allocation, structural, and logical integrity of the object or objects and report back any inconsistencies that are found.

    These two commands should be run on the database after upgrading from a lower version to a higher version.

    With a 650GB database, running these two commands will take a long time to finish. How can I reduce the time these two commands take to complete?

  • ananda.murugesan (7/16/2012)


    With a 650GB database, running these two commands will take a long time to finish. How can I reduce the time these two commands take to complete?

    Short of deleting all your data, none; they'll take as long as they need. Ideally, run them with no active users.

    -----------------------------------------------------------------------------------------------------------

    "Ya can't make an omelette without breaking just a few eggs" 😉

  • ananda.murugesan (7/16/2012)


    New version - SQL 2K8 R2 Enterprise Edition with SP2.

    Sure about that? 2008 R2 SP2 has not been released; it's in CTP, and I would not suggest using a CTP on a production server, much less a newly upgraded production server. I would generally recommend SP1, and testing SP2 once it's released.
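    If in doubt, the server itself can confirm what is actually installed; for example:

    SELECT SERVERPROPERTY('ProductVersion') AS ProductVersion,
           SERVERPROPERTY('ProductLevel')  AS ProductLevel,   -- e.g. 'SP1'
           SERVERPROPERTY('Edition')       AS Edition;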

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • ananda.murugesan (7/16/2012)


    Yes, I understand these two commands.

    DBCC UPDATEUSAGE corrects the rows, used pages, reserved pages, leaf pages and data page counts for each partition in a table or index.

    DBCC CHECKDB WITH DATA_PURITY ensures database integrity.

    These processes check allocation, structural, and logical integrity of the object or objects and report back any inconsistencies that are found.

    These two commands should be run on the database after upgrading from a lower version to a higher version.

    With a 650GB database, running these two commands will take a long time to finish. How can I reduce the time these two commands take to complete?

    They will take as long as they take.

    In addition to these two, it is critical that you either update ALL your statistics:

    USE YourDatabaseName;
    EXEC sys.sp_MSforeachtable
        @command1 = N'UPDATE STATISTICS ? WITH FULLSCAN;';

    -or-

    REBUILD all your indexes and update all your COLUMN statistics:

    USE YourDatabaseName;
    EXEC sys.sp_MSforeachtable
        @command1 = N'ALTER INDEX ALL ON ? REBUILD WITH (SORT_IN_TEMPDB=ON); UPDATE STATISTICS ? WITH FULLSCAN, COLUMNS;';

    Other [arguably optional] items to consider as well (sketched in code after this list):

    - change compat level to 100 after verifying all database code (server-side and client-side) is compliant with SQL 2008 R2 syntax

    - change PAGE_VERIFY option to CHECKSUM

    - set database owner to something sensible in the new environment

    - take a full database backup immediately after the upgrade and post-upgrade steps are complete to create a baseline on the new system
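    A sketch of those items, with YourDatabaseName and the backup path as placeholders:

    ALTER DATABASE YourDatabaseName SET COMPATIBILITY_LEVEL = 100;
    ALTER DATABASE YourDatabaseName SET PAGE_VERIFY CHECKSUM;
    ALTER AUTHORIZATION ON DATABASE::YourDatabaseName TO sa;
    BACKUP DATABASE YourDatabaseName
    TO DISK = N'D:\Backup\YourDatabaseName_PostUpgrade.bak'
    WITH INIT, STATS = 5;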

    There are no special teachers of virtue, because virtue is taught by the whole community.
    --Plato

    In the past I have used a script like the one below to upgrade from SQL 2000 to SQL xxxx...

    SET NOCOUNT ON
    GO

    USE master
    GO

    --------------------------------------------------------------------------------------------------
    -- </YOUR DATABASE INFO/> RESTORE Process Started On </YOUR SERVER NAME/>
    --------------------------------------------------------------------------------------------------

    -- restore database
    -------------------
    RESTORE DATABASE </YOUR DATABASE INFO/>
    FROM DISK = '</BACKUP AREA/>\</YOUR DATABASE INFO/>.bak'
    WITH NORECOVERY,
        MOVE '</YOUR DATABASE INFO/>_Data' TO 'E:\MSSQL.1\MSSQL\DATA\</YOUR DATABASE INFO/>.mdf',
        MOVE '</YOUR DATABASE INFO/>_Log' TO 'F:\MSSQL.1\MSSQL\LOG\</YOUR DATABASE INFO/>_log.ldf',
        STATS = 5
    GO

    -- set database to recovered
    ----------------------------
    RESTORE DATABASE </YOUR DATABASE INFO/> WITH RECOVERY
    GO

    EXEC dbo.sp_dbcmptlevel @dbname = N'</YOUR DATABASE INFO/>', @new_cmptlevel = 90;
    GO

    ALTER DATABASE </YOUR DATABASE INFO/> SET AUTO_CREATE_STATISTICS ON WITH NO_WAIT
    GO
    ALTER DATABASE </YOUR DATABASE INFO/> SET AUTO_UPDATE_STATISTICS ON WITH NO_WAIT
    GO
    ALTER DATABASE </YOUR DATABASE INFO/> SET AUTO_UPDATE_STATISTICS_ASYNC ON WITH NO_WAIT
    GO

    EXEC </YOUR DATABASE INFO/>.dbo.sp_changedbowner 'sa'
    GO

    --------------------------------------------------------------------------------------------------
    -- </YOUR DATABASE INFO/> RESTORE Ended
    --------------------------------------------------------------------------------------------------

    USE </YOUR DATABASE INFO/>
    GO

    PRINT ''
    PRINT 'ALTER RECOVERY MODE SET TO SIMPLE'
    PRINT ''

    ALTER DATABASE </YOUR DATABASE INFO/> SET RECOVERY SIMPLE
    GO

    PRINT ''
    PRINT 'START DBCC CHECKDB'
    PRINT ''

    DBCC CHECKDB WITH DATA_PURITY
    GO

    PRINT ''
    PRINT 'START UPDATEUSAGE'
    PRINT ''

    DBCC UPDATEUSAGE (</YOUR DATABASE INFO/>) WITH COUNT_ROWS
    GO

    PRINT ''
    PRINT 'START INDEX REBUILD'
    PRINT ''

    SET QUOTED_IDENTIFIER ON

    BEGIN
        DECLARE @tb_owner VARCHAR(25),
                @tb_name VARCHAR(50),
                @index_name VARCHAR(100),
                @index_type_desc VARCHAR(20),
                @fill_factor INT,
                @index_cmd VARCHAR(1000)

        DECLARE tb_cur CURSOR FOR
            SELECT c.name AS table_owner,
                   a.name AS table_name,
                   b.name AS index_name,
                   b.type_desc AS index_type_desc,
                   CASE WHEN b.fill_factor = 0 THEN 100 ELSE b.fill_factor END AS original_fill_factor
            FROM sysobjects a (NOLOCK)
            INNER JOIN sys.indexes b (NOLOCK) ON a.id = b.object_id
            INNER JOIN sysusers c (NOLOCK) ON a.uid = c.uid
            WHERE a.type = 'U'
              AND b.index_id NOT IN (0, 255)
            ORDER BY a.name ASC, b.type, b.name

        OPEN tb_cur

        WHILE 1 = 1
        BEGIN
            FETCH NEXT FROM tb_cur
            INTO @tb_owner, @tb_name, @index_name, @index_type_desc, @fill_factor

            IF @@FETCH_STATUS <> 0
                BREAK

            -- does each index individually
            SET @index_cmd = 'ALTER INDEX [' + @index_name + '] ON [' + @tb_owner + '].[' + @tb_name + '] REBUILD WITH (FILLFACTOR = ' + CAST(@fill_factor AS VARCHAR(3)) + ', STATISTICS_NORECOMPUTE = OFF)'
            EXEC (@index_cmd)

            PRINT @index_type_desc + ' INDEX ' + CHAR(9) + @index_name + ' From ' + @tb_owner + '.' + @tb_name + ' Has Been Rebuilt' + CHAR(10)
        END

        CLOSE tb_cur
        DEALLOCATE tb_cur
    END

    PRINT ''
    PRINT 'REMOVE OLD EXECUTION PLANS'
    PRINT ''

    DBCC FREEPROCCACHE
    GO
    DBCC DROPCLEANBUFFERS
    GO

    PRINT ''
    PRINT 'ALTER RECOVERY MODE SET TO FULL RECOVERY'
    PRINT ''

    ALTER DATABASE </YOUR DATABASE INFO/> SET RECOVERY FULL
    GO

    I like that you set the DB to SIMPLE recovery before rebuilding indexes, but I am wondering why you are freeing the proc cache and dropping clean buffers? If you're restoring to an instance with live databases on it, that can be detrimental to performance. If you were trying for a complete reset to a cold buffer for your newly restored database, you would want to add a CHECKPOINT before you drop clean buffers. I cannot envision a need to do any of this, but am curious now. Was adding those to the script a response to a specific condition you wanted to address?

    As an aside (nothing to do with functionality, just best practices), I would swap out the compatibility views (e.g. sysobjects and sysusers) and system procs (e.g. sp_dbcmptlevel and sp_changedbowner) for the current catalog views (e.g. sys.objects and sys.database_principals) and ALTER DATABASE syntax.

    There are no special teachers of virtue, because virtue is taught by the whole community.
    --Plato

    Also note that EXEC dbo.sp_dbcmptlevel is deprecated; use ALTER DATABASE ... SET COMPATIBILITY_LEVEL instead.
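    For example, the deprecated call in the script above could be replaced with something like this (the database name is a placeholder):

    ALTER DATABASE YourDatabaseName SET COMPATIBILITY_LEVEL = 90; -- or 100 for SQL 2008 / 2008 R2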

    I'd add updating the non-index statistics to that; currently it only updates the stats on the indexes that get rebuilt.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • GilaMonster (7/17/2012)


    I'd add updating the non-index statistics to that, currently it only updates the stats on the indexes that get rebuilt.

    I noticed that too and forgot to mention it. I provided a call to the undocumented 'for each table' proc for that in an earlier post.

    There are no special teachers of virtue, because virtue is taught by the whole community.
    --Plato

  • opc.three (7/17/2012)


    I like that you set the DB to SIMPLE recovery before rebuilding indexes, but I am wondering why you are freeing the proc cache and dropping clean buffers? If you're restoring to an instance with live databases on it, that can be detrimental to performance. If you were trying for a complete reset to a cold buffer for your newly restored database, you would want to add a CHECKPOINT before you drop clean buffers. I cannot envision a need to do any of this, but am curious now. Was adding those to the script a response to a specific condition you wanted to address?

    As an aside (nothing to do with functionality, just best practices), I would swap out the compatibility views (e.g. sysobjects and sysusers) and system procs (e.g. sp_dbcmptlevel and sp_changedbowner) for the current catalog views (e.g. sys.objects and sys.database_principals) and ALTER DATABASE syntax.

    When the script was written some time ago (for a 2000 to 2005 upgrade), it was going from server A to server B.

    I had gotten the recommendation when I made a similar post about my first upgrade from 2000 to 2005...

    http://qa.sqlservercentral.com/Forums/Topic777615-146-1.aspx#top

    At the time it made sense, and I didn't think it would hurt anything since it was a backup, move, restore operation to a totally new server (never used before the cutover).

    I suppose from a different angle that could be viewed as a terrible step if you are simply moving one database over to an existing production server, so I'm glad you pointed that out.

    I hadn't known about the CHECKPOINT recommendation; after looking it up, I understand why you posted that as well:

    http://beyondrelational.com/modules/2/blogs/77/posts/11359/dbcc-dropcleanbuffers-needs-to-be-preceded-by-a-checkpoint-to-remove-all-buffers.aspx

    Thanks for the insight and discussion.
