impact of database file split on log shipping

  • We did a minor database split a few months ago, and the log shipping standby box (which only has a large C drive) wound up with the new data files all on the C drive. The production database has one MDF and 4 NDFs as a result of the split.

    Next up is a much larger split involving new LUNs on our SAN, and I'm wondering how those transactions will play out on the log shipping SQL Server.

  • I didn't understand your question; can you explain in more detail?

  • Normally with log shipping, every transaction that occurs on the primary server/database is executed on the log shipping standby copy of that database. It's similar to replication or database mirroring, but log shipping works by restoring transaction log backups.

    So if your standby server has a completely different disk drive arrangement, I'm wondering how well (or how identically) these transactions execute on the standby, which in our case has only one drive, not a SAN.

  • If I understood you correctly, your standby server has only one drive. In that case, before configuring your databases for log shipping, take a full backup of the production database and restore it WITH NORECOVERY, using the MOVE option to place all data and log files on the C: drive. Once the restore is done, you can configure log shipping.

    Drive layout is not that important with regard to log shipping.
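
    Something along these lines, for example (just a sketch; the backup path and logical file names are placeholders, and RESTORE FILELISTONLY shows the real logical names in your backup):

    -- Check the logical file names inside the backup first.
    RESTORE FILELISTONLY FROM DISK = N'\\BackupServer\Share\TestDatabase_full.bak';

    -- Restore on the standby, relocating every file to the C: drive, and leave the
    -- database in NORECOVERY so log shipping can keep applying log backups.
    RESTORE DATABASE TestDatabase
    FROM DISK = N'\\BackupServer\Share\TestDatabase_full.bak'
    WITH MOVE N'TestDatabase'     TO N'C:\Data\TestDatabase.mdf',
         MOVE N'TestDatabase_log' TO N'C:\Data\TestDatabase_log.ldf',
         NORECOVERY, STATS = 10;
    GO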

  • Log shipping has been running in this environment for years. I'm just wondering what happens to the database on the log shipping SQL Server when major file split transactions are executed on the primary, given that the standby only has one drive.

  • How did you split your database files on your primary server?

  • Starting with just one MDF, we added four file groups and partitioned out the data based on client type with one new NDF file in each of the new file groups.

    Scripts were similar to this:

    ALTER DATABASE TestDatabase ADD FILEGROUP [CompanyAFileGroup]
    GO
    ALTER DATABASE TestDatabase ADD FILE ( NAME = N'CompanyAData', FILENAME = N'K:\Data\TestDatabase_CompanyAData.ndf', SIZE = 2048KB, FILEGROWTH = 1024KB ) TO FILEGROUP [CompanyAFileGroup]
    GO
    ALTER DATABASE TestDatabase ADD FILEGROUP [CompanyBFileGroup]
    GO
    ALTER DATABASE TestDatabase ADD FILE ( NAME = N'CompanyBData', FILENAME = N'K:\Data\TestDatabase_CompanyBData.ndf', SIZE = 2048KB, FILEGROWTH = 1024KB ) TO FILEGROUP [CompanyBFileGroup]
    GO

    USE TestDatabase
    GO

    -- create the partition function
    CREATE PARTITION FUNCTION [OrgDataPartitionFunction](INT)
    AS RANGE LEFT FOR VALUES
    (
        -- CompanyA, CompanyB
        1, 10
    )
    GO

    -- create the partition scheme
    CREATE PARTITION SCHEME [OrgDataPartition]
    AS PARTITION [OrgDataPartitionFunction] TO ([CompanyAFileGroup], [CompanyBFileGroup], [PRIMARY])
    GO
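
    After that, tables get placed on the scheme via the partitioning column, along these lines (a made-up table just to illustrate; our real tables are different):

    -- Hypothetical table on the partition scheme; rows land in CompanyAFileGroup,
    -- CompanyBFileGroup, or PRIMARY depending on ClientTypeId (RANGE LEFT on 1, 10).
    CREATE TABLE dbo.ClientOrders
    (
        ClientTypeId INT      NOT NULL,
        OrderId      INT      NOT NULL,
        OrderDate    DATETIME NOT NULL
    ) ON OrgDataPartition (ClientTypeId)
    GO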

  • Indianrock (8/12/2009)


    Starting with just one MDF, we added four file groups and partitioned out the data based on client type with one new NDF file in each of the new file groups.

    You configured log shipping when the database had only one MDF file; now you have modified the database by adding a couple of data files in new filegroups.

    In order for log shipping not to go out of sync, find out which transaction log backup file captured these changes, and apply that transaction log with the MOVE option to relocate the new data files that were added on the primary server.

    Once you do this, log shipping will not have any issues. Otherwise the restore job will fail on that transaction log backup file, because the file path recorded in the backup does not exist on the secondary.

    If you still have issues, it is better to take a full backup and restore the full database in NORECOVERY mode. That will solve your issue.
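
    For example, something like this can help pin down that log backup (just a sketch; the database name, cut-off date, and file path are placeholders):

    -- On the primary: list transaction log backups taken since the file split.
    SELECT bs.backup_start_date,
           bs.backup_finish_date,
           bmf.physical_device_name
    FROM msdb.dbo.backupset AS bs
    JOIN msdb.dbo.backupmediafamily AS bmf
        ON bmf.media_set_id = bs.media_set_id
    WHERE bs.database_name = N'TestDatabase'
      AND bs.type = 'L'                        -- log backups only
      AND bs.backup_start_date >= '20090801'   -- around the time of the split
    ORDER BY bs.backup_start_date;

    -- On the secondary: confirm the new files are listed in that backup before restoring it.
    RESTORE FILELISTONLY FROM DISK = N'C:\LSCopy\TestDatabase_tlog.trn';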

  • Sorry. Now I recall that we actually "killed" log shipping during the file split in May, because with the primary in full recovery mode it was bogging down the server. Log shipping was then reinstated by restoring a full backup, etc.

    The restore of the full backup after the split created the new data files on the single C drive of the log shipping server. So now I'm wondering how a much more extensive split onto multiple LUNs/drives will establish itself on a single-drive standby server using the same approach, a restore of a full backup.

    My guess, though, is that if the split had not bogged down our primary so badly and we had left it in full recovery mode, log shipping would have executed the transactions on the standby without a problem.
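
    On the standby, something like this confirms where the restore actually put each file (a quick sketch, using the TestDatabase example name from earlier in the thread):

    -- File layout of the standby copy as SQL Server sees it (sizes converted from 8 KB pages to MB).
    SELECT name, physical_name, type_desc, size * 8 / 1024 AS size_mb
    FROM sys.master_files
    WHERE database_id = DB_ID(N'TestDatabase');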

  • Agree with murthykalyani. With the data files being changed, the LS-Restore job will fail if there is no adjustment on the secondary server.

    Specifically, I would do the following steps:

    1. Before doing the data file split/addition, disable all the log shipping jobs (LS-Backup, LS-Copy, and LS-Restore);
    2. Manually run the LS-Backup job;
    3. When 2 is done, manually run the LS-Copy job;
    4. When 3 is done, manually run the LS-Restore job;
    5. When 4 is done, do the data file split/addition;
    6. When 5 is done, manually start LS-Backup;
    7. Manually start LS-Copy;
    8. Locate the last copied transaction log backup on the share or in the secondary server's local directory;
    9. In the master database on the secondary server, run a RESTORE LOG statement for that log file, using MOVE to map the database files (including the newly added ones) to their new locations on your secondary server, and make sure to add NORECOVERY (see the sketch after these steps);
    10. Manually run the LS-Restore job (it should give no error);
    11. Enable all the LS jobs.

    ***Please test before applying to PROD.***
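
    For step 9, the statement would look roughly like this (a sketch only, reusing the example logical file names from earlier in the thread and a hypothetical copy folder; note that on a RESTORE LOG, MOVE applies to files added during the interval covered by that log backup):

    -- Sketch of step 9: apply the log backup that contains the file additions,
    -- relocating the new NDF files onto the standby's single C: drive.
    RESTORE LOG TestDatabase
    FROM DISK = N'C:\LSCopy\TestDatabase_tlog.trn'   -- hypothetical copied log backup
    WITH MOVE N'CompanyAData' TO N'C:\Data\TestDatabase_CompanyAData.ndf',
         MOVE N'CompanyBData' TO N'C:\Data\TestDatabase_CompanyBData.ndf',
         NORECOVERY;
    GO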
