Detaching LUNs from a failed active SQL Server to the passive server

  • We are planning on setting up a pseudo SQL Server 2005 clustered environment with one active server and an older server acting as the passive server. We are going to have a LUN attached to the main server that holds the log and DB files. If the server's motherboard failed, or something else along those lines happened, could we simply detach the LUN from the active (failed) server and reattach it to the passive server?

  • If you're setting up a cluster and the main HDD on the active node fails, it will automatically fail over to the passive node, because the node will fail the heartbeat check. You don't have to do it manually...

    Thanks.

    [font="Arial"]---

    Mohit K. Gupta, MCITP: Database Administrator (2005), My Blog, Twitter: @SQLCAN[/url].
    Microsoft FTE - SQL Server PFE

    * Some time its the search that counts, not the finding...
    * I didn't think so, but if I was wrong, I was wrong. I'd rather do something, and make a mistake than be frightened and be doing nothing. :smooooth:[/font]

    How to ask for help .. Read Best Practices here[/url].

  • I cannot see any point in doing that; you may as well do it properly and set up a cluster.

    If you don't want the cost of a cluster, then investigate mirroring or log shipping for your high-availability solution.

    Your suggestion might well not work: the databases would have to be attached to the other SQL instance, and as they were not cleanly detached, the attach has a chance of failing.
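
    For illustration, this is roughly what the re-attach would look like on the passive instance. A sketch only, with a hypothetical database name and file paths; note that FOR ATTACH_REBUILD_LOG is no escape hatch here, because it only works when the database was shut down cleanly, which is exactly what you cannot guarantee after a failure:

        -- Sketch only: hypothetical database name and file paths.
        -- Attach the data and log files exactly as they sat on the failed server.
        CREATE DATABASE SalesDB
        ON (FILENAME = 'E:\SQLData\SalesDB.mdf'),
           (FILENAME = 'F:\SQLLogs\SalesDB_log.ldf')
        FOR ATTACH;

        -- FOR ATTACH_REBUILD_LOG can rebuild a missing log file, but only
        -- when the database was cleanly shut down beforehand.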


  • A true cluster is the right way to go.

    However, if you REALLY MUST pursue this, I would move all your system DBs to the LUNs you will detach/attach.

    Make sure the passive server is set up to use IDENTICAL locations for the system DBs as the active. When you fail over from active to passive, you must shut down SQL on the passive, rename the existing LUN/drive containing the passive system DBs, then add the LUN from the active server under the correct LUN/drive letter. Start SQL and it should come up. But there are many variables in play, which is why a real cluster is the better way to go.
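
    To illustrate, relocating the system databases is mostly ALTER DATABASE work. A rough sketch, assuming a hypothetical S: drive letter for the shared LUN (master is the exception, since its location lives in the instance startup parameters):

        -- Rough sketch: 'S:' is a hypothetical drive letter for the shared LUN.
        -- msdb shown; model and tempdb follow the same pattern.
        ALTER DATABASE msdb
        MODIFY FILE (NAME = MSDBData, FILENAME = 'S:\SystemDBs\MSDBData.mdf');
        ALTER DATABASE msdb
        MODIFY FILE (NAME = MSDBLog, FILENAME = 'S:\SystemDBs\MSDBLog.ldf');
        -- Then stop SQL Server, move the physical files, and restart.
        -- master cannot be moved this way: change the -d and -l startup
        -- parameters in SQL Server Configuration Manager instead.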

    Be aware that you must keep SQL patch levels the same, and make sure the old server can handle any SQL options you have set for hardware resources. Also, can the old server handle the expected load?
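
    A quick way to verify the patch levels match is to run the same check on both servers and compare:

        -- Run on both the active and the passive; the outputs must match.
        SELECT SERVERPROPERTY('ProductVersion') AS Build,        -- e.g. 9.00.3042.00
               SERVERPROPERTY('ProductLevel')   AS ServicePack,  -- e.g. SP2
               SERVERPROPERTY('Edition')        AS Edition;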

  • My old place used this approach, and it worked fine.

    The full setup they had was:

    1) Server 1 in DC1 in UK, all drives attached to SAN1

    2) SAN 1 replicating SQL Server LUNs to SAN 2 in DC2 in US

    3) Server2 in DC2 in US.

    When they wanted to do a failover, the (simplified) process was:

    a) Shut down SQL on Server1

    b) Set SQL drives offline to Server 1 (drives must be BASIC, not DYNAMIC!!)

    c) Stop SAN replication UK to US

    d) Change DNS aliases to point to Server2

    e) Set SQL drives online to Server2 in US

    f) Start SQL on Server2 in US

    g) Perform tests that applications can access DBs OK (see the sketch below)

    h) Start SAN replication US to UK

    All of this was scripted, and a failover normally completed in 15 minutes. Both data centers were treated as peers, with a mandatory failover each month.
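
    For step (g), the database-side part of the test can be as simple as checking that nothing is left in recovery; a minimal sketch (the application-level tests sit on top of this):

        -- Any database still listed here after startup needs investigation.
        SELECT name, state_desc
        FROM sys.databases
        WHERE state_desc <> 'ONLINE';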


  • EdVassie (3/16/2009)


    My old place used this approach, and it worked fine. ... All of this was scripted, and a failover normally completed in 15 minutes. Both data centers were treated as peers, with a mandatory failover each month.

    Ed, hello again. The process above is very much what we are now looking to put into place to give us a fast failover (and failback), and DR capability on the SAN as well (using SRDF).

    The area that concerns me is how to handle patching SQL in this environment. ALL the databases are on SAN-attached drives, but the C drive is local, and obviously a SQL patch can make changes to executables and the registry on C as well as to the system databases. So how do you handle an upgrade to keep both servers in line?


    It is worth looking at what patches are being applied. You can normally patch Windows on a server-by-server basis without risk in this type of environment.

    When you get to SQL Server patches, they typically affect system databases more than user DBs. This means you can patch each server at a separate time and do stand-up tests on each server before moving on. Even so, you probably want to do them all in the same maintenance window.

    If you are worried about the impact of the patches stopping you working, this is what pre-production environments were invented for. You can never eliminate risk but you can minimise it by testing the patches in a near-identical non production environment. My old place had their pre-prod environment split between UK and US just like production.

    You can also look at getting the system disks on to the SAN. It is easier to take and roll back a LUN image than it is to deal with a local disk.

    If rolling upgrades are a must-have, then the solution is to use a cluster. If you can afford an hour or two of outage at patch time, then you can use the technique in your OP.


  • Ed,

    I may not have worded the question very well. My only concern in this type of environment is that, having applied a SQL upgrade to the current live box, the system DB changes would be replicated to the standby server but the .exe, .dll, registry etc. changes would not. So the standby would be in a half-upgraded state, and we would have problems properly applying the upgrade to it.

    So I may be overthinking this, but at the current time I am not sure of the best way to handle it:

    a) stop the SAN replication, apply upgrade to both boxes, restart replication

    b) do not replicate the disk with the system DBs; keep them in line as you would for, say, log shipping or mirroring

    c) put the C drive on the SAN

    d) cut an image of the C drive to tape and copy it over the C drive of the standby server (automate this to happen nightly and on demand)

    Right now I favour d) as the best way forward (least change from where we are now).

    regards

    george


  • At my old place the system DBs were on different disks to the user DBs. The user DB disks were replicated but the system DB disks were not.

    Each server had its own system DBs, which allowed patching to be done independently on each server. If you try to share system DBs between a primary and secondary server, this adds a lot of complexity and is best avoided.


  • EdVassie (3/16/2009)


    At my old place the system DBs were on different disks to the user DBs. The user DB disks were replicated but the system DB disks were not.

    Each server had its own system DBs, which allowed patching to be done independently on each server. If you try to share system DBs between a primary and secondary server, this adds a lot of complexity and is best avoided.

    Cheers for the input, Ed.


  • Thanks for your responses.

    I have been pushing for a clustered environment, but our VP doesn't want to set up a true clustered environment because of our current environment and the associated costs.

    We have set up the passive server to be identical to the active server. It will be able to handle the workload in case of a failover.

    I'm still worried that the databases won't detach cleanly.

  • How many databases? Mirroring could be seen as a poor man's cluster (no insult intended!)
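
    For what it's worth, once the mirroring endpoints exist on both instances, each database only needs a couple of statements. A minimal sketch, assuming a hypothetical SalesDB already restored on the mirror WITH NORECOVERY, hypothetical server names, and endpoints listening on port 5022:

        -- Minimal sketch: SalesDB, server names and port 5022 are hypothetical.
        -- On the mirror first (database restored WITH NORECOVERY):
        ALTER DATABASE SalesDB
        SET PARTNER = 'TCP://activeserver.example.com:5022';

        -- Then on the principal (the current active server):
        ALTER DATABASE SalesDB
        SET PARTNER = 'TCP://passiveserver.example.com:5022';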


  • We currently have around 20 databases on our Active server.

  • That's rather a lot for mirroring. Doable with log shipping, though.

    Both mirroring and log shipping have the advantage of duplicating the disk storage, improving resilience there.
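
    At its core, log shipping is just a scheduled backup/copy/restore cycle per database. A minimal sketch, with a hypothetical database name and file share:

        -- Sketch: database name and share are hypothetical.
        -- On the primary, on a schedule:
        BACKUP LOG SalesDB
        TO DISK = '\\standby\LogShip\SalesDB.trn';

        -- On the standby, restore each backup, staying ready for more logs:
        RESTORE LOG SalesDB
        FROM DISK = '\\standby\LogShip\SalesDB.trn'
        WITH NORECOVERY;

        -- At failover time, bring the standby copy online:
        -- RESTORE DATABASE SalesDB WITH RECOVERY;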

    If you must persist with this, my feeling is it would work after a clean shutdown, but not necessarily if the server had crashed due to some fault: the databases would need to go through recovery when restarted, so a re-attach to a different server might fail. As suggested previously, this would mean putting your system DBs on the LUNs as well, to give a consistent picture when the LUNs are re-attached. But that gives you a problem keeping both servers patched to the same level, so you either need to detach ALL drives or have a way to keep the local drives in sync as well.

    As you can see, it's getting complicated.


  • george sibbald (3/16/2009)


    That's rather a lot for mirroring. Doable with log shipping, though.

    I have 30-40 databases running on my SQL Server 2005 server with no issues; have you seen any problems with a high number of databases in a mirrored state?
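
    For reference, a quick way to see every mirrored database and its state on an instance (a sketch using the SQL 2005 catalog views):

        -- Lists every database with a mirroring session and its current state.
        SELECT d.name, m.mirroring_role_desc, m.mirroring_state_desc
        FROM sys.database_mirroring AS m
        JOIN sys.databases AS d
          ON m.database_id = d.database_id
        WHERE m.mirroring_guid IS NOT NULL;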

    Mohit.

    [font="Arial"]---

    Mohit K. Gupta, MCITP: Database Administrator (2005), My Blog, Twitter: @SQLCAN[/url].
    Microsoft FTE - SQL Server PFE

    * Some time its the search that counts, not the finding...
    * I didn't think so, but if I was wrong, I was wrong. I'd rather do something, and make a mistake than be frightened and be doing nothing. :smooooth:[/font]

    How to ask for help .. Read Best Practices here[/url].
