Transactional Replication through the Internet

  • Hi,

    I’m currently trying to find the best solution for my problem. All help is welcome.

    To begin, let me describe our setup:

    We have two servers: a production server and a reporting server. The reporting server holds a replica of the production server. The reporting server never updates the data that comes from production, so this is one-way replication. The reporting server is updated once a day.

    The production database is about 1.4 GB. The replicated tables amount to several hundred MB.

    In my view, transactional replication is the right choice. Snapshot replication seems impractical (several hundred MB per snapshot), but maybe I’m wrong? That’s why we currently run transactional replication with a Log Reader Agent. The Log Reader is launched only once a day.

    Questions:

    Which kind of replication is best for us?

    How should the Log Reader be scheduled? Always running, or once a day but always before the subscriber tries to synchronize? Please give me some details.

    The second point concerns FTP. The publication allows anonymous subscribers, and the compressed initial snapshot sits on an FTP server. No problem with that.

    Questions:

    How does SQL Server handle the daily differences with a Log Reader? I suppose nothing changes if the replication folder is on an FTP server?

    With a Log Reader, the subscriber must be connected to the publisher (i.e. reporting must be connected to production). Is that correct?

    I suppose it’s not possible to deliver the daily updates through FTP alone?

    I also suppose the transfer happens directly between the servers. Could you give me more information about how it works? I’m especially interested in the data exchanged between the servers.

    Now I will add more complexity. We are trying to find out whether it’s possible to work in a “disconnected mode”. Ultimately, we want to know whether this can work over a 128 kbit/s connection that is only available at night for transferring the modified production data.

    With snapshot replication the problem is simpler, but transferring the snapshot would take too long, I think.

    With transactional replication, the problem is connectivity between the servers. For example, the configuration described above runs on a 100 Mbit/s LAN and takes about 10 minutes each day; over a 128 kbit/s connection that is impossible. For more detail, here are our current stats on the 100 Mbit/s LAN:

    Transactions   Commands   Duration   Delivery Rate (cmds/sec)   Latency (ms)
    26065          32299      02:09      412.614                    68409190
    63872          75238      05:11      1421.92                    34453930
    82182          121910     10:24      393.438                    34674115
    41142          49324      02:39      512.526                    34079211
    461            643        00:07      153.009                    64706982
    109505         261296     19:04      295.161                    35002519
    17833          22849      01:24      688.943                    33590065
    16107          41946      02:55      402.23                     37634394
    14469          17881      01:20      1290.98                    36036003

    To me, replication over a 128 kbit/s connection looks impossible, and so does transferring the daily updates as a compressed file over FTP.
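    To sanity-check that feeling, here is a back-of-envelope transfer-time calculation in T-SQL. The 50 MB daily delta is an assumed figure for illustration only; measure your real one from the distribution database or the compressed snapshot size.

    ```sql
    -- Estimate transfer time for a daily delta over the two links.
    -- @delta_mb = 50 is an assumption, not a measured value.
    DECLARE @delta_mb     float = 50;
    DECLARE @payload_bits float = @delta_mb * 1024 * 1024 * 8;

    SELECT
        @payload_bits / (100000 * 1000)        AS seconds_on_100mbit_lan, -- ~4 s
        @payload_bits / (128 * 1000) / 3600.0  AS hours_on_128k_link;     -- ~0.9 h
    ```

    Even under this modest assumption, 50 MB takes close to an hour over 128 kbit/s, and a several-hundred-MB snapshot would take many hours, so the concern above seems justified.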

    I know it must be possible somehow, but not with the standard tools. What do you think? How would you do it?

    Maybe we can ask for a better connection than 128 kbit/s. But do we need a permanent connection? I know a VPN is slower because of the encryption, and maybe everything exchanged between the servers is simply too big.

    Thanks for your help.

    Jérôme

  • Lots of questions! I think you’re mistaken about how the Log Reader works and about the use of FTP.

    The decision to use transactional replication is fine; remember that it replicates incremental changes made at the publisher database. The Log Reader’s job is to scan the transaction log of the publisher and copy every transaction marked for replication to the distribution database. Transactions that affect the tables (articles) in your publication travel to the distribution database once the Log Reader reads them from the t-log. The Distribution Agent then delivers all the transactions waiting in the distribution database to your reporting server (the subscriber).

    Leaving the Log Reader running continuously is not a problem: if it finds no transactions, it simply sits idle. It also depends on how much activity your database has. The Log Reader is independent; it has nothing to do with your reporting server. The Distribution Agent, however, does depend on the reporting server, because it must connect to it to deliver the transactions. I would leave the Log Reader running continuously and run the Distribution Agent throughout the day at intervals of a couple of hours. But it all depends on your setup.
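    As a concrete sketch of driving those agents from T-SQL: the publication and job names below are placeholders, since SQL Server generates the real Distribution Agent job names for you (look them up in msdb.dbo.sysjobs or Replication Monitor).

    ```sql
    -- On the publisher: check Log Reader progress
    -- (replicated transactions and latency per published database).
    EXEC sp_replcounters;

    -- On the distributor: start the Distribution Agent job on demand,
    -- e.g. once the night-time link is up. Job name is hypothetical.
    EXEC msdb.dbo.sp_start_job
        @job_name = N'PRODSRV-ProductionDB-ReportingPub-REPORTSRV-1';
    ```

    Running the Distribution Agent as a job you start on demand is exactly what makes the scheduled, intermittent delivery discussed here possible.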

    Now, FTP! The FTP folder is only used during the initial synchronization with your server, when the snapshot is applied (which is needed to start transactional replication). SQL Server saves the snapshot .cab archive (if compression is enabled) in an FTP folder so that subscribers can download the snapshot and start synchronizing. But that is the only time you will use it.
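    For reference, the FTP snapshot settings live on the publication itself and can be set with sp_changepublication; the publication name and FTP address below are placeholders for illustration.

    ```sql
    -- Enable internet/FTP snapshot delivery for a publication.
    -- All values are placeholders; substitute your own.
    EXEC sp_changepublication
        @publication = N'ReportingPub',
        @property    = N'enabled_for_internet',
        @value       = N'true';

    EXEC sp_changepublication
        @publication = N'ReportingPub',
        @property    = N'ftp_address',
        @value       = N'ftp.example.com';
    ```

    Once the subscriber has applied that initial snapshot, the FTP folder plays no further role in daily delivery.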

    You can stay disconnected while the Log Reader is running, and once you connect, run the Distribution Agent. Remember that it is a SQL Server Agent job, so when you connect you can start that job; the Distribution Agent will check the distribution database for all pending transactions and deliver them to the subscriber. So yes, you can use a “disconnected mode”.
