Does a full backup clear out transaction logs?

  • I don't seem to be able to find a reliable source of info on this!  I am implementing log shipping from the primary server to a secondary.  We have a nightly differential backup through Legato and a full backup twice a week on the primary server.

    I am wondering: when that differential or full backup happens, what happens to the transaction logs?  Will I still be able to ship the log over to the secondary server, capturing everything in between?  My fear is that a full or differential backup might clear out my transaction logs... if that is the case, my log shipping will break.

    Can any experienced DBA who has done this shed some light?  I guess I will also know tomorrow... if my log shipping breaks!

    thanks in advance,

    JON

    SQL DBA-- casecentral.com

    DBA/OLAP Developer -
    San Francisco, CA

  • No.

  • The only ways to clear out (truncate) the transaction log are:

    1) transaction log backup

    2) BACKUP LOG ... WITH NO_LOG | TRUNCATE_ONLY:

    NO_LOG | TRUNCATE_ONLY

    Removes the inactive part of the log without making a backup copy of it and truncates the log. This option frees space. Specifying a backup device is unnecessary because the log backup is not saved. NO_LOG and TRUNCATE_ONLY are synonyms.

    After backing up the log using either NO_LOG or TRUNCATE_ONLY, the changes recorded in the log are not recoverable. For recovery purposes, immediately execute BACKUP DATABASE.

    See BOL: Backup Log for more info.
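    For reference, a minimal sketch of the two statements listed above (the database name and backup path are placeholders):

    -- 1) A routine transaction log backup: truncates the inactive part of the log
    --    but keeps a copy, so the log chain (and therefore log shipping) stays intact.
    BACKUP LOG YourUserDb
    TO DISK = 'E:\LogBackups\YourUserDb_log.trn'

    -- 2) Truncate without keeping a copy (SQL Server 2000 syntax).
    --    This breaks the log chain; take a full database backup immediately afterwards.
    BACKUP LOG YourUserDb WITH TRUNCATE_ONLY

    Note that running either statement outside of the log shipping jobs will disrupt the chain of log backups the secondary relies on.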

  • Thank you, thank you!!! Indeed it doesn't clear out the log, and I was able to log ship the transactions to the secondary server and apply the logs successfully.

    Nevertheless, I have a new problem... In the Log Shipping wizard I chose the "terminate users while loading" option and the "standby" option for the log shipping jobs... My restore keeps failing while there are users connected to the read-only database.  I want to keep my standby database read-only (STANDBY), not NORECOVERY.

    Any idea what's going on?  Why doesn't the log shipping maintenance plan work?
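    For context, a rough sketch of the restore statement the standby plan runs on the secondary (the database and file names are placeholders); the restore needs exclusive access to the database, which is why connected read-only users make it fail:

    RESTORE LOG YourUserDb
    FROM DISK = 'D:\LogShip\YourUserDb_tlog_200301010200.trn'
    WITH STANDBY = 'D:\MSSQL\Data\YourUserDb_undo.dat'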

    DBA/OLAP Developer -
    San Francisco, CA

  • I had to incorporate a step that killed all of the users of the reporting database before applying the log. Here's a copy of the sp that I used (created in master).

    create proc sp_kill
      @dbname   varchar(100) = null,  -- When specified, kills all spids inside of the database
      @loginame varchar(50)  = null,  -- When specified, kills all spids under the login name
      @hostname varchar(50)  = null   -- When specified, kills all spids originating from the host machine
    as
    begin
      set nocount on

      -- Snapshot the current connections
      select spid, db_name(dbid) as 'db_name', loginame, hostname
      into #tb1_sysprocesses
      from master.dbo.sysprocesses (nolock)

      declare @total_logins int, @csr_spid varchar(100)
      set @total_logins = (select count(distinct spid) from #tb1_sysprocesses)

      if @dbname is null
      begin
        if @loginame is null
        begin
          if @hostname is null
          begin
            -- No filters: kill every spid except sa and the current connection
            if @total_logins > 0
            begin
              declare csr_spid cursor fast_forward for
                select distinct spid from #tb1_sysprocesses
                where loginame <> 'sa' and spid <> @@spid

              open csr_spid
              fetch next from csr_spid into @csr_spid
              while @@fetch_status = 0
              begin
                exec ('kill ' + @csr_spid)
                fetch next from csr_spid into @csr_spid
              end
              close csr_spid
              deallocate csr_spid
            end
          end
          else
          begin
            -- Kill spids originating from the given host machine
            if @total_logins > 0
            begin
              declare csr_spid cursor fast_forward for
                select distinct spid from #tb1_sysprocesses
                where hostname = @hostname and loginame <> 'sa' and spid <> @@spid

              open csr_spid
              fetch next from csr_spid into @csr_spid
              while @@fetch_status = 0
              begin
                exec ('kill ' + @csr_spid)
                fetch next from csr_spid into @csr_spid
              end
              close csr_spid
              deallocate csr_spid
            end
          end
        end
        else
        begin
          -- Kill spids belonging to the given login name
          if @total_logins > 0
          begin
            declare csr_spid cursor fast_forward for
              select distinct spid from #tb1_sysprocesses
              where loginame = @loginame and loginame <> 'sa' and spid <> @@spid

            open csr_spid
            fetch next from csr_spid into @csr_spid
            while @@fetch_status = 0
            begin
              exec ('kill ' + @csr_spid)
              fetch next from csr_spid into @csr_spid
            end
            close csr_spid
            deallocate csr_spid
          end
        end
      end
      else
      begin
        -- Kill spids connected to the given database
        if @total_logins > 0
        begin
          declare csr_spid cursor fast_forward for
            select distinct spid from #tb1_sysprocesses
            where [db_name] = @dbname and loginame <> 'sa' and spid <> @@spid

          open csr_spid
          fetch next from csr_spid into @csr_spid
          while @@fetch_status = 0
          begin
            exec ('kill ' + @csr_spid)
            fetch next from csr_spid into @csr_spid
          end
          close csr_spid
          deallocate csr_spid
        end
      end

      drop table #tb1_sysprocesses
    end
    GO
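    As a usage sketch, the restore job can call it as its first step (the database name here is a placeholder):

    -- Kill every connection in the standby/reporting database before the RESTORE LOG step
    EXEC master.dbo.sp_kill @dbname = 'YourReportingDb'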

  • Thanks,

    I guess there is a bug in the wizard, since we have to add this additional step to our restore job ourselves.  I've got a better script for you!

     

    CREATE PROCEDURE sp_dba_Kill_users
    AS

    DECLARE @spid smallint
    DECLARE @text varchar(255)

    -- Collect the spids currently connected to the target database
    CREATE TABLE #spid (spid smallint NULL)

    INSERT #spid
    SELECT spid
    FROM master.dbo.sysprocesses
    WHERE dbid IN (SELECT dbid FROM master.dbo.sysdatabases WHERE name LIKE 'your user DB name')

    DECLARE c_spid CURSOR FOR
        SELECT spid
          FROM #spid
      ORDER BY spid
    FOR READ ONLY

    OPEN c_spid
    FETCH c_spid INTO @spid
    WHILE (@@FETCH_STATUS <> -1)
    BEGIN
        IF (@@FETCH_STATUS <> -2)
        BEGIN
            -- varchar(10) so larger spid values are not truncated
            SELECT @text = 'KILL ' + CONVERT(varchar(10), @spid)
            EXECUTE(@text)
        END
        FETCH c_spid INTO @spid
    END
    CLOSE c_spid
    DEALLOCATE c_spid

    DROP TABLE #spid
    GO
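    Assuming the procedure is created in master like the one above (and with the placeholder database name filled in), it can be run as an extra Transact-SQL step at the start of the log shipping restore job:

    -- Clear out reporting users so the RESTORE LOG ... WITH STANDBY step can get exclusive access
    EXEC master.dbo.sp_dba_Kill_users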

    DBA/OLAP Developer -
    San Francisco, CA

  • I don't believe it's a 'bug'; I believe it's intended to be that way. Normally a 'standby' server isn't being used. It's kept up to date in case the primary server crashes or is otherwise unusable; then the 'standby' server can be used.

    In other words, everyone should be using the primary server. Some should have write access and some should only have read access. The standby server shouldn't be used until it needs to become the primary server.

    -SQLBill

  • Understood, but then why do they have a "terminate users" option for the standby mode?

    Many technical papers seem to suggest that even with a read-only copy, users will get booted off if you have the "terminate users" option selected.

    Check it out!  It's annoying to have to do the extra step, and it's confusing as well. If it was never meant to work, MS should take that option off... then I'd just add my extra step myself.

    BTW, I find it extremely useful to have a read-only copy of production for reporting purposes.

    We are migrating all our reporting jobs to this standby server.  We log ship every 2 hours.  None of our reports run longer than 1 hour anyway...

    --JON

    DBA/OLAP Developer -
    San Francisco, CA

  • Personally, I found it a great solution for keeping a reporting server relatively up to date. I just had to kill the reporting (read-only) users on a published timeline. Our production server, a 4-way Compaq DL760 (at the time) with 4 GB of RAM, was straining under the load of both production and reporting.

    And before someone pipes up and advocates performance tuning, etc. - we were exporting SAP data from a TRU64/Oracle system to a SQL Server for feeding into several other systems, including a data warehouse. Queries were very involved and there wasn't much we could do. So we offloaded ad-hoc reporting people to another SQL Server and they had to put up with getting kicked out twice a day.

    You can do a lot more than warm standby with Log Shipping.

  • I agree with ChrisMoix.  Log shipping is about more than having a warm standby... Our read-only reporting jobs were taking resources away from our production box.  Lots of our analysts also run ad-hoc queries on production during the day.  They do not care about the two-hour data delay.  So we found it extremely useful to have a standby reporting server by implementing log shipping.

    Everything works perfectly. There is just one thing... WHY doesn't the "terminate users" option work?  I had to add my own extra step to kill users in my restore job...

    JON

    DBA Casecentral.com

    DBA/OLAP Developer -
    San Francisco, CA
