Transaction log will not shrink

  • I do a full backup and then a transaction log backup and then use:

    USE DWH

    GO

    DBCC SHRINKFILE (2,TRUNCATEONLY)

    GO

    I checked sysfiles for the number identifying the log file I want to shrink, and as you can see from the message below, it has picked the correct file (DWH_Log).

    Is there anything else I need to do to get it to shrink? I run a script to check the size and % used of the log files, and it shows that the % used of the log file decreases, but the log file will not shrink.

    This is the message I get:

    Executed as user: *******\Administrator. Cannot shrink log file 2 (DWH_Log) because all logical log files are in use. [SQLSTATE 01000] (Message 9008)  DBCC execution completed. If DBCC printed error messages, contact your system administrator. [SQLSTATE 01000] (Message 2528).  The step succeeded.


    Any help would be appreciated.
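
    (For reference, the poster's monitoring script isn't shown; one common way to check log size and % used, and to confirm which fileid belongs to the log, is:)

    DBCC SQLPERF(LOGSPACE)

    GO

    -- fileid 2 should correspond to the DWH_Log entry here

    SELECT fileid, name, filename FROM DWH..sysfiles

    GO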

  • Have you tried using DBCC SHRINKFILE with a target size instead of TRUNCATEONLY?
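
    For example (the 100 MB target below is just a placeholder; choose a size appropriate for your log):

    USE DWH

    GO

    DBCC SHRINKFILE (2, 100)

    GO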


    Shrinking a file depends on WHERE the free space is actually located. A log file tends to look like this (key: A = active, I = inactive):

    A I I A A A I I A A

    In other words, a mixture of active and inactive portions. Shrinking the file only shrinks (removes) the Inactive portions from the END of the log file (in the example above, the end of the file is an active portion).

    Backing up the log causes the Inactive portions to be moved to the end of the file:

    A A A A A A I I I I
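
    You can see this layout for yourself with DBCC LOGINFO, run in the context of the database in question: each row is a virtual log file, Status = 2 means active, and Status = 0 means inactive.

    USE DWH

    GO

    DBCC LOGINFO

    GO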

    The problem is that it doesn't happen immediately. How long does it take? It depends on your setup. Give it an hour or two, then try to shrink the file. Or just set AUTOSHRINK on and the database will figure out when it can be shrunk (a sketch of that option follows).
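
    A minimal sketch of turning autoshrink on (the ALTER DATABASE syntax assumes SQL Server 2000; sp_dboption is the 7.0-era equivalent):

    -- SQL Server 2000 and later

    ALTER DATABASE DWH SET AUTO_SHRINK ON

    GO

    -- SQL Server 7.0 equivalent

    EXEC sp_dboption 'DWH', 'autoshrink', 'true'

    GO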

    -SQLBill

  • To take what SQLBill said a step further, you can force the inactive portions to the end.  I don't remember where I found this stored procedure.  It may have been here.  I've made a few modifications to it. 

    Once the procedure is created, execute it within the context of the database you wish to shrink.

    For example:

    USE Log2BigDB

    execute sp_af_force_shrink_log

    Here's the proc:


    use master

    go

    if object_id( 'sp_af_force_shrink_log' ) is not null drop proc sp_af_force_shrink_log

    go

    create proc sp_af_force_shrink_log

    /*

    *************************************************************

    Name: sp_af_force_shrink_log

    Description:

      Shrink transaction log of the current database in SQL Server 7.0.

    Switch context to proper db to execute.

    Usage: exec sp_af_force_shrink_log <target_percent>, <target MB>, <iterations>, <backup options>

       exec pubs..sp_af_force_shrink_log

    Author:  Andrew Zanevsky, AZ Databases, Inc., 12/25/1999, v5 - 08/18/2000

       zanevsky@azdatabases.com

    Input Params:

    -------------

    @target_percent  tinyint. default = 0. Target percentage of remaining shrinkable

          space. Defaults to max possible.

    @target_size_MB  int. default = 0 (the proc then falls back to 10 MB). Target size of final log in MB.

    @max_iterations  int. default = 1000. Number of loops (max) to run proc through.

    @backup_log_opt  nvarchar(1000). default = 'with truncate_only'. Backup options.

    Output Params:

    --------------

    Return:

    Results:

    ---------

    Locals:

    --------

    @err  Holds error value

    Modifications:

    --------------

    *************************************************************

    */

        @target_percent tinyint = 0,

        @target_size_MB int = 0,

        @max_iterations int = 1000,

        @backup_log_opt nvarchar(1000) = 'with truncate_only'

    as

    --  test stuff SBP

    -- use CYB2001

    -- GO

    --

    -- declare

    --     @target_percent tinyint ,

    --     @target_size_MB int ,

    --     @max_iterations int ,

    --     @backup_log_opt nvarchar(1000)

    --

    -- set    @target_percent  = 0

    -- set    @target_size_MB  = 0

    -- set    @max_iterations = 1000

    -- set    @backup_log_opt  = 'with truncate_only'

    --

    -- set    @target_size_MB  = 20

    -- end test stuff

    set nocount on

    declare

     @db         sysname,

     @last_row   int,

     @log_size   decimal(15,2),

     @unused1    decimal(15,2),

     @unused     decimal(15,2),

     @shrinkable decimal(15,2),

     @iteration  int,

     @file_max   int,

     @file     int,

     @fileid     varchar(5),

     @prev_max_iterations int,

     @command varchar(500)

    select  @db = db_name(),

            @iteration = 0,

        @prev_max_iterations = 2147483647

    -- SQL 7.0 max value for int, i.e. 2^31 - 1, written as a literal because ^ is XOR in T-SQL; will be reset within the loop SBP

    create table #loginfo

     (

        id          int identity,

        FileId      int,

        FileSize    numeric(22,0),

        StartOffset numeric(22,0),

        FSeqNo      int,

        Status      int,

        Parity      smallint,

        TimeorLSN  varchar(25))

    create unique clustered index loginfo_FSeqNo on #loginfo ( FSeqNo, StartOffset )

    create table #logfiles

     (

      id int identity(1,1),

      fileid varchar(5) not null)

    insert #logfiles ( fileid ) select convert( varchar, fileid ) from sysfiles where status & 0x40 = 0x40       

    select @file_max = @@rowcount

    if object_id( 'table_to_force_shrink_log' ) is null

     exec( 'create table table_to_force_shrink_log ( x nchar(3000) not null )' )

    insert  #loginfo ( FileId, FileSize, StartOffset, FSeqNo, Status, Parity, TimeorLSN ) exec ( 'dbcc loginfo' )

    select  @last_row = @@rowcount

    print 'Max iterations = ' + rtrim(cast(@max_iterations as varchar(20)))

    select  @log_size = sum( FileSize ) / 1048576.00,

            @unused = sum( case when Status = 0 then FileSize else 0 end ) / 1048576.00,

            @shrinkable = sum( case when id < @last_row - 1 and Status = 0 then FileSize else 0 end ) / 1048576.00

    from    #loginfo

    select  @unused1 = @unused -- save for later

    -- changed this so that it will print with rest of output SBP

    print '

    iteration ........... = ' + cast(@iteration as varchar(10))  + '

    log size, MB ........ = ' + cast(@log_size as varchar(10)) + '

    unused log, MB ...... = ' + cast(@unused as varchar(10)) + '

    shrinkable log, MB .. = ' + cast(@shrinkable as varchar(10)) + '

    shrinkable % ........ = ' + cast(convert( decimal(6,2), @shrinkable * 100 / @log_size ) as varchar(10))

    -- original proc did not consider @target_size_MB, only @target_percent. 

    -- modified so that if @target_size_MB is entered, it will take precedence

    -- by calculating a new @target_percent. SBP

    if @target_size_MB > 0

      select @target_percent = (@target_size_MB / @log_size) * 100

    else

      select @target_size_MB = 10

    -- changed @target_percent to + 1, because many times the end result is

    -- slightly larger than the target. SBP

    while @shrinkable * 100 / @log_size > (@target_percent + 1)

      and @iteration < @max_iterations

     begin

        select  @iteration = @iteration + 1 -- this is just a precaution

        exec( 'insert table_to_force_shrink_log select name from sysobjects

               delete table_to_force_shrink_log')

        select @file = 0

        while @file < @file_max begin

            select @file = @file + 1

            select @fileid = fileid from #logfiles where id = @file

    -- added @target_size_MB as a parm in dbcc shrinkfile. also moved into @command. SBP

        select @command = 'dbcc shrinkfile( ' + @fileid + ',' + rtrim(cast(@target_size_MB as varchar(10))) + ')'

        print @command

        exec (@command)

        end

        exec( 'backup log [' + @db + '] ' + @backup_log_opt )

        truncate table #loginfo

        insert  #loginfo ( FileId, FileSize, StartOffset, FSeqNo, Status, Parity, TimeorLSN ) exec ( 'dbcc loginfo' )

        select  @last_row = @@rowcount

    -- The most iterations we really need to do is the number of logical log files,

    -- and this should change if the log is shrinking.  Therefore, reset

    -- @max_iterations within the loop.  SBP

      select @max_iterations = count(*) from #loginfo

    -- If the number of logical log files did not change from last iteration, get out.  SBP

      if @max_iterations = @prev_max_iterations

       select @max_iterations = 0

      else

       select @prev_max_iterations = @max_iterations

      print 'Max iterations = ' + rtrim(cast(@max_iterations as varchar(20)))

        select  @log_size = sum( FileSize ) / 1048576.00,

                @unused = sum( case when Status = 0 then FileSize else 0 end ) / 1048576.00,

         @shrinkable = sum( case when id < @last_row - 1 and Status = 0 then FileSize else 0 end ) / 1048576.00

        from    #loginfo

    print '

    iteration ........... = ' + cast(@iteration as varchar(10))  + '

    log size, MB ........ = ' + cast(@log_size as varchar(10)) + '

    unused log, MB ...... = ' + cast(@unused as varchar(10)) + '

    shrinkable log, MB .. = ' + cast(@shrinkable as varchar(10)) + '

    shrinkable % ........ = ' + cast(convert( decimal(6,2), @shrinkable * 100 / @log_size ) as varchar(10))

    end

    if @unused1 < @unused

    select  'After ' + convert( varchar, @iteration ) +

            ' iterations the unused portion of the log has grown from ' +

            convert( varchar, @unused1 ) + ' MB to ' +

            convert( varchar, @unused ) + ' MB.'

    union all

    select 'Since the remaining unused portion is larger than 10 MB,' where @unused > 10

    union all

    select 'you may try running this procedure again with a higher number of iterations.' where @unused > 10

    union all

    select 'Sometimes the log would not shrink to a size smaller than several Megabytes.' where @unused <= 10

    else

    select  'It took ' + convert( varchar, @iteration ) +

            ' iterations to shrink the unused portion of the log from ' +

            convert( varchar, @unused1 ) + ' MB to ' +

            convert( varchar, @unused ) + ' MB'

    -- cleanup

    drop table #loginfo

    drop table #logfiles

    exec( 'drop table table_to_force_shrink_log' )

    go

    if object_id( 'sp_af_force_shrink_log') Is Null

     select 'sp_af_force_shrink_log Not Created'

    else

     select 'sp_af_force_shrink_log Created'

    go


  • My workaround for this, when I'm sure there are no current users and immediately after a full backup, is to:

    1. Change the recovery model to Simple,

    2. Execute a checkpoint,

    3. Shrink the transaction log,

    4. Change the recovery model back to what was desired.

    After this procedure, my transaction log will show a size of .99 MB with .34 MB used, which is as small as I've seen. A minimal sketch of these steps is below.
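
    Here is that workaround spelled out (SQL Server 2000 syntax, using the original poster's DWH database and a 10 MB target as placeholders):

    ALTER DATABASE DWH SET RECOVERY SIMPLE

    GO

    CHECKPOINT

    GO

    DBCC SHRINKFILE (DWH_Log, 10)  -- target size in MB; adjust as needed

    GO

    ALTER DATABASE DWH SET RECOVERY FULL  -- or whatever model was in use

    GO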

  • Check out these articles from Microsoft...

    INF: How to Shrink the SQL Server 7.0 Transaction Log

    http://support.microsoft.com/default.aspx?scid=kb;en-us;256650&Product=sql

    INF: Shrinking the Transaction Log in SQL Server 2000 with DBCC SHRINKFILE

    http://support.microsoft.com/default.aspx?scid=kb;en-us;272318&Product=sql

  • Since everyone is giving their scripts, here are 5 steps that I have used in the past. I don't remember where they came from originally (or I would give the author their due credit), but they work great. Run each one separately.

    DBCC LOGINFO('YourDatabaseName')

    -- 1

    -- The minimum recovery LSN (log sequence number) in the log has status = 2;

    -- this prevents the log file from reducing in size. The following steps

    -- will help the shrink process complete more quickly.

    DBCC SHRINKFILE ('DBLogFile', TRUNCATEONLY)

    BACKUP LOG DBName WITH TRUNCATE_ONLY

    -- 2

    -- Create a dummy table and run dummy transactions to move the active VLF

    CREATE TABLE DummyTable (

        DummyColumn VARCHAR(10),

        PK INT )

    INSERT DummyTable (PK) VALUES (1)

    GO

    -- 3

    -- Insert into DummyTable to create transactions:

    SET NOCOUNT ON

    DECLARE @vi INT

    SELECT @vi = 0

    WHILE (@vi < 50000)

    BEGIN

        UPDATE DummyTable

        SET DummyColumn = DummyColumn

        WHERE PK = 1 /* Some criteria to restrict to one row. */

        SELECT @vi = @vi + 1

    END

    SET NOCOUNT OFF

    -- 4

    -- Once the log has been truncated, portions near the front of the

    -- log may be cycled and reused. The dummy transactions allow SQL Server to

    -- reuse the "dead" space at the beginning of the log, instead of "growing"

    -- the log file. The Min LSN with a status = 2 will then shift to the unused

    -- portions of the log file. The VLFs should then be marked as unused and

    -- be deleted following a DBCC SHRINKFILE & BACKUP LOG. VLF removal is what

    -- actually decreases the physical log file size. Re-execute shrinkfile / truncate:

    DBCC SHRINKFILE ('DBLogFile', TRUNCATEONLY)

    BACKUP LOG DBName WITH TRUNCATE_ONLY

    -- 5

    -- Check the size of the log file to determine whether it has shrunk.

    -- Re-run the script as necessary to reduce the size of the log file.
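
    One way to do that check (not part of the original five steps) is DBCC SQLPERF, which reports the size and % used of each database's log:

    DBCC SQLPERF(LOGSPACE)

    GO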

  • I tried Hoo-t's suggestion and code and it worked like a champ. Thanks for the info.

  • I also tried this stored procedure and it worked spot on.
