Could not allocate space in database because the 'PRIMARY' filegroup is full

  • Hi,

    I really hope someone can help me, because I am stuck with an issue and can't find a real solution for my problem. As a newbie with SQL Server 2000, I do not know all the tips and tricks that could help me, and I also don't know the real reason why I am having this problem or a way to solve it.

    So first, let me describe my setting.

    I am running SQL Server 2000. It manages a database for an application. There are no other databases on it except, of course, the master, tempdb and msdb databases.

    - tempdb

    Data: space allocated 33.88 MB, free 32.82 MB

    Log: space allocated 12.05 MB, free 5.9 MB

    Data file on D:, log file on D:

    Options: data: automatically grow file by 10%, unrestricted; log: automatically grow file by 10%, unrestricted

    Auto update statistics, auto create statistics

    Recovery model: simple

    - master

    Data: space allocated 16.94 MB, free 1.56 MB

    Log: space allocated 2.24 MB, free 1.4 MB

    Data file on D:, log file on D:

    Options: data: automatically grow file by 10%, unrestricted; log: automatically grow file by 10%, unrestricted

    Auto update statistics, auto create statistics

    Recovery model: simple

    - msdb

    Data: space allocated 13.75 MB, free 0.13 MB

    Log: space allocated 2.24 MB, free 1.1 MB

    Data file on D:, log file on D:

    Options: data: automatically grow file by 1 MB, unrestricted; log: automatically grow file by 1 MB, unrestricted

    Auto update statistics, auto create statistics, torn page detection, allow cross database ownership chaining

    Recovery model: simple

    - db1

    Data: space allocated 148050.1 MB, free 37009.38 MB

    Log: space allocated 1583.99 MB, free 1563.4 MB

    Data file on D:, log file on D:

    Options: data: automatically grow file by 1000 MB, restricted to 150880 MB; log: automatically grow file by 128 MB, unrestricted

    Auto update statistics, auto create statistics, torn page detection, allow cross database ownership chaining, auto shrink

    Recovery model: simple

    The disk capacities are:

    C: 10.5 GB (4.42 GB free)

    D: 279 GB (133 GB free)

    E: 23.3 GB (20.4 GB free)

    S: (SAN disk used for backups)

    Now I have a maintenance plan that does the backup of db1 to S: and also runs the problematic optimization job:

    - Reorganize data and index pages: change free space per page percentage to 10%

    The problem is that every time the optimization job runs, I get the error below:

    - Error: 1105, Severity: 17, State: 2

    - Could not allocate space for object '(SYSTEM table id: -647922425)' in database 'db1' because the 'PRIMARY' filegroup is full.

    I have checked everywhere I could to find a solution and posted on different forums without finding the right way to make my optimization job run correctly...

    I have grown the db1 data file, but it seems that it is not enough, because I still have the issue. I am a little afraid to grow the size again because the db is huge, and also because I don't know if it is going to solve my issue in the end.

    Right now, it seems that my filegroup has 40 GB of free space, but that does not seem to be enough and I don't know why...

    From what I understand, the optimization job should recreate the indexes and give them enough free space to grow, but the same object seems to be so huge that there is still not enough space in the filegroup to make the index recreation/growth successful...

    Or maybe I am wrong, which may well be true (I am a newbie with SQL Server 2000...).

    A guy on a forum suggested that I create a new filegroup and move the indexes onto it, but I don't know if this is the real solution.

    Can somebody give me some advice on how I could manage this server the right way and possibly solve my issue?

    I hope you can help me; don't hesitate to ask me for more details if you need them. I really need some advice and maybe a solution.

    Thanks

  • Have you tried removing the size restriction on the db1 database? It may not be the solution, but it's certainly the first thing I would try.

    It looks like you should have plenty of space regardless, but you never know. Since it's the only non-system DB on the server, I probably wouldn't put a size restriction on it anyway, unless there is something else on the server you are worried about it overstepping.
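    If you'd rather do it in T-SQL than through Enterprise Manager, a minimal sketch might look like this (the logical file name 'db1_Data' is a guess on my part; sp_helpfile in the database will show the real one):

    -- Remove the 150880 MB cap on the data file (logical name assumed)
    ALTER DATABASE db1 MODIFY FILE (NAME = 'db1_Data', MAXSIZE = UNLIMITED)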

    Kenneth Fisher

  • Thanks for your reply Kenneth,

    Well, I can of course set the unrestricted growth option, but my worry here is that I really can't know how much the database will grow, and I really don't want the disk (the drive) to fill up because of this.

    This is why I increase the size of the data file by 10 GB each time and see whether it works or not.

    I can, of course, set it to unrestricted, but do you know a way to shrink the data file if there is plenty of free space in the filegroup but no more free space on the disk?

    Also, I am wondering whether I should keep the backup and the optimization operations in the same maintenance plan... what do you think?

  • Well, given that your DB is set to simple recovery and you have 1.5 GB in the transaction log, if you run out of space on the drive you can always do a BACKUP LOG xxx WITH TRUNCATE_ONLY to free up enough space to do a shrink on the data side.
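    For reference, a rough sketch of that sequence, assuming the data and log files' logical names are 'db1_Data' and 'db1_Log' (check them with sp_helpfile); the target sizes are just examples:

    -- SQL Server 2000: empty the log without backing it up (SIMPLE recovery)
    BACKUP LOG db1 WITH TRUNCATE_ONLY
    -- Then shrink the files back down; target sizes are in MB
    USE db1
    DBCC SHRINKFILE ('db1_Log', 500)
    DBCC SHRINKFILE ('db1_Data', 140000)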

    Kenneth Fisher

  • Thanks,

    So even if the data file fills the disk (in the case where I have set the unrestricted option), I still have a way to shrink this file. Good. At least I am less nervous about setting this database to grow unrestricted.

    What about using separate maintenance plans? What do you suggest?

    I am sorry to ask so many questions, but well, there are no stupid questions after all, and all I want is to make things work the way they should 🙂 Anyway, thanks for your help.

  • Given the amount of space you have free on the D drive, your DB would just about have to double in size to cause a problem. I wouldn't worry about that a whole lot; it's fairly unlikely. As far as the maintenance plan goes, the only question I have is why you are reorganizing the data to have 10% free. The only reason I can think of to do that is if you are doing a LOT of inserts. If your data is fairly static, then there is no need to do that at all. And if it's not static, I would change your recovery model to full and expect heavy growth in your transaction log.

    The general rule of thumb is that you would only use simple recovery if you don't particularly care about recovering your data to a specific point in time, i.e. if all of your data is being loaded from somewhere else, or if the only changes being made to it are in batch jobs at night and you perform a full backup right after that.

    If neither of those is the case I would probably use FULL or at least BULK-LOGGED (you can look at BOL for more information on both).

    If either of those IS the case then you probably don't need to be running that particular part of the maintenance package.
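    For what it's worth, switching models is a one-line change in T-SQL (SQL Server 2000 syntax, using your database name):

    ALTER DATABASE db1 SET RECOVERY FULL
    -- or
    ALTER DATABASE db1 SET RECOVERY BULK_LOGGED
    -- and back again
    ALTER DATABASE db1 SET RECOVERY SIMPLE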

    As far as splitting out your indexes into a separate filegroup, you can do that, and it does have some benefits as far as performance goes. However, if you are a new DBA I wouldn't bother unless you suddenly find yourself with some serious performance problems you can't solve any other way.
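    If you ever do go that route, a rough sketch of the steps; the filegroup name, file path and index definition below are invented purely for illustration:

    -- New filegroup with its own data file
    ALTER DATABASE db1 ADD FILEGROUP INDEX_FG
    ALTER DATABASE db1 ADD FILE
        (NAME = 'db1_Indexes', FILENAME = 'D:\MSSQL\Data\db1_Indexes.ndf',
         SIZE = 10240MB, FILEGROWTH = 1024MB)
    TO FILEGROUP INDEX_FG
    -- Rebuild an existing nonclustered index onto the new filegroup
    -- (WITH DROP_EXISTING assumes an index of this name already exists)
    CREATE INDEX IX_SensorDataAVP1_SensorId
        ON dbo.SensorDataAVP1 (SensorId)
        WITH DROP_EXISTING
        ON INDEX_FG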

    Kenneth Fisher

  • In fact, the maintenance plan was created by the application that uses this db, so I have not created or modified anything.

    As far as I know, the db is not so static, because the application adds data on a daily or hourly basis (it is a security application with sensors), even if the backups or the size of the data do not grow too much.

    So I don't know if the 10% is a good value or not.

    Now, about the recovery model: we make a backup of this db every week, and we are backing up only the data and not the log.

    Maybe that is bad, but I am not sure what strategy I should use to make the backups more efficient...

    If I set the recovery model to full, then I guess I have to back up the log on a daily basis and the data on a weekly basis...

    Also, the log file will grow a lot, and I may still have my problem with the index optimization...

    What would be your advice?

  • Well, the decision on FULL or SIMPLE recovery is going to be a business one. Right now, if you have a crash on Friday, you are going to have to recover back to your last backup and possibly lose an entire week's worth of work. It just depends on what your business can take.

    One possibility is to do a full backup each week, transaction log backups every hour (or every 2 hours if there aren't a lot of changes) and differential backups every day.
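    In T-SQL those three pieces look roughly like this (the paths on the S: drive are made up; a maintenance plan or the Database Maintenance Plan wizard can schedule them for you):

    -- Weekly full backup
    BACKUP DATABASE db1 TO DISK = 'S:\Backup\db1_full.bak' WITH INIT
    -- Daily differential (changes since the last full backup)
    BACKUP DATABASE db1 TO DISK = 'S:\Backup\db1_diff.bak' WITH DIFFERENTIAL, INIT
    -- Hourly log backup (only once the database is in FULL or BULK_LOGGED recovery)
    BACKUP LOG db1 TO DISK = 'S:\Backup\db1_log.trn'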

    Like I said, it's mostly up to the business people and how much loss they are willing to accept vs. the amount of space the backups will take.

    As far as your optimization plan goes, it should be fine; 10% is a reasonable number. I would try taking off the size restriction (you have plenty of room) and see what happens. If your DB starts growing like crazy and all of a sudden goes over 200 GB, you probably need to reconsider something. Probably call the support line for the application, since you said the application is what created the plan to begin with.

    Kenneth Fisher

  • Romain Pelissier (10/24/2007)


    If I set the recovery model to full, then I guess I have to back up the log on a daily basis and the data on a weekly basis... What would be your advice?

    My personal opinion is that it's good practice to also back up the transaction log, so that in case a disaster occurs you have enough backups to restore from; but, like Kenneth stated, the decision depends on your company and how you will handle the situation (with t-log backups or not). Another thing: if you are worried about the transaction log backups consuming most of your HD space, you can always schedule deletion of old backup files, depending on how fast your t-log grows and how fast your HD space decreases.

    "-=Still Learning=-"

    Lester Policarpio

  • Thanks for the information.

    To keep this post updated: I have finally managed to make my optimization job run correctly. The main problem was, of course, the size of the filegroup for the data part of the database. To solve the problem, I had to make the filegroup large enough that the largest table in the db could be copied into the filegroup's free space. From what I have read and understood, the optimization job copies the table, does some reindexing work, and deletes the old table. So if you have a 50 MB filegroup with 3 MB free and your largest table in it is 20 MB, you need roughly (20 - 3) = 17 MB more, i.e. you should grow the filegroup to at least 50 + 17 = 67 MB, plus some additional free space.
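    For the record, the grow itself can be done with a one-liner like this (the logical file name and the new size here are just examples; sp_helpfile shows the real name):

    -- Grow the data file so the filegroup has enough free space for a copy of the largest table
    ALTER DATABASE db1 MODIFY FILE (NAME = 'db1_Data', SIZE = 160000MB)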

    To find out which tables are the largest in your db, you can try this:

    http://qa.sqlservercentral.com/articles/Performance+Tuning+and+Scaling/analyzingdatastoragetenshowcontigqueries/1438/

    and

    EXEC sp_MSforeachtable @command1="EXEC sp_spaceused '?'"
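    That prints one small result set per table; here is a sketch that collects everything into a single list sorted by reserved size (the temp table layout mirrors sp_spaceused's output columns, which is an assumption worth verifying on your build):

    CREATE TABLE #spaceused (
        name sysname,           -- table name
        rows varchar(20),       -- row count (returned as character data)
        reserved varchar(20),   -- total reserved space, e.g. '123456 KB'
        data varchar(20),
        index_size varchar(20),
        unused varchar(20)
    )
    INSERT #spaceused EXEC sp_MSforeachtable @command1 = "EXEC sp_spaceused '?'"
    SELECT name, rows, reserved
    FROM #spaceused
    ORDER BY CAST(REPLACE(reserved, ' KB', '') AS int) DESC
    DROP TABLE #spaceused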

    Then, if you set the auto shrink option, the filegroup should resize itself, although I know that this is not the best option.

    Now, I have to play with the fragmentation of the table...

    I will post some info on this as soon as I have some. To keep it short: it seems that even though the optimization job was a success, I still have some fragmentation on the table.

    You can see the fragmentation by using this:

    http://www.sql-server-performance.com/articles/dba/dt_dbcc_showcontig_p1.aspx
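    For example, against the table from the output below (WITH TABLERESULTS returns rows instead of messages, which is easier to save and compare over time):

    DBCC SHOWCONTIG ('SensorDataAVP1')
    -- or, as a queryable result set covering every index on the table:
    DBCC SHOWCONTIG ('SensorDataAVP1') WITH TABLERESULTS, ALL_INDEXES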

    For me, for the largest table, I have :

    DBCC SHOWCONTIG scanning 'SensorDataAVP1' table...

    Table: 'SensorDataAVP1' (1842105603); index ID: 1, database ID: 7

    TABLE level scan performed.

    - Pages Scanned................................: 10790591

    - Extents Scanned..............................: 1353057

    - Extent Switches..............................: 1358002

    - Avg. Pages per Extent........................: 8.0

    - Scan Density [Best Count:Actual Count].......: 99.32% [1348824:1358003]

    - Logical Scan Fragmentation ..................: 50.67%

    - Extent Scan Fragmentation ...................: 5.49%

    - Avg. Bytes Free per Page.....................: 757.8

    - Avg. Page Density (full).....................: 90.64%

    DBCC execution completed. If DBCC printed error messages, contact your system administrator.

    And from what I have read, Scan Density should be as close to 100% as possible and Logical Scan Fragmentation as close to 0% as possible.

    Any idea on how I can lower the fragmentation on this table?

    Thanks

  • Your Scan Density [Best Count:Actual Count].......: 99.32% [1348824:1358003] is not bad.

    Try running DBCC INDEXDEFRAG on the indexes of the table and see if you get an improvement. The advantage of this DBCC command is that you can run it without kicking people out of the table.
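    A minimal sketch for the clustered index in the output you posted (index ID 1); DBCC DBREINDEX is the heavier offline alternative:

    DBCC INDEXDEFRAG (db1, 'SensorDataAVP1', 1)
    -- Offline rebuild of all indexes on the table with a 90% fill factor (locks the table):
    -- DBCC DBREINDEX ('SensorDataAVP1', '', 90)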
