tempdb - unable to allocate page

  • Hi all,

    I'm attempting to run an update against a 1m row table. The update joins to a 47m row table on a text column. There is an index on the 1m row table, but none on the 47m row table.

    Attempting to run the query returns the error: Could not allocate a new page for database 'TEMPDB' because of insufficient disk space in filegroup 'DEFAULT'. Create the necessary space by dropping objects in the filegroup, adding additional files to the filegroup, or setting autogrowth on for existing files in the filegroup.

    I have 35 GB of free and available disk space on the server. tempdb is split across 8 files (one per core, 8 cores on the server), all with autogrowth enabled, as per the sp_helpfile output below:

    name       fileid  filename                        filegroup  size        maxsize      growth      usage
    tempdev    1       D:\Database\Data\tempdb.mdf     PRIMARY    5193728 KB  5193728 KB   1048576 KB  data only
    templog    2       L:\Database\Logs\templog.ldf    NULL       5193728 KB  20971520 KB  524288 KB   log only
    tempdev_1  3       D:\Database\Data\tempdev_1.ndf  PRIMARY    5193728 KB  5193728 KB   1048576 KB  data only
    tempdev_2  4       D:\Database\Data\tempdev_2.ndf  PRIMARY    5193728 KB  5193728 KB   1048576 KB  data only
    tempdev_3  5       D:\Database\Data\tempdev_3.ndf  PRIMARY    5193728 KB  5193728 KB   1048576 KB  data only
    tempdev_4  6       D:\Database\Data\tempdev_4.ndf  PRIMARY    5193728 KB  5193728 KB   1048576 KB  data only
    tempdev_5  7       D:\Database\Data\tempdev_5.ndf  PRIMARY    5193728 KB  5193728 KB   1048576 KB  data only
    tempdev_6  8       D:\Database\Data\tempdev_6.ndf  PRIMARY    5193728 KB  5193728 KB   1048576 KB  data only
    tempdev_7  9       D:\Database\Data\tempdev_7.ndf  PRIMARY    5193728 KB  5193728 KB   1048576 KB  data only

    I've already run a transaction log backup with TRUNCATE_ONLY. sp_spaceused returns:

    database_name  database_size  unallocated space
    tempdb         45648.00 MB    39539.59 MB

    reserved    data       index_size  unused
    1061280 KB  865584 KB  195432 KB   264 KB

    This seems odd. Could I really be maxing out tempdb with this single query? Or is there something else I should be looking into?

    Regards, Iain

  • Hi there, I read a post about this issue a while ago on the 'sqlauthority' blog, so I looked it up again. The comments there have some tips about this exact error. Here are a couple that might help:

    Make sure tempdb is on a separate drive; this will help with performance. Also, reducing scan time on tables that aren't frequently used helps cut execution time. I haven't dealt with this personally, but hopefully it's useful to you.

    Natali 😀

    "Press any key to continue, where's the ANY key?" 😛 Homer Simpson

  • I'd take a look at the execution plan. If the query is against millions of rows without an index, it's likely building a hash table to do the join, and all of that goes into tempdb. Or, if it's doing a merge join, again without indexes, you're probably getting a sort of the data, which also happens in tempdb.

    A JOIN on a text field (by which I assume you mean a large object (LOB), not just a VARCHAR or similar) is a problematic approach anyway. You may need to break the process down and do it in chunks.
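    The chunking approach might look something like this rough sketch (all table and column names here are hypothetical, since the actual schema wasn't posted; the batch size is arbitrary):

```sql
-- Hypothetical sketch: run the update in 50k-row batches so each batch's
-- hash/sort working set (and hence its tempdb footprint) stays small.
DECLARE @rows INT = 1;

WHILE @rows > 0
BEGIN
    UPDATE TOP (50000) s
    SET    s.target_col = b.source_col
    FROM   dbo.SmallTable AS s
    JOIN   dbo.BigTable   AS b ON b.text_key = s.text_key
    WHERE  s.target_col IS NULL;   -- only pick up rows not yet updated

    SET @rows = @@ROWCOUNT;
END;
```

    The WHERE clause needs some marker of "not yet done" (here a NULL check, which may not fit your data); otherwise each batch re-updates the same rows forever.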

    ----------------------------------------------------
    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood..." Theodore Roosevelt
    The Scary DBA
    Author of: SQL Server 2017 Query Performance Tuning, 5th Edition and SQL Server Execution Plans, 3rd Edition
    Product Evangelist for Red Gate Software

  • Hi Grant,

    The join is a straight varchar to a substring of a varchar column (apologies for not being explicit about this), zero-padded inline via REPLICATE. There is a hash join and a sort, but the bit I don't understand is how tempdb can't allocate a new page. With 37Gb of free space in the db itself, plus autogrowth potential, surely that's more space than this query could use?

    Interestingly, the query fails almost immediately, rather than after the long delay I'd expect if the server were struggling to allocate space.

    I've worked round the problem, but am still curious as to what the problem may have been and of course keen to ensure it doesn't happen again. Any thoughts?

    Thanks, Iain

  • Yes, this query could be maxing out tempdb. A lot depends on the size of the varchar field. I have seen queries similar to what you've described fill up 200GB drives with smaller tables than you cite.

    This typically comes back to query design.

    Jason...AKA CirqueDeSQLeil
    _______________________________________________
    I have given a name to my pain...
    MCM SQL Server, MVP
    SQL RNNR
    Posting Performance Based Questions - Gail Shaw
    Learn Extended Events

  • It's absolutely a problematic approach to a JOIN. Functions like that mean the optimizer can only ever scan the table.

    The reason it might be erroring out so quickly is that it tries to allocate that space immediately when it starts to run, and if that hash table is as big as I imagine it would be for a query like this...

    ----------------------------------------------------
    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood..." Theodore Roosevelt
    The Scary DBA
    Author of: SQL Server 2017 Query Performance Tuning, 5th Edition and SQL Server Execution Plans, 3rd Edition
    Product Evangelist for Red Gate Software

  • Interesting. My curiosity is piqued.

    The workaround is to pre-parse the column into a temp table, including the relevant keys for joins, then index, then join to this instead. This works fine. My initial thinking was that this approach would be more expensive, given two disk writes of 47m rows, but apparently not.
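    For anyone finding this thread later, the workaround looks roughly like this (table and column names are made up for illustration, since the originals weren't posted):

```sql
-- Hypothetical names throughout; the real schema wasn't posted.
-- 1) Pre-parse the padded join key from the 47m row table once,
--    keeping only the columns the update needs.
SELECT RIGHT(REPLICATE('0', 20) + SUBSTRING(b.text_col, 5, 12), 20) AS join_key,
       b.source_col
INTO   #parsed
FROM   dbo.BigTable AS b;

-- 2) Index the parsed key so the join no longer needs a huge hash/sort.
CREATE CLUSTERED INDEX IX_parsed_key ON #parsed (join_key);

-- 3) Update the 1m row table via the indexed temp table.
UPDATE s
SET    s.target_col = p.source_col
FROM   dbo.SmallTable AS s
JOIN   #parsed        AS p ON p.join_key = s.key_col;
```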

    So what would be in the hash table that wouldn't be in the temp table created above? How is it that this would max out tempdb?

  • Not seeing all the code & structures involved, I'm not sure. Just guesses. Maybe moving the data to a temp table creates a more up-to-date set of stats. Maybe it eliminates the functions, so different filtering mechanisms are used within the hash. I can't see what you can see, so I can't know what you can know.
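    If you want hard numbers next time it happens, the space-usage DMVs will show which session is eating tempdb while the query runs (standard instrumentation from SQL Server 2005 onwards):

```sql
-- Per-session tempdb allocations, in MB, largest consumers first.
-- Internal objects = hash tables, sorts, spools; user objects = temp tables etc.
SELECT session_id,
       internal_objects_alloc_page_count * 8 / 1024 AS internal_mb,
       user_objects_alloc_page_count     * 8 / 1024 AS user_mb
FROM   sys.dm_db_session_space_usage
ORDER  BY internal_objects_alloc_page_count DESC;
```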

    ----------------------------------------------------
    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood..." Theodore Roosevelt
    The Scary DBA
    Author of: SQL Server 2017 Query Performance Tuning, 5th Edition and SQL Server Execution Plans, 3rd Edition
    Product Evangelist for Red Gate Software

  • Which is fair enough 🙂

    Thanks for posting, I'll see if I can't do some research myself and update this thread at a later point.

  • "The bit I don't understand is how tempdb can't allocate a new page. With 37Gb of free space in the db itself, plus autogrowth potential, surely that's more space than this query could use?"

    Hi Iain,

    By the looks of your original sp_helpfile output, you have a maxsize set on each of the tempdb files, and each of your data files is already at that maxsize (i.e. 5193728 KB). Even with autogrowth enabled, you won't be able to grow the data files beyond that value.

    Therefore, you're going to have to increase the maxsize of each data file to allow tempdb to grow.
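    Something along these lines, run once per data file (file names taken from your sp_helpfile output; the 20GB figure is just an example, size it to fit your free disk):

```sql
-- Raise the cap on one tempdb data file; repeat for tempdev_1 .. tempdev_7.
ALTER DATABASE tempdb MODIFY FILE (NAME = N'tempdev', MAXSIZE = 20480MB);

-- Or remove the cap entirely and let autogrowth use whatever disk is free:
-- ALTER DATABASE tempdb MODIFY FILE (NAME = N'tempdev', MAXSIZE = UNLIMITED);
```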

    Hope this helps,

    Phil

