March 29, 2011 at 9:24 am
One thing that might be worth trying, though it depends on how large @DataSet is (I'm guessing 10,000, based on the Filter operator).
You appear to be using @DataSet and @m_PageID to pull out one page of records with the first subquery. Along the way you pull quite wide record sets for over 300,000 records and then discard all but 10,000. Try using a much narrower query to filter into a temp table, which may be able to use the relevant indexes, then join to that temp table to select the wider recordset for only the appropriate rows, touching roughly 97% fewer rows in the base tables.
Assuming b.CombinedID is enough to identify the records, it would be something like:
; WITH cteFilter AS
(SELECT
    -- ROW_NUMBER() is 1-based, so subtract 1 before dividing
    -- to get evenly sized @DataSet pages numbered from 1
    ((ROW_NUMBER() OVER (ORDER BY b.CombinedID) - 1) / @DataSet) + 1 AS UniquePageId,
    b.CombinedID
 FROM T_VM_GLACIER_BASE_FEEDS_FB b
 INNER JOIN T_VM_GLACIER_SWAPSIDE_FEEDS_FB s ON b.TaskId = s.TaskId
    AND b.BookName = s.BookName
    AND b.CombinedID = s.CombinedID
    AND b.RunVersion = s.RunVersion
 INNER JOIN #BaseMaxRunVersion bmrv ON s.TaskId = bmrv.TaskId
    AND s.BookName = bmrv.BookName
    AND s.RunVersion = bmrv.RunVersion
 WHERE b.TaskId = @TaskId
    AND b.MajorType <> 'STRSWAP')
SELECT CombinedID
INTO #CombinedIDsWeCareAbout
FROM cteFilter
WHERE UniquePageId = @m_PageId
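The paging arithmetic in the CTE is just integer division over the row number. Because ROW_NUMBER() is 1-based, subtracting 1 before dividing keeps every page at exactly @DataSet rows; a minimal Python sketch of that mapping (page_id and page_size are illustrative stand-ins for the query's expression and @DataSet):

```python
# Map a 1-based row number to a 1-based page id of page_size rows,
# mirroring ((ROW_NUMBER() - 1) / @DataSet) + 1 in the CTE.
def page_id(row_number: int, page_size: int) -> int:
    return (row_number - 1) // page_size + 1

page_size = 3  # stand-in for @DataSet
pages = [page_id(rn, page_size) for rn in range(1, 8)]
print(pages)  # rows 1-3 -> page 1, rows 4-6 -> page 2, row 7 -> page 3
```

Without the -1, the division would put only page_size - 1 rows on the first page and misalign every page boundary after it.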
...and then add this into the first subquery...
from T_VM_GLACIER_BASE_FEEDS_FB b
INNER JOIN #CombinedIDsWeCareAbout c ON b.CombinedID=c.CombinedID
INNER JOIN T_VM_GLACIER_SWAPSIDE_FEEDS_FB s ON b.TaskId = s.TaskId
remove the (now redundant) row_number...
(select
--((row_number()OVER (ORDER BY b.CombinedID) / @DataSet ) +1) AS UniquePageId,
b.CombinedID,
and the redundant where clause...
and H.CombinedID = e.CombinedID
--where UniquePageId = @m_PageId
March 31, 2011 at 1:04 am
Sorry for the late reply guys.
Paul, thanks for this idea. I'm trying it now and will let you know after testing.