Determine manufacturing date

  • Hello!

    I'm trying to write a stored procedure that has to calculate the closest possible date for manufacturing.

    What I have:

    - the BOM (bill of materials) with the quantities needed for production:

    DECLARE @BOM TABLE
    (
        ItemID INT
        ,neededQuantity FLOAT
    );

    INSERT INTO @BOM (ItemID, neededQuantity)
    SELECT 1, 10
    UNION ALL SELECT 2, 10
    UNION ALL SELECT 3, 5;

    - a calculated table that tells me the availability of each component of the BOM, sorted by date (each row carries a plus or minus quantity, so it can be summed up):

    DECLARE @WhareHouseMovement TABLE
    (
        ItemID INT
        ,Quantity FLOAT
        ,Date DATETIME
    );

    INSERT INTO @WhareHouseMovement (ItemID, Quantity, Date)
    SELECT 1, 10, '2015-03-01'
    UNION ALL SELECT 1, -4, '2015-03-06'
    UNION ALL SELECT 1, -4, '2015-03-08'
    UNION ALL SELECT 1, 6, '2015-03-20'
    UNION ALL SELECT 1, 12, '2015-03-24'
    UNION ALL SELECT 2, 6, '2015-03-01'
    UNION ALL SELECT 2, 6, '2015-03-04'
    UNION ALL SELECT 2, -10, '2015-03-15'
    UNION ALL SELECT 2, 8, '2015-03-20'
    UNION ALL SELECT 3, 10, '2015-03-05'
    UNION ALL SELECT 3, -4, '2015-03-15'
    UNION ALL SELECT 3, -2, '2015-03-25'
    UNION ALL SELECT 3, 6, '2015-03-26';

    My question is:

    how do I find the closest date on which manufacturing is possible? I have to check that the quantities of ALL the components of the BOM are sufficient to produce the product, but I can't figure out how to do it.

    If I'm not wrong, the example should give the result 2015-03-26.

    Could you give me a clue?

    Thank you!

  • I don't fully understand your question; there is something missing in the details that I can't wrap my head around. That being said, it seems that you need a running total here. Here is an excellent article on this topic: http://qa.sqlservercentral.com/articles/T-SQL/68467/

    If you are on SQL Server 2012 or later, the running total gets a lot easier.
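    For illustration, here's a minimal sketch of that 2012+ windowed approach against the sample table in this thread - a running total per item with no self-join (a sketch only, assuming the @WhareHouseMovement table above):

    -- Cumulative stock per item as of each movement date,
    -- using a windowed SUM (SQL Server 2012+).
    SELECT ItemID,
           [Date],
           SUM(Quantity) OVER (PARTITION BY ItemID
                               ORDER BY [Date]
                               ROWS UNBOUNDED PRECEDING) AS RunningQty
    FROM @WhareHouseMovement;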


  • Do you want the manufacture date for each ItemID or max for BOM?

    Are you concerned that an item could come from an older run? Say your initial 10 has only 8 removed, leaving 2, but then you add 6 and get an order for 4: two of those could be older than the other two.

  • I'll try to explain better.

    When I have to make the product, I have to have all the needed quantities in my warehouse, OK?

    So, with the sample data, I can't produce my product on 2015-03-20 because I only have:

    ItemID  Avail Qty  Needed Qty
    1       8          10
    2       10         10
    3       4          5

    I need to know the date for the entire BOM.

    I'm sorry I'm not able to explain myself properly; thank you for your efforts.

  • faberDB (5/5/2015)


    I'll try to explain better.

    When I have to make the product, I have to have all the needed quantities in my warehouse, OK?

    So, with the sample data, I can't produce my product on 2015-03-20 because I only have:

    ItemID  Avail Qty  Needed Qty
    1       8          10
    2       10         10
    3       4          5

    I need to know the date for the entire BOM.

    I'm sorry I'm not able to explain myself properly; thank you for your efforts.

    I think maybe your process is a bit flawed here. Why do you need to determine when you might have been able to produce something in the past? It seems like you should have a production history table that maintains when something is made instead of trying to back into a possible date. Also, do you only ever produce one item at a time? I don't know what is being made but typically a manufacturing process includes a quantity to manufacture. The quantity of course is not needed when making larger items like cars or other items that can't be done in batches.


  • Thank you, Sean.

    Dates are just examples. Of course nobody needs to know if I can produce something in the past.

    The BOM you see is already exploded; these are the quantities needed to make a certain product in a certain quantity.

    My process could be flawed, of course, but in this case what would you do?

    Let's suppose that I need to make a dish of spaghetti.

    I will need:

    ItemID 1: water, qty 10
    ItemID 2: spaghetti, qty 10
    ItemID 3: salt, qty 5

    Quantities will vary with the number of dishes I need to cook.

    My question is: if today is February 1st, when will I be able to cook 4 dishes of spaghetti, considering that my ingredients move as depicted in the movement table?

  • Since you don't need historical quantities, you can just summarize the current status

    😎

    SELECT
        WM.ItemID
        ,SUM(WM.Quantity) AS CURRENT_QTY
    FROM @WhareHouseMovement WM
    GROUP BY WM.ItemID;

  • I don't agree. This way I have still only checked a single item, instead of all three.

    So I have to check every single item at every given date, until I reach a positive value for each component on the same date.

    But this involves a loop and it will be really bad for performance, because I need to do this calculation for hundreds of items on a big warehouse movement table.

    I'm trying to find a set-based way to determine this date.

    I need to know when it will be possible to cook, so I need a date on which all the ingredient quantities are sufficient.

  • Try this.

    DECLARE @BOM TABLE
    (
        ItemID INT
        ,neededQuantity FLOAT
    );

    INSERT INTO @BOM (ItemID, neededQuantity)
    SELECT 1, 10
    UNION ALL SELECT 2, 10
    UNION ALL SELECT 3, 5;

    DECLARE @WhareHouseMovement TABLE
    (
        ItemID INT
        ,Quantity FLOAT
        ,Date DATETIME
    );

    INSERT INTO @WhareHouseMovement (ItemID, Quantity, Date)
    SELECT 1, 10, '2015-03-01'
    UNION ALL SELECT 1, -4, '2015-03-06'
    UNION ALL SELECT 1, -4, '2015-03-08'
    UNION ALL SELECT 1, 6, '2015-03-20'
    UNION ALL SELECT 1, 12, '2015-03-24'
    UNION ALL SELECT 2, 6, '2015-03-01'
    UNION ALL SELECT 2, 6, '2015-03-04'
    UNION ALL SELECT 2, -10, '2015-03-15'
    UNION ALL SELECT 2, 8, '2015-03-20'
    UNION ALL SELECT 3, 10, '2015-03-05'
    UNION ALL SELECT 3, -4, '2015-03-15'
    UNION ALL SELECT 3, -2, '2015-03-25'
    UNION ALL SELECT 3, 6, '2015-03-26';

    with dts as (
        select date, b.ItemId, b.neededQuantity
        from (select distinct date from @WhareHouseMovement) x
        cross join @bom b
    ),
    avl as (
        select dts.ItemId, dts.date, neededQuantity - isnull(q, 0) ds
        from dts
        outer apply (
            select sum(Quantity) q
            from @WhareHouseMovement s
            where s.ItemId = dts.ItemId and s.Date <= dts.date
        ) x
    )
    select min(date) readytomnfc
    from (
        select date, max(ds) mds
        from avl
        group by date
        having max(ds) <= 0
    ) z;

    Concerning performance, you may want to replace the triangular join which calculates the running totals with a quirky update.
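    For readers who haven't seen it, here is a minimal sketch of the quirky-update pattern for a per-item running total (an illustration only, against a hypothetical #Movement temp table; the real technique needs the safeguards described in Jeff Moden's running-totals article, such as the clustered index order, TABLOCKX and MAXDOP 1 shown here):

    CREATE TABLE #Movement
    (
        ItemID INT
        ,[Date] DATETIME
        ,Quantity FLOAT
        ,RunningQty FLOAT
        ,PRIMARY KEY CLUSTERED (ItemID, [Date])
    );

    DECLARE @Run FLOAT = 0, @PrevItem INT;

    -- Single pass in clustered-index order; the variables carry the
    -- running sum from row to row and reset when ItemID changes.
    UPDATE m
    SET @Run = RunningQty = CASE WHEN ItemID = @PrevItem
                                 THEN @Run + Quantity
                                 ELSE Quantity END
        ,@PrevItem = ItemID
    FROM #Movement m WITH (TABLOCKX)
    OPTION (MAXDOP 1);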

  • Here's another approach using Serg's kindly provided setup data.

    WITH InventoryDates AS
    (
        SELECT [Date]
        FROM @WhareHouseMovement
        WHERE Quantity > 0
        GROUP BY [Date]
    )
    SELECT TOP 1 ReadyToMFG=[Date]
    FROM InventoryDates a
    CROSS JOIN @BOM b
    CROSS APPLY
    (
        SELECT QuantityAvail=SUM(Quantity)
        FROM @WhareHouseMovement c
        WHERE a.[Date] >= c.[Date] AND b.ItemID = c.ItemID
        HAVING SUM(Quantity) >= neededQuantity
    ) c
    GROUP BY [Date]
    HAVING COUNT(*) = (SELECT COUNT(*) FROM @BOM)
    ORDER BY ReadyToMFG;

    It might be a bit easier to understand or maybe not. I guess that depends on you. As mentioned, since you need to do a running sum on the available quantities, avoiding the triangular join with a Quirky Update will most definitely make it faster.

    Edit: I added the WHERE clause in the InventoryDates CTE because it occurred to me that if a stock balance change is negative, it can't possibly be ramped up to support a manufacturing start. This may slightly reduce the size of the Cartesian product generated by the CROSS JOIN to @BOM. The "DISTINCT Date" CTE would probably be helped by a covering index on the @WhareHouseMovement table (obviously not possible with a table variable), which may also help the CROSS APPLY used to calculate the running total.
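    On a permanent table, the sort of covering index meant here might look like the following sketch (the WarehouseMovement table name and the index name are illustrative):

    -- Serves the running-total CROSS APPLY: seek on ItemID,
    -- range-scan [Date], read Quantity from the included column.
    CREATE NONCLUSTERED INDEX IX_WarehouseMovement_ItemID_Date
    ON WarehouseMovement (ItemID, [Date])
    INCLUDE (Quantity);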



  • I found this problem interesting, so I decided to check how it actually performs on a larger test harness; I set up 100 ItemIDs and 100,000 rows of stock movements.

    CREATE TABLE #BOM
    (
        ItemID INT
        ,neededQuantity FLOAT
        ,PRIMARY KEY (ItemID)
    );

    WITH Tally (n) AS
    (
        SELECT TOP 100 ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
        FROM sys.all_columns
    )
    -- 100 BOM items
    INSERT INTO #BOM (ItemID, neededQuantity)
    SELECT n, 1+ABS(CHECKSUM(NEWID()))%10
    FROM Tally;

    CREATE TABLE #WhareHouseMovement
    (
        ItemID INT
        ,Quantity FLOAT
        ,[Date] DATETIME
        ,PRIMARY KEY (ItemID, [Date])
    );

    WITH Tally (n) AS
    (
        SELECT TOP 1000 ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
        FROM sys.all_columns
    )
    INSERT INTO #WhareHouseMovement (ItemID, Quantity, [Date])
    SELECT b.n, 5+CHECKSUM(NEWID())%15, DATEADD(day, 4*a.n + 1+ABS(CHECKSUM(NEWID()))%4, 0)
    FROM Tally a
    CROSS APPLY
    (
        SELECT b.n
        FROM Tally b
        WHERE b.n <= 100
    ) b;

    DECLARE @StartDT DATETIME = GETDATE();

    PRINT 'Serg';
    SET STATISTICS TIME ON;
    with dts as (
        select date, b.ItemId, b.neededQuantity
        from (select distinct date from #WhareHouseMovement) x
        cross join #bom b
    ),
    avl as (
        select dts.ItemId, dts.date, neededQuantity - isnull(q, 0) ds
        from dts
        outer apply (
            select sum(Quantity) q
            from #WhareHouseMovement s
            where s.ItemId = dts.ItemId and s.Date <= dts.date
        ) x
    )
    select min(date) readytomnfc
    from (
        select date, max(ds) mds
        from avl
        group by date
        having max(ds) <= 0
    ) z;
    SET STATISTICS TIME OFF;

    PRINT 'Dwain - no covering index';
    SET STATISTICS TIME ON;
    WITH InventoryDates AS
    (
        SELECT [Date]
        FROM #WhareHouseMovement
        WHERE Quantity > 0
        GROUP BY [Date]
    )
    SELECT TOP 1 ReadyToMFG=[Date]
    FROM InventoryDates a
    CROSS JOIN #BOM b
    CROSS APPLY
    (
        SELECT QuantityAvail=1 --SUM(Quantity)
        FROM #WhareHouseMovement c
        WHERE a.[Date] >= c.[Date] AND b.ItemID = c.ItemID
        HAVING SUM(Quantity) >= neededQuantity
    ) c
    GROUP BY [Date]
    HAVING COUNT(*) = (SELECT COUNT(*) FROM #BOM)
    ORDER BY ReadyToMFG;
    SET STATISTICS TIME OFF;

    -- Create covering index
    CREATE NONCLUSTERED INDEX wm_ix1
    ON #WhareHouseMovement (Quantity) INCLUDE ([Date]);

    PRINT 'Dwain - with covering index';
    SET STATISTICS TIME ON;
    WITH InventoryDates AS
    (
        SELECT [Date]
        FROM #WhareHouseMovement
        WHERE Quantity > 0
        GROUP BY [Date]
    )
    SELECT TOP 1 ReadyToMFG=[Date]
    FROM InventoryDates a
    CROSS JOIN #BOM b
    CROSS APPLY
    (
        -- Returned value is immaterial here
        -- CA only serves to reduce the row count
        SELECT QuantityAvail=1
        FROM #WhareHouseMovement c
        WHERE a.[Date] >= c.[Date] AND b.ItemID = c.ItemID
        HAVING SUM(Quantity) >= neededQuantity
    ) c
    GROUP BY [Date]
    HAVING COUNT(*) = (SELECT COUNT(*) FROM #BOM)
    ORDER BY ReadyToMFG;
    SET STATISTICS TIME OFF;
    GO

    DROP TABLE #BOM;
    DROP TABLE #WhareHouseMovement;

    Normally I would shunt the returned results to local variables, but I didn't this time since each query only returns one row, thus the timings wouldn't be seriously affected. Those are:
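    (Shunting to a variable would look something like this sketch; the assignment keeps client display time out of the STATISTICS TIME numbers:)

    DECLARE @ReadyToMFG DATETIME;
    -- Assign rather than return the single-row result.
    SELECT @ReadyToMFG = MIN([Date]) FROM #WhareHouseMovement;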

    Serg
    SQL Server Execution Times:
    CPU time = 78673 ms, elapsed time = 20608 ms.

    Dwain - no covering index
    SQL Server Execution Times:
    CPU time = 62 ms, elapsed time = 43 ms.

    Dwain - with covering index
    SQL Server Execution Times:
    CPU time = 47 ms, elapsed time = 48 ms.

    Sometimes the covering index helps and sometimes it does not.

    If there is no date when the BOM can be manufactured, Serg's query returns a NULL value, whereas mine returns no rows. This is easiest to simulate by changing one minor thing in the test harness (on the INSERT into #WhareHouseMovement):

    ...
    INSERT INTO #WhareHouseMovement (ItemID, Quantity, [Date])
    -- Remove the 5+ on the Quantity
    SELECT b.n, CHECKSUM(NEWID())%15, DATEADD(day, 4*a.n + 1+ABS(CHECKSUM(NEWID()))%4, 0)
    ...

    This rather dramatically changes the timings.

    Serg
    SQL Server Execution Times:
    CPU time = 78189 ms, elapsed time = 20178 ms.

    Dwain - no covering index
    SQL Server Execution Times:
    CPU time = 36441 ms, elapsed time = 36474 ms.

    Dwain - with covering index
    SQL Server Execution Times:
    CPU time = 36442 ms, elapsed time = 36459 ms.



  • Good job Dwain.

    It seems TOP(1) date instead of min(date) does the trick. This runs nearly as fast as your query.

    with dts as (
        select date, b.ItemId, b.neededQuantity
        from (select distinct date from #WhareHouseMovement) x
        cross join #bom b
    ),
    avl as (
        select dts.ItemId, dts.date, neededQuantity - isnull(q, 0) ds
        from dts
        outer apply (
            select sum(Quantity) q
            from #WhareHouseMovement s
            where s.ItemId = dts.ItemId and s.Date <= dts.date
        ) x
    )
    select top(1) date
    from avl
    group by date
    having max(ds) <= 0
    order by date;

    And those queries' performance definitely depends on the data. Try

    update #BOM set neededQuantity *= 1000;

    and run the tests again. The triangular join will do what it's supposed to do - eat CPU time. NOT EXISTS will help, I think.

    PRINT 'NOT EXISTS';
    SET STATISTICS TIME ON;
    with dts as (
        SELECT [Date]
        FROM #WhareHouseMovement
        WHERE Quantity > 0
        GROUP BY [Date]
    )
    select top(1) date as readytomnfc
    from dts
    where not exists (
        select 1 from #BOM b
        where neededQuantity > ( select sum(Quantity) q
                                 from #WhareHouseMovement s
                                 where s.ItemId = b.ItemId and s.Date <= dts.date )
    )
    order by date;
    SET STATISTICS TIME OFF;

    Yes, its performance depends on the data too, but not so badly.

  • Hi Serg,

    Yeah, data is really important to this query, without a doubt. Try the originals with 100 ItemIDs and 200,000 rows of warehouse data and things change drastically again (and not for the better). It seems I may have accidentally hit a sweet spot at 100K rows.

    I still believe that a QU approach is probably better, but I haven't had time to cobble one together today. Perhaps tomorrow.



  • WOW!

    Guys, you made my day.

    I'm still studying your solutions to fully understand them, asking myself how to get your set-based view of the data, and you've already started with performance tests!

    I'm working on the quirky update to see how much it impacts the overall performance. I'm much slower than you at coding, so you will probably already have the solution by the time I return here :w00t:

    Thank you again. I'm getting down to studying now; maybe this question could lead to a nice article here. 🙂

  • I was able to stabilize my first solution up to 1M rows - 200 ItemIDs in the BOM and 5000 dates for each! Talk about a counterintuitive way to do it though.

    CREATE TABLE #BOM
    (
        ItemID INT
        ,neededQuantity FLOAT
        ,PRIMARY KEY (ItemID)
    );

    WITH Tally (n) AS
    (
        SELECT TOP 200 ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
        FROM sys.all_columns
    )
    -- 200 BOM items
    INSERT INTO #BOM (ItemID, neededQuantity)
    SELECT n, 1+ABS(CHECKSUM(NEWID()))%10
    FROM Tally;

    CREATE TABLE #WhareHouseMovement
    (
        ItemID INT
        ,Quantity FLOAT
        ,[Date] DATETIME
        ,QuantityInStock FLOAT
        ,PRIMARY KEY (ItemID, [Date])
    );

    WITH Tally (n) AS
    (
        SELECT TOP 5000 ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
        FROM sys.all_columns a CROSS JOIN sys.all_columns b
    )
    INSERT INTO #WhareHouseMovement (ItemID, Quantity, [Date])
    -- Remove the 5+ on the Quantity
    SELECT b.n, 5+CHECKSUM(NEWID())%15, DATEADD(day, 4*a.n + 1+ABS(CHECKSUM(NEWID()))%4, 0)
    FROM Tally a
    CROSS APPLY
    (
        SELECT b.n
        FROM Tally b
        WHERE b.n <= 200
    ) b;

    PRINT 'Dwain - no covering index';
    SET STATISTICS TIME, IO ON;
    WITH InventoryDates AS
    (
        SELECT [Date]
        FROM #WhareHouseMovement
        WHERE Quantity > 0
        GROUP BY [Date]
    )
    SELECT TOP 1 ReadyToMFG=[Date]
    FROM InventoryDates a
    CROSS JOIN #BOM b
    CROSS APPLY
    (
        -- Returned value is immaterial here
        -- CA only serves to reduce the row count
        SELECT QuantityAvail=1
        FROM #WhareHouseMovement c
        WHERE a.[Date] >= c.[Date] AND b.ItemID = c.ItemID
        HAVING SUM(Quantity) >= neededQuantity
    ) c
    GROUP BY [Date]
    HAVING COUNT(*) = (SELECT COUNT(*) FROM #BOM)
    ORDER BY ReadyToMFG
    OPTION(MAXDOP 1);
    SET STATISTICS TIME, IO OFF;

    -- Create covering index
    CREATE NONCLUSTERED INDEX wm_ix1
    ON #WhareHouseMovement (Quantity) INCLUDE ([Date]);
    --DROP INDEX wm_ix1 ON #WhareHouseMovement;

    PRINT 'Dwain - with covering index';
    SET STATISTICS TIME, IO ON;
    WITH InventoryDates AS
    (
        SELECT [Date]
        FROM #WhareHouseMovement
        WHERE Quantity > 0
        GROUP BY [Date]
    )
    SELECT TOP 1 ReadyToMFG=[Date]
    FROM InventoryDates a
    CROSS JOIN #BOM b
    CROSS APPLY
    (
        -- Returned value is immaterial here
        -- CA only serves to reduce the row count
        SELECT QuantityAvail=1
        FROM #WhareHouseMovement c
        WHERE a.[Date] >= c.[Date] AND b.ItemID = c.ItemID
        HAVING SUM(Quantity) >= neededQuantity
    ) c
    GROUP BY [Date]
    HAVING COUNT(*) = (SELECT COUNT(*) FROM #BOM)
    ORDER BY ReadyToMFG
    OPTION(MAXDOP 1);
    SET STATISTICS TIME, IO OFF;
    GO

    DROP TABLE #BOM;
    DROP TABLE #WhareHouseMovement;

    At this point, the covering index clearly helps out.

    It seems that when the stock adjustment table grew to a certain size, SQL wanted to parallelize my query, so I added the OPTION (MAXDOP 1) to avoid parallelization. When SQL parallelized it, it decided it needed to perform the triangular join to get the running totals across the whole set, which just totally sucked wind.

    Now, this is still going to be a dog if it needs to go a long way down that list of distinct dates in the CTE to find a match, again because of the running total. But if I can find the time tomorrow, I should be able to replace that with a QU that will stabilize it over even that case.

    Didn't include Serg's new queries in the test but it should be straightforward to add them.


