The strike, the A.I. and the forecast.

On Monday, December 30th, 2019, I had to go to the office. I live 40 km south of Paris, France. Due to the pension strikes there were very few trains, so I planned to go early by car. A poor decision global-warming-wise, but the only one I could live with.

Anyway, I left home at 6:30 am (which was later than planned) and set my favorite navigation software to find the best route to the office. The route selected went through secondary roads, and the planned driving time was 1h44, compared with 0h39 without any traffic (in the middle of the night).

The software I use is a very common one, dedicated to navigation. There is navigation software embedded in the phone’s operating system, but I have noticed that the app I use gives better results when traffic is heavy. From what I can tell (and I am speculating from here on), there are two factors that allow this software to compute better travel times:

  1. Collaborative information captured from drivers.
  2. I have the feeling that the routes computed by this software use a forecasted traffic impact: the route is sliced into small legs, and for each leg an adjusted speed factor is applied that takes into account the time at which you will actually reach it. The other software, I think, just computes the travel time from the traffic situation at the moment of the request.

And the second point matters, because the snapshot approach has a bias: if you are one hour from your office and you start in the morning, traffic may be flowing right now, but you are not taking into account all the commuters who are starting their cars and will join you along the way.
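To make the difference concrete, here is a toy sketch of the two approaches; this is only my speculation about how the apps differ, and the legs and speeds are made-up numbers.

```python
# Toy contrast between a "snapshot" travel time and a time-sliced one.
# The route is split into legs; all distances and speeds are made up.

legs_km = [15, 10, 15]  # three legs of a 40 km commute

# Speeds observed at request time (free-flowing at 6:30 am).
current_speed_kmh = [90, 80, 90]

# Speeds forecast for each leg *at the time you are expected to reach it*
# (commuters will have joined the road by then).
forecast_speed_kmh = [90, 45, 25]


def travel_time_snapshot(legs, speeds):
    """Naive approach: apply the speeds seen at request time to every leg."""
    return sum(d / s for d, s in zip(legs, speeds)) * 60  # minutes


def travel_time_sliced(legs, speeds_on_arrival):
    """Time-sliced approach: each leg uses the speed forecast for when you get there."""
    return sum(d / s for d, s in zip(legs, speeds_on_arrival)) * 60


print(f"Snapshot estimate:    {travel_time_snapshot(legs_km, current_speed_kmh):.0f} min")
print(f"Time-sliced estimate: {travel_time_sliced(legs_km, forecast_speed_kmh):.0f} min")
```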

On top of that, there must be some algorithm behind it, most likely machine learning, since a huge amount of historical data is available and is generated anyway to provide the service.

1h44 was the travel time I expected: since the beginning of the strikes, traffic jams in the Paris area had been breaking historical records. So I was happy with my choice; I was convinced I was using the right tool.

But when I left my city, something felt wrong. The traffic did not look like a 1h44 trip; when that happens, you usually already have trouble at the very beginning of the journey. Instead, there was nobody around. I was almost alone on the road…

When I saw how few cars were on the motorway, I decided to take my usual route, and the total travel time was 40 minutes; there was very little traffic.

The percent error of this forecast (1h44, or 104 minutes, forecast versus 40 minutes actual) was (104 − 40) / 40 = 160%, in other words terribly wrong.

I am not criticizing here, just sharing an example where technology needs improvement.

If we transpose this to demand forecasting, machine learning techniques quite quickly give good results with less effort than traditional time series techniques. I tried the Prophet procedure on some testing datasets and the results were promising on average.
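To give an idea of the effort involved, here is a minimal sketch of such a Prophet experiment; the file name, column preparation and 90-day horizon are placeholders, not the actual datasets I tested.

```python
# Minimal Prophet sketch: fit on a demand history and forecast 90 days ahead.
# Prophet expects a dataframe with a 'ds' (date) and a 'y' (value) column;
# the CSV file name is only a placeholder.
import pandas as pd
from prophet import Prophet

history = pd.read_csv("demand_history.csv", parse_dates=["ds"])  # columns: ds, y

model = Prophet(weekly_seasonality=True, yearly_seasonality=True)
model.fit(history)

future = model.make_future_dataframe(periods=90)   # 90 daily buckets ahead
forecast = model.predict(future)

# Point forecast and uncertainty interval for the last few future buckets.
print(forecast[["ds", "yhat", "yhat_lower", "yhat_upper"]].tail())
```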

What is interesting in my example is that the navigation software sensed that something unusual was happening in France: you never get a 1h44 travel time at 6:30 am, so it had obviously learned from the past weeks. Unfortunately, the algorithm was misled by the holiday period (as you know, in France we are either on holiday or on strike 😊). Because of the strike context, many people simply stayed at home between Christmas and New Year’s Eve, so few took their cars on this Monday morning.

How could this be improved? I managed to make the right decision because I could detect that the “early signs” in the traffic were not the ones that indicate a “black day”. If I extrapolate to machine learning systems, there should be a feedback loop that senses whether the most probable scenario is actually happening. The software could have raised an alert by noticing that connected users were driving much faster than the forecasted speeds on all roads, or more efficiently, on representative sample areas. Here, the “new normal” defined by the machine learning algorithm should have been questioned.

The same goes for short-term demand forecasts: when there is enough volume, some customers or some items could be monitored to validate the trend on a given forecast group.
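Here is a minimal sketch of what such a feedback loop could look like on the demand side; the item names, figures and threshold are purely illustrative.

```python
# Minimal sketch of a forecast feedback loop: compare early actuals against
# the forecast for a few "sentinel" items and flag the forecast group for
# review when the deviation exceeds a threshold. All figures are illustrative.

SENTINELS = {
    # item: (forecast for the elapsed period, actual observed so far)
    "item_A": (1200.0, 430.0),
    "item_B": (800.0, 310.0),
    "item_C": (500.0, 480.0),
}

DEVIATION_THRESHOLD = 0.30  # 30% gap between actuals and forecast-to-date


def deviation(forecast_to_date: float, actual_to_date: float) -> float:
    """Relative gap between what was forecast so far and what actually happened."""
    return abs(actual_to_date - forecast_to_date) / forecast_to_date


alerts = [
    item
    for item, (fcst, actual) in SENTINELS.items()
    if deviation(fcst, actual) > DEVIATION_THRESHOLD
]

if alerts:
    # In a real system this would notify a demand planner for review.
    print(f"Forecast group flagged for human review, sentinel deviations on: {alerts}")
else:
    print("Early actuals are consistent with the forecast scenario.")
```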

Once this deviation is identified, it should fall on the desk of an NI (Natural Intelligence), in other words a human: a demand planner or forecaster who can make a creative decision based on extra knowledge that is not contained in the data used by the algorithms. For now, it is still difficult for machine learning algorithms to identify the exceptions happening to them, and a human check is still required.

ROADTOSEE can help you choose and implement your machine learning demand planning algorithm. Visit www.roadtosee.com for more information and contact details.

The bucket challenge

A customer once asked for an estimate of their inventory at the end of the planning horizon in their advanced planning software. Could they simply take the projected inventory at the end of the horizon…?
Let’s take a simple example:

The plan is obvious on a horizon of daily buckets: a plan of 90 units every 9 days, and the average inventory is 50 units (over any period greater than or equal to 9 days).
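This simple case can be simulated in a few lines; the constant demand of 10 units per day and the starting safety stock of 10 units are assumptions consistent with the example (90 units consumed every 9 days).

```python
# Rough simulation of the daily-bucket case: constant demand of 10 units/day,
# a delivery of 90 units every 9 days, starting from a safety stock of 10 units.
# (The 10 units/day demand and 10-unit safety stock are assumptions inferred
# from the example.)

DEMAND_PER_DAY = 10
DELIVERY_QTY = 90
CYCLE_DAYS = 9
SAFETY_STOCK = 10
HORIZON_DAYS = 27  # three full cycles

inventory = SAFETY_STOCK
end_of_day_inventory = []

for day in range(1, HORIZON_DAYS + 1):
    if day % CYCLE_DAYS == 1:          # a delivery of 90 arrives every 9 days
        inventory += DELIVERY_QTY
    inventory -= DEMAND_PER_DAY        # daily consumption
    end_of_day_inventory.append(inventory)

average = sum(end_of_day_inventory) / len(end_of_day_inventory)
print(f"End-of-day inventory: {end_of_day_inventory}")
print(f"Average inventory over {HORIZON_DAYS} days: {average}")  # 50 units
```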

Using weekly buckets

For good reasons, this level of detail may not be needed further out, but what happens if the bucketing horizon switches to weekly buckets after 15 days?

Risky scenario:

Let’s look at the same case, and decide that a plan made in week N is delivered in week N+1… the projected inventory and plan then become like this:

We can notice several unwanted effects, due to something similar to a moiré pattern:

  • The inventory seems to grow week after week.
  • The production plan and deliveries no longer match the daily plan.
  • The average inventory over 9 days (green dashed line) behaves erratically.
  • The average inventory over 4 weekly buckets does tend towards the correct value (dark blue dashed line).
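This drift can be reproduced with a rough simulation; the logic below (fixed lots of 90 units, weekly demand of 70, one lot assumed already in transit when the weekly buckets start) is my own simplification, not the actual planning engine behind the charts.

```python
import math

# Rough simulation of the "risky" weekly scenario (a simplification, not the
# actual planning engine of the example): weekly demand of 70 units (10/day),
# fixed lots of 90 units, and a lot planned in week N received in week N+1.
# One lot is assumed to be already in transit when the weekly buckets start.

DEMAND_PER_WEEK = 70
LOT_SIZE = 90
SAFETY_STOCK = 10
WEEKS = 8

inventory = SAFETY_STOCK
incoming = LOT_SIZE                # assumed already ordered in the daily part
planned, end_of_week = [], []

for week in range(WEEKS):
    # Receive last week's order, then consume this week's demand.
    inventory += incoming - DEMAND_PER_WEEK
    end_of_week.append(inventory)

    # Order now for delivery next week: enough full lots of 90 to keep next
    # week's projected inventory at or above the safety stock.
    shortfall = SAFETY_STOCK + DEMAND_PER_WEEK - inventory
    incoming = max(0, math.ceil(shortfall / LOT_SIZE)) * LOT_SIZE
    planned.append(incoming)

print("Planned per week:     ", planned)      # 90 most weeks, then a skipped week
print("End-of-week inventory:", end_of_week)  # creeps up, then drops: the beat
```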

Conservative scenario:

Besides, the assumption that what is ordered in week N will be delivered in week N+1 is not always true: for example, if the order is placed on a Friday, it will be delivered on the Monday of week N+2. So, in order to be on the safe side, most planners will take this model. Let’s have a look at the projected inventory evolution:

We can notice a similar moiré pattern, and an increase of the planned quantity on day 16 (as expected, since this plan now has to cover a longer lead time). We also notice that this time the 4-week moving average is not close to the expected figure, meaning that we cannot estimate the inventory from bucket 22 onwards.

Another point is that we only have demand data for 5 days in the last bucket (for example, if this last bucket ends after the last available forecast…), and this clearly influences the value of the projected inventory.

And what about a monthly bucket?

Now let’s take monthly buckets, to be in line with the S&OP process at the end of the Master Planning horizon. The result will be mostly the same whatever the lead-time hypothesis… (production made in month N delivered in the monthly bucket N or in the monthly bucket N+1).

As soon as the bucket duration is longer than the delivery lead time, the inventory equation will indicate that the inventory at the end of the bucket is equal to the safety stock: 10 units.
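Put differently, whenever the receipts needed for the bucket can land inside the bucket itself, the bucket-level inventory balance collapses back to the safety stock. Here is a tiny illustration, assuming a 30-day month and the same 10 units per day as before.

```python
# Bucket-level inventory balance when the lead time fits inside the bucket.
# Illustrative figures: 30-day month, 10 units/day, 10 units of safety stock.
SAFETY_STOCK = 10
MONTHLY_DEMAND = 30 * 10                  # 300 units

start_inventory = SAFETY_STOCK            # the previous bucket also ended there
receipts = MONTHLY_DEMAND                 # the plan can receive within the month
                                          # exactly what the month consumes
end_inventory = start_inventory + receipts - MONTHLY_DEMAND
print(end_inventory)                      # 10: always back to the safety stock
```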
So be very careful when you take inventory figures from a master planning projected inventory: this figure might not be exactly what you will have in your warehouse at that moment. There are ways to make this estimation properly.

At RoadToSee we help our customers avoid these pitfalls; do not hesitate to reach out to supply chain planning experts to find out what can or can’t be done with your planning tool.