Data is everywhere and data analytics is hot. Supply chain network design in particular requires a substantial amount of data to support fact-based decision making. Based on that data, a baseline of the current supply chain can be created. Data collection and baseline development are often cumbersome steps due to a lack of structure and data gaps: the data is typically collected only once every few years, and it has to be pulled together from various sources and systems. Five key tips help to keep the focus on what is important and make baseline development an easier, less time-consuming task.
The brown paper mapping provides the key levers and the degrees of freedom for the supply chain modeling exercise. The objective of the mapping is to get a good overview of the current supply chain. Neglecting this first crucial step costs additional time and effort, because one might cover the wrong topics and get lost in the woods. With the supply chain experts in one room, it helps to draw a high-level picture of the supply chain. Questions that should be covered during a brown paper session are:
A conceptual model will help in detailing the calculation logic of the cost model. This provides insight into what drives the cost differences between scenarios. Should the manufacturing, warehousing and transportation logic be differentiated between product groups, or is one size fits all good enough? Moreover, how can orders be translated into physical shipments and warehouse facility requirements? The brown paper mapping and the calculation logic determine the data priorities.
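As a minimal sketch of what such calculation logic can look like, the snippet below translates ordered units into pallets and full-truckload shipments. The conversion factors and the function name are hypothetical illustrations, not BCI's actual model; real models would differentiate these parameters per product group and lane.

```python
import math

# Hypothetical conversion factors for illustration only; in practice these
# differ per product group, packaging type and transport lane.
UNITS_PER_PALLET = 120    # assumed units stacked on one pallet
PALLETS_PER_TRUCK = 33    # assumed full-truckload capacity

def orders_to_shipments(order_units: int) -> dict:
    """Translate ordered units into pallet and truckload requirements."""
    pallets = math.ceil(order_units / UNITS_PER_PALLET)
    trucks = math.ceil(pallets / PALLETS_PER_TRUCK)
    return {"pallets": pallets, "truckloads": trucks}

print(orders_to_shipments(10_000))  # {'pallets': 84, 'truckloads': 3}
```

The same rounding-up logic extends naturally to warehouse requirements, for example converting average pallets on hand into storage positions and floor space.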
The next step is to collect the data. When modeling on a frequent basis, a data hub helps to create a robust database infrastructure to collect, manage, and store data sets for analysis, sharing and reporting. Normally it should be easy to collect ERP-related data, including order transactions and inventory snapshots. Typically, companies have more difficulty providing data around physical shipments, dimensions, costs and warehousing-related elements.
The data collection process typically needs a couple of iterations. Data integrity checks help to steer it. In general, what flows into a facility should flow out. Make summations and averages of the volumes, profiles and costs per market and per product, and validate these against the brown paper map. Do the cost details line up with the P&L? Do not forget to document how the data was collected and which data fixes were made, to avoid calculation errors next time.
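The "what flows in should flow out" check can be automated with a few lines of code. The sketch below uses made-up facility names, volumes and a 5% tolerance purely for illustration; real checks would run against the collected shipment data.

```python
# Hypothetical flow records (facility, direction, volume in pallets),
# invented here to illustrate the in/out balance check.
flows = [
    ("DC North", "in", 1200), ("DC North", "out", 1180),
    ("DC South", "in", 800),  ("DC South", "out", 950),
]

def flow_balance(records, tolerance=0.05):
    """Flag facilities whose in- and outflow differ by more than `tolerance`."""
    totals = {}
    for facility, direction, volume in records:
        totals.setdefault(facility, {"in": 0, "out": 0})[direction] += volume
    issues = {}
    for facility, t in totals.items():
        gap = abs(t["in"] - t["out"]) / max(t["in"], 1)
        if gap > tolerance:
            issues[facility] = round(gap, 3)
    return issues

print(flow_balance(flows))  # DC South is off by 150/800 ≈ 18.8%
```

Facilities flagged this way point to missing shipments, double counts or inventory build-up that needs to be explained before the baseline can be trusted.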
The conceptual model and key levers help to focus and spend time on what is important. If data collection is not feasible or is too time-consuming, workarounds can be an option. Weak data points can later be covered as sensitivities in the scenario phase. BCI has developed a data maturity framework: depending on the data availability, the ease of data collection and the minimum required data accuracy, there are several ways to develop a proper baseline. Reach out to us for more information.
Above all, ensure that you are in control of the data. Do not lose yourself in details. It is better to spend more time on the logic than on the actual data crunching.
For more information contact Joep Perdaen, Senior Consultant at BCI Global via firstname.lastname@example.org or call +31 24 3790222.